5,988,673
I am developing a WPF product. I want to protect my .NET source code from reverse engineering. Please advise me.
2011/05/13
[ "https://Stackoverflow.com/questions/5988673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/714681/" ]
You would use an obfuscator. There are a lot of them on the market; just Google. For example, Visual Studio used to ship with the [Dotfuscator Community Edition](http://www.preemptive.com/products/dotfuscator). I have never used it, so I can't speak to its quality. This blog post shows possible ways to try to prevent reverse engineering: <http://blogs.msdn.com/b/ericgu/archive/2004/02/24/79236.aspx>
Obfuscating your assemblies will make it difficult (if not impossible) to reverse-engineer them. Obfuscators use techniques such as symbol renaming, string encryption, and control-flow obfuscation to hide the meaning of the original code. In some cases it is even possible to hide the code from decompilers entirely (however, decompilers are constantly evolving to overcome this). Take a look at [Crypto Obfuscator](http://www.ssware.com/cryptoobfuscator/obfuscator-net.htm). DISCLAIMER: I work for LogicNP, the developer of Crypto Obfuscator.
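The symbol-renaming technique mentioned above can be illustrated with a toy example. This is Python for readability, not real .NET obfuscator output (real tools rewrite compiled IL metadata); it just shows why renamed code is harder to read while behaving identically:

```python
# Toy illustration of symbol renaming: behavior is identical, intent is hidden.
# (Real .NET obfuscators rewrite IL metadata; this merely shows the idea.)
def calculate_discount(price, customer_is_member):
    return price * 0.9 if customer_is_member else price

# The same function after "renaming" every identifier to something meaningless:
def a(b, c):
    return b * 0.9 if c else b

assert calculate_discount(100, True) == a(100, True) == 90.0
```

Note that only names change; a decompiler still recovers the logic, which is why renaming is usually combined with string encryption and control-flow obfuscation.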
5,988,673
I am developing a WPF product. I want to protect my .NET source code from reverse engineering. Please advise me.
2011/05/13
[ "https://Stackoverflow.com/questions/5988673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/714681/" ]
You would use an obfuscator. There are a lot of them on the market; just Google. For example, Visual Studio used to ship with the [Dotfuscator Community Edition](http://www.preemptive.com/products/dotfuscator). I have never used it, so I can't speak to its quality. This blog post shows possible ways to try to prevent reverse engineering: <http://blogs.msdn.com/b/ericgu/archive/2004/02/24/79236.aspx>
You can use the FxProtect obfuscator. It supports WPF and Silverlight obfuscation. You can try it here: **[.NET Obfuscator](http://www.maycoms.net)**
5,988,673
I am developing a WPF product. I want to protect my .NET source code from reverse engineering. Please advise me.
2011/05/13
[ "https://Stackoverflow.com/questions/5988673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/714681/" ]
In the end, it will always be possible to reverse-engineer the code. Obfuscation can help, but your code will never be completely protected. The only way to fully protect the code is to not deploy it at all and instead keep it on a server.
You can use the FxProtect obfuscator. It supports WPF and Silverlight obfuscation. You can try it here: **[.NET Obfuscator](http://www.maycoms.net)**
5,988,673
I am developing a WPF product. I want to protect my .NET source code from reverse engineering. Please advise me.
2011/05/13
[ "https://Stackoverflow.com/questions/5988673", "https://Stackoverflow.com", "https://Stackoverflow.com/users/714681/" ]
Obfuscating your assemblies will make it difficult (if not impossible) to reverse-engineer them. Obfuscators use techniques such as symbol renaming, string encryption, and control-flow obfuscation to hide the meaning of the original code. In some cases it is even possible to hide the code from decompilers entirely (however, decompilers are constantly evolving to overcome this). Take a look at [Crypto Obfuscator](http://www.ssware.com/cryptoobfuscator/obfuscator-net.htm). DISCLAIMER: I work for LogicNP, the developer of Crypto Obfuscator.
You can use the FxProtect obfuscator. It supports WPF and Silverlight obfuscation. You can try it here: **[.NET Obfuscator](http://www.maycoms.net)**
233,097
The Premise =========== * An earth-like world (gravity, 1 moon, distance from star) * Oceans do not touch the ground (ignoring the how for this premise, but for consistency we'll say it's floating atop **1 km** of air) * Oceans still in contact with shorelines (allowed to touch ground starting a maximum of 1km from shore) ### What the question is *NOT* * How ocean life would be affected * What destruction would be caused to the atmosphere * Feasibility of the premise The Question ------------ Given that an ocean didn't touch the ground outside of a 1km continental shelf allowance, how would natural ocean phenomena such as waves change? #### Clarifications I haven't been able to get on since posting the question, so I'll give some clarifications here: * The illustration below shows what I was trying to say: the water can only sit on top of land within 1 km of a continent, everywhere else has ~ 1 km of air between the water and ground [![MS Paint example](https://i.stack.imgur.com/nz9QZ.png)](https://i.stack.imgur.com/nz9QZ.png) * For the purposes of this premise, we can assume that the air being contained by the water bodies is air that is either more dense or less buoyant than the water atop it. * We can assume there is very little, if any, air flow getting below the water bodies from above them. * **I care less about the way the waves react upon reaching the shelf, and more about how they change (if they change at all) out on the open waters** **Disclaimer** If you notice anything wrong with my post, I'm still learning, and am always open to suggestions for improvement!
2022/07/22
[ "https://worldbuilding.stackexchange.com/questions/233097", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/97249/" ]
It wouldn't change much ======================= Your scenario is roughly equivalent to: "There's a huge, 1 km deep pocket of air at the bottom of the ocean, and all the ground below the continental shelf is dry." Ocean waves are mostly governed by surface effects, so they would be largely unaffected by what is happening deep below the surface. That isn't to say there wouldn't be some big changes in the ocean if this change happened abruptly, but after things settled down, the waves and the ocean would establish a new normal.
Same amount of fluid ==================== If the ocean is sitting atop a layer of air, then we still have the same volume of fluid. The "ceiling" of the air layer, where it meets the ocean, might fluctuate by a very small amount (centimeters at most) in some places. "What about the density?" The deep ocean has very little effect on waves, nor is it very affected by waves. So, assuming that your scenario is possible in your world, it wouldn't matter. The waves on the surface of the ocean would be essentially identical. The real problem is with the air itself. In order for the ocean to sit on top of it, you will need some sort of magic holding it up, otherwise the air pressure of your underwater "atmosphere" is going to be just as high as it would be under the same depth of water--no one could live there. You will also need to explain how the air doesn't just bubble up through the ocean. Surface tension alone won't cut it.
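The answer's claim that the pocket's air pressure would be "just as high as it would be under the same depth of water" follows from basic hydrostatics: the air must carry the full weight of the water column above it. A rough sketch (the seawater density, gravity, and illustrative 3 km ocean depth are my assumptions, not figures from the post):

```python
# Rough hydrostatic estimate of the pressure inside the 1 km air pocket.
# Assumptions (not from the original post): seawater density ~1025 kg/m^3,
# g = 9.81 m/s^2, and an illustrative 3 km of ocean sitting on the pocket.
RHO = 1025.0      # kg/m^3, seawater
G = 9.81          # m/s^2
ATM = 101_325.0   # Pa, surface atmospheric pressure

def pocket_pressure(depth_m):
    """Pressure (Pa) at the water/air interface under depth_m of ocean."""
    return ATM + RHO * G * depth_m

print(f"{pocket_pressure(3000) / ATM:.0f} atm")  # roughly 300 atm
```

At hundreds of atmospheres, ordinary air in the pocket would indeed be unsurvivable without some hand-waved physics holding things apart.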
233,097
The Premise =========== * An earth-like world (gravity, 1 moon, distance from star) * Oceans do not touch the ground (ignoring the how for this premise, but for consistency we'll say it's floating atop **1 km** of air) * Oceans still in contact with shorelines (allowed to touch ground starting a maximum of 1km from shore) ### What the question is *NOT* * How ocean life would be affected * What destruction would be caused to the atmosphere * Feasibility of the premise The Question ------------ Given that an ocean didn't touch the ground outside of a 1km continental shelf allowance, how would natural ocean phenomena such as waves change? #### Clarifications I haven't been able to get on since posting the question, so I'll give some clarifications here: * The illustration below shows what I was trying to say: the water can only sit on top of land within 1 km of a continent, everywhere else has ~ 1 km of air between the water and ground [![MS Paint example](https://i.stack.imgur.com/nz9QZ.png)](https://i.stack.imgur.com/nz9QZ.png) * For the purposes of this premise, we can assume that the air being contained by the water bodies is air that is either more dense or less buoyant than the water atop it. * We can assume there is very little, if any, air flow getting below the water bodies from above them. * **I care less about the way the waves react upon reaching the shelf, and more about how they change (if they change at all) out on the open waters** **Disclaimer** If you notice anything wrong with my post, I'm still learning, and am always open to suggestions for improvement!
2022/07/22
[ "https://worldbuilding.stackexchange.com/questions/233097", "https://worldbuilding.stackexchange.com", "https://worldbuilding.stackexchange.com/users/97249/" ]
It wouldn't change much ======================= Your scenario is roughly equivalent to: "There's a huge, 1 km deep pocket of air at the bottom of the ocean, and all the ground below the continental shelf is dry." Ocean waves are mostly governed by surface effects, so they would be largely unaffected by what is happening deep below the surface. That isn't to say there wouldn't be some big changes in the ocean if this change happened abruptly, but after things settled down, the waves and the ocean would establish a new normal.
There would be (virtually) no tsunamis -------------------------------------- Tsunamis are caused by the seafloor displacing large amounts of water during an earthquake. With a 1 km air buffer between land and sea, there would be no direct land-water interface to move the water, and the air would likely absorb most (if not all) of that energy.
462,153
I have thermoelectric modules connected in series, and I connected them to a DC-DC step-up converter (CN6009) to boost the voltage. Then I connected my phone with a USB charger to try to charge it, but the voltage suddenly drops. Should I change the step-up converter or increase the number of modules?
2019/10/09
[ "https://electronics.stackexchange.com/questions/462153", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/233718/" ]
You've got a couple of things going on that are causing your problems: 1. The output resistance of [TEG modules](https://en.wikipedia.org/wiki/Thermoelectric_generator#Practical_limitations) is fairly high. They act like a battery with a big resistor in series. 2. By boosting the voltage, you increase the needed current. Say you need 5V at 1A to charge your phone, and your TEG can deliver 2.5V. Your TEG will have to provide 2.5V at 2A in order to deliver the 5 watts of power your charger needs. Those two combine to cause your voltage drop. 1. Voltage **always** drops when current flows through a resistor. The TEG doesn't have a separate resistor in it; its construction involves materials and connections that raise the electrical resistance. They **must** be made that way to work; it isn't an artificial limit. 2. You have the TEGs in series, so the internal resistances add up. If you have, say, three TEGs in series, then you have three times the resistance. 3. Boosting the voltage increases the current draw from the TEG and makes the voltage drop worse. The problem boils down to your charger needing more power than the TEGs can provide. * You can try putting more TEGs in parallel. * You can try using a more efficient boost converter. * You could use larger TEGs. * You could put the TEGs in parallel and charge a low-voltage battery, then charge your phone from the low-voltage battery using a boost converter. That last solution means you charge a low-voltage battery slowly with your TEGs; when that battery is fully charged, you use it with a boost converter to quickly charge your phone.
I think that Peltier devices act as current sources up to some maximum voltage. If you demand more current than the devices can supply, the voltage will drop. Measure the current produced by one device, then calculate the number of Peltier devices you need to deliver the target current into your boost converter.
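The arithmetic in the accepted answer can be made concrete with a simple source-plus-internal-resistance model. The 2.5 V open-circuit voltage comes from the answer's example; the 1 Ω internal resistance and the lossless converter are my illustrative assumptions, not measured values:

```python
# Model a TEG as an ideal source V_OC in series with internal resistance R_INT,
# feeding an ideal (lossless) boost converter that must deliver P_OUT watts.
# Values are illustrative, not measured from any specific module.
V_OC = 2.5    # V, open-circuit TEG voltage (from the answer's example)
R_INT = 1.0   # ohm, assumed internal resistance
P_OUT = 5.0   # W, the 5 V @ 1 A phone-charging load

def teg_terminal_voltage(i_in):
    """Terminal voltage when the converter draws i_in amps from the TEG."""
    return V_OC - i_in * R_INT

def input_power(i_in):
    """Power actually delivered into the converter at input current i_in."""
    return teg_terminal_voltage(i_in) * i_in

# Maximum power transfer for a resistive source occurs at i = V_OC / (2 * R_INT):
i_mpp = V_OC / (2 * R_INT)
shortfall = P_OUT - input_power(i_mpp)
print(f"peak input power {input_power(i_mpp):.2f} W, shortfall {shortfall:.2f} W")
```

With these assumed numbers the module cannot source 5 W at any current; pushing the converter harder only drags the terminal voltage down further, which is exactly the collapse described in the question.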
118,458
Does anyone know how to find out which RDP version Windows is running?
2010/03/10
[ "https://superuser.com/questions/118458", "https://superuser.com", "https://superuser.com/users/-1/" ]
Windows RDP uses the executable mstsc.exe, located in c:\windows\system32. Simply right-click this file, go to Properties, then click the Version tab. Hope this helps.
Or you could click Start > Run > mstsc and, when the Remote Desktop Connection window appears, click the "computer" icon in the top left-hand corner and select "About".
118,458
Does anyone know how to find out which RDP version Windows is running?
2010/03/10
[ "https://superuser.com/questions/118458", "https://superuser.com", "https://superuser.com/users/-1/" ]
Or you could right-click on the window and select About: ![enter image description here](https://i.stack.imgur.com/CvjyF.png)
Windows RDP uses the executable mstsc.exe, located in c:\windows\system32. Simply right-click this file, go to Properties, then click the Version tab. Hope this helps.
118,458
Does anyone know how to find out which RDP version Windows is running?
2010/03/10
[ "https://superuser.com/questions/118458", "https://superuser.com", "https://superuser.com/users/-1/" ]
Or you could right-click on the window and select About: ![enter image description here](https://i.stack.imgur.com/CvjyF.png)
Or you could click Start > Run > mstsc and, when the Remote Desktop Connection window appears, click the "computer" icon in the top left-hand corner and select "About".
969,824
I'm trying to set up a brand-new Epson WorkForce WF-3640 printer and it seems there are some weird mechanical issues. It would seem the carriage is not moving freely. This is *before* I get to install the ink cartridges. The printer may make a loud grinding noise and return errors 0xF1, 0xEA, 0xE8, or 0xE1. Alternatively, the printer may report a paper jam when there is no paper in the paper path at all. Any ideas? --- It seems the carriage is getting stuck on a movable plastic clip at the front right end of the unit or is not engaging correctly at the right end. Why would this be happening?
2015/09/08
[ "https://superuser.com/questions/969824", "https://superuser.com", "https://superuser.com/users/73918/" ]
### 0xE8 and later 0xEA codes > > There is a blue tape on the interior that needs to be removed- once I did this it worked fine. > > > ... > > I had the same issue: 0xE8 and later 0xEA codes. I could see that it > was the white moving clip under the ink tank holder when on the far > right that was catching it. The ONLY thing that fixed it was: once it > made a noise and errored I unwillingly pushed the tank holder over to > the left until the tanks pushed past the white clip and all the way to > the left.. > > > Then there were no more errors. > > > Source [0xE8 and later 0xEA codes](http://www.askmefast.com/Epson_wf3620__need_resolution_for_error_code_0xE8-qna8892260.html#q6764814) --- ### 520 FATAL CODE:0xF1 EPSON Workforce > > This relates to the print head not being able to completely pass from the left to right side during startup. I had a plastic carriage on the one side that was stuck in a position that stop the carriage from make the complete run from side to side. When I forced the plastic carriage down and it clicked into place the error stopped and the printer started up normally with no codes. > > > If anything is causing the print head to not travel completely from left to right during startup, this will probable cause the code. It will be hard to see if the print head is being obstructed if you don't remove the sides. > > > Source [520 FATAL CODE:0xF1 EPSON Workforce](http://hpprintermanual.blogspot.co.uk/2013/03/520-fatal-code0xf1-epson-workforce.html) --- ### Print Error Code0xE3 and 0xEA > > We have seen some success with this issue by following these > instructions. Please try this procedure one more time using the > instructions below. > > > 1. Turn the printer off, then disconnect the power and the interface cable. Open the cover and check for any torn or jammed paper > and remove it. > 2. Reconnect the power cable and turn the printer back on. > 3. Press the Copy button and see if the unit responds. 
> > > Note: Also check that the ink cartridges and lids are pushed down > fully. > > > If the issue persists, the hardware itself is malfunctioning and will > require service > > > Source [Print Error Code0xE3 and 0xEA](http://www.fixya.com/support/t25141495-print_error_code0xe3_0xea)
Epson technical support stated that this is a hardware failure. Standard troubleshooting steps have not produced a solution. The printer is being replaced under warranty. --- **Update:** The replacement printer has been set up and is fully operational. Well, defective product is defective...
969,824
I'm trying to set up a brand-new Epson WorkForce WF-3640 printer and it seems there are some weird mechanical issues. It would seem the carriage is not moving freely. This is *before* I get to install the ink cartridges. The printer may make a loud grinding noise and return errors 0xF1, 0xEA, 0xE8, or 0xE1. Alternatively, the printer may report a paper jam when there is no paper in the paper path at all. Any ideas? --- It seems the carriage is getting stuck on a movable plastic clip at the front right end of the unit or is not engaging correctly at the right end. Why would this be happening?
2015/09/08
[ "https://superuser.com/questions/969824", "https://superuser.com", "https://superuser.com/users/73918/" ]
### 0xE8 and later 0xEA codes > > There is a blue tape on the interior that needs to be removed- once I did this it worked fine. > > > ... > > I had the same issue: 0xE8 and later 0xEA codes. I could see that it > was the white moving clip under the ink tank holder when on the far > right that was catching it. The ONLY thing that fixed it was: once it > made a noise and errored I unwillingly pushed the tank holder over to > the left until the tanks pushed past the white clip and all the way to > the left.. > > > Then there were no more errors. > > > Source [0xE8 and later 0xEA codes](http://www.askmefast.com/Epson_wf3620__need_resolution_for_error_code_0xE8-qna8892260.html#q6764814) --- ### 520 FATAL CODE:0xF1 EPSON Workforce > > This relates to the print head not being able to completely pass from the left to right side during startup. I had a plastic carriage on the one side that was stuck in a position that stop the carriage from make the complete run from side to side. When I forced the plastic carriage down and it clicked into place the error stopped and the printer started up normally with no codes. > > > If anything is causing the print head to not travel completely from left to right during startup, this will probable cause the code. It will be hard to see if the print head is being obstructed if you don't remove the sides. > > > Source [520 FATAL CODE:0xF1 EPSON Workforce](http://hpprintermanual.blogspot.co.uk/2013/03/520-fatal-code0xf1-epson-workforce.html) --- ### Print Error Code0xE3 and 0xEA > > We have seen some success with this issue by following these > instructions. Please try this procedure one more time using the > instructions below. > > > 1. Turn the printer off, then disconnect the power and the interface cable. Open the cover and check for any torn or jammed paper > and remove it. > 2. Reconnect the power cable and turn the printer back on. > 3. Press the Copy button and see if the unit responds. 
> > > Note: Also check that the ink cartridges and lids are pushed down > fully. > > > If the issue persists, the hardware itself is malfunctioning and will > require service > > > Source [Print Error Code0xE3 and 0xEA](http://www.fixya.com/support/t25141495-print_error_code0xe3_0xea)
Same error code 0xEA for WF-3640 received today. ...multiple attempts to clear it, but each time (after I finally got the carriage to go to the left), the carriage returned to the right, locked up, and error'd out. Called Epson...cycled through the same process, even sent them a picture of the l-shaped bracket (for whatever they needed to look at)...tried again, and...surprise the same error occurred. They finally stated "hardware error", please return it to the place of purchase for a replacement. This is the first Epson in 20+ years...have had nothing but Canons and first one out of the box...it's really a shame. Hopefully the warranty replacement will be happier.
969,824
I'm trying to set up a brand-new Epson WorkForce WF-3640 printer and it seems there are some weird mechanical issues. It would seem the carriage is not moving freely. This is *before* I get to install the ink cartridges. The printer may make a loud grinding noise and return errors 0xF1, 0xEA, 0xE8, or 0xE1. Alternatively, the printer may report a paper jam when there is no paper in the paper path at all. Any ideas? --- It seems the carriage is getting stuck on a movable plastic clip at the front right end of the unit or is not engaging correctly at the right end. Why would this be happening?
2015/09/08
[ "https://superuser.com/questions/969824", "https://superuser.com", "https://superuser.com/users/73918/" ]
### 0xE8 and later 0xEA codes > > There is a blue tape on the interior that needs to be removed- once I did this it worked fine. > > > ... > > I had the same issue: 0xE8 and later 0xEA codes. I could see that it > was the white moving clip under the ink tank holder when on the far > right that was catching it. The ONLY thing that fixed it was: once it > made a noise and errored I unwillingly pushed the tank holder over to > the left until the tanks pushed past the white clip and all the way to > the left.. > > > Then there were no more errors. > > > Source [0xE8 and later 0xEA codes](http://www.askmefast.com/Epson_wf3620__need_resolution_for_error_code_0xE8-qna8892260.html#q6764814) --- ### 520 FATAL CODE:0xF1 EPSON Workforce > > This relates to the print head not being able to completely pass from the left to right side during startup. I had a plastic carriage on the one side that was stuck in a position that stop the carriage from make the complete run from side to side. When I forced the plastic carriage down and it clicked into place the error stopped and the printer started up normally with no codes. > > > If anything is causing the print head to not travel completely from left to right during startup, this will probable cause the code. It will be hard to see if the print head is being obstructed if you don't remove the sides. > > > Source [520 FATAL CODE:0xF1 EPSON Workforce](http://hpprintermanual.blogspot.co.uk/2013/03/520-fatal-code0xf1-epson-workforce.html) --- ### Print Error Code0xE3 and 0xEA > > We have seen some success with this issue by following these > instructions. Please try this procedure one more time using the > instructions below. > > > 1. Turn the printer off, then disconnect the power and the interface cable. Open the cover and check for any torn or jammed paper > and remove it. > 2. Reconnect the power cable and turn the printer back on. > 3. Press the Copy button and see if the unit responds. 
> > > Note: Also check that the ink cartridges and lids are pushed down > fully. > > > If the issue persists, the hardware itself is malfunctioning and will > require service > > > Source [Print Error Code0xE3 and 0xEA](http://www.fixya.com/support/t25141495-print_error_code0xe3_0xea)
Had 0xF1 and 0x69 errors on our Epson. Solved 0xF1 by discovering the lever on the ink carriage was open and required closing, then rebooting the machine. 0x69 was solved by re-seating the ink cartridges. Not a good start for a brand new printer.
969,824
I'm trying to set up a brand-new Epson WorkForce WF-3640 printer and it seems there are some weird mechanical issues. It would seem the carriage is not moving freely. This is *before* I get to install the ink cartridges. The printer may make a loud grinding noise and return errors 0xF1, 0xEA, 0xE8, or 0xE1. Alternatively, the printer may report a paper jam when there is no paper in the paper path at all. Any ideas? --- It seems the carriage is getting stuck on a movable plastic clip at the front right end of the unit or is not engaging correctly at the right end. Why would this be happening?
2015/09/08
[ "https://superuser.com/questions/969824", "https://superuser.com", "https://superuser.com/users/73918/" ]
Epson technical support stated that this is a hardware failure. Standard troubleshooting steps have not produced a solution. The printer is being replaced under warranty. --- **Update:** The replacement printer has been set up and is fully operational. Well, defective product is defective...
Same error code 0xEA for WF-3640 received today. ...multiple attempts to clear it, but each time (after I finally got the carriage to go to the left), the carriage returned to the right, locked up, and error'd out. Called Epson...cycled through the same process, even sent them a picture of the l-shaped bracket (for whatever they needed to look at)...tried again, and...surprise the same error occurred. They finally stated "hardware error", please return it to the place of purchase for a replacement. This is the first Epson in 20+ years...have had nothing but Canons and first one out of the box...it's really a shame. Hopefully the warranty replacement will be happier.
969,824
I'm trying to set up a brand-new Epson WorkForce WF-3640 printer and it seems there are some weird mechanical issues. It would seem the carriage is not moving freely. This is *before* I get to install the ink cartridges. The printer may make a loud grinding noise and return errors 0xF1, 0xEA, 0xE8, or 0xE1. Alternatively, the printer may report a paper jam when there is no paper in the paper path at all. Any ideas? --- It seems the carriage is getting stuck on a movable plastic clip at the front right end of the unit or is not engaging correctly at the right end. Why would this be happening?
2015/09/08
[ "https://superuser.com/questions/969824", "https://superuser.com", "https://superuser.com/users/73918/" ]
Epson technical support stated that this is a hardware failure. Standard troubleshooting steps have not produced a solution. The printer is being replaced under warranty. --- **Update:** The replacement printer has been set up and is fully operational. Well, defective product is defective...
Had 0xF1 and 0x69 errors on our Epson. Solved 0xF1 by discovering the lever on the ink carriage was open and required closing, then rebooting the machine. 0x69 was solved by re-seating the ink cartridges. Not a good start for a brand new printer.
189,160
I want to include an image that contains 4-5 graphs from diverse sources next to each other, in order to illustrate their similarities. Since I do not own any of those images I want to include, how do I properly cite them?
2022/09/28
[ "https://academia.stackexchange.com/questions/189160", "https://academia.stackexchange.com", "https://academia.stackexchange.com/users/163127/" ]
I'd suggest no link at all, for several reasons. Most important is that you don't want the reader to go off somewhere else while reading your SoP; you want them to focus on what you write. Also, as you note, you don't know how the SoP will be read (paper, electronic...). And you probably have a word limit and might find a better use for the few words the link would take. You should consider that such statements are likely to be viewed, at least in part, as examples of your writing. A commercial site (Amazon, ...) would probably be inappropriate. I would only consider a link if the work were obscure. It is likely, however, that if it is important to what you say, then others in the field can find it easily enough.
From my experience, only mention it in the PS if that book is truly important for your development (do you have a good reason to justify that to yourself?). If the book is popular, it is probably unnecessary to provide extra information about its details. If it is obscure, as Buffy answered, it is not a terrible idea to mention the details in the essay. However, you have to take into account that you only have limited space for the PS; you have more to say than what you have learnt from a book. The committee has limited time for each applicant too and probably will not look at it. It is better if you can provide evidence that you truly learnt something from the book and produced something as a direct result of reading it. In fact, that was what I did: the course I wanted to take was not available, so I instead bought an advanced book for that course, self-studied, built a small project based on what I learnt, and rewrote the code for all the algorithms in the book in another programming language. I mentioned it in the "**Miscellaneous**" section of my CV instead. Also, it is good to have some other supplementary documents.
27,216,645
I'm looking into ways of deploying my application (web / DB / application tier) across multiple hosts while utilizing Chef. What I've come up with is **using Chef recipes to represent each step of the deployment as an individual node state**. For example if there is a step that handles the stopping of X daemons & monitoring, it could be written as a chef recipe that simply expects the specific X daemons to be stopped. In the same way, the deployment step that moves an artifact from a shared location to the web root could also be referenced as a chef recipe that represents that specific state of the node (having the artifact copied from point A to point B). The whole deployment process will consist of various steps that basically do these three things: 1. Modify the run list of the nodes depending on the current deployment step. 2. Have chef-client run on each node 3. Log any failures and allow for a repeat of the chef run on the failed nodes or the skipping of the step so the deployment can continue. Questions: * Is using Chef in such a way (constantly modifying the run list of my nodes in order to alter the node state) a bad practice? And if so why? * What are the best ways to orchestrate all this? I can use any kind of CI tools there, but I'm having trouble figuring out how to capture the output of chef-client and be able to repeat or ignore the chef-client runs on specific nodes.
2014/11/30
[ "https://Stackoverflow.com/questions/27216645", "https://Stackoverflow.com", "https://Stackoverflow.com/users/1906965/" ]
This is really not the kind of thing Chef is best for. Chef excels at convergent configuration, less so with the procedural bits. Use Chef for handling the parts where you do a convergent change like deploying new code or rewriting config files, use a procedural tool for the other bits. As for tools to coordinate this, RunDeck is one choice if you want something more service-y. If you want a command-line tool look at Fabric or maybe Capistrano. Personally I use a mix, RunDeck plus Fabric to get the best of both. Some other less complete options include Chef Push Jobs, Mcollective, and Saltstack.
Puppet and Chef are not orchestration tools, and they do a very poor job in that role. They were not designed for orchestration, and even though some parties with specific interests are stretching the definition of orchestration to get Chef considered for it, they are ignoring critical facts and needs. Unfortunately, I am not aware of a single serious solution for orchestrating large environments: most tools are quite specific to certain needs, and some are really not production-ready yet. I had to invent my own workarounds to get this done, and there was nothing elegant about doing so.
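The modify-run-list / chef-client / retry-or-skip loop described in the question can be sketched independently of any particular tool. This is only an abstract sketch: `run_chef` is a stand-in I'm injecting for whatever actually triggers a run on a node (knife ssh, a RunDeck job, a Fabric task), which keeps the retry policy itself testable:

```python
# Abstract sketch of one deployment step: run chef-client on every node,
# retry failures up to `retries` extra times, and return what still failed
# so the operator can repeat or skip the step (per step 3 in the question).
# run_chef(node) -> bool is injected; in practice it would shell out to
# something like `knife ssh` or an orchestration tool's API.
def run_step(nodes, run_chef, retries=1):
    failed = []
    for node in nodes:
        # any() short-circuits on the first successful attempt.
        if not any(run_chef(node) for _ in range(retries + 1)):
            failed.append(node)
    return failed  # an empty list means the step converged everywhere
```

Keeping the policy separate from the transport is also what makes the RunDeck-plus-Fabric mix from the other answer workable: the same loop drives either backend.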
2,807,771
I know there are many algorithms to verify whether two line segments intersect. The line segments I'm talking about are finite segments defined by 2 end points. But once those algorithms encounter the parallel condition, they just tell the user a big "No" and pretend there is no overlap, shared end point, or end point coincidence. I know I can calculate the distance between 2 line segments. If the distance is 0, I can check whether the end points lie within the other segment. But this means I have to use a lot of if/else and &&/|| conditions. This is not difficult, but my question is: **"Is there a trick (or mathematical) method to handle this special parallel case?"** **I hope this picture clarifies my question: <http://judark.myweb.hinet.net/parallel.JPG>**
2010/05/11
[ "https://Stackoverflow.com/questions/2807771", "https://Stackoverflow.com", "https://Stackoverflow.com/users/305662/" ]
Yes, given the formulas for both of the lines, test whether their slopes are equal. If they are, the lines are parallel and never intersect. If you have points on each of the lines, you can use the [slope formula](http://cs.selu.edu/~rbyrd/math/slope/). If both are perpendicular to the x-axis, they will both have undefined slopes, but they will be parallel; all points on each line will have equal x coordinates. To deal with line segments, calculate the point of intersection, then determine whether that point of intersection exists on both of the segments.
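To make this concrete, here is a minimal sketch (my own illustration, not from the answer) that does the segment test in parametric form instead of with slopes, which sidesteps the vertical-line problem; a zero denominator is exactly the parallel special case:

```python
# Parametric segment intersection: p = p1 + t*(p2-p1), q = p3 + u*(p4-p3).
def segment_intersection(p1, p2, p3, p4):
    """Return the intersection point of segments p1-p2 and p3-p4, or None."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if denom == 0:
        return None  # parallel (possibly collinear): no unique intersection
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / denom
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / denom
    if 0 <= t <= 1 and 0 <= u <= 1:  # the point lies on both segments
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None
```

For example, the diagonals of the unit square intersect at their midpoint, while two horizontal segments return `None` and fall into the parallel case discussed in the question.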
I just ran into the same problem. The easiest way I have come up with to check whether the segments overlap, assuming they are collinear (parallel and with the same intersection with the x axis): take point A from the longer segment (A, B) as the starting point. Now find the point among the other three points that has the minimal distance to point A (squared distance is better; even Manhattan length might work), measuring the distance in the direction of B. If the closest point to A is B, the segments do not intersect. If it belongs to the other segment, they do. Perhaps you have to check for special cases like zero-length or identical segments, but this should be easy.
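A minimal sketch of this idea (my own illustration, in Python): project all four endpoints onto the direction of segment A-B and compare the resulting 1-D intervals, which replaces the chain of if/else comparisons the question mentions:

```python
# Overlap test for two segments already known to be collinear.
def collinear_segments_overlap(a, b, c, d):
    """True if collinear segments a-b and c-d touch or overlap."""
    dx, dy = b[0] - a[0], b[1] - a[1]  # direction of a-b

    def proj(p):  # scalar projection of p onto that direction
        return (p[0] - a[0]) * dx + (p[1] - a[1]) * dy

    lo1, hi1 = sorted((proj(a), proj(b)))
    lo2, hi2 = sorted((proj(c), proj(d)))
    return hi1 >= lo2 and hi2 >= lo1  # 1-D intervals touch or overlap
```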
12,322
On his retirement, Bill Gates left behind a four number combination safe with a dial similar to [this](http://thumbs.dreamstime.com/x/realistic-safe-combination-lock-wheel-23526580.jpg). In that safe is a secret that had eluded him many years ago...along with a small fortune of course. He left behind 10 clues - a deck of cards and a note. The order of the cards is as follows: > > Three of Spades, > Six of Spades, > Four of Spades, > Seven of Clubs, > Six of Clubs, > Five of Clubs, > Eight of Clubs, > Two of Clubs, > Four of Diamonds, > Five of Diamonds, > Five of Spades, > Two of Spades, > Six of Diamonds, > Eight of Hearts, > Three of Clubs, > Five of Hearts, > Three of Hearts, > Seven of Hearts, > Six of Hearts, > Two of Diamonds, > Eight of Spades, > Three of Diamonds, > Four of Clubs, > Seven of Spades, > Four of Hearts, > Two of Hearts, > Seven of Diamonds, > Eight of Diamonds, > > > The note reads: > > Sly snakes slither into eternity seeking stigma and style. > > > Is your knowledge potent enough to figure out the combination of the safe?
2015/04/17
[ "https://puzzling.stackexchange.com/questions/12322", "https://puzzling.stackexchange.com", "https://puzzling.stackexchange.com/users/11499/" ]
The combination is > > 37 - 2 - 60 - 41 > > > Explanation: > > If you lay out all cards in a row, and look at the layout of the pips on the card, they represent a 1 or 0 (1 if there is a centre pip, 0 if there isn't). 3, 5, 7 are 1s, and 2, 4, 6 are 0s. 8 represents the end of a number. > So 100101 - 0010 - 111100 - 101001 > > > Extra explanation of how I got here: > > This is based on a game called 'Petals around the rose' (related to the title), which is played with dice. Bill Gates struggled with finding out the solution to this game for many years. <http://www.borrett.id.au/computing/petals-bg.htm> > > > I don't quite know what the second clue is about yet. > > There are 10 clues, and 10 in binary is 2. > > >
Is your knowledge potent enough to figure out the combination of the safe? > > Possibly. Would calling a locksmith to open the safe and find out the combination work? Maybe finding out what those 10 clues he left behind are might help as well. ;-) > > > This might not be the answer you were looking for, but maybe it can inspire some other answers. (hopefully ones that aren't so lateral) Plot twist: > > The 'secret' in the safe is the combination to the safe! He never told anyone which keeps it a secret and then it eluded him (he forgot it) many years ago. Now he forgot the safe and no one else knows (since he never shared it) which is why it's still there! Why he would do this, who knows. But what kind of person would leave a deck of just 28 cards in the open? (Someone really smart or someone not so smart, for this twist I'll pretend not so smart) > > >
70,729
Once in a while, some well-meaning user appends the string " [closed]" to a question title after receiving a good answer. This is, of course, confusing to people who, for example, visit the question later and don't see a "closed as *reason* by *voters* at *time*" box. I can't think of any case where this would be a desired behavior, so I propose rejecting any title that ends in " [closed]" except for the ones generated by the system for actually-closed questions. A quick explanation under "Oops! Your question couldn't be submitted because:" should be enough to make this feature non-confusing.
2010/11/22
[ "https://meta.stackexchange.com/questions/70729", "https://meta.stackexchange.com", "https://meta.stackexchange.com/users/131713/" ]
How about a distinct style for questions that are closed? Perhaps a red title or something of that sort would do.
> > How often does this actually happen? > > > There are 29 questions on SO that currently (as of Oct 31 data dump) have something like this in the title (I figured [closed], (closed), and {closed} would cover the vast majority of them), and an additional 39 questions where that same pattern has been edited out at one time or another. Only one of those 68 questions is actually closed. <http://odata.stackexchange.com/stackoverflow/s/653/closed-questions> I think editing the questions and leaving a comment (to say to leave it alone, except if it's a duplicate, in which case flag for mod attention) is probably sufficient. **EDIT:** I went in and manually edited the offending posts. I think I got all of them. Unfortunately we'll have to wait until January to see that reflected in the December data dump.
70,729
Once in a while, some well-meaning user appends the string " [closed]" to a question title after receiving a good answer. This is, of course, confusing to people who, for example, visit the question later and don't see a "closed as *reason* by *voters* at *time*" box. I can't think of any case where this would be a desired behavior, so I propose rejecting any title that ends in " [closed]" except for the ones generated by the system for actually-closed questions. A quick explanation under "Oops! Your question couldn't be submitted because:" should be enough to make this feature non-confusing.
2010/11/22
[ "https://meta.stackexchange.com/questions/70729", "https://meta.stackexchange.com", "https://meta.stackexchange.com/users/131713/" ]
You're right, there's no reason to ever allow this — added a check for it. EDIT: The filter has been updated to check for the strings "[migrated]", "[on hold]", and "[duplicate]" as well.
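The actual filter code is not public, but a hypothetical sketch of the kind of check described here might look like this (the function name and the suffix list are illustrative):

```python
import re

# Reject titles ending in a status suffix that only the system should add.
FORBIDDEN_SUFFIX = re.compile(
    r"\s*\[(closed|migrated|on hold|duplicate)\]\s*$", re.IGNORECASE
)

def title_is_valid(title):
    """True if the title does not end in a reserved status suffix."""
    return FORBIDDEN_SUFFIX.search(title) is None
```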
How about a distinct style for questions that are closed? Perhaps a red title or something of that sort would do.
70,729
Once in a while, some well-meaning user appends the string " [closed]" to a question title after receiving a good answer. This is, of course, confusing to people who, for example, visit the question later and don't see a "closed as *reason* by *voters* at *time*" box. I can't think of any case where this would be a desired behavior, so I propose rejecting any title that ends in " [closed]" except for the ones generated by the system for actually-closed questions. A quick explanation under "Oops! Your question couldn't be submitted because:" should be enough to make this feature non-confusing.
2010/11/22
[ "https://meta.stackexchange.com/questions/70729", "https://meta.stackexchange.com", "https://meta.stackexchange.com/users/131713/" ]
You're right, there's no reason to ever allow this — added a check for it. EDIT: The filter has been updated to check for the strings "[migrated]", "[on hold]", and "[duplicate]" as well.
> > How often does this actually happen? > > > There are 29 questions on SO that currently (as of Oct 31 data dump) have something like this in the title (I figured [closed], (closed), and {closed} would cover the vast majority of them), and an additional 39 questions where that same pattern has been edited out at one time or another. Only one of those 68 questions is actually closed. <http://odata.stackexchange.com/stackoverflow/s/653/closed-questions> I think editing the questions and leaving a comment (to say to leave it alone, except if it's a duplicate, in which case flag for mod attention) is probably sufficient. **EDIT:** I went in and manually edited the offending posts. I think I got all of them. Unfortunately we'll have to wait until January to see that reflected in the December data dump.
6,550,400
I'm building a travel blog (PHP) where I might be loading dozens of pictures (size 500x375, weight 150-200 KB), so the page weighs more than 4-5 MB. What is the way to go, apart from caching/gzip, to decrease waiting time and give a better user experience? I'm on a shared server, as my budget is very low. Thanks
2011/07/01
[ "https://Stackoverflow.com/questions/6550400", "https://Stackoverflow.com", "https://Stackoverflow.com/users/505762/" ]
Some options: * split up the images across multiple pages * use a 'lazy load' script that will only request images as they come into the viewport * use AJAX to request images as needed via a user action * leverage external hosting of the images (flickr, etc) to split the server requests amongst different servers.
If you're displaying dozens of images on one page, I would consider just showing small images / thumbnails that get enlarged when the visitor clicks on them.
6,550,400
I'm building a travel blog (PHP) where I might be loading dozens of pictures (size 500x375, weight 150-200 KB), so the page weighs more than 4-5 MB. What is the way to go, apart from caching/gzip, to decrease waiting time and give a better user experience? I'm on a shared server, as my budget is very low. Thanks
2011/07/01
[ "https://Stackoverflow.com/questions/6550400", "https://Stackoverflow.com", "https://Stackoverflow.com/users/505762/" ]
If you're displaying dozens of images on one page, I would consider just showing small images / thumbnails that get enlarged when the visitor clicks on them.
There are some points that solve this issue: 1) Show a few images and, below them, a "show more" link or icon. 2) After it is clicked, make an AJAX call and show the other images. 3) You can also use the jQuery lazy loading plugin (it's very easy to integrate; [click here](http://sandeepshirsat.wordpress.com/2012/12/11/35/) to see the integration steps).
6,550,400
I'm building a travel blog (PHP) where I might be loading dozens of pictures (size 500x375, weight 150-200 KB), so the page weighs more than 4-5 MB. What is the way to go, apart from caching/gzip, to decrease waiting time and give a better user experience? I'm on a shared server, as my budget is very low. Thanks
2011/07/01
[ "https://Stackoverflow.com/questions/6550400", "https://Stackoverflow.com", "https://Stackoverflow.com/users/505762/" ]
Some options: * split up the images across multiple pages * use a 'lazy load' script that will only request images as they come into the viewport * use AJAX to request images as needed via a user action * leverage external hosting of the images (flickr, etc) to split the server requests amongst different servers.
There are some points that solve this issue: 1) Show a few images and, below them, a "show more" link or icon. 2) After it is clicked, make an AJAX call and show the other images. 3) You can also use the jQuery lazy loading plugin (it's very easy to integrate; [click here](http://sandeepshirsat.wordpress.com/2012/12/11/35/) to see the integration steps).
9,829,465
I am having trouble finding a clear answer on this one. I have an ASP.NET 4.0 Silverlight app, but recently a ton of users are complaining about not being able to use the site on mobile devices and Linux distros. The app is built on MVVM architecture, and thus we are considering changing the UI to alleviate the complaints. We are leaning toward HTML5, but I'm not sure if this is even technically possible with ASP.NET 4.0. I've seen some posts saying that HTML5 only works with JavaScript code-behinds, and that with ASP.NET 4.5 HTML5 support will be added. Am I understanding this correctly? Maybe it would make more sense to just go with an ASPX UI; what are the advantages of HTML5 over .ASPX? Any help is appreciated.
2012/03/22
[ "https://Stackoverflow.com/questions/9829465", "https://Stackoverflow.com", "https://Stackoverflow.com/users/763398/" ]
HTML5 is a set of client-side technologies. ASP.NET is a server-side technology. They have nothing to do with each other. However, it will be easier in ASP.NET MVC.
You would replace what currently runs in the Silverlight plugin in the user's browser with HTML and JavaScript instead.
9,829,465
I am having trouble finding a clear answer on this one. I have an ASP.NET 4.0 Silverlight app, but recently a ton of users are complaining about not being able to use the site on mobile devices and Linux distros. The app is built on MVVM architecture, and thus we are considering changing the UI to alleviate the complaints. We are leaning toward HTML5, but I'm not sure if this is even technically possible with ASP.NET 4.0. I've seen some posts saying that HTML5 only works with JavaScript code-behinds, and that with ASP.NET 4.5 HTML5 support will be added. Am I understanding this correctly? Maybe it would make more sense to just go with an ASPX UI; what are the advantages of HTML5 over .ASPX? Any help is appreciated.
2012/03/22
[ "https://Stackoverflow.com/questions/9829465", "https://Stackoverflow.com", "https://Stackoverflow.com/users/763398/" ]
HTML5 is a set of client-side technologies. ASP.NET is a server-side technology. They have nothing to do with each other. However, it will be easier in ASP.NET MVC.
HTML5 works in conjunction with Javascript on the client side. You can still use ASP.NET to process data and deliver content server side. Here are some quick links. <http://visualstudiomagazine.com/articles/2011/09/01/pfcov_html5.aspx> <http://mvchtml5.codeplex.com/> (I know it's mvc, but it might be helpful regardless.)
9,829,465
I am having trouble finding a clear answer on this one. I have an ASP.NET 4.0 Silverlight app, but recently a ton of users are complaining about not being able to use the site on mobile devices and Linux distros. The app is built on MVVM architecture, and thus we are considering changing the UI to alleviate the complaints. We are leaning toward HTML5, but I'm not sure if this is even technically possible with ASP.NET 4.0. I've seen some posts saying that HTML5 only works with JavaScript code-behinds, and that with ASP.NET 4.5 HTML5 support will be added. Am I understanding this correctly? Maybe it would make more sense to just go with an ASPX UI; what are the advantages of HTML5 over .ASPX? Any help is appreciated.
2012/03/22
[ "https://Stackoverflow.com/questions/9829465", "https://Stackoverflow.com", "https://Stackoverflow.com/users/763398/" ]
HTML5 is a set of client-side technologies. ASP.NET is a server-side technology. They have nothing to do with each other. However, it will be easier in ASP.NET MVC.
It is not technically possible with ASP.NET 4.0. ASP.NET certainly needs the upgrade in order to handle any HTML5-producing code behind or 'plug-ins.' I second the notion of ASP.NET MVC. Also it seems you are comparing a car to gasoline when you ask the advantages of HTML5 over ASPX.
9,829,465
I am having trouble finding a clear answer on this one. I have an ASP.NET 4.0 Silverlight app, but recently a ton of users are complaining about not being able to use the site on mobile devices and Linux distros. The app is built on MVVM architecture, and thus we are considering changing the UI to alleviate the complaints. We are leaning toward HTML5, but I'm not sure if this is even technically possible with ASP.NET 4.0. I've seen some posts saying that HTML5 only works with JavaScript code-behinds, and that with ASP.NET 4.5 HTML5 support will be added. Am I understanding this correctly? Maybe it would make more sense to just go with an ASPX UI; what are the advantages of HTML5 over .ASPX? Any help is appreciated.
2012/03/22
[ "https://Stackoverflow.com/questions/9829465", "https://Stackoverflow.com", "https://Stackoverflow.com/users/763398/" ]
HTML5 is a set of client-side technologies. ASP.NET is a server-side technology. They have nothing to do with each other. However, it will be easier in ASP.NET MVC.
I'm really surprised that somebody is suggesting replacing all the C# code of the ViewModel and XAML code-behind with JavaScript. Is that possible? In an MVVM architecture the code base is large, and all of it is responsible for higher-level tasks (like communicating with the service layer or database). Is it possible to convert all that C# code to JavaScript? I have even seen that when a JavaScript function goes over 25 lines, it becomes clumsy for developers to understand. Simple or small functionality can easily be developed with JavaScript. C# is a very standard, object-oriented language with a great role in developing LOB applications, but JavaScript is not. I think using ASP.NET MVC is a nice way to use HTML5: we just need to change the UI to an HTML5 look, while the rest of the application's functionality remains the same.
176
Firstly, I'd like to say that I've just discovered that this new SE site exists and I'm very excited! There are a few games that I played in my childhood that I would like to play again but I don't know the titles. Is it on-topic to describe the games as much as possible and what platform they ran on and ask what they were called (one game per question)? It doesn't seem appropriate to ask on gaming.SE as it seems that site is only for recent games.
2016/06/26
[ "https://retrocomputing.meta.stackexchange.com/questions/176", "https://retrocomputing.meta.stackexchange.com", "https://retrocomputing.meta.stackexchange.com/users/2067/" ]
There are a number of sites within the SE network that have [identify-this](https://retrocomputing.stackexchange.com/questions/tagged/identify-this "show questions tagged 'identify-this'") tags, so why not this one? Note that a few of these sites periodically have on their metas a proposal to ban them. Particularly where there are a lot of such questions. See the soul searching on [Movies & TV](https://movies.meta.stackexchange.com/questions/2292/maintaining-improving-and-cleaning-up-identification-questions), a site that has a large number of them. As with all questions, they would go on merit. A good question describing one game from yesteryear clearly enough for a chance of an answer would be good for the site. We may attract vague questions - they should be closed as "*Unclear what you're asking*". Let us take some and see how we go. Note also, this site isn't for asking "*Where can I buy...*" questions, so we would restrict ourselves to identification. While on the subject - why not "*Identify this computer*" questions. After all, they have already been asked [elsewhere](https://movies.stackexchange.com/questions/26832/what-is-the-computer-that-benny-hill-hacks-in-the-italian-job-1969/31762#31762) too.
My only hesitation to a qualified "yes" is that I think Arquade already has a [retro-gaming](https://gaming.stackexchange.com/questions/tagged/backwards-compatibility) and [game-identification](https://gaming.stackexchange.com/questions/tagged/game-identification) tag. I don't see us usurping that. They have a vibrant and knowledgeable population of users over there that I feel will be able to talk about finding, identifying, and running old game and gaming systems much more effectively than the average cross-section of Retro folks. To be fair, the "retro-gaming" tag is a synonym for "backwards-compatibility" (what?) so there is some strange overlap.
176
Firstly, I'd like to say that I've just discovered that this new SE site exists and I'm very excited! There are a few games that I played in my childhood that I would like to play again but I don't know the titles. Is it on-topic to describe the games as much as possible and what platform they ran on and ask what they were called (one game per question)? It doesn't seem appropriate to ask on gaming.SE as it seems that site is only for recent games.
2016/06/26
[ "https://retrocomputing.meta.stackexchange.com/questions/176", "https://retrocomputing.meta.stackexchange.com", "https://retrocomputing.meta.stackexchange.com/users/2067/" ]
There are a number of sites within the SE network that have [identify-this](https://retrocomputing.stackexchange.com/questions/tagged/identify-this "show questions tagged 'identify-this'") tags, so why not this one? Note that a few of these sites periodically have on their metas a proposal to ban them. Particularly where there are a lot of such questions. See the soul searching on [Movies & TV](https://movies.meta.stackexchange.com/questions/2292/maintaining-improving-and-cleaning-up-identification-questions), a site that has a large number of them. As with all questions, they would go on merit. A good question describing one game from yesteryear clearly enough for a chance of an answer would be good for the site. We may attract vague questions - they should be closed as "*Unclear what you're asking*". Let us take some and see how we go. Note also, this site isn't for asking "*Where can I buy...*" questions, so we would restrict ourselves to identification. While on the subject - why not "*Identify this computer*" questions. After all, they have already been asked [elsewhere](https://movies.stackexchange.com/questions/26832/what-is-the-computer-that-benny-hill-hacks-in-the-italian-job-1969/31762#31762) too.
I don't think Retrocomputing is appropriate for these types of questions. [Gaming.SE](http://gaming.stackexchange.com) might be a better fit, as it is dedicated to gaming across multiple generations. If we allow game-identification questions, this site might become a dumping ground for them rather than for general retrocomputing questions.
176
Firstly, I'd like to say that I've just discovered that this new SE site exists and I'm very excited! There are a few games that I played in my childhood that I would like to play again but I don't know the titles. Is it on-topic to describe the games as much as possible and what platform they ran on and ask what they were called (one game per question)? It doesn't seem appropriate to ask on gaming.SE as it seems that site is only for recent games.
2016/06/26
[ "https://retrocomputing.meta.stackexchange.com/questions/176", "https://retrocomputing.meta.stackexchange.com", "https://retrocomputing.meta.stackexchange.com/users/2067/" ]
There are a number of sites within the SE network that have [identify-this](https://retrocomputing.stackexchange.com/questions/tagged/identify-this "show questions tagged 'identify-this'") tags, so why not this one? Note that a few of these sites periodically have on their metas a proposal to ban them. Particularly where there are a lot of such questions. See the soul searching on [Movies & TV](https://movies.meta.stackexchange.com/questions/2292/maintaining-improving-and-cleaning-up-identification-questions), a site that has a large number of them. As with all questions, they would go on merit. A good question describing one game from yesteryear clearly enough for a chance of an answer would be good for the site. We may attract vague questions - they should be closed as "*Unclear what you're asking*". Let us take some and see how we go. Note also, this site isn't for asking "*Where can I buy...*" questions, so we would restrict ourselves to identification. While on the subject - why not "*Identify this computer*" questions. After all, they have already been asked [elsewhere](https://movies.stackexchange.com/questions/26832/what-is-the-computer-that-benny-hill-hacks-in-the-italian-job-1969/31762#31762) too.
I think "identify-this" should be permitted, subject to the same restriction as on Gaming.SE: you need to have some artifact from the software. This could be * A screenshot * A clip of the soundtrack or a distinctive sound effect * A photo of a pair of peril-sensitive sunglasses * A distinctive line of text ("YOU ARE IN A MAZE OF TWISTY LITTLE PASSAGES, ALL ALIKE.") * and so on In short, there needs to be a way for the person answering to verify that yes, their answer matches the question. Note that "identify-this" should not be restricted to games. "What Apple II calculation program is [this screenshot](https://upload.wikimedia.org/wikipedia/commons/7/7a/Visicalc.png) from?" should be perfectly acceptable.
85,660
I made this circuit to amplify the output of a load cell. ![schematic](https://i.stack.imgur.com/Jq9z3.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fJq9z3.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) The instrumentation amplifier has a gain of 750. I use a dedicated voltage regulator (LM350) because I want an output of max 5V. The circuit seems to work well (there's a bit of background noise), but when I'm close to the circuit the output is greater than before. The output changes a lot when I move my arms in the air. The cables from the load cell to the circuit are enclosed in aluminum foil connected to GND. Is this the correct way to read data from a load cell? Do you know how to remove the influence of a body? --- **More details:** *Thanks everyone for the suggestions (it's only the second time I use electronics.stackexchange)* The schematic shows only a single part of the PCB circuit of my project. In the real circuit, between the load cells and the amplifier, there are two [demultiplexers](http://www.ti.com/lit/ds/symlink/cd74hc4051.pdf). They switch the signal from 8 load cells to 1 amplifier (1 demux manages the +In and the other the -In). On the PCB there are no capacitors. I tried adding capacitors in the breadboard version. ![The PCB without capacitors](https://i.stack.imgur.com/A41d1.jpg) The circuit and the load cells are mounted in a big aluminum case, not yet completely closed. ![The aluminum case](https://i.stack.imgur.com/IoBnW.jpg) In this picture the cables are uncovered, but during tests they are covered with silver foil. Every part of the aluminum case is connected to GND. For @ANDY AKA This is what the oscilloscope sees when I put my head near the circuit in the breadboard version. ![CH 1 is the ampli supply, CH2 is the ampli output](https://i.stack.imgur.com/8fqTV.png) If I set AREF to 1V and move my arm near the wires, you are right, the output does the opposite: it decreases.
2013/10/17
[ "https://electronics.stackexchange.com/questions/85660", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/30344/" ]
Rolling codes require several parts to function correctly. Here I'll describe a generic implementation that uses all the parts in a specific way. Other systems are variations on this theme, but generally employ many of the same techniques in a similar way. Rather than try to describe the complete implementation and how it works at once, I'll describe a simple system and add complexity as we go until we reach a cryptographically secure system. A non-cryptographic rolling code is simply a transmitter and receiver that both use the same pseudo-random number generator (PRNG). This generator has two pieces of important information: a calculation, and the previously generated number. The calculation is generally a linear feedback equation that can be represented by a single number. By feeding the PRNG the previous number, and keeping the feedback number the same, a specific sequence of numbers is generated. The sequence has no repeats until it's gone through every number it can generate, and then it starts over again with the same sequence. If both remote and receiver know the feedback number and the current number, then when the remote transmits the next number, the receiver can test it against its own generator. If it matches, it activates. If it doesn't, it rolls through the sequence until it finds the number the remote sent. If you press the remote again, then it should match, and it'll activate because the previous transmission already synchronized the number generators. This is why you sometimes have to press the unlock button twice - your receiver or transmitter is out of sync. That's the rolling part of the code. If the PRNG is long enough, it's very hard to find out the feedback number without many numbers of the sequence in a row, which are hard to obtain in normal use. But it's not cryptographically secure. On top of that you add typical encryption. The vehicle manufacturer uses a specific secret key for the transmitter and receiver. Depending on the manufacturer you might find that each model and year has a different code, or that the code is shared among several models of vehicles and over several years. The trade-off is that each one then requires a different remote to be stocked, while the problem with sharing a code over many models is that if it's broken then more cars are vulnerable. Behind the encryption you have the button info, the PRNG-generated number, and a little information about the feedback number. Not enough to recreate the PRNG from scratch, but enough that after a certain number of button presses, and with some inside information about the limited space a feedback number can occupy (again, manufacturer and line specific), the receiver can, after several training transmissions, determine the feedback number and start tracking the PRNG for that remote. The rolling code is meant only to stop replay attacks. The encryption is meant to secure the rolling code to avoid it being broken. With only one or the other, the system would be too easy to break. Since the manufacturer controls both the transmitter and receiver, training doesn't involve public-key cryptography or anything particularly involved. It also prevents aftermarket fobs from working in cars with this type of system. Rolling code isn't impervious, though. The old KeeLoq system was successfully attacked just a few years ago (after a decade of use), so the manufacturer encryption code can be found, and then the rolling codes can be found more easily. Even earlier, it was attacked in ways that allowed people to take vehicles without actually breaking the code. In response the new encryption key is 60 bits. Not as secure as many modern encryption systems, but secure enough that it'll probably last many more years before it's broken.
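The rolling part described above can be sketched with a toy 16-bit LFSR (my own illustration; this is not the actual KeeLoq generator, and the tap and window values are illustrative):

```python
def lfsr_next(state):
    """One step of a 16-bit Galois LFSR; taps 0xB400 give a maximal
    (65535-long) sequence, standing in for the 'feedback number'."""
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= 0xB400
    return state

class Receiver:
    """Accepts a code only if it appears within the next `window`
    outputs of its own generator, then rolls forward to resync."""
    def __init__(self, seed, window=16):
        self.state = seed
        self.window = window

    def accept(self, code):
        s = self.state
        for _ in range(self.window):
            s = lfsr_next(s)
            if s == code:
                self.state = s  # resynchronize to the remote
                return True
        return False  # behind us or too far ahead: reject (replay?)
```

Pressing the remote again after a rejected press works here for the same reason described above: an accepted press already rolled the receiver's generator forward.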
I first encountered KeeLoq when researching the chip in a garage door opener. The [Microchip datasheet](http://ww1.microchip.com/downloads/en/devicedoc/21143b.pdf) does a good job of explaining how it works. In a nutshell: * the receiver maintains a database of all transmitters, keyed on their serial number. * each transmitter is associated with a symmetric encryption key (64 bit), which is on the chip, and also in the receiver's database. * each transmitter is associated with a 16 bit cyclic sequence number, also stored on the chip and in the database. * when the transmitter is activated, it increments its sequence number modulo 65536 (wraparound 16 bits), and sends a packet consisting of a bitmask representing what buttons are pressed, its serial ID, and an encrypted version of the sequence number. * the receiver matches the serial number in the database, pulls out the key and decrypts the sequence number. * the sequence number has to be new; it cannot be a recently used sequence number, which guards against replay attacks. (See Fig. 7.3 in the datasheet). * if the sequence number verifies, then the receiver can activate functionality based on the bit mask of what buttons are pressed. * if the new sequence number is ahead by more than 16 values (the user pushed the buttons many times accidentally while away from the receiver) then an extra hand-shake has to take place to resynchronize, which requires an extra button press. (The user will perform the extra button press, believing there is bad reception). Adding a new transmitter to the receiver database is vaguely analogous, on a high level, to the button-press configuration method for adding clients to a Wi-Fi access point. The receiver is somehow put into a mode whereby it accepts a new transmitter. A new transmitter can be accepted from information passed in ordinary activation messages, if the receiver and transmitter share the same secret manufacturer ID. This is because the 64 bit encryption key is derived from the manufacturer ID and serial information of the receiver. (See Sec. 7.1). There is a more secure alternative to this: the "Secure Learn". This is initiated in a special way on the transmitter (three buttons pressed at once). The transmitter sends a special packet: a 60 bit seed value from which the encryption key is derived, presumably not depending on the manufacturer ID or serial number. When the receiver is not in learn mode, it of course rejects transmissions from transmitters that it does not know about.
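The sequence-number window check described in the nutshell above can be sketched in a few lines. This is a toy model with illustrative names, not the Microchip API; the real KeeLoq cipher is omitted and the counter is assumed to be already decrypted:

```python
# Toy model of the receiver-side window check (illustrative names only; the
# real KeeLoq cipher is omitted -- assume the counter is already decrypted).

WINDOW = 16            # accept counters up to 16 ahead of the stored value
COUNTER_BITS = 16      # the sequence number is a 16-bit cyclic counter

def counter_distance(stored, received):
    """How far ahead (mod 2^16) the received counter is of the stored one."""
    return (received - stored) % (1 << COUNTER_BITS)

def check_transmission(db, serial, counter):
    """Return 'accept', 'reject' (unknown/replay) or 'resync' (too far ahead)."""
    if serial not in db:
        return "reject"                    # transmitter not in the database
    dist = counter_distance(db[serial], counter)
    if dist == 0:
        return "reject"                    # recently used value: replay attack
    if dist <= WINDOW:
        db[serial] = counter               # advance the stored counter
        return "accept"
    return "resync"                        # needs the extra-button handshake

db = {0xA1B2C3: 100}                       # serial -> last accepted counter
print(check_transmission(db, 0xA1B2C3, 105))   # accept
print(check_transmission(db, 0xA1B2C3, 105))   # reject (replay)
print(check_transmission(db, 0xA1B2C3, 200))   # resync
```

Note how the modular distance handles the 16-bit wraparound: a transmitter whose counter rolls over from 65530 to 4 is still within the window.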
351,145
The answer at first sight seems quite obvious and negative. Consider this: there is an electron. To the right of it is a positive charge. It gets accelerated towards the right. Now, instantaneously, I remove the charge on the right and put a charge on the left. The acceleration changes from right to left instantly. Won't it make an inverted-'V'-like velocity-time graph? Now if such a graph exists, then the acceleration at the kink would be undefined, and hence the force applied at that time would be undefined, which it is not at any point of time. PS: The distance between the electron and the positive charge is always the same; though the electron moves towards one charge, the charge is also moved away with some external force.
2017/08/09
[ "https://physics.stackexchange.com/questions/351145", "https://physics.stackexchange.com", "https://physics.stackexchange.com/users/72343/" ]
What you describe is a mathematically contrived situation where the forces on a particle change instantaneously. In such situations, I fall back on really boring and obvious tautologies: "If the forces on a particle change instantaneously, the forces on that particle change instantaneously." You will define the forces on this particle in a piecewise manner. It is up to you and your mathematical games as to whether the force is defined to be positive at that instant, negative at that instant, or undefined. If it's undefined, then you can no longer integrate acceleration to get velocity, and all of your physics breaks. However, if you choose to define the force to be positive or negative at that moment, you can at least integrate acceleration to get velocity. You will get the 'V' curve you refer to. However, in reality, forces do not pop into existence instantaneously as you describe. Particles move continuously. You will never come across a case where you get undefined values like this. At least, you won't come across them until you get into modeling black holes. But they're a special sort of problem.
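The integration argument is easy to check numerically. A crude Euler integration (made-up numbers) of an acceleration that flips sign at t = 1 produces exactly the inverted-'V' velocity curve: velocity stays continuous, with a kink at the flip, while the acceleration jumps:

```python
# Crude Euler integration of a force that flips sign instantaneously at t = 1.
# Numbers are made up; the point is that velocity stays continuous (a kink,
# the inverted 'V') even though acceleration is discontinuous.

def acceleration(t):
    return 1.0 if t < 1.0 else -1.0   # piecewise force divided by mass

dt = 1e-3
t, v = 0.0, 0.0
velocities = []
while t < 2.0:
    v += acceleration(t) * dt
    t += dt
    velocities.append(v)

# Consecutive velocity samples never differ by more than |a| * dt,
# so velocity has no jump -- only its slope does.
max_jump = max(abs(b - a) for a, b in zip(velocities, velocities[1:]))
print("peak velocity:", max(velocities))   # the tip of the 'V', close to 1.0
print("largest step in v:", max_jump)      # bounded by |a| * dt
```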
Note that the velocity-time graph in this case has a sharp kink at that point, implying that the function is not differentiable there. In any realistic situation, the functions will be differentiable at all points and the graphs will be smooth. What this means physically is that velocity cannot change values abruptly at an instant. Changes are always continuous.
6,781,886
Can anybody tell me why, in C#, char takes two bytes although it can be stored in one byte? Don't you think that's a waste of memory? If not, then how is the extra byte used? In simple words: please make clear to me what the use of the extra 8 bits is.
2011/07/21
[ "https://Stackoverflow.com/questions/6781886", "https://Stackoverflow.com", "https://Stackoverflow.com/users/856755/" ]
> > The char keyword is used to declare a Unicode character in the range indicated in the following table. Unicode characters are 16-bit characters used to represent most of the known written languages throughout the world. > > > <http://msdn.microsoft.com/en-us/library/x9h8tsay%28v=vs.80%29.aspx>
Unicode characters. True, we have enough room in 8 bits for the English alphabet, but when it comes to Chinese and such, it takes a lot more characters.
6,781,886
Can anybody tell me why, in C#, char takes two bytes although it can be stored in one byte? Don't you think that's a waste of memory? If not, then how is the extra byte used? In simple words: please make clear to me what the use of the extra 8 bits is.
2011/07/21
[ "https://Stackoverflow.com/questions/6781886", "https://Stackoverflow.com", "https://Stackoverflow.com/users/856755/" ]
Unicode characters. True, we have enough room in 8 bits for the English alphabet, but when it comes to Chinese and such, it takes a lot more characters.
Because UTF-8 was probably still too young for Microsoft to consider using it.
6,781,886
Can anybody tell me why, in C#, char takes two bytes although it can be stored in one byte? Don't you think that's a waste of memory? If not, then how is the extra byte used? In simple words: please make clear to me what the use of the extra 8 bits is.
2011/07/21
[ "https://Stackoverflow.com/questions/6781886", "https://Stackoverflow.com", "https://Stackoverflow.com/users/856755/" ]
> > although it can be stored in one byte > > > What makes you think that? It only takes one byte to represent every character in the English language, but other languages use other characters. Consider the number of different alphabets (Latin, Chinese, Arabic, Cyrillic...), and the number of symbols in each of these alphabets (not only letters and digits, but also punctuation marks and other special symbols)... there are tens of thousands of different symbols in use in the world! So one byte is never going to be enough to represent them all, which is why the [Unicode](http://en.wikipedia.org/wiki/Unicode) standard was created. Unicode has several representations (UTF-8, UTF-16, UTF-32...). .NET strings use UTF-16, which takes two bytes per code unit. Of course, two bytes is still not enough to represent all the different symbols in the world; surrogate pairs are used to represent characters above U+FFFF.
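The byte counts above are easy to verify. The sketch below uses Python rather than C#, but UTF-16 is the same encoding .NET strings use internally:

```python
# UTF-16 byte counts: BMP characters take one 16-bit code unit (2 bytes);
# characters above U+FFFF take a surrogate pair (4 bytes).

for ch in ["A", "é", "中", "\U00010348"]:   # Latin, accented, CJK, Gothic hwair
    data = ch.encode("utf-16-le")           # little-endian, no BOM
    print(f"U+{ord(ch):04X} -> {len(data)} bytes")

# 'A', 'é' and '中' each take 2 bytes; U+10348 takes 4 (a surrogate pair).
```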
Unicode characters. True, we have enough room in 8 bits for the English alphabet, but when it comes to Chinese and such, it takes a lot more characters.
6,781,886
Can anybody tell me why, in C#, char takes two bytes although it can be stored in one byte? Don't you think that's a waste of memory? If not, then how is the extra byte used? In simple words: please make clear to me what the use of the extra 8 bits is.
2011/07/21
[ "https://Stackoverflow.com/questions/6781886", "https://Stackoverflow.com", "https://Stackoverflow.com/users/856755/" ]
> > The char keyword is used to declare a Unicode character in the range indicated in the following table. Unicode characters are 16-bit characters used to represent most of the known written languages throughout the world. > > > <http://msdn.microsoft.com/en-us/library/x9h8tsay%28v=vs.80%29.aspx>
In C#, chars are 16-bit Unicode characters by default. Unicode supports a much larger character set than can be supported by ASCII. If memory really is a concern, here is a good discussion on SO regarding how you might work with 8-bit chars: [Is there a string type with 8 BIT chars?](https://stackoverflow.com/questions/4916838/is-there-a-string-type-with-8-bit-chars) References: On C#'s char datatype: <http://msdn.microsoft.com/en-us/library/x9h8tsay(v=vs.80).aspx> On Unicode: <http://en.wikipedia.org/wiki/Unicode>
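As a rough illustration of the memory trade-off (in Python rather than C#, and assuming the text is known to be plain ASCII): text that fits in 8 bits can be kept as raw bytes, at one byte per character, and widened back to full characters only when needed:

```python
# If the text is known to fit in ASCII, it can be held as raw bytes
# (1 byte per character) and widened back to characters only when needed.

text = "hello world"
packed = text.encode("ascii")            # raises UnicodeEncodeError if not ASCII
wide = text.encode("utf-16-le")          # what a 16-bit char array would cost

print(len(packed))                       # 11 bytes
print(len(wide))                         # 22 bytes
print(packed.decode("ascii") == text)    # True: the round trip is lossless
```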
6,781,886
Can anybody tell me why, in C#, char takes two bytes although it can be stored in one byte? Don't you think that's a waste of memory? If not, then how is the extra byte used? In simple words: please make clear to me what the use of the extra 8 bits is.
2011/07/21
[ "https://Stackoverflow.com/questions/6781886", "https://Stackoverflow.com", "https://Stackoverflow.com/users/856755/" ]
> > The char keyword is used to declare a Unicode character in the range indicated in the following table. Unicode characters are 16-bit characters used to represent most of the known written languages throughout the world. > > > <http://msdn.microsoft.com/en-us/library/x9h8tsay%28v=vs.80%29.aspx>
Because UTF-8 was probably still too young for Microsoft to consider using it.
6,781,886
Can anybody tell me why, in C#, char takes two bytes although it can be stored in one byte? Don't you think that's a waste of memory? If not, then how is the extra byte used? In simple words: please make clear to me what the use of the extra 8 bits is.
2011/07/21
[ "https://Stackoverflow.com/questions/6781886", "https://Stackoverflow.com", "https://Stackoverflow.com/users/856755/" ]
> > although it can be stored in one byte > > > What makes you think that? It only takes one byte to represent every character in the English language, but other languages use other characters. Consider the number of different alphabets (Latin, Chinese, Arabic, Cyrillic...), and the number of symbols in each of these alphabets (not only letters and digits, but also punctuation marks and other special symbols)... there are tens of thousands of different symbols in use in the world! So one byte is never going to be enough to represent them all, which is why the [Unicode](http://en.wikipedia.org/wiki/Unicode) standard was created. Unicode has several representations (UTF-8, UTF-16, UTF-32...). .NET strings use UTF-16, which takes two bytes per code unit. Of course, two bytes is still not enough to represent all the different symbols in the world; surrogate pairs are used to represent characters above U+FFFF.
> > The char keyword is used to declare a Unicode character in the range indicated in the following table. Unicode characters are 16-bit characters used to represent most of the known written languages throughout the world. > > > <http://msdn.microsoft.com/en-us/library/x9h8tsay%28v=vs.80%29.aspx>
6,781,886
Can anybody tell me why, in C#, char takes two bytes although it can be stored in one byte? Don't you think that's a waste of memory? If not, then how is the extra byte used? In simple words: please make clear to me what the use of the extra 8 bits is.
2011/07/21
[ "https://Stackoverflow.com/questions/6781886", "https://Stackoverflow.com", "https://Stackoverflow.com/users/856755/" ]
In C#, chars are 16-bit Unicode characters by default. Unicode supports a much larger character set than can be supported by ASCII. If memory really is a concern, here is a good discussion on SO regarding how you might work with 8-bit chars: [Is there a string type with 8 BIT chars?](https://stackoverflow.com/questions/4916838/is-there-a-string-type-with-8-bit-chars) References: On C#'s char datatype: <http://msdn.microsoft.com/en-us/library/x9h8tsay(v=vs.80).aspx> On Unicode: <http://en.wikipedia.org/wiki/Unicode>
Because UTF-8 was probably still too young for Microsoft to consider using it.
6,781,886
Can anybody tell me why, in C#, char takes two bytes although it can be stored in one byte? Don't you think that's a waste of memory? If not, then how is the extra byte used? In simple words: please make clear to me what the use of the extra 8 bits is.
2011/07/21
[ "https://Stackoverflow.com/questions/6781886", "https://Stackoverflow.com", "https://Stackoverflow.com/users/856755/" ]
> > although it can be stored in one byte > > > What makes you think that? It only takes one byte to represent every character in the English language, but other languages use other characters. Consider the number of different alphabets (Latin, Chinese, Arabic, Cyrillic...), and the number of symbols in each of these alphabets (not only letters and digits, but also punctuation marks and other special symbols)... there are tens of thousands of different symbols in use in the world! So one byte is never going to be enough to represent them all, which is why the [Unicode](http://en.wikipedia.org/wiki/Unicode) standard was created. Unicode has several representations (UTF-8, UTF-16, UTF-32...). .NET strings use UTF-16, which takes two bytes per code unit. Of course, two bytes is still not enough to represent all the different symbols in the world; surrogate pairs are used to represent characters above U+FFFF.
In C#, chars are 16-bit Unicode characters by default. Unicode supports a much larger character set than can be supported by ASCII. If memory really is a concern, here is a good discussion on SO regarding how you might work with 8-bit chars: [Is there a string type with 8 BIT chars?](https://stackoverflow.com/questions/4916838/is-there-a-string-type-with-8-bit-chars) References: On C#'s char datatype: <http://msdn.microsoft.com/en-us/library/x9h8tsay(v=vs.80).aspx> On Unicode: <http://en.wikipedia.org/wiki/Unicode>
6,781,886
Can anybody tell me why, in C#, char takes two bytes although it can be stored in one byte? Don't you think that's a waste of memory? If not, then how is the extra byte used? In simple words: please make clear to me what the use of the extra 8 bits is.
2011/07/21
[ "https://Stackoverflow.com/questions/6781886", "https://Stackoverflow.com", "https://Stackoverflow.com/users/856755/" ]
> > although it can be stored in one byte > > > What makes you think that? It only takes one byte to represent every character in the English language, but other languages use other characters. Consider the number of different alphabets (Latin, Chinese, Arabic, Cyrillic...), and the number of symbols in each of these alphabets (not only letters and digits, but also punctuation marks and other special symbols)... there are tens of thousands of different symbols in use in the world! So one byte is never going to be enough to represent them all, which is why the [Unicode](http://en.wikipedia.org/wiki/Unicode) standard was created. Unicode has several representations (UTF-8, UTF-16, UTF-32...). .NET strings use UTF-16, which takes two bytes per code unit. Of course, two bytes is still not enough to represent all the different symbols in the world; surrogate pairs are used to represent characters above U+FFFF.
Because UTF-8 was probably still too young for Microsoft to consider using it.
2,335
I've been having problems compiling Wolf's miners. Are there any pre-built linux miners, and if not, what are the packages that I need to fully build his CPU and GPU miners?
2016/10/15
[ "https://monero.stackexchange.com/questions/2335", "https://monero.stackexchange.com", "https://monero.stackexchange.com/users/727/" ]
No, and in my experience you shouldn't rely on anyone trying to give you compiled resources. For the CPU miner you would need at least "automake libcurl4-openssl-dev make", those are the names for Ubuntu. For the GPU miner if you are using R9 cards you can use these instructions. <https://www.reddit.com/r/Monero/comments/50dzu7/questionany_guide_on_linux_gpu_ati_mining/d74eyip> Those instructions are for Ubuntu 14.04 as the newer versions of Ubuntu only support the AMD pro drivers.
For **CPU** Miner on linux, you can check Yam releases here: <https://mega.nz/#F!UlkU0RyR!E8n4CFkqVu0WoOnsJnQkSg> And for **GPU** releases, you could try Claymore v9.1, available here: <https://drive.google.com/drive/folders/0B69wv2iqszefdkVDNkxla3BCZHc> Note: Both are **closed source**.
2,335
I've been having problems compiling Wolf's miners. Are there any pre-built linux miners, and if not, what are the packages that I need to fully build his CPU and GPU miners?
2016/10/15
[ "https://monero.stackexchange.com/questions/2335", "https://monero.stackexchange.com", "https://monero.stackexchange.com/users/727/" ]
No, and in my experience you shouldn't rely on anyone trying to give you compiled resources. For the CPU miner you would need at least "automake libcurl4-openssl-dev make", those are the names for Ubuntu. For the GPU miner if you are using R9 cards you can use these instructions. <https://www.reddit.com/r/Monero/comments/50dzu7/questionany_guide_on_linux_gpu_ati_mining/d74eyip> Those instructions are for Ubuntu 14.04 as the newer versions of Ubuntu only support the AMD pro drivers.
There is no pre-built linux pool miner provided by the core team. You can only solo mine with the official binaries. For Wolf's CPU Miner, you need "libcurl4-openssl-dev make automake gcc". For his GPU Miner, you can refer to [this SE question](https://monero.stackexchange.com/q/1626/110)
2,335
I've been having problems compiling Wolf's miners. Are there any pre-built linux miners, and if not, what are the packages that I need to fully build his CPU and GPU miners?
2016/10/15
[ "https://monero.stackexchange.com/questions/2335", "https://monero.stackexchange.com", "https://monero.stackexchange.com/users/727/" ]
No, and in my experience you shouldn't rely on anyone trying to give you compiled resources. For the CPU miner you would need at least "automake libcurl4-openssl-dev make", those are the names for Ubuntu. For the GPU miner if you are using R9 cards you can use these instructions. <https://www.reddit.com/r/Monero/comments/50dzu7/questionany_guide_on_linux_gpu_ati_mining/d74eyip> Those instructions are for Ubuntu 14.04 as the newer versions of Ubuntu only support the AMD pro drivers.
Most pools maintain a "Getting started" page which includes a list of available miners, as well as startup instructions. Links to binaries (or to announcement pages which hold those, to reasonably ensure the link is from the original author) are included. See for instance <http://monero.crypto-pool.fr/#getting_started>
2,335
I've been having problems compiling Wolf's miners. Are there any pre-built linux miners, and if not, what are the packages that I need to fully build his CPU and GPU miners?
2016/10/15
[ "https://monero.stackexchange.com/questions/2335", "https://monero.stackexchange.com", "https://monero.stackexchange.com/users/727/" ]
For **CPU** Miner on linux, you can check Yam releases here: <https://mega.nz/#F!UlkU0RyR!E8n4CFkqVu0WoOnsJnQkSg> And for **GPU** releases, you could try Claymore v9.1, available here: <https://drive.google.com/drive/folders/0B69wv2iqszefdkVDNkxla3BCZHc> Note: Both are **closed source**.
There is no pre-built linux pool miner provided by the core team. You can only solo mine with the official binaries. For Wolf's CPU Miner, you need "libcurl4-openssl-dev make automake gcc". For his GPU Miner, you can refer to [this SE question](https://monero.stackexchange.com/q/1626/110)
2,335
I've been having problems compiling Wolf's miners. Are there any pre-built linux miners, and if not, what are the packages that I need to fully build his CPU and GPU miners?
2016/10/15
[ "https://monero.stackexchange.com/questions/2335", "https://monero.stackexchange.com", "https://monero.stackexchange.com/users/727/" ]
For **CPU** Miner on linux, you can check Yam releases here: <https://mega.nz/#F!UlkU0RyR!E8n4CFkqVu0WoOnsJnQkSg> And for **GPU** releases, you could try Claymore v9.1, available here: <https://drive.google.com/drive/folders/0B69wv2iqszefdkVDNkxla3BCZHc> Note: Both are **closed source**.
Most pools maintain a "Getting started" page which includes a list of available miners, as well as startup instructions. Links to binaries (or to announcement pages which hold those, to reasonably ensure the link is from the original author) are included. See for instance <http://monero.crypto-pool.fr/#getting_started>
2,335
I've been having problems compiling Wolf's miners. Are there any pre-built linux miners, and if not, what are the packages that I need to fully build his CPU and GPU miners?
2016/10/15
[ "https://monero.stackexchange.com/questions/2335", "https://monero.stackexchange.com", "https://monero.stackexchange.com/users/727/" ]
There is no pre-built linux pool miner provided by the core team. You can only solo mine with the official binaries. For Wolf's CPU Miner, you need "libcurl4-openssl-dev make automake gcc". For his GPU Miner, you can refer to [this SE question](https://monero.stackexchange.com/q/1626/110)
Most pools maintain a "Getting started" page which includes a list of available miners, as well as startup instructions. Links to binaries (or to announcement pages which hold those, to reasonably ensure the link is from the original author) are included. See for instance <http://monero.crypto-pool.fr/#getting_started>
102,846
OK, this might sound a bit confusing and complicated, so bear with me. We've written a framework that allows us to define friendly URLs. If you surf to any arbitrary URL, IIS tries to display a 404 error (or, in some cases, 403;14 or 405). However, IIS is set up so that anything directed to those specific errors is sent to an .aspx file. This allows us to implement an HttpHandler to handle the request and do stuff, which involves finding an associated template and then executing whatever's associated with it. Now, this all works in IIS 5 and 6 and, to an extent, on IIS7 - but for one catch, which happens when you post a form. See, when you post a form to a non-existent URL, IIS says "ah, but that url doesn't exist" and throws a 405 "method not allowed" error. Since we're telling IIS to redirect those errors to our .aspx page and therefore handling it with our HttpHandler, this normally isn't a problem. But as of IIS7, all POST information has gone missing after being redirected to the 405. And so you can no longer do the most trivial of things involving forms. To solve this we've tried using a HttpModule, which preserves POST data but appears to not have an initialized Session at the right time (when it's needed). We also tried using a HttpModule for all requests, not just the missing requests that hit 404/403;14/405, but that means stuff like images, css, js etc are being handled by .NET code, which is terribly inefficient. Which brings me to the actual question: has anyone ever encountered this, and does anyone have any advice or know what to do to get things working again? So far someone has suggested using Microsoft's own [URL Rewriting module](http://learn.iis.net/page.aspx/460/using-url-rewrite-module/). Would this help solve our problem? Thanks.
2008/09/19
[ "https://Stackoverflow.com/questions/102846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16308/" ]
Microsoft released a hotfix for this : [<http://support.microsoft.com/default.aspx/kb/956578>](http://support.microsoft.com/default.aspx/kb/956578)
Just a guess: the handler specified in IIS7's %windir%\system32\inetsrv\config\applicationhost.config which is handling your request is not allowing the POST verb to get through at all, and it is evaluating that rule before determining whether the URL doesn't exist.
102,846
OK, this might sound a bit confusing and complicated, so bear with me. We've written a framework that allows us to define friendly URLs. If you surf to any arbitrary URL, IIS tries to display a 404 error (or, in some cases, 403;14 or 405). However, IIS is set up so that anything directed to those specific errors is sent to an .aspx file. This allows us to implement an HttpHandler to handle the request and do stuff, which involves finding an associated template and then executing whatever's associated with it. Now, this all works in IIS 5 and 6 and, to an extent, on IIS7 - but for one catch, which happens when you post a form. See, when you post a form to a non-existent URL, IIS says "ah, but that url doesn't exist" and throws a 405 "method not allowed" error. Since we're telling IIS to redirect those errors to our .aspx page and therefore handling it with our HttpHandler, this normally isn't a problem. But as of IIS7, all POST information has gone missing after being redirected to the 405. And so you can no longer do the most trivial of things involving forms. To solve this we've tried using a HttpModule, which preserves POST data but appears to not have an initialized Session at the right time (when it's needed). We also tried using a HttpModule for all requests, not just the missing requests that hit 404/403;14/405, but that means stuff like images, css, js etc are being handled by .NET code, which is terribly inefficient. Which brings me to the actual question: has anyone ever encountered this, and does anyone have any advice or know what to do to get things working again? So far someone has suggested using Microsoft's own [URL Rewriting module](http://learn.iis.net/page.aspx/460/using-url-rewrite-module/). Would this help solve our problem? Thanks.
2008/09/19
[ "https://Stackoverflow.com/questions/102846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16308/" ]
The problem in IIS 7 of post variables not being passed through to custom error handlers is fixed in service pack 2 for Vista. Haven't tried it on Windows Server but I'm sure it will be fixed there too.
Just a guess: the handler specified in IIS7's %windir%\system32\inetsrv\config\applicationhost.config which is handling your request is not allowing the POST verb to get through at all, and it is evaluating that rule before determining whether the URL doesn't exist.
102,846
OK, this might sound a bit confusing and complicated, so bear with me. We've written a framework that allows us to define friendly URLs. If you surf to any arbitrary URL, IIS tries to display a 404 error (or, in some cases, 403;14 or 405). However, IIS is set up so that anything directed to those specific errors is sent to an .aspx file. This allows us to implement an HttpHandler to handle the request and do stuff, which involves finding an associated template and then executing whatever's associated with it. Now, this all works in IIS 5 and 6 and, to an extent, on IIS7 - but for one catch, which happens when you post a form. See, when you post a form to a non-existent URL, IIS says "ah, but that url doesn't exist" and throws a 405 "method not allowed" error. Since we're telling IIS to redirect those errors to our .aspx page and therefore handling it with our HttpHandler, this normally isn't a problem. But as of IIS7, all POST information has gone missing after being redirected to the 405. And so you can no longer do the most trivial of things involving forms. To solve this we've tried using a HttpModule, which preserves POST data but appears to not have an initialized Session at the right time (when it's needed). We also tried using a HttpModule for all requests, not just the missing requests that hit 404/403;14/405, but that means stuff like images, css, js etc are being handled by .NET code, which is terribly inefficient. Which brings me to the actual question: has anyone ever encountered this, and does anyone have any advice or know what to do to get things working again? So far someone has suggested using Microsoft's own [URL Rewriting module](http://learn.iis.net/page.aspx/460/using-url-rewrite-module/). Would this help solve our problem? Thanks.
2008/09/19
[ "https://Stackoverflow.com/questions/102846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16308/" ]
Microsoft released a hotfix for this : [<http://support.microsoft.com/default.aspx/kb/956578>](http://support.microsoft.com/default.aspx/kb/956578)
Yes, I would definitely recommend URL rewriting (using Microsoft's IIS7 one or one of the many alternatives). This is specifically designed for providing friendly URLs, whereas error documents are a last-ditch backstop for failures and tend to munge the incoming data, so it may not be what you expect.
102,846
OK, this might sound a bit confusing and complicated, so bear with me. We've written a framework that allows us to define friendly URLs. If you surf to any arbitrary URL, IIS tries to display a 404 error (or, in some cases, 403;14 or 405). However, IIS is set up so that anything directed to those specific errors is sent to an .aspx file. This allows us to implement an HttpHandler to handle the request and do stuff, which involves finding an associated template and then executing whatever's associated with it. Now, this all works in IIS 5 and 6 and, to an extent, on IIS7 - but for one catch, which happens when you post a form. See, when you post a form to a non-existent URL, IIS says "ah, but that url doesn't exist" and throws a 405 "method not allowed" error. Since we're telling IIS to redirect those errors to our .aspx page and therefore handling it with our HttpHandler, this normally isn't a problem. But as of IIS7, all POST information has gone missing after being redirected to the 405. And so you can no longer do the most trivial of things involving forms. To solve this we've tried using a HttpModule, which preserves POST data but appears to not have an initialized Session at the right time (when it's needed). We also tried using a HttpModule for all requests, not just the missing requests that hit 404/403;14/405, but that means stuff like images, css, js etc are being handled by .NET code, which is terribly inefficient. Which brings me to the actual question: has anyone ever encountered this, and does anyone have any advice or know what to do to get things working again? So far someone has suggested using Microsoft's own [URL Rewriting module](http://learn.iis.net/page.aspx/460/using-url-rewrite-module/). Would this help solve our problem? Thanks.
2008/09/19
[ "https://Stackoverflow.com/questions/102846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16308/" ]
The problem in IIS 7 of post variables not being passed through to custom error handlers is fixed in service pack 2 for Vista. Haven't tried it on Windows Server but I'm sure it will be fixed there too.
Yes, I would definitely recommend URL rewriting (using Microsoft's IIS7 one or one of the many alternatives). This is specifically designed for providing friendly URLs, whereas error documents are a last-ditch backstop for failures and tend to munge the incoming data, so it may not be what you expect.
102,846
OK, this might sound a bit confusing and complicated, so bear with me. We've written a framework that allows us to define friendly URLs. If you surf to any arbitrary URL, IIS tries to display a 404 error (or, in some cases, 403;14 or 405). However, IIS is set up so that anything directed to those specific errors is sent to an .aspx file. This allows us to implement an HttpHandler to handle the request and do stuff, which involves finding an associated template and then executing whatever's associated with it. Now, this all works in IIS 5 and 6 and, to an extent, on IIS7 - but for one catch, which happens when you post a form. See, when you post a form to a non-existent URL, IIS says "ah, but that url doesn't exist" and throws a 405 "method not allowed" error. Since we're telling IIS to redirect those errors to our .aspx page and therefore handling it with our HttpHandler, this normally isn't a problem. But as of IIS7, all POST information has gone missing after being redirected to the 405. And so you can no longer do the most trivial of things involving forms. To solve this we've tried using a HttpModule, which preserves POST data but appears to not have an initialized Session at the right time (when it's needed). We also tried using a HttpModule for all requests, not just the missing requests that hit 404/403;14/405, but that means stuff like images, css, js etc are being handled by .NET code, which is terribly inefficient. Which brings me to the actual question: has anyone ever encountered this, and does anyone have any advice or know what to do to get things working again? So far someone has suggested using Microsoft's own [URL Rewriting module](http://learn.iis.net/page.aspx/460/using-url-rewrite-module/). Would this help solve our problem? Thanks.
2008/09/19
[ "https://Stackoverflow.com/questions/102846", "https://Stackoverflow.com", "https://Stackoverflow.com/users/16308/" ]
Microsoft released a hotfix for this : [<http://support.microsoft.com/default.aspx/kb/956578>](http://support.microsoft.com/default.aspx/kb/956578)
The problem in IIS 7 of post variables not being passed through to custom error handlers is fixed in service pack 2 for Vista. Haven't tried it on Windows Server but I'm sure it will be fixed there too.
9,133,982
I recently deployed my app engine app and when I went to check it, the css was not showing in Chrome or in my Iphone Safari browser. I simply redeployed (no code changes at all) and now the site is running fine. What is going on here? Is this a bug, or is something wrong with my code but only sometimes?
2012/02/03
[ "https://Stackoverflow.com/questions/9133982", "https://Stackoverflow.com", "https://Stackoverflow.com/users/190629/" ]
I am experiencing the same issue: sometimes the css loads, sometimes not. Interestingly enough, if I use Chrome and select "Request Desktop Site" from the menu, my css loads fine. I would assume this is something to do with the headers sent in the request, specifying the browser to be mobile. I'll investigate further and update my answer when I have a solid solution; just a bit of food for thought for now.
Sounds like a deployment problem. Speaking in generalities, code doesn't "sometimes" work unless it's written that way... in which case it's working 100% of the time :)
9,133,982
I recently deployed my app engine app and when I went to check it, the css was not showing in Chrome or in my iPhone Safari browser. I simply redeployed (no code changes at all) and now the site is running fine. What is going on here? Is this a bug, or is something wrong with my code but only sometimes?
2012/02/03
[ "https://Stackoverflow.com/questions/9133982", "https://Stackoverflow.com", "https://Stackoverflow.com/users/190629/" ]
I am experiencing the same issue: sometimes the css loads, sometimes not. Interestingly enough, if I use Chrome and select "Request Desktop Site" from the menu, my css loads fine. I would assume this is something to do with the headers sent in the request, specifying the browser to be mobile. I'll investigate further and update my answer when I have a solid solution; just a bit of food for thought for now.
This has happened to me when the css is cached and out of date. Your browser cannot detect that the css file has changed, and doesn't bother reloading it. Could it be that the CSS showed up the second time, not because you redeployed, but because your browsers were refreshed?
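One common way to rule the stale-cache explanation in or out is content-hash cache busting: make the stylesheet URL change whenever its bytes change, so the browser can never serve an outdated copy after a redeploy. A minimal sketch of that idea (the filename is hypothetical):

```python
import hashlib

def busted_url(css_path, css_bytes):
    """Append a short content hash so the URL changes whenever the file does."""
    digest = hashlib.sha256(css_bytes).hexdigest()[:8]
    return f"{css_path}?v={digest}"
```

Same bytes always produce the same URL, so normal caching still works; changed CSS produces a new URL and forces a fresh fetch.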
34,674
Let's say I have a standard scenario of a commerce site that has categories on the left and items on the right. What I would like to do is that when a user clicks on a category it will pass its ID to js, js will get all items from the API by using that id and load them very prettily into my content. It looks all cool and pro, but what is the situation from an SEO point of view? AFAIK the google bot enters my site, sees I have a span with categories and that's all?
2012/09/18
[ "https://webmasters.stackexchange.com/questions/34674", "https://webmasters.stackexchange.com", "https://webmasters.stackexchange.com/users/11428/" ]
What URL can users bookmark to get back to that *item* and tell their friends? What URL can search engines index to show that item in the SERPs? I would have said that an e-commerce site should be implemented initially so that it *works* without any JavaScript at all. You click a category (an HTML anchor) that makes another request and the server returns a page with the items in that category. Your site is SEO'd and works for everyone. Your site *is* "pro". You then want to make it more whizzy and implement AJAX as a *progressive enhancement*. If JavaScript is available and AJAX ready then assign behaviours that override the default action of the anchors that submit requests to the server. The requests are now submitted by JavaScript, but the underlying search engine friendly HTML is still the same. Your site *looks* "pro". When developing the site in the beginning keep in mind that you'll want to implement AJAX on top later.
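The progressive-enhancement approach above - one URL that works with and without JavaScript - is often implemented by having the same endpoint return a full, crawlable HTML page for a normal request and just the item fragment for an AJAX request, keyed off the `X-Requested-With` header that most JS libraries send. A minimal server-side sketch of that branch (the handler name and markup are hypothetical):

```python
def render_category(category_id, headers):
    """Serve one URL two ways: a full page for crawlers and no-JS users,
    a bare fragment for AJAX calls that inject it into the existing page."""
    items_html = f"<ul><li>items for category {category_id}</li></ul>"
    if headers.get("X-Requested-With") == "XMLHttpRequest":
        return items_html  # AJAX: fragment only
    return f"<html><body>{items_html}</body></html>"  # full, indexable page
```

Because both branches share one URL, the bookmarkable/indexable address and the whizzy AJAX view never diverge.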
You can use lynx or the tool in google webmaster tools to show you what google is seeing. (The content you load via AJAX will not be seen by crawlers). PS, you should also keep in mind that not every user has javascript enabled.
19,523
Before the Buddha introduced nirvana and enlightenment, was there any way to escape from the cycle of birth and death? What is written in Buddhist texts?
2017/03/07
[ "https://buddhism.stackexchange.com/questions/19523", "https://buddhism.stackexchange.com", "https://buddhism.stackexchange.com/users/-1/" ]
Some Buddhist texts say that there were previous Buddhas (and that there will be others in the distant future): that they too taught Buddhism; and that the current Buddha rediscovered Buddhism. For example, in the commentary to [Dhammapada verse 183](http://www.tipitaka.net/tipitaka/dhp/verseload.php?verse=183): > > On one occasion, Thera Ananda asked the Buddha whether the Fundamental Instructions to bhikkhus given by the preceding Buddhas were the same as those of the Buddha himself. To him the Buddha replied that the instructions given by all the Buddhas are as given in the following verses: > > > Then the Buddha spoke in verse as follows: > > > > > > > 183. Not to do evil, to cultivate merit, to purify one's mind - this is the Teaching of the Buddhas. > > 184. The best moral practice is patience and forbearance; "Nibbana is Supreme", said the Buddhas. A bhikkhu does not harm others; one who harms others is not a bhikkhu. > > 185. Not to revile, not to do any harm, to practise restraint according to the Fundamental Instructions for the bhikkhus, to be moderate in taking food, to dwell in a secluded place, to devote oneself to higher concentration - this is the Teaching of the Buddhas. > > > > > > > > > There are even some suttas, such as [SN 12.65](http://www.themindingcentre.org/dharmafarer/wp-content/uploads/2009/12/14.2-Nagara-S-s12.65-piya.pdf) which implies that Nirvana and the path the Nirvana was "inhabited in ancient times" and rediscovered by the Buddha. > > In the Nagara Sutta, the delightful ancient fortress city [§20.2] clearly refers to nirvana, and the city is > populated by saints (called “seers,” *rsi*, in the Sanskrit Nagara Sutra, §5.28). Both the Pali and Sanskrit > versions of the Sutta speak of ancient people using the path. 
> > > Texts also say that a few people (called "[private Buddhas](https://en.wikipedia.org/wiki/Pratyekabuddha)") discover Nirvana for themselves, but (unlike a true Buddha) aren't able or aren't willing to teach other people.
Because the Buddha discovered reality as it actually is, rather than coming up with a personalised conception of what it is, the Dhamma is open and discoverable to all, whether there is a Buddha to light the path or not. Specifically within Buddhism there is a concept of [Pratyekabuddha-hood](https://en.m.wikipedia.org/wiki/Pratyekabuddha), which are beings who discover the Dhamma without a teacher - without a living Buddha or his message being available. They do not teach though. The Buddha was not the inventor of the Dhamma within Buddhism, he was 'only' an expounder of the Truths and the Eightfold Path. Before him there were past Buddhas within the Tripitaka, and a future one too (Maitreya). Effectively the path is always discoverable and Nibbana achievable, to any individual willing to search.
19,523
Before the Buddha introduced nirvana and enlightenment, was there any way to escape from the cycle of birth and death? What is written in Buddhist texts?
2017/03/07
[ "https://buddhism.stackexchange.com/questions/19523", "https://buddhism.stackexchange.com", "https://buddhism.stackexchange.com/users/-1/" ]
Some Buddhist texts say that there were previous Buddhas (and that there will be others in the distant future): that they too taught Buddhism; and that the current Buddha rediscovered Buddhism. For example, in the commentary to [Dhammapada verse 183](http://www.tipitaka.net/tipitaka/dhp/verseload.php?verse=183): > > On one occasion, Thera Ananda asked the Buddha whether the Fundamental Instructions to bhikkhus given by the preceding Buddhas were the same as those of the Buddha himself. To him the Buddha replied that the instructions given by all the Buddhas are as given in the following verses: > > > Then the Buddha spoke in verse as follows: > > > > > > > 183. Not to do evil, to cultivate merit, to purify one's mind - this is the Teaching of the Buddhas. > > 184. The best moral practice is patience and forbearance; "Nibbana is Supreme", said the Buddhas. A bhikkhu does not harm others; one who harms others is not a bhikkhu. > > 185. Not to revile, not to do any harm, to practise restraint according to the Fundamental Instructions for the bhikkhus, to be moderate in taking food, to dwell in a secluded place, to devote oneself to higher concentration - this is the Teaching of the Buddhas. > > > > > > > > > There are even some suttas, such as [SN 12.65](http://www.themindingcentre.org/dharmafarer/wp-content/uploads/2009/12/14.2-Nagara-S-s12.65-piya.pdf) which implies that Nirvana and the path the Nirvana was "inhabited in ancient times" and rediscovered by the Buddha. > > In the Nagara Sutta, the delightful ancient fortress city [§20.2] clearly refers to nirvana, and the city is > populated by saints (called “seers,” *rsi*, in the Sanskrit Nagara Sutra, §5.28). Both the Pali and Sanskrit > versions of the Sutta speak of ancient people using the path. 
> > > Texts also say that a few people (called "[private Buddhas](https://en.wikipedia.org/wiki/Pratyekabuddha)") discover Nirvana for themselves, but (unlike a true Buddha) aren't able or aren't willing to teach other people.
According to Buddhism, there was no way to escape from *samsara* (the cycle of ignorance, craving & ego-becoming) before the Buddha. What is called 'Hinduism' arose after Buddhism. Before Buddhism, the main religion was called 'Brahmanism', which focused on 'Brahma' (god) & the Brahmin caste. There appears to be no evidence Brahmanism found a way to be free from 'samsara'. If Brahmanism actually knew of a way to be free from samsara, the word 'Buddha' is a lie & false, since the Buddha declared his discovery was something completely brand new, i.e., "never heard before". > > *This noble truth of the way leading to the cessation of suffering has been developed’: thus, bhikkhus, in regard to things unheard before, > there arose in me vision, knowledge, wisdom, true knowledge, and > light.* > > > [*SN 56.11 The First Sermon*](https://suttacentral.net/en/sn56.11) > > >
19,523
Before the Buddha introduced nirvana and enlightenment, was there any way to escape from the cycle of birth and death? What is written in Buddhist texts?
2017/03/07
[ "https://buddhism.stackexchange.com/questions/19523", "https://buddhism.stackexchange.com", "https://buddhism.stackexchange.com/users/-1/" ]
Some Buddhist texts say that there were previous Buddhas (and that there will be others in the distant future): that they too taught Buddhism; and that the current Buddha rediscovered Buddhism. For example, in the commentary to [Dhammapada verse 183](http://www.tipitaka.net/tipitaka/dhp/verseload.php?verse=183): > > On one occasion, Thera Ananda asked the Buddha whether the Fundamental Instructions to bhikkhus given by the preceding Buddhas were the same as those of the Buddha himself. To him the Buddha replied that the instructions given by all the Buddhas are as given in the following verses: > > > Then the Buddha spoke in verse as follows: > > > > > > > 183. Not to do evil, to cultivate merit, to purify one's mind - this is the Teaching of the Buddhas. > > 184. The best moral practice is patience and forbearance; "Nibbana is Supreme", said the Buddhas. A bhikkhu does not harm others; one who harms others is not a bhikkhu. > > 185. Not to revile, not to do any harm, to practise restraint according to the Fundamental Instructions for the bhikkhus, to be moderate in taking food, to dwell in a secluded place, to devote oneself to higher concentration - this is the Teaching of the Buddhas. > > > > > > > > > There are even some suttas, such as [SN 12.65](http://www.themindingcentre.org/dharmafarer/wp-content/uploads/2009/12/14.2-Nagara-S-s12.65-piya.pdf) which implies that Nirvana and the path the Nirvana was "inhabited in ancient times" and rediscovered by the Buddha. > > In the Nagara Sutta, the delightful ancient fortress city [§20.2] clearly refers to nirvana, and the city is > populated by saints (called “seers,” *rsi*, in the Sanskrit Nagara Sutra, §5.28). Both the Pali and Sanskrit > versions of the Sutta speak of ancient people using the path. 
> > > Texts also say that a few people (called "[private Buddhas](https://en.wikipedia.org/wiki/Pratyekabuddha)") discover Nirvana for themselves, but (unlike a true Buddha) aren't able or aren't willing to teach other people.
According to Theravada tradition, Buddhas come and go over countless eons. Only in the times when their teaching is alive and true will we get a chance of escaping from this endless samsara. Every once in a great while, after a long period of spiritual darkness blankets the world, an individual is eventually born who, through his own efforts, rediscovers the long-forgotten path to Awakening and liberates himself once and for all from the long round of rebirth, thereby becoming an arahant ("worthy one," one who has fully realized Awakening). In Digha Nikáya Sutta 14, [Mahapadana Sutta (The Great Discourse on the Lineage)](http://buddhasutra.com/files/mahapadana_sutta.htm)) & DN 32 The Ātaānātiiya Discourse, the Supreme Buddha stated that six Supreme Buddhas appeared over 91 world-cycles. The seven Buddhas, including "our" Supreme Buddha, mentioned in DN 14 & DN 32: Vipassi, Sikhi, Vessabhu, Kakusandha, Konagamana, Kassapa, and Gotama. They were all born in this earth, in the land of the Rose Apple (Jambudipa), in north-central India in the area known then as the Middle Land (Majjhima Desa). If such a being lacks the requisite development of a Supreme Buddha, he is unable to articulate his discovery to others and is known as a "Silent" or "Private" Buddha (paccekabuddha). These silent Buddhas come to pass a few hundred years prior to a birth of a Supreme Buddha. These Silent Buddhas too are born only in the land of the Rose Apple (Jambudipa), in north-central India in the area known then as the Middle Land (Majjhima Desa). This is a Dharmatha - a set of natural laws. Only at a time when the Dhamma of a Supreme Buddha is alive in the world will arahants walk this earth because they require a Buddha to show them the way to Awakening. (All Buddhas and paccekabuddhas are arahants) No matter how far and wide the sasana spreads, sooner or later it succumbs to the inexorable law of anicca (impermanence), and fades from memory. 
The world descends again into darkness, and the eons-long cycle repeats. The most recent Buddha was born Siddhattha Gotama in India in the sixth century BCE. He is the one we usually mean when we refer to "The Buddha." The next Buddha due to appear is said to be Maitreya (Metteyya), a bodhisatta currently residing in the Tusita heavens. Legend has it that at some time in the far distant future, once the teachings of the current Buddha have long been forgotten, he will be reborn as a human being, rediscover the Four Noble Truths, and teach the Noble Eightfold Path once again. His name is mentioned only once in the entire Tipitaka, in the Cakkavatti-Sihanada Sutta (DN 26; The Lion's Roar on the Turning of the Wheel): > > [The Buddha:] And in that time of the people with an eighty-thousand-year life-span, there will arise in the world a Blessed Lord, an arahant fully enlightened Buddha named Metteyya, endowed with wisdom and conduct, a Well-farer, Knower of the worlds, incomparable Trainer of men to be tamed, Teacher of gods and humans, enlightened and blessed, just as I am now. > > >
19,523
Before the Buddha introduced nirvana and enlightenment, was there any way to escape from the cycle of birth and death? What is written in Buddhist texts?
2017/03/07
[ "https://buddhism.stackexchange.com/questions/19523", "https://buddhism.stackexchange.com", "https://buddhism.stackexchange.com/users/-1/" ]
Because the Buddha discovered reality as it actually is, rather than coming up with a personalised conception of what it is, the Dhamma is open and discoverable to all, whether there is a Buddha to light the path or not. Specifically within Buddhism there is a concept of [Pratyekabuddha-hood](https://en.m.wikipedia.org/wiki/Pratyekabuddha), which are beings who discover the Dhamma without a teacher - without a living Buddha or his message being available. They do not teach though. The Buddha was not the inventor of the Dhamma within Buddhism, he was 'only' an expounder of the Truths and the Eightfold Path. Before him there were past Buddhas within the Tripitaka, and a future one too (Maitreya). Effectively the path is always discoverable and Nibbana achievable, to any individual willing to search.
According to Buddhism, there was no way to escape from *samsara* (the cycle of ignorance, craving & ego-becoming) before the Buddha. What is called 'Hinduism' arose after Buddhism. Before Buddhism, the main religion was called 'Brahmanism', which focused on 'Brahma' (god) & the Brahmin caste. There appears to be no evidence Brahmanism found a way to be free from 'samsara'. If Brahmanism actually knew of a way to be free from samsara, the word 'Buddha' is a lie & false, since the Buddha declared his discovery was something completely brand new, i.e., "never heard before". > > *This noble truth of the way leading to the cessation of suffering has been developed’: thus, bhikkhus, in regard to things unheard before, > there arose in me vision, knowledge, wisdom, true knowledge, and > light.* > > > [*SN 56.11 The First Sermon*](https://suttacentral.net/en/sn56.11) > > >
19,523
Before the Buddha introduced nirvana and enlightenment, was there any way to escape from the cycle of birth and death? What is written in Buddhist texts?
2017/03/07
[ "https://buddhism.stackexchange.com/questions/19523", "https://buddhism.stackexchange.com", "https://buddhism.stackexchange.com/users/-1/" ]
Because the Buddha discovered reality as it actually is, rather than coming up with a personalised conception of what it is, the Dhamma is open and discoverable to all, whether there is a Buddha to light the path or not. Specifically within Buddhism there is a concept of [Pratyekabuddha-hood](https://en.m.wikipedia.org/wiki/Pratyekabuddha), which are beings who discover the Dhamma without a teacher - without a living Buddha or his message being available. They do not teach though. The Buddha was not the inventor of the Dhamma within Buddhism, he was 'only' an expounder of the Truths and the Eightfold Path. Before him there were past Buddhas within the Tripitaka, and a future one too (Maitreya). Effectively the path is always discoverable and Nibbana achievable, to any individual willing to search.
According to Theravada tradition, Buddhas come and go over countless eons. Only in the times when their teaching is alive and true will we get a chance of escaping from this endless samsara. Every once in a great while, after a long period of spiritual darkness blankets the world, an individual is eventually born who, through his own efforts, rediscovers the long-forgotten path to Awakening and liberates himself once and for all from the long round of rebirth, thereby becoming an arahant ("worthy one," one who has fully realized Awakening). In Digha Nikáya Sutta 14, [Mahapadana Sutta (The Great Discourse on the Lineage)](http://buddhasutra.com/files/mahapadana_sutta.htm)) & DN 32 The Ātaānātiiya Discourse, the Supreme Buddha stated that six Supreme Buddhas appeared over 91 world-cycles. The seven Buddhas, including "our" Supreme Buddha, mentioned in DN 14 & DN 32: Vipassi, Sikhi, Vessabhu, Kakusandha, Konagamana, Kassapa, and Gotama. They were all born in this earth, in the land of the Rose Apple (Jambudipa), in north-central India in the area known then as the Middle Land (Majjhima Desa). If such a being lacks the requisite development of a Supreme Buddha, he is unable to articulate his discovery to others and is known as a "Silent" or "Private" Buddha (paccekabuddha). These silent Buddhas come to pass a few hundred years prior to a birth of a Supreme Buddha. These Silent Buddhas too are born only in the land of the Rose Apple (Jambudipa), in north-central India in the area known then as the Middle Land (Majjhima Desa). This is a Dharmatha - a set of natural laws. Only at a time when the Dhamma of a Supreme Buddha is alive in the world will arahants walk this earth because they require a Buddha to show them the way to Awakening. (All Buddhas and paccekabuddhas are arahants) No matter how far and wide the sasana spreads, sooner or later it succumbs to the inexorable law of anicca (impermanence), and fades from memory. 
The world descends again into darkness, and the eons-long cycle repeats. The most recent Buddha was born Siddhattha Gotama in India in the sixth century BCE. He is the one we usually mean when we refer to "The Buddha." The next Buddha due to appear is said to be Maitreya (Metteyya), a bodhisatta currently residing in the Tusita heavens. Legend has it that at some time in the far distant future, once the teachings of the current Buddha have long been forgotten, he will be reborn as a human being, rediscover the Four Noble Truths, and teach the Noble Eightfold Path once again. His name is mentioned only once in the entire Tipitaka, in the Cakkavatti-Sihanada Sutta (DN 26; The Lion's Roar on the Turning of the Wheel): > > [The Buddha:] And in that time of the people with an eighty-thousand-year life-span, there will arise in the world a Blessed Lord, an arahant fully enlightened Buddha named Metteyya, endowed with wisdom and conduct, a Well-farer, Knower of the worlds, incomparable Trainer of men to be tamed, Teacher of gods and humans, enlightened and blessed, just as I am now. > > >
19,523
Before the Buddha introduced nirvana and enlightenment, was there any way to escape from the cycle of birth and death? What is written in Buddhist texts?
2017/03/07
[ "https://buddhism.stackexchange.com/questions/19523", "https://buddhism.stackexchange.com", "https://buddhism.stackexchange.com/users/-1/" ]
According to Theravada tradition, Buddhas come and go over countless eons. Only in the times when their teaching is alive and true will we get a chance of escaping from this endless samsara. Every once in a great while, after a long period of spiritual darkness blankets the world, an individual is eventually born who, through his own efforts, rediscovers the long-forgotten path to Awakening and liberates himself once and for all from the long round of rebirth, thereby becoming an arahant ("worthy one," one who has fully realized Awakening). In Digha Nikáya Sutta 14, [Mahapadana Sutta (The Great Discourse on the Lineage)](http://buddhasutra.com/files/mahapadana_sutta.htm)) & DN 32 The Ātaānātiiya Discourse, the Supreme Buddha stated that six Supreme Buddhas appeared over 91 world-cycles. The seven Buddhas, including "our" Supreme Buddha, mentioned in DN 14 & DN 32: Vipassi, Sikhi, Vessabhu, Kakusandha, Konagamana, Kassapa, and Gotama. They were all born in this earth, in the land of the Rose Apple (Jambudipa), in north-central India in the area known then as the Middle Land (Majjhima Desa). If such a being lacks the requisite development of a Supreme Buddha, he is unable to articulate his discovery to others and is known as a "Silent" or "Private" Buddha (paccekabuddha). These silent Buddhas come to pass a few hundred years prior to a birth of a Supreme Buddha. These Silent Buddhas too are born only in the land of the Rose Apple (Jambudipa), in north-central India in the area known then as the Middle Land (Majjhima Desa). This is a Dharmatha - a set of natural laws. Only at a time when the Dhamma of a Supreme Buddha is alive in the world will arahants walk this earth because they require a Buddha to show them the way to Awakening. (All Buddhas and paccekabuddhas are arahants) No matter how far and wide the sasana spreads, sooner or later it succumbs to the inexorable law of anicca (impermanence), and fades from memory. 
The world descends again into darkness, and the eons-long cycle repeats. The most recent Buddha was born Siddhattha Gotama in India in the sixth century BCE. He is the one we usually mean when we refer to "The Buddha." The next Buddha due to appear is said to be Maitreya (Metteyya), a bodhisatta currently residing in the Tusita heavens. Legend has it that at some time in the far distant future, once the teachings of the current Buddha have long been forgotten, he will be reborn as a human being, rediscover the Four Noble Truths, and teach the Noble Eightfold Path once again. His name is mentioned only once in the entire Tipitaka, in the Cakkavatti-Sihanada Sutta (DN 26; The Lion's Roar on the Turning of the Wheel): > > [The Buddha:] And in that time of the people with an eighty-thousand-year life-span, there will arise in the world a Blessed Lord, an arahant fully enlightened Buddha named Metteyya, endowed with wisdom and conduct, a Well-farer, Knower of the worlds, incomparable Trainer of men to be tamed, Teacher of gods and humans, enlightened and blessed, just as I am now. > > >
According to Buddhism, there was no way to escape from *samsara* (the cycle of ignorance, craving & ego-becoming) before the Buddha. What is called 'Hinduism' arose after Buddhism. Before Buddhism, the main religion was called 'Brahmanism', which focused on 'Brahma' (god) & the Brahmin caste. There appears to be no evidence Brahmanism found a way to be free from 'samsara'. If Brahmanism actually knew of a way to be free from samsara, the word 'Buddha' is a lie & false, since the Buddha declared his discovery was something completely brand new, i.e., "never heard before". > > *This noble truth of the way leading to the cessation of suffering has been developed’: thus, bhikkhus, in regard to things unheard before, > there arose in me vision, knowledge, wisdom, true knowledge, and > light.* > > > [*SN 56.11 The First Sermon*](https://suttacentral.net/en/sn56.11) > > >
6,298,861
We have a web application that creates a web page. In one section of the page, a graph is displayed. The graph is created by calling a graphing program with an "img src=..." tag in the HTML body. The graphing program takes a number of arguments about the height, width, legends, etc., and the data to be graphed. The only way we have found so far to pass the arguments to the graphing program is to use the GET method. This works, but in some cases the size of the query string passed to the grapher is approaching the 2058 (or whatever) character limit for URLs in Internet Explorer. I've included an example of the tag below. If the length is too long, the query string is truncated and either the program bombs or, even worse, displays a graph that is not correct (depending on where the truncation occurs). The POST method with an auto submit does not work for our purposes, because we want the image inserted on the page where the grapher is invoked. We don't want the graph displayed on a separate web page, which is what the POST method does with the URL in the "action=" attribute. Does anyone know a way around this problem, or do we just have to stick with the GET method and inform users to stay away from Internet Explorer when they're using our application? Thanks!
2011/06/09
[ "https://Stackoverflow.com/questions/6298861", "https://Stackoverflow.com", "https://Stackoverflow.com/users/791636/" ]
One solution is to have the page put data into the session, then have the img generation script pull from that session information. For example, the page stores $\_SESSION['tempdata12345'] and creates an img src="myimage.php?data=tempdata12345". Then myimage.php pulls from the session information.
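The session-keyed approach in this answer can be sketched generically: store the (arbitrarily large) graph parameters server-side under a short random token, and put only the token in the img src, so the URL stays far below any browser length limit. A minimal Python sketch, with an in-memory dict standing in for the session store and a hypothetical image-script URL:

```python
import uuid

# Stand-in for server-side session storage.
_store = {}

def stash_graph_params(params):
    """Save large graph parameters server-side; return a short img URL."""
    token = uuid.uuid4().hex
    _store[token] = params
    return f"/graph.php?data={token}"

def load_graph_params(token):
    """Called by the image-generation script to recover the full parameter set."""
    return _store.get(token)
```

The img URL length is now constant regardless of how much data the graph needs.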
One solution is to have the web application that generates the entire page pre-emptively call the actual graphing program with all the necessary parameters. Perhaps store the generated image in a /tmp folder. Then have the web application create the web page and send it to the browser with an "img src=..." tag that, instead of referring to the graphing program, refers to the pre-generated image.
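The pre-generation approach can be sketched the same way: the page-building code renders the chart to a file first, then emits an img tag pointing at that file, so no parameters travel in the URL at all. A minimal sketch (the chart bytes, directory, and public /tmp path are hypothetical, and a real app would also clean up old files):

```python
import os
import uuid

def pregenerate_graph(chart_bytes, out_dir):
    """Write an already-rendered chart to disk and return the <img> tag for it."""
    name = f"graph-{uuid.uuid4().hex}.png"
    with open(os.path.join(out_dir, name), "wb") as fh:
        fh.write(chart_bytes)
    # Assumes out_dir is served to browsers under /tmp/.
    return f'<img src="/tmp/{name}" alt="graph">'
```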
176,351
My circuit is: USBTinyISP <-usi/icsp-> ATTiny85 <-usi/i2c-> [MCP4725](https://www.adafruit.com/products/935). That is, the [USI](http://www.atmel.com/Images/doc4300.pdf) pins used to program the t85 are also used for i2c in the final circuit. When I try to flash-program the t85 in-circuit, it fails. If I disconnect the 4725's SDA line during programming, it works. I **assume** that the 4725 is confusedly pulling SDA low to ACK I2C packets and thus interfering with the shared MOSI line during programming. But if so, then my ICSP isn't truly In-Circuit :(. That is, if the circuit was permanent then I couldn't program the MCU except by removing it. Yet I see many circuits with ICSP headers on them that presumably work. How do I circumvent logical interference from the circuit when I program via ICSP? The only solution I can think of is to use a microcontroller with dedicated ICSP pins. But is there some other common-practice solution to this problem?
2015/06/19
[ "https://electronics.stackexchange.com/questions/176351", "https://electronics.stackexchange.com", "https://electronics.stackexchange.com/users/77667/" ]
Add a suitable resistor between any external circuit *that drives an ICSP pin* and the AT chip. The resistor must be high enough that the ISP circuit can override the external circuit, yet low enough that the external circuit can still drive the AT fast enough. You could start with 1k. An ICSP capability is a combined property of the target chip, the programmer, *and the target circuit*.
There are very many options, that you may not think of right away. One is: ![schematic](https://i.stack.imgur.com/fSE4Z.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2ffSE4Z.png) – Schematic created using [CircuitLab](https://www.circuitlab.com/) This way the Tiny will still likely be able to drive the DAC at 400kHz at least, maybe even faster, depending on the length of your wires. The DAC still has enough "strength" through the 470 Ohm resistors to pull the lines low enough to be seen as a zero on ACK or when holding the SCL line, but usually an ICSP programmer is strong enough to "win" from the 470 Ohm. At least on SPI. The even smaller TPI-requiring chips like the Tiny10 and Tiny20 wouldn't be able to win from 470 Ohm, but SPI is hard driven both by the Tiny85 target and the programmer. If you want even more control/ensurance, you can add a PNP transistor driven by any free pin, that you actively pull low. When your Tiny is reset for programming, the default resistor R1 will pull the transistor closed: ![schematic](https://i.stack.imgur.com/GrZzU.png) [simulate this circuit](/plugins/schematics?image=http%3a%2f%2fi.stack.imgur.com%2fGrZzU.png) In this case the 470 Ohm resistors prevent the DAC from "powering up" from voltages on the data lines. This is, in my opinion a less neat solution, but if the first one doesn't work, this one might. From there on there's all kinds of things like I2C buffer chips that can be enabled/disabled and more such, but it all increases in complexity. You can even also connect the PNP transistor in this example through another NPN to the RESET pin of the tiny, automatically powering down the DAC when the Tiny's reset is activated. But again, putting signals on a chip that has no power is never a really neat solution, in my opinion, and if the resistors wouldn't work I would first look at buffer chips.
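Whether the DAC can still "win" through a 470 Ohm series resistor is a quick voltage-divider check: when the DAC pulls SDA hard to ground through the series resistor, the bus node sits at the supply divided between the I2C pull-up and that resistor. A sketch of the arithmetic; the 4.7 k pull-up, 5 V supply, and 0.3*VCC input-low threshold are assumed illustrative values, not taken from the answer's schematics:

```python
def bus_low_voltage(vcc, r_pullup, r_series):
    """Bus voltage when the far side pulls to ground through r_series,
    fighting a pull-up of r_pullup to vcc (simple resistive divider)."""
    return vcc * r_series / (r_pullup + r_series)

VCC = 5.0
v_low = bus_low_voltage(VCC, r_pullup=4700, r_series=470)
vil_max = 0.3 * VCC  # typical I2C input-low threshold
# v_low is about 0.45 V, comfortably below the 1.5 V threshold,
# so ACKs and clock stretching still register as logic low.
```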
8,374
Nowadays, everyone talks about it: climate change, and more importantly, how to stop it from happening. Although there's a lot of debate around the topic, the consensus is that by inventing a way of generating clean energy, we can slow down (and maybe even reverse) the effects global warming has on the planet. Using energy that was created without burning millions of years worth of stored carbon, we can not only power our everyday lives, but also capture the carbon we've been blowing into our atmosphere using the energy-intensive process of carbon capture and sequestration (CCS). But there are still some problems. Currently, photovoltaic cells are pretty much useless during winter when it comes to fueling the homes of millions (at least where I live). We want to warm our homes, but there's not enough clean energy during those months, so we fall back on nuclear. **Wouldn't it therefore be better to have a small rise in global temperature?** * A higher temperature means you don't have to warm your home during winter (as much). * During summer, you can use the extra energy generated by the photovoltaic cells to power air conditioners in order to cool buildings and * Use some excess energy to stop the [runaway greenhouse effect](https://en.wikipedia.org/wiki/Runaway_greenhouse_effect) using CCS. Even though cooling requires more energy than heating, might it break even? **Will the rise in temperature have a positive effect on our energy production and the ability to satisfy energy demand?**
2019/07/23
[ "https://sustainability.stackexchange.com/questions/8374", "https://sustainability.stackexchange.com", "https://sustainability.stackexchange.com/users/6811/" ]
> > Wouldn't it therefore be better to have a small rise in global temperature? > **No.** The radiative forcing by carbon dioxide does not increase solar radiation that can be captured by solar cells. The warmer temperatures actually **decrease** solar cell output, because solar cells really like cold temperatures and lots of radiation. You can't have both optimal temperature and optimal radiation at the same time, because solar radiation causes the temperature to increase. If we could somehow manage radiative forcing to reduce worldwide temperatures, we could make more power with solar cells. **Edit:** Nice to have unexplained downvotes after an upvote. So, let me add some sources: * <https://www.civicsolar.com/article/how-does-heat-affect-solar-panel-efficiencies> * <https://greentumble.com/effect-of-temperature-on-solar-panel-efficiency/> * <https://www.sciencedirect.com/science/article/pii/S1876610213000829> * and many, many others (I'm not going to link to all the sources I could because the list would be very long)
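The "warmer temperatures decrease solar cell output" effect above is quantified on panel datasheets as a power temperature coefficient, commonly around -0.4 %/°C for crystalline silicon relative to the 25 °C rating. A rough sketch of the derating (the coefficient, the linear model, and the example temperatures are illustrative assumptions, not values from the linked sources):

```python
# Estimate PV panel output at a given cell temperature, relative to
# its nameplate rating at Standard Test Conditions (25 C cell temp).

def pv_power(p_rated_w, cell_temp_c, temp_coeff_per_c=-0.004):
    """Derate rated power by a linear temperature coefficient.

    p_rated_w        -- nameplate power at 25 C (W)
    cell_temp_c      -- actual cell temperature (C)
    temp_coeff_per_c -- fractional power change per degree C
                        (-0.4 %/C is a common crystalline-silicon figure)
    """
    return p_rated_w * (1.0 + temp_coeff_per_c * (cell_temp_c - 25.0))

# A 300 W panel on a hot roof (cells at 65 C) vs. a cold clear day (5 C):
hot = pv_power(300, 65)   # 300 * (1 - 0.004 * 40) = 252 W
cold = pv_power(300, 5)   # 300 * (1 + 0.004 * 20) = 324 W
print(f"hot: {hot:.0f} W, cold: {cold:.0f} W")
```

So the same panel produces noticeably more power on a cold, bright day than a hot one, which is the point the answer makes.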
The US isn't the only land mass in the world, you know. In much of Africa, people and animals are dying in the heat. And on a global scale, man-made energy requirements are rising exponentially because of global warming. In Britain, for instance, people are busy installing air conditioning which they never needed before. If you even looked at the extra air conditioning installed in the US, you would probably find it outweighs all this estimated saving. Scientists are also working on superconductors, which save quite a bit of energy but necessitate very cold temperatures that will become increasingly difficult to maintain.
8,374
Nowadays, everyone talks about it: climate change, and more importantly, how to stop it from happening. Although there's a lot of debate around the topic, the consensus is that by inventing a way of generating clean energy, we can slow down (and maybe even reverse) the effects global warming has on the planet. Using energy that was created without burning millions of years worth of stored carbon, we can not only power our everyday lives, but also capture the carbon we've been blowing into our atmosphere using the energy-intensive process of carbon capture and sequestration (CCS). But there are still some problems. Currently, photovoltaic cells are pretty much useless during winter when it comes to fueling the homes of millions (at least where I live). We want to warm our homes, but there's not enough clean energy during those months, so we fall back on nuclear. **Wouldn't it therefore be better to have a small rise in global temperature?** * A higher temperature means you don't have to warm your home during winter (as much). * During summer, you can use the extra energy generated by the photovoltaic cells to power air conditioners in order to cool buildings and * Use some excess energy to stop the [runaway greenhouse effect](https://en.wikipedia.org/wiki/Runaway_greenhouse_effect) using CCS. Even though cooling requires more energy than heating, might it break even? **Will the rise in temperature have a positive effect on our energy production and the ability to satisfy energy demand?**
2019/07/23
[ "https://sustainability.stackexchange.com/questions/8374", "https://sustainability.stackexchange.com", "https://sustainability.stackexchange.com/users/6811/" ]
> > Wouldn't it therefore be better to have a small rise in global temperature? > **No.** The radiative forcing by carbon dioxide does not increase solar radiation that can be captured by solar cells. The warmer temperatures actually **decrease** solar cell output, because solar cells really like cold temperatures and lots of radiation. You can't have both optimal temperature and optimal radiation at the same time, because solar radiation causes the temperature to increase. If we could somehow manage radiative forcing to reduce worldwide temperatures, we could make more power with solar cells. **Edit:** Nice to have unexplained downvotes after an upvote. So, let me add some sources: * <https://www.civicsolar.com/article/how-does-heat-affect-solar-panel-efficiencies> * <https://greentumble.com/effect-of-temperature-on-solar-panel-efficiency/> * <https://www.sciencedirect.com/science/article/pii/S1876610213000829> * and many, many others (I'm not going to link to all the sources I could because the list would be very long)
This question seems to have several components. Firstly, in terms of PV energy production, a rise in temperature would most likely have a slightly negative effect due to decreased efficiency of solar cells at higher temperatures and perhaps an even greater decrease caused by an increase of cloud cover created by oceanic evaporation. On the other hand, an increase in temperature might cause an increase in rainfall, thereby improving the efficiency of hydroelectric power generation. Secondly, global climate change in general has a wide variety of effects on regional, local, and micro climates, many of which are indirectly influenced by changes in temperature. The various consequences that might be considered "better" are infrequently examined, and I think it is because climate change enthusiasts would be tarred and feathered more adamantly than climate change deniers. Next, a rise in temperature might be warmly welcomed by those living in colder climates, but there are many other methods to approach our heating/cooling power requirements which are better addressed through other means. For instance, building subterranean or partially underground homes has a huge impact on indoor climate for both Winter and Summer, in both hot and cold climates. Personally, I've managed comfortably now for a year without any heating or cooling power use at all, but it took ten years to create a comfortable and sustainable micro-climate for myself, my plants, and my livestock. Your question inspires me to explore how I can now use my excess PV power for CCS because it's a shame to see that going to waste.
144,036
I have an [IKEA desk](https://www.ikea.com/us/en/catalog/products/S49932175/#/S59133593) with a subtle fake wood grain paint layer (black) on the surface. The paint in the area that my mouse moves has worn away leaving a shiny textureless surface that the laser mouse doesn't detect. What's the simplest, cheapest, most effective way to re-surface that region (or the whole desk)? I don't like mouse mats. I'm thinking wallpaper or paint, but I don't know what types of paint will provide the surface required for the laser mouse to accurately detect movement.
2018/07/27
[ "https://diy.stackexchange.com/questions/144036", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/89230/" ]
Shelf liner (contact paper). Cut it into an oversize mouse pad shape with rounded corners and stick it down. You want it large enough that you aren't snagging the edge with your arm, keyboard, etc. It'll feel like nothing but give your mouse a better view. You could use black so it's not so conspicuous, but a bold color pattern might be spiffy. Change it out when it gets polished or worn.
You need a surface with a pattern or texture on it. In my experience any 1/4" span of distance needs to have at least 2-3 transitions on it. If all the sensor sees is a solid color, it can't realize you have moved it. The wood grain was doing that for you. As you have discovered, routine use of a computer mouse will polish the surface to a mirror finish, and **optical mice don't work on mirror finishes**. The reflectivity blinds the mouse regardless of underlying pattern. You could remove the gloss by light sanding (we painters call it 'scuff sanding'), but there still needs to be an underlying pattern. Obviously, a weak or fragile finish will quickly fail. So we can cross a few things off the list * any latex paint (fragile) * any solid-color paint (no pattern) * gloss paint (too reflective) * any LPU coating without some flattening additive (too reflective) That leaves us little choice. It's a tough problem. The only paint I can really think of is epoxy garage floor paint, with plenty of chips to add a pattern. That surface may not end up smooth, so I would sand it smooth. Otherwise I would look at other coatings, e.g. a vinyl stick-down covering: I print out a sheet of hashmarks and change it regularly, but now we're just talking about a mouse pad, which is what you don't want.
144,036
I have an [IKEA desk](https://www.ikea.com/us/en/catalog/products/S49932175/#/S59133593) with a subtle fake wood grain paint layer (black) on the surface. The paint in the area that my mouse moves has worn away leaving a shiny textureless surface that the laser mouse doesn't detect. What's the simplest, cheapest, most effective way to re-surface that region (or the whole desk)? I don't like mouse mats. I'm thinking wallpaper or paint, but I don't know what types of paint will provide the surface required for the laser mouse to accurately detect movement.
2018/07/27
[ "https://diy.stackexchange.com/questions/144036", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/89230/" ]
Shelf liner (contact paper). Cut it into an oversize mouse pad shape with rounded corners and stick it down. You want it large enough that you aren't snagging the edge with your arm, keyboard, etc. It'll feel like nothing but give your mouse a better view. You could use black so it's not so conspicuous, but a bold color pattern might be spiffy. Change it out when it gets polished or worn.
Rub the area with 000 or 0000 steel wool to bring back some subtle texture. It won't be lumpy or uneven the way paint can be, and you can easily texture only the affected area. This is a far cheaper solution than paint, in terms of both money and time.
144,036
I have an [IKEA desk](https://www.ikea.com/us/en/catalog/products/S49932175/#/S59133593) with a subtle fake wood grain paint layer (black) on the surface. The paint in the area that my mouse moves has worn away leaving a shiny textureless surface that the laser mouse doesn't detect. What's the simplest, cheapest, most effective way to re-surface that region (or the whole desk)? I don't like mouse mats. I'm thinking wallpaper or paint, but I don't know what types of paint will provide the surface required for the laser mouse to accurately detect movement.
2018/07/27
[ "https://diy.stackexchange.com/questions/144036", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/89230/" ]
Shelf liner (contact paper). Cut it into an oversize mouse pad shape with rounded corners and stick it down. You want it large enough that you aren't snagging the edge with your arm, keyboard, etc. It'll feel like nothing but give your mouse a better view. You could use black so it's not so conspicuous, but a bold color pattern might be spiffy. Change it out when it gets polished or worn.
Use fabric. The threads are detectable by the mouse. You can cover the whole top with something like a table cloth, or use a smaller piece in the work area. You say you don't like mouse pads but you're considering something like wallpaper, so I assume the mouse pad thickness is its main problem, and maybe the issue of keeping it positioned. Soft fabric, like cotton, will have more friction; something like nylon will have less. If you want almost no friction, laminate a piece of cloth. That will also keep the surface from wearing and make it easily cleanable. It will be cardstock thickness. Something the size of a mouse pad or place mat can be laminated in readily available heat laminators, or you can use adhesive laminating pockets that don't require heat. A matte lamination film will probably work better than a glossy film for mouse purposes. You could also print out a custom pattern on paper, as Harper suggested, and laminate it to give it a longer life. An area-sized piece can be stuck to the desktop with a temporary adhesive so that it doesn't move around. Note that if you're going to laminate cloth (or a paper pattern) and stick it to the desktop, you're essentially making your own shelf liner, so isherwood's idea is a lot simpler. I'd use my idea instead only if you were going with a table cloth or wanted a material or pattern that wasn't available in a ready-made shelf liner.
133,701
There is kind of a risk gaining an access to personal data or some way getting debts with a passport. Is there a chance someone can read biometric RFID data from an international passport, like fingerprints or a photograph?
2019/03/12
[ "https://travel.stackexchange.com/questions/133701", "https://travel.stackexchange.com", "https://travel.stackexchange.com/users/68847/" ]
The RFID chip in a biometric passport can be convinced to communicate all the data stored therein if the right keys are provided to it. Note it’s not a matter of downloading encrypted data from the chip and then having a go at it with decryption tools of some kind; the communication with the chip is bi-directional and authentication has to be provided first. The core data such as name and the photograph are secured by Basic Access Control, where the key can be derived from machine readable data visible on the passport itself. In essence, after viewing the passport, it’s possible to download the same data you’ve just seen, plus the digital signature of the issuing authority confirming it’s genuine. There’s also Extended Access Control, where the idea is that more sensitive data such as fingerprints is protected by keys that the issuing authority only provides to parties such as other countries’ immigration departments. Thus any random person who knows the document number, the owner’s birth date and the passport’s expiry date (that’s what comprises the BAC key) can use this to read basic data and download the photo (there are multiple Android apps that do just this), while it’s not possible to take a powerful scanner to an airport and load lots of passports of passers-by. Downloading the fingerprints and other such data requires special keys which are, in theory, distributed by some secure channels among proper authorities. I’ve heard, without proof, that this process involves many hurdles and my country (Ukraine) simply hasn’t shared such keys with any other countries.
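The "key can be derived from machine readable data" step the answer describes is concrete enough to sketch. Under the ICAO Doc 9303 Basic Access Control scheme, the key seed is a SHA-1 over the document number, birth date, and expiry date, each followed by its check digit. A minimal sketch (the sample document data below is the set of values commonly used in ICAO's worked examples, not a real passport; the full protocol goes on to derive 3DES session keys from this seed, which is omitted here):

```python
# Derive the BAC key seed from the three MRZ fields, following the
# ICAO Doc 9303 scheme the answer describes.
import hashlib

def check_digit(field: str) -> str:
    """ICAO 9303 check digit: digits keep their value, A-Z map to
    10-35, '<' counts as 0; weights 7,3,1 repeat; result is sum mod 10."""
    def val(c):
        if c.isdigit():
            return int(c)
        if c == '<':
            return 0
        return ord(c) - ord('A') + 10
    weights = [7, 3, 1]
    return str(sum(val(c) * weights[i % 3] for i, c in enumerate(field)) % 10)

def bac_key_seed(doc_number: str, birth_yymmdd: str, expiry_yymmdd: str) -> bytes:
    """Key seed = first 16 bytes of SHA-1 over the three MRZ fields,
    each with its check digit (document number padded to 9 with '<')."""
    doc = doc_number.ljust(9, '<')
    mrz_info = (doc + check_digit(doc)
                + birth_yymmdd + check_digit(birth_yymmdd)
                + expiry_yymmdd + check_digit(expiry_yymmdd))
    return hashlib.sha1(mrz_info.encode('ascii')).digest()[:16]

seed = bac_key_seed("L898902C3", "740812", "120415")
print(seed.hex())
```

This is why merely seeing the data page is enough to read the chip: all three inputs are printed in the MRZ.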
The data on your passport chip is encrypted, but the encryption key is the machine-readable data on the personal data page (see e.g. [this slide deck from ICAO](https://www.icao.int/Meetings/TAG-MRTD/Documents/Tag-Mrtd-18/Kinneging.pdf)). So it is useless if someone just reads the chip alone, but if they also can see the personal data page, then they can also decrypt the contents of the chip. There are Android apps which can read the passport chip, and require you to scan the data page with the camera, then read the chip with the phone's NFC reader. It will be very obvious if someone is doing this to your passport.
111,580
Torque seal is a lacquer-like product used in critical applications after a screw is tightened to provide a visual indication if the screw comes loose. Considering that a terminal screw on an electrical device coming loose can give you a Bad Day™, is it legal to use torque seal or an equivalent product (nail polish is said to work in a pinch) on electrical device terminal screws to visually indicate loosening? Or is there something in Code that would prohibit such a practice?
2017/04/06
[ "https://diy.stackexchange.com/questions/111580", "https://diy.stackexchange.com", "https://diy.stackexchange.com/users/27099/" ]
I don't know of any restrictions for the use you are proposing. In fact, I have used it myself, mostly for time clocks, so we can tell if someone is jacking with it.
I have never used this product, but often make a sharpie mark, which IME is a very common practice, and never raises an eyebrow. Since the amount is so small, I don't think there's a danger of fumes around the equipment while it cures, or introducing a combustible into the interior of the equipment. I think since there is no rule specifically restricting making identifying marks, labels, etc. inside the panel, there is nothing that prohibits torque seal on terminals.
59,080
I want to re-edit my question so that it will come up in the latest questions again and you experts can look at it and help me. But I forgot the login ID I used to post the question. This is the link to my actual question: [My actual question](https://stackoverflow.com/questions/2854989/dsnless-connection-for-aruna-db/3370865#3370865) I deeply apologize if my action here is a violation of SO community ethics.
2010/07/30
[ "https://meta.stackexchange.com/questions/59080", "https://meta.stackexchange.com", "https://meta.stackexchange.com/users/-1/" ]
No worries, we're glad to see you care about your question. Did you insert your email address when you created that account? If so, have a look at this: <https://stackoverflow.com/users/account-recovery> As pointed out by George in the actual question, you can also have a moderator merge your new account with the old one, so you can access all old data again. And finally, you can also drop a message to the SO team by email and explain your situation. I'm sure they'll find a way to help you. Hope that helps :)
You [are a registered user](https://stackoverflow.com/users/309131/vijay); do you remember if you used OpenID or made an account directly? If OpenID, flag your question for moderator attention and ask them to e-mail you the OpenID URL associated with your account, if you provided your e-mail when filling out your profile. That should jog your memory enough to log in. You may need to do password recovery with the OpenID provider if you can't remember it. If that doesn't work, e-mail ***team@stackoverflow.com***
422,149
I have a Windows 7 Ultimate machine where the wireless adapter all of a sudden started having trouble connecting to wireless networks. Whenever I go to a new place and try to connect to a wireless network, it says that the DNS server is not responding, and tells me to go unplug the router and try again. After several locations in a row telling me this, I began to realize something was wrong with my adapter, not the routers. I am no longer asked to identify the security level for any new networks (Work, Home, or Public) like I used to be (it defaults to Public now - with the park bench icon). Often, resetting the router doesn't even work. Running the Windows 7 troubleshooter doesn't give me anything better than the advice to reset the router. However, the adapter will still connect to the wireless network at my main office without any problems. Does anyone know why a wireless network adapter can get so finicky so suddenly? Thanks!
2012/05/08
[ "https://superuser.com/questions/422149", "https://superuser.com", "https://superuser.com/users/133169/" ]
It really depends on the case you have. I would personally get one of those solid-state picoPSU things and power the PC through a battery. Charge up a capacitor while the car is running; then, when the car shuts off, let it slowly drain through a resistor to ground, and when the capacitor is empty, have a MOSFET poke the "power button" pins so it goes into an automatic power-down. Note: I would try to avoid going DC (car) to AC (inverter/UPS) to DC (PC), as you'll lose a lot of power that way, and inverters always pull power even if they are not powering anything.
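The capacitor-drains-through-a-resistor delay described above is just an RC discharge, so you can size it on the back of an envelope. A sketch of the timing math (the 12 V starting voltage, 1 mF capacitor, 100 kΩ bleed resistor, and 2 V threshold are illustrative assumptions, not values from the answer):

```python
# Rough sizing for a "capacitor bleeds through a resistor after the
# ignition cuts" shutdown delay.  Component values are assumptions.
import math

def discharge_time(r_ohm, c_farad, v_start, v_threshold):
    """Time for an RC discharge to fall from v_start to v_threshold:
    v(t) = v_start * exp(-t / RC)  =>  t = RC * ln(v_start / v_threshold)."""
    return r_ohm * c_farad * math.log(v_start / v_threshold)

# 1 mF capacitor charged to 12 V, bleeding through 100 kOhm, with the
# MOSFET ceasing to conduct somewhere around a 2 V gate voltage:
t = discharge_time(100e3, 1e-3, 12.0, 2.0)
print(f"shutdown trigger after about {t:.0f} s")  # ~179 s
```

Scaling R or C scales the delay linearly, so a few minutes of grace time is easy to dial in.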
I think, given the sheer form factor of an M-ITX case, you would be better off getting some sort of UPS that is plugged into the car battery and working out how to do a graceful shutdown if the ignition has been cut for more than x minutes. You could try and bodge a load of batteries into a hard disk space, but that probably wouldn't give a high enough current/voltage to power a PC. Also consider a laptop: inbuilt battery. See if you can find a model that can also take an external secondary battery.
84,956
I am researching this company <https://boldip.com/> How do I verify that 1. It's even a real law firm 2. It's credible 3. It's great at its practice
2022/10/02
[ "https://law.stackexchange.com/questions/84956", "https://law.stackexchange.com", "https://law.stackexchange.com/users/46457/" ]
As to #1, the US Patent and Trademark Office has a [patent practitioner search](https://oedci.uspto.gov/OEDCI/practitionerSearchEntry) where you can verify if someone is a registered patent practitioner. If so, it means they passed a registration process that evaluates their "legal, scientific, and technical qualifications, as well as good moral character and reputation", as well as a multiple-choice exam. I looked up a few of the attorneys listed on [the firm's About Us page](https://boldip.com/about-us/) and they show up as registered. So this seems like a good indication that they are a "real law firm". This does not address whether they are "credible" (I'm not sure what that means), or how to evaluate the firm's quality, and I will leave it to someone else to answer that.
1. The lawyer makes the law firm. If there is at least one lawyer who will sign off on any advice and representation on behalf of the client in a company (in other words, they provide legal services under the lead of a lawyer), and it's formally established, it is, potentially besides any other services they may provide per state law, a law firm. Accordingly, you may want to look at the individual attorneys' track records (state bar disciplinary history), which may at least give a hint about the individual composition of the firm from this perspective. (The Auditor of the State of California simply [declared](https://www.auditor.ca.gov/reports/2022-030/index.html) that the Cal. State Bar practically does nothing about attorney misconduct and acts as a cover-up agency for attorneys, and while I'm not sure about the practices in other states, I'd not put my hopes up high.) At times, incorporation or tax documents will list the scope of services a company provides, but typically even there one will not find more than "any legal activity" or similar wording. In many states, law firms are given a special designation and are entitled to the company form "professional corporation", or P.C. for short; however, that may not be the case everywhere in the U.S. Do note: patent agents may prosecute patents before the USPTO and the Patent Trial and Appeal Board (PTAB), but not before the Federal Circuit or the U.S. Supreme Court; in a patent dispute, you will need a patent attorney, not an agent.
2. The test of the pudding is the eating. You can read reviews about law firms, but be mindful of the fact that both Google and Yelp will take down reviews of the really nasty stuff on lawyers, regardless of whether a judge declared that a review is substantially factual. You will simply not find the reasons behind the one-star reviews, i.e. the type of things you want to know to judge a law firm's trustworthiness and loyalty to their clients. It's probably good practice to act according to the specific nature of the interaction within the relationship, in terms of whether or not interests surely or necessarily align, and if not, proceed with caution.
3. That's probably even harder, since it requires knowing not only the patents they gained and lost, but also the prior art around those patents, the actual scope of the invention provided by the inventor or inventors, and the scope of the patent issued, which tends to require some rather in-depth understanding of various fields of science and technology beyond the legal aspect. It's important to know that "approval rates" may be misleading for many reasons. A law firm may simply refuse a case they are not confident they would win (that *might* be a good test of that anecdotal pudding, relative to whether or not a patent could be obtained *based on how one presents it to a law firm*). Another issue, although probably less probable: frankly, it is really hard not to be able to squeeze something out of a utility idea that, at least in the most extremely narrow sense, would not pass obviousness. In other words, one may obtain a completely useless patent with no commercial value, and even the USPTO will probably gladly approve it if it's narrow enough, in the hope that one keeps paying the maintenance fees. I have not come across law firms that would run a scheme to get approvals at all costs; typically the client has enough discretion in the high-level strategizing to decide whether or not to pursue a patent however narrow it may be, or to call it off because the result would be worthless even with a grant. **The most important thing is** that an average-income individual or first-time inventor is generally best off spending the time to educate themselves to understand patent drafting, at least to the extent that they would feel confident filing a provisional application as though they themselves would have to follow up on it and file the non-provisional. (Of course, this is on the balance of the competitiveness of the field and the expectable probability of someone filing first and rendering one's application obvious or, rarely but even worse, not novel.) There is not much wiggle room in the scope of the claims and the overall description or specification of the patent in a non-provisional relative to its parent provisional. One must go in knowing that however narrowly they managed to squeeze out the invention and its exemplary embodiments in the provisional, that will be the widest breadth of the non-provisional. If one drafts the provisional as though they would then have to follow through with the non-provisional on the premise that anything left out is lost (unless, of course, one files another patent application before the publication of the non-provisional), then that's the material one wants a patent attorney to see. Patent attorneys typically work with big companies which have in-house patent attorneys and whose engineers are thoroughly educated in drafting patents; when their application lands before in-house or outside counsel, it is almost in final shape and form. One simply must thoroughly understand the prior art; that's work you just can't afford to pay a lawyer for. In-house attorneys will do it, or outside counsel for multiple tens of thousands if it came to that, but it does not, because the engineers do that job before the drafts get before a lawyer or patent agent. As for the specific law firm in question: it would probably be the U.S. patent law scam of the decade if this law firm were not a law firm, and even if only one of the attorneys listed as such had not been admitted and in good standing with the USPTO and/or the Federal Circuit, that would be quite the story too. As @Nate Eldredge pointed out, that doesn't seem to be the case, and one may verify that the individual attorneys are registered and/or in good standing with the USPTO.
210,779
> > **Is the Solar System stable?** > > > You can see [this](https://en.wikipedia.org/wiki/Stability_of_the_Solar_System) Wikipedia page. In May 2015 I was at the conference of Cedric Villani at Sharif university of technology with this title: "Of planets, stars and eternity (stabilization and long-time behavior in classical celestial mechanics)" , at the end of this conference one of the students asked him this question and he laughed strangely(!) with no convincing answer! **Edit**: The purpose of "long-time" is timescale more than [Lyapunov time](https://en.wikipedia.org/wiki/Lyapunov_time), hence billions of years.
2015/07/04
[ "https://mathoverflow.net/questions/210779", "https://mathoverflow.net", "https://mathoverflow.net/users/-1/" ]
[This paper](http://iopscience.iop.org/0004-637X/683/2/1207/fulltext/) (Batygin and Laughlin, 2008) seems to indicate that we are doomed.
In a conference in Paris, Jacques Féjoz said (and I quote from memory) that the big planets seem to be stable, while the small ones are chaotic. If I remember well, it was based on numerical evidence, intuition, and the known results on the stability of the planar many-body problem...
210,779
> > **Is the Solar System stable?** > > > You can see [this](https://en.wikipedia.org/wiki/Stability_of_the_Solar_System) Wikipedia page. In May 2015 I was at the conference of Cedric Villani at Sharif university of technology with this title: "Of planets, stars and eternity (stabilization and long-time behavior in classical celestial mechanics)" , at the end of this conference one of the students asked him this question and he laughed strangely(!) with no convincing answer! **Edit**: The purpose of "long-time" is timescale more than [Lyapunov time](https://en.wikipedia.org/wiki/Lyapunov_time), hence billions of years.
2015/07/04
[ "https://mathoverflow.net/questions/210779", "https://mathoverflow.net", "https://mathoverflow.net/users/-1/" ]
[This paper](http://iopscience.iop.org/0004-637X/683/2/1207/fulltext/) (Batygin and Laughlin, 2008) seems to indicate that we are doomed.
The work of Wisdom is relevant (for example, see <http://rspa.royalsocietypublishing.org/content/413/1844/109.short>) but not conclusive. Numerical work suggests overall stability over very long time periods.
210,779
> > **Is the Solar System stable?** > > > You can see [this](https://en.wikipedia.org/wiki/Stability_of_the_Solar_System) Wikipedia page. In May 2015 I was at the conference of Cedric Villani at Sharif university of technology with this title: "Of planets, stars and eternity (stabilization and long-time behavior in classical celestial mechanics)" , at the end of this conference one of the students asked him this question and he laughed strangely(!) with no convincing answer! **Edit**: The purpose of "long-time" is timescale more than [Lyapunov time](https://en.wikipedia.org/wiki/Lyapunov_time), hence billions of years.
2015/07/04
[ "https://mathoverflow.net/questions/210779", "https://mathoverflow.net", "https://mathoverflow.net/users/-1/" ]
Due to chaotic behaviour of the Solar System, it is not possible to precisely predict the evolution of the Solar System over 5 Gyr and the question of its long-term stability can only be answered in a statistical sense. For example, in <http://www.nature.com/nature/journal/v459/n7248/full/nature08096.html> (Existence of collisional trajectories of Mercury, Mars and Venus with the Earth, by J. Laskar and M. Gastineau) 2501 orbits with different initial conditions all consistent with our present knowledge of the parameters of the Solar System were traced out in computer simulations up to 5 Gyr. The main finding of the paper is that one percent of the solutions lead to a large enough increase in Mercury's eccentricity to allow its collisions with Venus or the Sun. Probably the most surprising result of the paper (see also <http://arxiv.org/abs/1209.5996>) is that in a pure Newtonian world the probability of collisions within 5 Gyr grows to 60 percent and therefore general relativity is crucial for long-term stability of the inner solar system. Many questions remain, however, about reliability of the present day consensus that the odds for the catastrophic destabilization of the inner planets are in the order of a few percent. I do not know if the effects of galactic tidal perturbations or possible perturbations from passing stars are taken into account. Also different numerical algorithms lead to statistically different results (see, for example, <http://arxiv.org/abs/1506.07602>). Some interesting historical background of solar system stability studies can be found in <http://arxiv.org/abs/1411.4930> (Michel Henon and the Stability of the Solar System, by Jacques Laskar).
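The reason an ensemble of 2501 orbits is needed at all is the exponential amplification of tiny uncertainties within a few Lyapunov times. That mechanism can be illustrated with a toy one-dimensional chaotic system (the logistic map below is a stand-in for illustration only, not a model of the Solar System's dynamics):

```python
# Two trajectories of the chaotic logistic map, started 1e-12 apart,
# separate to order one within a few dozen steps -- the same sensitivity
# that limits deterministic prediction of planetary orbits.

def logistic_orbit(x0, r=4.0, steps=60):
    """Iterate x -> r*x*(1-x), returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.3)
b = logistic_orbit(0.3 + 1e-12)
gap = [abs(x - y) for x, y in zip(a, b)]

print(f"initial gap {gap[0]:.0e}, largest gap over 60 steps {max(gap):.2g}")
# The separation grows roughly like exp(lambda * n) until it saturates,
# which is why individual long-term integrations can only be trusted
# in a statistical, ensemble sense.
```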
[This paper](http://iopscience.iop.org/0004-637X/683/2/1207/fulltext/) (Batygin and Laughlin, 2008) seems to indicate that we are doomed.
210,779
> > **Is the Solar System stable?** > > > You can see [this](https://en.wikipedia.org/wiki/Stability_of_the_Solar_System) Wikipedia page. In May 2015 I was at the conference of Cedric Villani at Sharif university of technology with this title: "Of planets, stars and eternity (stabilization and long-time behavior in classical celestial mechanics)" , at the end of this conference one of the students asked him this question and he laughed strangely(!) with no convincing answer! **Edit**: The purpose of "long-time" is timescale more than [Lyapunov time](https://en.wikipedia.org/wiki/Lyapunov_time), hence billions of years.
2015/07/04
[ "https://mathoverflow.net/questions/210779", "https://mathoverflow.net", "https://mathoverflow.net/users/-1/" ]
Due to chaotic behaviour of the Solar System, it is not possible to precisely predict the evolution of the Solar System over 5 Gyr and the question of its long-term stability can only be answered in a statistical sense. For example, in <http://www.nature.com/nature/journal/v459/n7248/full/nature08096.html> (Existence of collisional trajectories of Mercury, Mars and Venus with the Earth, by J. Laskar and M. Gastineau) 2501 orbits with different initial conditions all consistent with our present knowledge of the parameters of the Solar System were traced out in computer simulations up to 5 Gyr. The main finding of the paper is that one percent of the solutions lead to a large enough increase in Mercury's eccentricity to allow its collisions with Venus or the Sun. Probably the most surprising result of the paper (see also <http://arxiv.org/abs/1209.5996>) is that in a pure Newtonian world the probability of collisions within 5 Gyr grows to 60 percent and therefore general relativity is crucial for long-term stability of the inner solar system. Many questions remain, however, about reliability of the present day consensus that the odds for the catastrophic destabilization of the inner planets are in the order of a few percent. I do not know if the effects of galactic tidal perturbations or possible perturbations from passing stars are taken into account. Also different numerical algorithms lead to statistically different results (see, for example, <http://arxiv.org/abs/1506.07602>). Some interesting historical background of solar system stability studies can be found in <http://arxiv.org/abs/1411.4930> (Michel Henon and the Stability of the Solar System, by Jacques Laskar).
In a conference in Paris, Jacques Féjoz said (and I quote from memory) that the big planets seem to be stable, while the small ones chaotic. If I remember well, it was based on numerical evidence, intuition and the known results on the stability of the planar many-body problem...
210,779
> > **Is the Solar System stable?** > > > You can see [this](https://en.wikipedia.org/wiki/Stability_of_the_Solar_System) Wikipedia page. In May 2015 I was at the conference of Cedric Villani at Sharif university of technology with this title: "Of planets, stars and eternity (stabilization and long-time behavior in classical celestial mechanics)" , at the end of this conference one of the students asked him this question and he laughed strangely(!) with no convincing answer! **Edit**: The purpose of "long-time" is timescale more than [Lyapunov time](https://en.wikipedia.org/wiki/Lyapunov_time), hence billions of years.
2015/07/04
[ "https://mathoverflow.net/questions/210779", "https://mathoverflow.net", "https://mathoverflow.net/users/-1/" ]
Due to chaotic behaviour of the Solar System, it is not possible to precisely predict the evolution of the Solar System over 5 Gyr and the question of its long-term stability can only be answered in a statistical sense. For example, in <http://www.nature.com/nature/journal/v459/n7248/full/nature08096.html> (Existence of collisional trajectories of Mercury, Mars and Venus with the Earth, by J. Laskar and M. Gastineau) 2501 orbits with different initial conditions all consistent with our present knowledge of the parameters of the Solar System were traced out in computer simulations up to 5 Gyr. The main finding of the paper is that one percent of the solutions lead to a large enough increase in Mercury's eccentricity to allow its collisions with Venus or the Sun. Probably the most surprising result of the paper (see also <http://arxiv.org/abs/1209.5996>) is that in a pure Newtonian world the probability of collisions within 5 Gyr grows to 60 percent and therefore general relativity is crucial for long-term stability of the inner solar system. Many questions remain, however, about reliability of the present day consensus that the odds for the catastrophic destabilization of the inner planets are in the order of a few percent. I do not know if the effects of galactic tidal perturbations or possible perturbations from passing stars are taken into account. Also different numerical algorithms lead to statistically different results (see, for example, <http://arxiv.org/abs/1506.07602>). Some interesting historical background of solar system stability studies can be found in <http://arxiv.org/abs/1411.4930> (Michel Henon and the Stability of the Solar System, by Jacques Laskar).
The work of Wisdom is relevant (for example, see <http://rspa.royalsocietypublishing.org/content/413/1844/109.short>) but not conclusive. Numerical work suggests overall stability over very long time periods.
1,051,944
I have been using AWS SES for a year now for my SaaS. I added DKIM, SPF and DMARC. Yet some people are still not receiving emails. I provide instructions for them to add my domain to their safe senders list. That seems to resolve it. I would like to avoid end-users having to do that. I am wondering if using a dedicated IP would help to improve delivery. Anyone has experience with that?
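For context on the DMARC piece of that setup: the policy is published as a DNS TXT record at `_dmarc.<your-domain>`, and receivers read it as simple tag=value pairs. A minimal sketch of what such a record looks like and how it decomposes (the record values here are hypothetical examples, not the asker's actual configuration):

```python
def parse_dmarc(record):
    """Split a DMARC TXT record into its tag=value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

# Hypothetical record published at _dmarc.example.com
record = "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"
policy = parse_dmarc(record)
print(policy["p"])  # quarantine
```

An enforcing policy (`p=quarantine` or `p=reject`) together with aligned SPF/DKIM is usually what mailbox providers look for; a dedicated IP mainly helps once you can build and control its own sending reputation.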
2021/02/01
[ "https://serverfault.com/questions/1051944", "https://serverfault.com", "https://serverfault.com/users/615222/" ]
Yes, technically you can modify files in the underlying volume, but Gluster will **not** be notified about your changes, and therefore they may not be replicated to other Gluster nodes. This is **very much not recommended**, and could mean that your servers end up with different underlying files, which can lead to unpredictable behaviour/data loss.
Once I have several files copied directly to the brick, is there any way to make Gluster aware of those new files?
12,990
I am a student of computer science with interest in image processing. I have learned how to apply a few effects to images like making them grayscale, sketching them out of lines, etc. I would like to learn more about the algorithmic techniques behind creative manipulation of images like making them sepia-tone, smudging them, etc. **Can someone please point me in the right direction?** How do I learn the fundamentals of these algorithms?
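To give a concrete flavor of the per-pixel algorithms being asked about: the sepia-tone effect mentioned in the question is commonly implemented as a fixed linear transform of each RGB pixel, clamped to the valid range. A minimal sketch (the 3x3 coefficient matrix below is one widely used choice, not the only one):

```python
def sepia_pixel(rgb):
    """Apply a common sepia transform to one (r, g, b) pixel, clamping to 0-255."""
    r, g, b = rgb
    return (
        min(255, int(0.393 * r + 0.769 * g + 0.189 * b)),
        min(255, int(0.349 * r + 0.686 * g + 0.168 * b)),
        min(255, int(0.272 * r + 0.534 * g + 0.131 * b)),
    )

print(sepia_pixel((255, 255, 255)))  # (255, 255, 238): bright pixels drift warm
print(sepia_pixel((0, 0, 0)))        # (0, 0, 0): black stays black
```

Grayscale works the same way with a single weighted sum per pixel; many "filter" effects reduce to such per-pixel maps or to small neighborhood operations.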
2013/06/30
[ "https://cs.stackexchange.com/questions/12990", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/8418/" ]
I have been working on that very same thing recently, albeit for a very different purpose. Aside from SO, I have been referring to Google code examples and forums related to the platform and language that I am working in. Sometimes I found information and techniques from [Image Processing Online](http://www.ipol.im/). One technique that I found particularly helpful was to outline the processes that I wanted to do first and research each major programming decision, testing it as I go. I hope this helps.
If you already know the name of some specific image processing technique, Wikipedia has pretty good explanations -- such as ["closing"](http://en.wikipedia.org/wiki/closing_%28morphology%29) in [morphological image processing](http://en.wikipedia.org/wiki/morphological_image_processing). The [computer vision wiki](http://computervision.wikia.com/) is a good place to read and write about computer vision stuff that doesn't fit into an encyclopedia format -- HOWTOs, example code, etc. There exist many "type this, click there" tutorials on how to get a specific effect with a specific piece of software -- [GIMP Tutorials](http://en.wikibooks.org/wiki/GIMP/External_Tutorials), [Kdenlive/Video effects](http://en.wikibooks.org/wiki/Kdenlive/Video_effects), [MATLAB Image Processing Toolbox](http://en.wikibooks.org/wiki/MATLAB_Programming/Image_Processing_Toolbox), etc. Many universities have classes in image processing or computer graphics or both; a few have an entire department dedicated to image processing. Some of them even have a web site that has lots of information about image processing. A few even record class lectures and post them online. * [Virginia Image and Video Analysis laboratory](http://viva.ee.virginia.edu/) * [Berkeley Computer Vision Group](http://www.eecs.berkeley.edu/Research/Projects/CS/vision/) * [USC Computer Vision Laboratory](http://iris.usc.edu/USC-Computer-Vision.html) * [Coursera Computer Vision in English](https://www.coursera.org/course/computervision) * [Coursera Computer Vision in German](https://www.coursera.org/course/compvision) * [MIT Computer Vision Group](http://groups.csail.mit.edu/vision/) * [Cornell Graphics and Vision Group](http://rgb.cs.cornell.edu/)
12,990
I am a student of computer science with interest in image processing. I have learned how to apply a few effects to images like making them grayscale, sketching them out of lines, etc. I would like to learn more about the algorithmic techniques behind creative manipulation of images like making them sepia-tone, smudging them, etc. **Can someone please point me in the right direction?** How do I learn the fundamentals of these algorithms?
2013/06/30
[ "https://cs.stackexchange.com/questions/12990", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/8418/" ]
You seem to be interested in image enhancement techniques such as those used on Instagram to add "cool effects". These algorithms tend to be somewhat more advanced and proprietary. However, for basic image manipulation, which is the underlying basis for these algorithms, there are many places to start. Here are a few: * [Image processing basics](http://www.imageprocessingbasics.com/) tutorials with Java applets. * [Image processing basics tutorial](http://www.imageprocessingplace.com/root_files_V3/tutorials.htm) Zip files of many tutorials; the overall site has a comprehensive list of resources, e.g. books. * [CMSC 426 Image processing U Maryland/Jacobs 2003](http://www.cs.umd.edu/~djacobs/CMSC426/CMSC426.htm) There are many university classes, some with very good online notes/resources. * [Image processing in MATLAB](http://robotix.in/tutorials/category/imageprocessing/ip_matlab) MATLAB is a leading mathematical package for image processing, and math packages can simplify many algorithms. Another leading package for image manipulation is Mathematica. * [Coursera](https://www.coursera.org/course/images), now a leading online course site, has a course on Image and Video Processing by Sapiro/Duke University. * Another option for the hacker types is taking an open source image processing program and reverse-engineering the code, i.e. learning the algorithms by looking at code and/or tweaking it. [Gimp](http://en.wikipedia.org/wiki/GIMP) is a very full-featured open source image processing program with many advanced effects, some as "plugins". Note that most image techniques have many fundamental connections to [linear/matrix/vector algebra](http://en.wikipedia.org/wiki/Matrix_%28mathematics%29), and that's an important mathematical area to study for basic background.
If you already know the name of some specific image processing technique, Wikipedia has pretty good explanations -- such as ["closing"](http://en.wikipedia.org/wiki/closing_%28morphology%29) in [morphological image processing](http://en.wikipedia.org/wiki/morphological_image_processing). The [computer vision wiki](http://computervision.wikia.com/) is a good place to read and write about computer vision stuff that doesn't fit into an encyclopedia format -- HOWTOs, example code, etc. There exist many "type this, click there" tutorials on how to get a specific effect with a specific piece of software -- [GIMP Tutorials](http://en.wikibooks.org/wiki/GIMP/External_Tutorials), [Kdenlive/Video effects](http://en.wikibooks.org/wiki/Kdenlive/Video_effects), [MATLAB Image Processing Toolbox](http://en.wikibooks.org/wiki/MATLAB_Programming/Image_Processing_Toolbox), etc. Many universities have classes in image processing or computer graphics or both; a few have an entire department dedicated to image processing. Some of them even have a web site that has lots of information about image processing. A few even record class lectures and post them online. * [Virginia Image and Video Analysis laboratory](http://viva.ee.virginia.edu/) * [Berkeley Computer Vision Group](http://www.eecs.berkeley.edu/Research/Projects/CS/vision/) * [USC Computer Vision Laboratory](http://iris.usc.edu/USC-Computer-Vision.html) * [Coursera Computer Vision in English](https://www.coursera.org/course/computervision) * [Coursera Computer Vision in German](https://www.coursera.org/course/compvision) * [MIT Computer Vision Group](http://groups.csail.mit.edu/vision/) * [Cornell Graphics and Vision Group](http://rgb.cs.cornell.edu/)
12,990
I am a student of computer science with interest in image processing. I have learned how to apply a few effects to images like making them grayscale, sketching them out of lines, etc. I would like to learn more about the algorithmic techniques behind creative manipulation of images like making them sepia-tone, smudging them, etc. **Can someone please point me in the right direction?** How do I learn the fundamentals of these algorithms?
2013/06/30
[ "https://cs.stackexchange.com/questions/12990", "https://cs.stackexchange.com", "https://cs.stackexchange.com/users/8418/" ]
What you are referring to is not a branch of just image processing, but specifically [digital image processing.](http://en.wikipedia.org/wiki/Digital_image_processing) Also, [linear filtering](http://en.wikipedia.org/wiki/Linear_filter) should help a bit.
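To make the linear-filtering pointer concrete: many classic effects (blur, smudge-like softening, sharpen, edge detection) are a single 2D convolution of the image with a small kernel. A minimal pure-Python sketch using a normalized 3x3 box-blur kernel (clamping coordinates at the border is one of several common edge-handling choices):

```python
def convolve2d(image, kernel):
    """Convolve a 2D grayscale image (list of rows) with a small kernel.

    Border pixels are handled by clamping coordinates to the image edge.
    """
    h, w = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    iy = min(h - 1, max(0, y + ky - kh // 2))
                    ix = min(w - 1, max(0, x + kx - kw // 2))
                    acc += image[iy][ix] * kernel[ky][kx]
            out[y][x] = acc
    return out

box_blur = [[1 / 9] * 3 for _ in range(3)]
flat = [[100] * 4 for _ in range(4)]
blurred = convolve2d(flat, box_blur)
# A constant image is unchanged by a normalized blur kernel (up to float rounding).
```

Swapping in a different kernel (e.g. a Laplacian for edge detection) changes the effect without touching the convolution loop.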
If you already know the name of some specific image processing technique, Wikipedia has pretty good explanations -- such as ["closing"](http://en.wikipedia.org/wiki/closing_%28morphology%29) in [morphological image processing](http://en.wikipedia.org/wiki/morphological_image_processing). The [computer vision wiki](http://computervision.wikia.com/) is a good place to read and write about computer vision stuff that doesn't fit into an encyclopedia format -- HOWTOs, example code, etc. There exist many "type this, click there" tutorials on how to get a specific effect with a specific piece of software -- [GIMP Tutorials](http://en.wikibooks.org/wiki/GIMP/External_Tutorials), [Kdenlive/Video effects](http://en.wikibooks.org/wiki/Kdenlive/Video_effects), [MATLAB Image Processing Toolbox](http://en.wikibooks.org/wiki/MATLAB_Programming/Image_Processing_Toolbox), etc. Many universities have classes in image processing or computer graphics or both; a few have an entire department dedicated to image processing. Some of them even have a web site that has lots of information about image processing. A few even record class lectures and post them online. * [Virginia Image and Video Analysis laboratory](http://viva.ee.virginia.edu/) * [Berkeley Computer Vision Group](http://www.eecs.berkeley.edu/Research/Projects/CS/vision/) * [USC Computer Vision Laboratory](http://iris.usc.edu/USC-Computer-Vision.html) * [Coursera Computer Vision in English](https://www.coursera.org/course/computervision) * [Coursera Computer Vision in German](https://www.coursera.org/course/compvision) * [MIT Computer Vision Group](http://groups.csail.mit.edu/vision/) * [Cornell Graphics and Vision Group](http://rgb.cs.cornell.edu/)
37,373
We have been introduced to 5 characters (all Impel Down Jailers) with "Awakened" Zoan Devil Fruits. Minotaurus is the most famous of them. A normal Zoan fruit allows the user to transform between Natural (normally human), Animal (based on the devil fruit type), and Hybrid forms. I can't, however, remember any of the "Mino" guards in any form besides a exageratedly strong version of the Hybrid one. At the time I thought "Awakened" meant that the fruit was always active so they were stuck as partially animals (as well as give the strengths Crocodile mentioned). With Doflamingo's Paramecia fruit, however, awakening appears to just unlock new abilities. This made me doubt my previous understanding. Wiki doesn't help and I don't have time to rewatch everything. Do we know whether Awakened Zoan Devil Fruit users retain the ability to transform into multiple forms? If the answer to this is "Oda hasn't told us yet", that is sufficient.
2016/11/17
[ "https://anime.stackexchange.com/questions/37373", "https://anime.stackexchange.com", "https://anime.stackexchange.com/users/3561/" ]
My theory about its "real name" is either Towako, the name of Hakumen's avatar (maybe Hakumen named its avatar after its real name for self-satisfaction?), or the second probable name, Kirio/Inasa, as in Kirio Inasa, the boy whom it kidnapped; after the kidnapping, it probably changed the name as it liked. I personally think Towako is the "real name".
In my opinion, I think Hakumen’s real name might have something to do with the baby shown as he vaporizes, and with yin/yang, because in the end he does not fall as negative ki (which creates other youkai) but floats upwards as positive ki. Maybe in the end he died a peaceful death and could now be reborn as a human, not a mass of hate. We can also assume he would have a woman's name, since his voice when he whispers “The name i was called was...” is a feminine one.
148,852
I deployed a webpart by admin user in Visual Studio 2013 through in SharePoint 2013. It's working fine. But when I deploy using another user which already have admin permission then it is showing below error. > > Error occurred in deployment step 'Retract Solution': Object reference not set to an instance of an object. > > >
2015/07/10
[ "https://sharepoint.stackexchange.com/questions/148852", "https://sharepoint.stackexchange.com", "https://sharepoint.stackexchange.com/users/37342/" ]
It's most likely that your SPSite or SPWeb object is null because of insufficient rights for the user. Can you make sure that the user has rights on the web app you are deploying the solution to? A simple way to verify that the user has sufficient rights is to create a new test solution and deploy it; that will make things clear.
I had the same problem and I tried whatever was suggested in the blog, like uninstalling the solution etc. Then I did a restart of the system, opened a new solution and tried to deploy. But this time the error was very clear: I got a message that it was due to an increase in database size. After reducing it, deployment worked correctly.
148,852
I deployed a webpart by admin user in Visual Studio 2013 through in SharePoint 2013. It's working fine. But when I deploy using another user which already have admin permission then it is showing below error. > > Error occurred in deployment step 'Retract Solution': Object reference not set to an instance of an object. > > >
2015/07/10
[ "https://sharepoint.stackexchange.com/questions/148852", "https://sharepoint.stackexchange.com", "https://sharepoint.stackexchange.com/users/37342/" ]
It's most likely that your SPSite or SPWeb object is null because of insufficient rights for the user. Can you make sure that the user has rights on the web app you are deploying the solution to? A simple way to verify that the user has sufficient rights is to create a new test solution and deploy it; that will make things clear.
I had the same problem and my solution was related to running out of disk space for the deployment. P.S.: the database disk.
148,852
I deployed a webpart by admin user in Visual Studio 2013 through in SharePoint 2013. It's working fine. But when I deploy using another user which already have admin permission then it is showing below error. > > Error occurred in deployment step 'Retract Solution': Object reference not set to an instance of an object. > > >
2015/07/10
[ "https://sharepoint.stackexchange.com/questions/148852", "https://sharepoint.stackexchange.com", "https://sharepoint.stackexchange.com/users/37342/" ]
I had the same problem and my solution was related to running out of disk space for the deployment. P.S.: the database disk.
I had the same problem and I tried whatever was suggested in the blog, like uninstalling the solution etc. Then I did a restart of the system, opened a new solution and tried to deploy. But this time the error was very clear: I got a message that it was due to an increase in database size. After reducing it, deployment worked correctly.
18,962
In Western Europe, we currently have a long spell of warm and dry weather. The perfect moment to combine sunbathing with working from home. However, outside in the sun, the legibility of my laptop screen is greatly reduced when compared to inside the house/office, even when I maximize the display's brightness. Are there any tricks I can use to increase the legibility (and ultimately my productivity)? I'll share one trick I often use, but I'm definitely interested in more suggestions.
2018/07/28
[ "https://lifehacks.stackexchange.com/questions/18962", "https://lifehacks.stackexchange.com", "https://lifehacks.stackexchange.com/users/5345/" ]
I forget where I saw/heard this tip, but I often use a [cardboard](https://en.wikipedia.org/wiki/Cardboard) box slightly larger than the laptop to provide shade for the display. At least for me, this makes it more legible. An additional benefit is that this also reduces the amount of sunlight on the laptop, which keeps it cooler and avoids the processor being underclocked, which would decrease performance. A slight disadvantage is that it also partially blocks the sun's rays to my upper body and arms. Here is how it looks: [![enter image description here](https://i.stack.imgur.com/LtVDtm.jpg)](https://i.stack.imgur.com/LtVDt.jpg)
The cardboard box suggested by Glorfindel, and improved upon by Stan's comment, is fantastic. I have another suggestion that can be used with the box or by itself. Wear a dark shirt, and make sure whatever is behind you is dark. (A wall, a shaded forest, etc.) Much of the reason you can't see the screen is because of reflected light. Therefore, if the screen is facing something dark, there will be less light for it to reflect, and you will be able to see the screen's display better.
18,962
In Western Europe, we currently have a long spell of warm and dry weather. The perfect moment to combine sunbathing with working from home. However, outside in the sun, the legibility of my laptop screen is greatly reduced when compared to inside the house/office, even when I maximize the display's brightness. Are there any tricks I can use to increase the legibility (and ultimately my productivity)? I'll share one trick I often use, but I'm definitely interested in more suggestions.
2018/07/28
[ "https://lifehacks.stackexchange.com/questions/18962", "https://lifehacks.stackexchange.com", "https://lifehacks.stackexchange.com/users/5345/" ]
I forget where I saw/heard this tip, but I often use a [cardboard](https://en.wikipedia.org/wiki/Cardboard) box slightly larger than the laptop to provide shade for the display. At least for me, this makes it more legible. An additional benefit is that this also reduces the amount of sunlight on the laptop, which keeps it cooler and avoids the processor being underclocked, which would decrease performance. A slight disadvantage is that it also partially blocks the sun's rays to my upper body and arms. Here is how it looks: [![enter image description here](https://i.stack.imgur.com/LtVDtm.jpg)](https://i.stack.imgur.com/LtVDt.jpg)
To increase the legibility of a computer screen, arrangements should be made to keep sun rays from falling directly on the screen. This can be done by covering the top of the computer with a wooden surface or a cloth, in such a way that the area of the wooden surface or the cloth is larger than the area of the top surface of the computer.
18,962
In Western Europe, we currently have a long spell of warm and dry weather. The perfect moment to combine sunbathing with working from home. However, outside in the sun, the legibility of my laptop screen is greatly reduced when compared to inside the house/office, even when I maximize the display's brightness. Are there any tricks I can use to increase the legibility (and ultimately my productivity)? I'll share one trick I often use, but I'm definitely interested in more suggestions.
2018/07/28
[ "https://lifehacks.stackexchange.com/questions/18962", "https://lifehacks.stackexchange.com", "https://lifehacks.stackexchange.com/users/5345/" ]
I forget where I saw/heard this tip, but I often use a [cardboard](https://en.wikipedia.org/wiki/Cardboard) box slightly larger than the laptop to provide shade for the display. At least for me, this makes it more legible. An additional benefit is that this also reduces the amount of sunlight on the laptop, which keeps it cooler and avoids the processor being underclocked, which would decrease performance. A slight disadvantage is that it also partially blocks the sun's rays to my upper body and arms. Here is how it looks: [![enter image description here](https://i.stack.imgur.com/LtVDtm.jpg)](https://i.stack.imgur.com/LtVDt.jpg)
In my experience, many command line users have a bright font on a dark background in their terminal emulators (usually white on black, or matrix-movie green on black), while in bright environments (e.g. working outside), a dark font on a bright background is more legible (black on white). Of course, this mainly addresses command line users; afaik, office applications are already dark on bright.
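For command-line work specifically, the swap doesn't even require changing the emulator's theme: standard ANSI SGR escape codes can flip foreground and background per program. A small sketch (`\033[7m` is reverse video and `\033[0m` resets attributes; both are ECMA-48 standard codes supported by virtually all terminal emulators):

```python
REVERSE = "\033[7m"  # swap foreground/background (reverse video)
RESET = "\033[0m"    # restore the terminal's default attributes

def sunlight_mode(text):
    """Wrap text in reverse video, turning light-on-dark into dark-on-light."""
    return f"{REVERSE}{text}{RESET}"

print(sunlight_mode("dark text on a bright background"))
```

The same idea scales up: a shell alias or pager wrapper emitting these codes gives an outdoor-readable scheme without touching the emulator's configuration.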
18,962
In Western Europe, we currently have a long spell of warm and dry weather. The perfect moment to combine sunbathing with working from home. However, outside in the sun, the legibility of my laptop screen is greatly reduced when compared to inside the house/office, even when I maximize the display's brightness. Are there any tricks I can use to increase the legibility (and ultimately my productivity)? I'll share one trick I often use, but I'm definitely interested in more suggestions.
2018/07/28
[ "https://lifehacks.stackexchange.com/questions/18962", "https://lifehacks.stackexchange.com", "https://lifehacks.stackexchange.com/users/5345/" ]
The cardboard box suggested by Glorfindel, and improved upon by Stan's comment, is fantastic. I have another suggestion that can be used with the box or by itself. Wear a dark shirt, and make sure whatever is behind you is dark. (A wall, a shaded forest, etc.) Much of the reason you can't see the screen is because of reflected light. Therefore, if the screen is facing something dark, there will be less light for it to reflect, and you will be able to see the screen's display better.
To increase the legibility of a computer screen, arrangements should be made to keep sun rays from falling directly on the screen. This can be done by covering the top of the computer with a wooden surface or a cloth, in such a way that the area of the wooden surface or the cloth is larger than the area of the top surface of the computer.
18,962
In Western Europe, we currently have a long spell of warm and dry weather. The perfect moment to combine sunbathing with working from home. However, outside in the sun, the legibility of my laptop screen is greatly reduced when compared to inside the house/office, even when I maximize the display's brightness. Are there any tricks I can use to increase the legibility (and ultimately my productivity)? I'll share one trick I often use, but I'm definitely interested in more suggestions.
2018/07/28
[ "https://lifehacks.stackexchange.com/questions/18962", "https://lifehacks.stackexchange.com", "https://lifehacks.stackexchange.com/users/5345/" ]
In my experience, many command line users have a bright font on a dark background in their terminal emulators (usually white on black, or matrix-movie green on black), while in bright environments (e.g. working outside), a dark font on a bright background is more legible (black on white). Of course, this mainly addresses command line users; afaik, office applications are already dark on bright.
To increase the legibility of a computer screen, arrangements should be made to keep sun rays from falling directly on the screen. This can be done by covering the top of the computer with a wooden surface or a cloth, in such a way that the area of the wooden surface or the cloth is larger than the area of the top surface of the computer.
3,792,402
I have an update to my app that has been reviewed and is **Pending Developer Release**. I have found a bug in this version and would actually like to reject this binary and keep my existing binary. Once I fix the bug, I would like to re-upload a new binary. Is this possible?
2010/09/25
[ "https://Stackoverflow.com/questions/3792402", "https://Stackoverflow.com", "https://Stackoverflow.com/users/457940/" ]
It took me a little while to find the answer to this. It can be done, but the answer isn't very intuitive. In iTunes Connect, select your new application and load the details for it. Click the "Binary Details" link. Once this screen is open, there will be a "Reject this Binary" button. I found this within the [iTunes Developer Guide](http://itunesconnect.apple.com/docs/iTunesConnect_DeveloperGuide.pdf) under the Rejecting Your Binary section. Apple really could have made the button more obvious. > > **Note:** the linked PDF now has only 1 page and no other information. > > >
From the [iTunes Connect Developer Guide:](http://itunesconnect.apple.com/docs/iTunesConnect_DeveloperGuide.pdf) NOTE: You can only use the Version Release Control on app updates. It is not available for the first version of your app since you already have the ability to control when your first version goes live, using the Availability Date setting within Rights and Pricing. If you decide that you do not want to ever release a Pending Developer Release version, you will need to click the Release This Version button anyway, in order to create a new version of your app. You are not permitted to skip over an entire version.
3,792,402
I have an update to my app that has been reviewed and is **Pending Developer Release**. I have found a bug in this version and would actually like to reject this binary and keep my existing binary. Once I fix the bug, I would like to re-upload a new binary. Is this possible?
2010/09/25
[ "https://Stackoverflow.com/questions/3792402", "https://Stackoverflow.com", "https://Stackoverflow.com/users/457940/" ]
Attaching screenshots from the App Store Connect (previously known as iTunes Connect) interface: Select the version, then click on "cancel this release" > > You can only edit some information while your version is pending developer release. To edit all information, **cancel this release**. > > > [![ Stor](https://i.stack.imgur.com/yTgvX.png)](https://i.stack.imgur.com/yTgvX.png) You will be prompted with a confirmation dialog: > > Are you sure you want to cancel the release of version 2.6? > > > If you cancel this release, you must choose a new binary and resubmit this app version. Your version will be reviewed again by App Review. > > > ![Cancel Release dialog](https://i.stack.imgur.com/bRYGR.png)
From the [iTunes Connect Developer Guide:](http://itunesconnect.apple.com/docs/iTunesConnect_DeveloperGuide.pdf) NOTE: You can only use the Version Release Control on app updates. It is not available for the first version of your app since you already have the ability to control when your first version goes live, using the Availability Date setting within Rights and Pricing. If you decide that you do not want to ever release a Pending Developer Release version, you will need to click the Release This Version button anyway, in order to create a new version of your app. You are not permitted to skip over an entire version.
3,792,402
I have an update to my app that has been reviewed and is **Pending Developer Release**. I have found a bug in this version and would actually like to reject this binary and keep my existing binary. Once I fix the bug, I would like to re-upload a new binary. Is this possible?
2010/09/25
[ "https://Stackoverflow.com/questions/3792402", "https://Stackoverflow.com", "https://Stackoverflow.com/users/457940/" ]
Explanation of kernix's answer: * select the app in iTunes Connect; * select the app version; * you will see "You can only edit some information while your version is pending developer release. To edit all information, cancel this release." at the top of the page. Just click on the "cancel this release" link.
From the [iTunes Connect Developer Guide](http://itunesconnect.apple.com/docs/iTunesConnect_DeveloperGuide.pdf):

> NOTE: You can only use the Version Release Control on app updates. It is not available for the first version of your app since you already have the ability to control when your first version goes live, using the Availability Date setting within Rights and Pricing. If you decide that you do not want to ever release a Pending Developer Release version, you will need to click the Release This Version button anyway, in order to create a new version of your app. You are not permitted to skip over an entire version.
3,792,402
I have an update to my app that has been reviewed and is **Pending Developer Release**. I have found a bug in this version and would actually like to reject this binary and keep my existing binary. Once I fix the bug, I would like to re-upload a new binary. Is this possible?
2010/09/25
[ "https://Stackoverflow.com/questions/3792402", "https://Stackoverflow.com", "https://Stackoverflow.com/users/457940/" ]
It took me a little while to find the answer to this. It can be done, but the answer isn't very intuitive.

In iTunes Connect, select your new application and load the details for it. Click the "Binary Details" link. Once this screen is open, there will be a "Reject this Binary" button on it.

I found this in the [iTunes Developer Guide](http://itunesconnect.apple.com/docs/iTunesConnect_DeveloperGuide.pdf) under the Rejecting Your Binary section. Apple really could have made the button more obvious.

> **Note:** the linked PDF now has only 1 page and no other information.
Attaching screenshots from the App Store Connect (previously known as iTunes Connect) interface:

Select the version, then click on "cancel this release"

> You can only edit some information while your version is pending developer release. To edit all information, **cancel this release**.

[![App Store Connect version page](https://i.stack.imgur.com/yTgvX.png)](https://i.stack.imgur.com/yTgvX.png)

You will be prompted with a confirmation dialog:

> Are you sure you want to cancel the release of version 2.6?
>
> If you cancel this release, you must choose a new binary and resubmit this app version. Your version will be reviewed again by App Review.

![Cancel Release dialog](https://i.stack.imgur.com/bRYGR.png)
3,792,402
I have an update to my app that has been reviewed and is **Pending Developer Release**. I have found a bug in this version and would actually like to reject this binary and keep my existing binary. Once I fix the bug, I would like to re-upload a new binary. Is this possible?
2010/09/25
[ "https://Stackoverflow.com/questions/3792402", "https://Stackoverflow.com", "https://Stackoverflow.com/users/457940/" ]
It took me a little while to find the answer to this. It can be done, but the answer isn't very intuitive.

In iTunes Connect, select your new application and load the details for it. Click the "Binary Details" link. Once this screen is open, there will be a "Reject this Binary" button on it.

I found this in the [iTunes Developer Guide](http://itunesconnect.apple.com/docs/iTunesConnect_DeveloperGuide.pdf) under the Rejecting Your Binary section. Apple really could have made the button more obvious.

> **Note:** the linked PDF now has only 1 page and no other information.
Explanation of kernix's answer:

* Select the app in iTunes Connect.
* Select the app version.
* You will see "You can only edit some information while your version is pending developer release. To edit all information, cancel this release." at the top of the page. Just click on the "cancel this release" link.
3,792,402
I have an update to my app that has been reviewed and is **Pending Developer Release**. I have found a bug in this version and would actually like to reject this binary and keep my existing binary. Once I fix the bug, I would like to re-upload a new binary. Is this possible?
2010/09/25
[ "https://Stackoverflow.com/questions/3792402", "https://Stackoverflow.com", "https://Stackoverflow.com/users/457940/" ]
Attaching screenshots from the App Store Connect (previously known as iTunes Connect) interface:

Select the version, then click on "cancel this release"

> You can only edit some information while your version is pending developer release. To edit all information, **cancel this release**.

[![App Store Connect version page](https://i.stack.imgur.com/yTgvX.png)](https://i.stack.imgur.com/yTgvX.png)

You will be prompted with a confirmation dialog:

> Are you sure you want to cancel the release of version 2.6?
>
> If you cancel this release, you must choose a new binary and resubmit this app version. Your version will be reviewed again by App Review.

![Cancel Release dialog](https://i.stack.imgur.com/bRYGR.png)
Explanation of kernix's answer:

* Select the app in iTunes Connect.
* Select the app version.
* You will see "You can only edit some information while your version is pending developer release. To edit all information, cancel this release." at the top of the page. Just click on the "cancel this release" link.
13,525
Hi guys,

I'm working on the sound design for an animation and I need to create the sound of fairy dust. Visually, think magic wand, or the sparkles that surround Tinkerbell: light and magical, darting around. I'm thinking the sound needs to be light, high-frequency and glistening, maybe chime-like but not too musical.

I have been recording bells and glockenspiel with various processing, but it's not quite getting there. Have any of you had any experience creating sounds for fairy dust or magical sparkles? It would be great to hear some techniques on how you have achieved this abstract sound.

Many thanks,
Tom
2012/04/13
[ "https://sound.stackexchange.com/questions/13525", "https://sound.stackexchange.com", "https://sound.stackexchange.com/users/3766/" ]
Have you tried the Bell Tree? It's definitely the staple for that type of sound. Check out the samples [here](http://www.compositiontoday.com/sound_bank/percussion/bell_tree.asp). It is a bit of a cliché, but you can always process it more to make it more unique.

Perhaps also think about other layers to go with it. This depends very much on how it looks visually: is it real world (like smoke or something) or is it CGI particle-type effects?

Sometimes it helps to start with lots of very small sounds and find ways of triggering them at different rates, with a bit of randomness. This can be a bit difficult with traditional sequencers, but is much easier in something like Max/MSP or PD. Because they're not linear, you don't have to paste loads of individual sounds onto a timeline; just set up the patch and adjust the settings. I've probably got a few patches around if you're interested.

Banks of tuned resonant filters can also add that bell-like resonance to otherwise non-tuned sounds. Good luck!
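The "lots of very small sounds triggered at random rates" idea doesn't strictly require Max/MSP or PD; as a rough illustration, here is a minimal Python sketch that sums many short, exponentially decaying high-frequency sine pings at random times. All parameter values (ping count, frequency range, decay rate) are illustrative guesses for a "glistening" texture, not anything specified in the answer above.

```python
import math
import random

def sparkle_texture(duration=2.0, sr=44100, n_pings=60, seed=42):
    """Sum many tiny bell-like pings at random times and pitches."""
    rng = random.Random(seed)                 # seeded for repeatability
    out = [0.0] * int(duration * sr)
    ping_len = int(0.15 * sr)                 # each ping lasts ~150 ms
    for _ in range(n_pings):
        freq = rng.uniform(2000.0, 8000.0)    # high, "glistening" range
        start = rng.randrange(len(out) - ping_len)
        for i in range(ping_len):
            t = i / sr
            # decaying sine ping, mixed quietly into the output buffer
            out[start + i] += 0.2 * math.sin(2 * math.pi * freq * t) * math.exp(-40.0 * t)
    peak = max(abs(s) for s in out)
    return [s / peak for s in out]            # normalise to -1..1

audio = sparkle_texture()
```

Re-running with a different `seed` (or randomising the decay per ping) gives endless variations of the same texture, which is essentially what a non-linear patch buys you over pasting sounds onto a timeline.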
You can always use matchstick striking sounds instead of white noise. They have their own dynamics, and I find them very easy to use on abstract sounds.