Executing shell command containing quotations in Perl I have a method that lists files/folders to be deleted on Unix. Then I delete them with this code:

```
for my $line (@linesFiles) {
    my $command = "rm '$line';";
    # or my $command = "rmdir '$line';";
    my ($stdout, $stderr) = capture { system ($command); };
}
```

Most of the time it works, but sometimes files/folders have incorrect names containing quotation marks, like `some\folder\incorrect'name` or `some\folder\incorrect"name`. I need to delete those files/folders as well, but with my code I get an EOF error or a "file/folder does not exist" error. When using `q` or `qq`, the quotation marks were removed from the filename, again resulting in a "file/folder does not exist" error. Would anybody help me modify the code so it can delete files/folders containing any potentially dangerous (at least for this case) characters like `" ' $ { }`?
To build a shell command, you can use [String::ShellQuote](https://metacpan.org/pod/String::ShellQuote) (or [Win32::ShellQuote](https://metacpan.org/pod/Win32::ShellQuote)). ``` use String::ShellQuote qw( shell_quote ); my $shell_cmd = shell_quote( "rm", "--", $qfn ); system( $shell_cmd ); die( "Couldn't launch shell to unlink \"$qfn\": $!\n" ) if $? == -1; die( "Shell killed by signal ".( $? & 0x7F )." while trying to unlink \"$qfn\"\n" ) if $? & 0x7F; die( "Shell exited with error ".( $? >> 8 )." while trying to unlink \"$qfn\"\n" ) if $? >> 8; ``` --- But why involve a shell at all? You can use the multi-argument form of `system`. ``` system( "rm", "--", $qfn ); die( "Couldn't launch rm to unlink \"$qfn\": $!\n" ) if $? == -1; die( "rm killed by signal ".( $? & 0x7F )." while trying to unlink \"$qfn\"\n" ) if $? & 0x7F; die( "rm exited with error ".( $? >> 8 )." while trying to unlink \"$qfn\"\n" ) if $? >> 8; ``` --- But why involve an external tool at all. You can use [`unlink`](https://perldoc.perl.org/functions/unlink) to delete files. ``` unlink( $qfn ) or die( "Can't unlink \"$qfn\": $!\n" ); ```
How much will the performance difference be between my own XML parser and the System.Xml namespace? I am trying to write an XML parser for my game, since the size of the final build increases by 1+ MB when using the `System.Xml` namespace. The parser class is a singleton and will be ready for access anytime in the game. Though the amount of data I am going to handle is not much, I am still worried about performance (since it is a game, I cannot afford to sacrifice any performance). Is there any way to handle the parsing effectively? Btw, I am using C#, and if a tag named `<tag>` is there, I just break the string into pieces searching for `<tag>` and `</tag>`. This continues recursively until the whole string is broken down completely, and the result is saved in a jagged list class I have created. Are there any ways I can improve my method, or should I just go with the System.Xml namespace? Another note: the XML data comes from a server, not from a local file.
Some notes (this should be a comment, but is too long):

1. I could not reproduce your problem. Using a console application, I added a reference to `System.Xml`, created an `XmlDocument`, used it, and compiled, and didn't see any meaningful increase in size.
2. `System.Xml` is part of the .NET base class library. Generally, you can count on it being on every machine that runs .NET (according to MSDN, [XmlDocument](http://msdn.microsoft.com/en-us/library/system.xml.xmldocument.aspx) is available in XNA, for example, but not in the portable class library).
3. If it is a part of your framework, make sure you did not set [Copy Local](http://msdn.microsoft.com/en-us/library/t1zz5y8c.aspx) = True on the reference. The default should be false, but if it is true, it will copy a 900 KB DLL to your target folder.
4. No one can ever answer which is faster - the answer is extremely relative. To what types of XMLs (small? large?)? On what types of machines? What operations will your users do more with these XMLs? Only you can answer these questions, by profiling your code.
5. Even if you discover XmlDocument is slow or too large, there are countless [XML parsers](http://nuget.org/packages?q=parse%20xml) for .NET - maybe one of them is good for you. (Even .NET has `XDocument`, but it also requires `System.Xml.dll`, so no gain there.)
6. **Generally**, multiple string manipulations are slow. If the method you describe is constantly searching and splitting the string, it *sounds* slow - I doubt you need to scan the string more than once to parse XML. See the sketch below.
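To put point 6 in perspective, here is a minimal sketch of handing the string received from the server to `XmlDocument` once and querying the result, instead of repeatedly splitting the string by hand. The `<levels>`/`<level>` element and attribute names are invented purely for illustration; your real payload will of course look different:

```
using System;
using System.Xml;

class XmlFromServerDemo
{
    static void Main()
    {
        // Stand-in for the XML string your game received from the server.
        string payload =
            "<levels><level id=\"1\" name=\"Intro\"/><level id=\"2\" name=\"Caves\"/></levels>";

        var doc = new XmlDocument();
        doc.LoadXml(payload);   // one parse pass, done by the built-in parser

        foreach (XmlNode level in doc.SelectNodes("/levels/level"))
        {
            Console.WriteLine("{0}: {1}",
                level.Attributes["id"].Value,
                level.Attributes["name"].Value);
        }
    }
}
```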
VBA - new built-in enumerations and backwards compatibility msoGraphic is an MsoShapeType enumeration that is available in Office PowerPoint 2016 and later (versions which can handle svg graphics). If you attempt to check an MsoShapeType against msoGraphic in earlier versions of Office you will get a compile error - as msoGraphic is not defined (in any module containing Option Explicit). I am handling this by putting this property in a module where Option Explicit is NOT declared - and calling it from anywhere that needs the value of the constant. The module only contains this property (and any other properties that need to handle any other constants in this way). ``` Public Property Get myMsoGraphic() As Long If msoGraphic = 0 Then myMsoGraphic = 28 Else myMsoGraphic = msoGraphic End If End Property ``` I could of course have just re-declared msoGraphic as a constant with value 28, but best practice seems to be that you should avoid using the actual value and use the enumerate constant instead - in case the value gets changed at some point in the future (which I guess is probably highly unlikely). Does this seem like the best way to handle this situation?
Without `Option Explicit`, the `msoGraphic` identifier in that property scope is a `Variant/Empty`; there's an implicit type conversion happening when you do this: > > > ``` > If msoGraphic = 0 Then > > ``` > > Sure `vbEmpty` will equate to `0`, or even `vbNullString` or `""`, but that's after converting to a comparable type (`Integer`, or `String`). There's a better way. ``` If IsEmpty(msoGraphic) Then ``` The `IsEmpty` function will only ever return `True` when given a `Variant/Empty` value - which is exactly what we're dealing with here. > > *I could of course have just re-declared msoGraphic as a constant with value 28, but best practice seems to be that you should avoid using the actual value and use the enumerate constant instead* > > > One doesn't exclude the other. If you define a public constant in an appropriately named standard module (e.g. `OfficeConstants`), *and use it*, then you *are* adhering to the best practice. What happens then is deliberate *shadowing* of the `MsoShapeType.msoGraphic` declaration - something [Rubberduck would normally warn about](http://rubberduckvba.com/Inspections/Details/ShadowedDeclaration), but with a descriptive `@Ignore` or `@IgnoreModule` annotation comment, the intention is clarified, and the static code analysis tool knows to ignore them - and with a link to the official documentation, you ensure the values correctly match the actual documented underlying values for each identifier: ``` '@IgnoreModule ShadowedDeclaration: these constants are only available in Office 2016+ Option Explicit 'https://docs.microsoft.com/en-us/office/vba/api/office.msoshapetype Public Const msoGraphic As Long = 28 '... ``` What you want to avoid, is code like this, where `28` is some magic value that has no clear meaning: ``` If shapeType = 28 Then ``` > > *Does this seem like the best way to handle this situation?* > > > The problem is that you can't name your property `msoGraphic` (well you *could*, but then you'd have to fully-qualify the `msoGraphic` constant, and then that wouldn't be compilable, even without `Option Explicit`), so any code (hopefully with `Option Explicit` specified) that means to use the name `msoGraphic` now needs to use `myMsoGraphic` instead, and that isn't ideal, because it *adds* to the overall cognitive load: you, future you, and eventual maintainers have to remember to avoid `msoGraphic` and use `myMsoGraphic` instead, whereas with a `Public Const` in a standard module that hides/shadows the constant from the referenced PowerPoint library when it exists, usage is much more seamless. That said, while VBA is case-insensitive, a `camelCase` public member clashes with the naming convention of pretty much *everything else* in your standard libraries - enum members only have a lowercase prefix as a namespace surrogate; every type, method, member, property, procedure, function, constant, in every standard library, uses `PascalCase`. There's no reason to not make every single one of your own procedures use this naming convention too; also while the `my` prefix is ubiquitous in so many VB examples, it's not a good prefix to use in actual code. 
One last thing: > > *in case the value gets changed at some point in the future* > > > It won't: `MsoShapeType` is defined in a shared Office library that isn't only referenced by PowerPoint projects - VBA is ridiculously backward-compatible (line numbers, `GoSub`/`Return` statements, `Call`, `Let`, `DefBool` keywords, to name a few *should-be-extinct-but-aren't-because-backward-compatibility* still-supported language features): there is no way any constant ever released into the wild is ever going to get a new value in any future release. Not going to happen.
C++ -- type of the division? I want to make sure that my understanding of the result type of division in C++ is correct:

```
int / int       => result is int?
float / float   => result is which type? float?
double / double => result is double?
int / double    => result is double?
int / float     => result is float?
```

Please correct me if I am wrong.
All of those are correct. Here's what the C++03 standard says (§5/9): > > Many binary operators that expect operands of arithmetic or enumeration type cause conversions and yield result types in a similar way. The purpose is to yield a common type, which is also the type of the result. This pattern is called the *usual arithmetic conversions*, which are defined as follows: > > > - If either operand is of type `long double`, the other shall be converted to `long double`. > - Otherwise, if either operand is `double`, the other shall be converted to `double`. > - Otherwise, if either operand is `float`, the other shall be converted to `float`. > - Otherwise, the integral promotions (4.5) shall be performed on both operands. > - Then, if either operand is `unsigned long` the other shall be converted to `unsigned long`. > - Otherwise, if one operand is a `long int` and the other `unsigned int`, then if a `long int` can represent all the values of an `unsigned int`, the `unsigned int` shall be converted to a `long int`; otherwise both operands shall be converted to `unsigned long int`. > - Otherwise, if either operand is `long`, the other shall be converted to `long`. > - Otherwise, if either operand is `unsigned`, the other shall be converted to `unsigned`. > > > [*Note:* otherwise, the only remaining case is that both operands are `int`] > > >
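If you want the compiler itself to confirm these rules, here is a small sketch. It uses C++11's `decltype` and `static_assert` purely as a checking tool, even though the quote above is from C++03 (the conversions are the same for these cases):

```
#include <type_traits>

int main()
{
    // Each static_assert fails to compile if the stated result type is wrong.
    static_assert(std::is_same<decltype(1   / 2  ), int   >::value, "int / int -> int");
    static_assert(std::is_same<decltype(1.f / 2.f), float >::value, "float / float -> float");
    static_assert(std::is_same<decltype(1.0 / 2.0), double>::value, "double / double -> double");
    static_assert(std::is_same<decltype(1   / 2.0), double>::value, "int / double -> double");
    static_assert(std::is_same<decltype(1   / 2.f), float >::value, "int / float -> float");
}
```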
Setting direction for UISwipeGestureRecognizer I want to add simple swipe gesture recognition to my view based iPhone project. Gestures in all directions (right, down, left, up) should be recognized. It is stated in the docs for UISwipeGestureRecognizer: > > You may specify multiple directions by specifying multiple UISwipeGestureRecognizerDirection constants using bitwise-OR operands. The default direction is UISwipeGestureRecognizerDirectionRight. > > > However for me it doesn't work. When all four directions are OR'ed only left and right swipes are recognized. ``` - (void)viewDidLoad { UISwipeGestureRecognizer *recognizer; recognizer = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(handleSwipeFrom:)]; [recognizer setDirection:(UISwipeGestureRecognizerDirectionRight | UISwipeGestureRecognizerDirectionDown | UISwipeGestureRecognizerDirectionLeft | UISwipeGestureRecognizerDirectionUp)]; [[self view] addGestureRecognizer:recognizer]; [recognizer release]; [super viewDidLoad]; } -(void)handleSwipeFrom:(UISwipeGestureRecognizer *)recognizer { NSLog(@"Swipe received."); } ``` I fixed this with adding four recognizers to the view but I'm curious to know why didn't it work as advertised in docs? ``` - (void)viewDidLoad { UISwipeGestureRecognizer *recognizer; recognizer = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(handleSwipeFrom:)]; [recognizer setDirection:(UISwipeGestureRecognizerDirectionRight)]; [[self view] addGestureRecognizer:recognizer]; [recognizer release]; recognizer = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(handleSwipeFrom:)]; [recognizer setDirection:(UISwipeGestureRecognizerDirectionUp)]; [[self view] addGestureRecognizer:recognizer]; [recognizer release]; recognizer = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(handleSwipeFrom:)]; [recognizer setDirection:(UISwipeGestureRecognizerDirectionDown)]; [[self view] addGestureRecognizer:recognizer]; [recognizer release]; recognizer = [[UISwipeGestureRecognizer alloc] initWithTarget:self action:@selector(handleSwipeFrom:)]; [recognizer setDirection:(UISwipeGestureRecognizerDirectionLeft)]; [[self view] addGestureRecognizer:recognizer]; [recognizer release]; [super viewDidLoad]; } -(void)handleSwipeFrom:(UISwipeGestureRecognizer *)recognizer { NSLog(@"Swipe received."); } ```
Seems like there is a bug. You can specify the allowed direction(s) as you did. But when you try to access the *actual* direction that triggered the swipe in the action selector method you still get the bit mask you originally set (for the allowed directions). This means that checks for the actual direction will always fail when more than 1 direction is allowed. You can see it for yourself quite easily when you output the value of 'direction' in the selector method (ie `-(void)scrollViewSwiped:(UISwipeGestureRecognizer *)recognizer`). Filed a bug report (#8276386) to Apple. [Update] I got an answer from Apple saying that the behavior works as was intended. So for example in a table view you can swipe left or right in a table view cell to trigger 'delete' (this would have directions of the swipe gesture set to left and right) This means that the original workaround is the way it's supposed to be used. The direction property can only be used to get the gestures recognized correctly, but not in the method performed on a successful recognition to compare for the actual direction that triggered the recognition.
What does [opt] mean in MSIL? I found the "optional parameters" feature in C# 4.0 very interesting, so I tried to figure out how they made it happen. so I wrote a method like this: ``` private static void A(int a = 5) { } ``` Compiled it, then decompiled it in IL DASM, this is the IL code: ``` .method private hidebysig static void A([opt] int32 a) cil managed { .param [1] = int32(0x00000005) // Code size 2 (0x2) .maxstack 8 IL_0000: nop IL_0001: ret } // end of method Program::A ``` And it has got this in its metadata: (1) ParamToken : (08000002) Name : a flags: [Optional] [HasDefault] (00001010) Default: (I4) 5 So I followed the clue and wrote a method like this: ``` private static void B([Optional, DefaultParameterValue(78)]int b) { } ``` Compiled it and decompiled it, and I found that the C# compiler generated almost the identical MSIL code for method A and B(except for the name). As we can see there is no sign of attributes in the IL code and it felt wrong, so I wrote a custom attribute like this: ``` [AttributeUsage(AttributeTargets.Parameter)] public class MyTestAttribute : Attribute { } ``` Then used it in method C like this: ``` private static void C([MyTest]int c) { } ``` Compiled it and then decompiled it, and hah, I found this: ``` .method private hidebysig static void C(int32 c) cil managed { .param [1] .custom instance void ConsoleApplication1.MyTestAttribute::.ctor() = ( 01 00 00 00 ) // Code size 2 (0x2) .maxstack 8 IL_0000: nop IL_0001: ret } // end of method Program::C ``` The second line of the method body calls to the ctor of my custom attribute. So this leads to my doubts: 1. What does [opt] mean? I mean the one that appears in front of method A and B's parameter. 2. How come method C calls the constructor of the Attribute that is applied to its parameter and method A and B do not? 3. I can not seem to find any sign of DefaultParameterValueAttribute in the metadata, but I can find OptionalAttribute and MyTestAttribute. Why is that? Is there something that I am missing? Thanks in advance.
The C# compiler doesn't need to emit the attributes since the Param metadata table can already describe optional and default values via the `Flags` column. From 23.1.13 in [ECMA 335](http://www.ecma-international.org/publications/standards/Ecma-335.htm): ``` Flag Value Description ----------------------------------------------------- In 0x0001 Parameter is [In] Out 0x0002 Parameter is [Out] Optional 0x0010 Parameter is optional HasDefault 0x1000 Parameter has a default value HasFieldMarshal 0x2000 Parameter has FieldMarshal ``` A parameter can have a flag value that specifies it is optional and has a default value (0x0010 | 0x1000). Parameters that have a default value will have an associated token in the Constant metadata table. The Constant metadata table has a `Parent` column that would be the Param token in question and a `Value` column that would be an index into the blob heap where the default value is stored. So to answer your questions: 1. [opt] means the `Flags` column for the Param token has the Optional flag set. 2. As I stated above, my guess here is that the C# compiler is recognizing the Optional/DefaultParameterValue attributes and simply converting them to parameter flags. 3. **Edit**: It appears that the C# compiler is emitting an unused TypeRef for OptionalAttribute, despite the Optional flag being used for the parameter. It doesn't emit a TypeRef for DefaultParameterValueAttribute, though. It could be a small compiler bug for emitting unused TypeRefs/MemberRefs.
How can I get a String from HID device in Python with evdev? I am new to python but have experience with HID devices and evdev. I have a 2D barcode scanner which interfaces as HID device. The goal is to get the string from a QR code. I am able to recognize the scanner in Linux and even found its location in /dev/input. I found evdev and have implemented the example below with my scanner. This is just the default code on their site. It reads the values but it prints long event codes with downs and ups. I can't see an easy way to turn this into string. All I want to do is read in a string from the HID scanner in Python. Any help or direction would be appreciated (maybe evdev isnt the answer). Here is my current python code with some example output: ``` from evdev import * dev = InputDevice('/dev/input/event1') print(dev) for event in dev.read_loop(): if event.type == ecodes.EV_KEY: print(categorize(event)) ``` Here is the output from some barcodes: ``` key event at 1383327570.147000, 2 (KEY_1), down key event at 1383327570.147990, 2 (KEY_1), up key event at 1383327570.148997, 3 (KEY_2), down key event at 1383327570.150010, 3 (KEY_2), up key event at 1383327570.151009, 29 (KEY_LEFTCTRL), down key event at 1383327570.151009, 42 (KEY_LEFTSHIFT), down key event at 1383327570.152017, 36 (KEY_J), down key event at 1383327570.153005, 36 (KEY_J), up key event at 1383327570.154004, 29 (KEY_LEFTCTRL), up key event at 1383327570.155005, 32 (KEY_D), down key event at 1383327570.155993, 32 (KEY_D), up key event at 1383327570.157002, 48 (KEY_B), down key event at 1383327570.158015, 48 (KEY_B), up key event at 1383327570.158997, 48 (KEY_B), down key event at 1383327570.282002, 18 (KEY_E), up key event at 1383327570.283004, 49 (KEY_N), down key event at 1383327570.284005, 49 (KEY_N), up key event at 1383327570.284968, 18 (KEY_E), down ``` Many thanks!
There's a conversion step you're missing here. Your output is already in a pretty format, so i'll help you break it down a little more: ``` Timestamp , scancode, keycode, keystate key event at 1383327570.147000, 2 (KEY_1), down key event at 1383327570.147990, 2 (KEY_1), up ``` To make any useful sense of this, you need to do a couple of things: 1. Only listen to key\_down type events by filter only for keystate of a specific type (Down = 1, Up = 0) 2. Convert the scancode into a ASCII code, which can vary by device and vary by how it's mapped to the system! There's a simple-ish way to map them however. Generate a known barcode with all useable characters using an online service, then scan that barcode and map each scancode outputted to the correct letter/number for your scanner. You can use the following slightly modified piece of code to take better control of the output: ``` import evdev from evdev import InputDevice, categorize # import * is evil :) dev = InputDevice('/dev/input/event1') # Provided as an example taken from my own keyboard attached to a Centos 6 box: scancodes = { # Scancode: ASCIICode 0: None, 1: u'ESC', 2: u'1', 3: u'2', 4: u'3', 5: u'4', 6: u'5', 7: u'6', 8: u'7', 9: u'8', 10: u'9', 11: u'0', 12: u'-', 13: u'=', 14: u'BKSP', 15: u'TAB', 16: u'Q', 17: u'W', 18: u'E', 19: u'R', 20: u'T', 21: u'Y', 22: u'U', 23: u'I', 24: u'O', 25: u'P', 26: u'[', 27: u']', 28: u'CRLF', 29: u'LCTRL', 30: u'A', 31: u'S', 32: u'D', 33: u'F', 34: u'G', 35: u'H', 36: u'J', 37: u'K', 38: u'L', 39: u';', 40: u'"', 41: u'`', 42: u'LSHFT', 43: u'\\', 44: u'Z', 45: u'X', 46: u'C', 47: u'V', 48: u'B', 49: u'N', 50: u'M', 51: u',', 52: u'.', 53: u'/', 54: u'RSHFT', 56: u'LALT', 100: u'RALT' } for event in dev.read_loop(): if event.type == evdev.ecodes.EV_KEY: data = evdev.categorize(event) # Save the event temporarily to introspect it if data.keystate == 1: # Down events only key_lookup = scancodes.get(data.scancode) or u'UNKNOWN:{}'.format(data.scancode) # Lookup or return UNKNOWN:XX print u'You Pressed the {} key!'.format(key_lookup) # Print it all out! ``` Here's some sample output from this script for me ``` You Pressed the A key! You Pressed the B key! You Pressed the C key! You Pressed the UNKNOWN:99 key! ``` Once you generate some barcodes online, you'll know which scancode gets mapped to which value! Build your own table and profit! HTH
What should I decorate with @asyncio.coroutine for async operations? I want to run my code asnychronously. What should I decorate with `@asyncio.coroutine` and what should I call with `yield from` for async operations? In my case, I have some example code without decorator. (Simple chat bot look-like IRC) ``` import asyncio class ChatBot: def __init__(self, loop): conn = asyncio.open_connection(HOST, PORT, loop=loop) self.reader, self.writer = yield from conn def send(self, msg): self.writer.write(msg) def read(self): msg = yield from self.reader.readline() return msg def run(self): while True: msg = self.read() self.parse(msg) def parse(self, msg): if msg.startswith('PING'): self.some_work(msg) elif msg.startswith('ERROR'): self.some_error() else: self.server_log(msg) def some_work(self, msg): # some work. It can call asynchronous function like I/O or long operation. It can use self.send(). def some_error(self, msg): # some work. It can call asynchronous function like I/O or long operation. It can use self.send(). def server_log(self, msg): # some work. It can call asynchronous function like I/O or long operation. It can use self.send(). loop = asyncio.get_event_loop() bot = ChatBot(loop) loop.run_until_complete(???) loop.close() ``` I think `???` is `bot.run()` and `ChatBot.run` must be decorated with `@asyncio.coroutine`. Then, how about other methods? I can't understand when use `@asyncio.coroutine` decorator and call method with `yield from` or `asyncio.async`. (I already read PEP-3156 for understanding asnycio. But I can't understand fully.)
## When to use the `@asyncio.coroutine` decorator If you have a function that needs to use `yield from` to call a coroutine, you should decorate it with `asyncio.coroutine`. Also note that coroutines are often (not always) "viral". As soon as you add `yield from` to a function it becomes a coroutine, and additionally any function that *calls* that coroutine usually (though not always) needs to be come a coroutine, too. ## When to use `asyncio.async` Why are coroutines not always viral? Because you actually don't always need to use `yield from` to call a coroutine. You only need to use `yield from` if you want to call a coroutine *and wait for it to finish*. If you just want to kick off a coroutine in the background, you can just do this: ``` asyncio.async(coroutine()) ``` This will schedule `coroutine` to run as soon as control returns to the event loop; it won't wait for `coroutine` to finish before moving on to the next line. An ordinary function can use this to schedule a coroutine to run without also having to become a coroutine itself. You can also use this approach to run multiple `coroutines` concurrently. So, imagine you have these two coroutines: ``` @asyncio.coroutine def coro1(): yield from asyncio.sleep(1) print("coro1") @asyncio.coroutine def coro2(): yield from asyncio.sleep(2) print("coro2") ``` If you had this: ``` @asyncio.coroutine def main(): yield from coro1() yield from coro2() yield from asyncio.sleep(5) asyncio.get_event_loop().run_until_complete(main()) ``` After 1 second, `"coro1"` would be printed. Then, after two more seconds (so three seconds total), `"coro2"` would be printed, and five seconds later the program would exit, making for 8 seconds of total runtime. Alternatively, if you used `asyncio.async`: ``` @asyncio.coroutine def main(): asyncio.async(coro1()) asyncio.async(coro2()) yield from asyncio.sleep(5) asyncio.get_event_loop().run_until_complete(main()) ``` This will print `"coro1"` after one second, `"coro2"` one second later, and the program would exit 3 seconds later, for a total of 5 seconds of runtime. ## How does this affect your code? So following those rules, your code needs to look like this: ``` import asyncio class ChatBot: def __init__(self, reader, writer): # __init__ shouldn't be a coroutine, otherwise you won't be able # to instantiate ChatBot properly. So I've removed the code that # used yield from, and moved it outside of __init__. #conn = asyncio.open_connection(HOST, PORT, loop=loop) #self.reader, self.writer = yield from conn self.reader, self.writer = reader, writer def send(self, msg): # writer.write is not a coroutine, so you # don't use 'yield from', and send itself doesn't # need to be a coroutine. self.writer.write(msg) @asyncio.coroutine def read(self): msg = yield from self.reader.readline() return msg @asyncio.coroutine def run(self): while True: msg = yield from self.read() yield from self.parse(msg) @asyncio.coroutine def parse(self, msg): if msg.startswith('PING'): yield from self.some_work(msg) elif msg.startswith('ERROR'): yield from self.some_error() else: yield from self.server_log(msg) @asyncio.coroutine def some_work(self, msg): # some work. It can call asynchronous function like I/O or long operation. It can use self.send(). @asyncio.coroutine def some_error(self, msg): # some work. It can call asynchronous function like I/O or long operation. It can use self.send(). @asyncio.coroutine def server_log(self, msg): # some work. It can call asynchronous function like I/O or long operation. 
        # It can use self.send().

@asyncio.coroutine
def main(host, port):
    reader, writer = yield from asyncio.open_connection(host, port, loop=loop)
    bot = ChatBot(reader, writer)
    yield from bot.run()

loop = asyncio.get_event_loop()
loop.run_until_complete(main(HOST, PORT))
loop.close()
```

One other thing to keep in mind - adding `yield from` in front of a function doesn't magically make that call non-blocking. Neither does adding the `@asyncio.coroutine` decorator. Functions are only non-blocking if they're actually directly or indirectly calling native `asyncio` coroutines, which use non-blocking I/O and are integrated with the `asyncio` event loop. You mentioned making REST API calls, for example. In order for those REST API calls to not block the event loop, you'd need to use the `aiohttp` library, or `asyncio.open_connection`. Using something like `requests` or `urllib` will block the loop, because they're not integrated with `asyncio`.
Architecture for manager that gets shared among different components It started with a simple log manager. Then I wanted to implement more features, for example printing the name of the component that send a message. Later on, I can think of giving each component that prints log messages it's own text color in the terminal and options to mute or solo single modules. Therefore, the manager needs the identity of the component that called its methods. So I decided to wrap the manager with an instance for every component. It tunnels function calls to the manager together with the name of the component. ``` #include <iostream> #include <string> #include <memory> #include <unordered_map> using namespace std; //////////////////////////////////////////////////////////////// // Manager declaration //////////////////////////////////////////////////////////////// class manager { public: class instance; instance &get_instance(string name); void print(string sender, string message); private: unordered_map<string, unique_ptr<instance>> m_instances; }; //////////////////////////////////////////////////////////////// // Instance declaration //////////////////////////////////////////////////////////////// class manager::instance { public: instance(string name, manager &manager); void print(string message); private: string m_name; manager &m_manager; }; //////////////////////////////////////////////////////////////// // Manager implementation //////////////////////////////////////////////////////////////// manager::instance &manager::get_instance(string name) { if (m_instances.find(name) == m_instances.end()) m_instances[name] = unique_ptr<instance>(new instance(name, *this)); return *m_instances[name].get(); } void manager::print(string sender, string message) { cout << sender << ": " << message << "." << endl; } //////////////////////////////////////////////////////////////// // Instance implementation //////////////////////////////////////////////////////////////// manager::instance::instance(string name, manager &manager) : m_name(name), m_manager(manager) {} void manager::instance::print(string message) { m_manager.print(m_name, message); } //////////////////////////////////////////////////////////////// // Main function //////////////////////////////////////////////////////////////// int main() { manager m; m.get_instance("Window").print("Open up"); m.get_instance("Asset").print("Load textures"); return 0; } ``` Here is a [working example](http://ideone.com/pS7Wbv) of my code. What do you think about the class structure and architecture? Is there an easier or more expressive way? I'd like to get feedback and improve the code structure before implementing other managers the same way.
About the architecture and thinking exclusively on the given example application (logging), I believe `manager` does not need to track all `manager::instance` instances. Maybe in other applications this might be needed, but not in this one. Thus, I would just turn `get_instance` into a convenience method (`make_instance`) to create new `manager::instance`s: ``` class manager { public: class instance; std::unique_ptr<instance> make_instance(const std::string& name); void print(const std::string& sender, const std::string& message); }; std::unique_ptr<manager::instance> manager::make_instance(const std::string& name) { return std::unique_ptr<instance>(new instance(name, *this)); } ``` If you plan using the `manager` like a singleton (having only one instance and everyone can access it), maybe the `manager` and `manager::instance` classes could be merge into a single one, where `manager` members would become `static` and `manager::instance` would become an actual *instance* of `manager`. Please note that both simplifications above may not apply to other applications. Now, the general code review comments (some of which were already applied on the snippet above): The code is very well organized, very easy to read and understand. The coding style is consistent. I have only 3 suggestions for improvement: Always pass `std::string` to a function as `const std::string&` instead of just `std::string` to avoid an unnecessary copy. Avoid `using namespace std;`. If, for instance, you really want to use some `std` symbols unqualified, import them explicitly (like `using std::string;`). Also avoid `using` declarations on headers where they may affect user code (in global or namespace scope), otherwise it may break other's code when they include your header. Also, use braces after `if`, `while`, `do` and `for`. Those kind of bugs (adding another indented line thinking it inside a block) are hard to spot.
Does malloc + fork create a memory leak? 1. Parent process does malloc() 2. fork 3. parent modifies the allocated block 4. CoW kicks in and creates one copy of the modified block while keeping the old copy in place 5. The child doesn't know or care about the parent's stuff and doesn't call free() on its copy of it 6. Memory leak! Am I correct? Am I wrong? If the latter what does actually happen?
Neither the `malloc()` nor the `fork()` creates a leak — so I suppose you're wrong. The parent has its own copy of the allocated block. It may do as it wishes with it. The child has its own copy of the allocated block. It too may do as it wishes with it. If the child ignores the block, it is not a leak (yet). If the child blithely tramples a pointer, or returns from a function that holds the only pointer to the allocated memory without releasing it first, that would lead to a leak. But it isn't the `fork()` or the `malloc()` that's at fault. Remember, the same code is running after the `fork()` — the major difference between the processes is the PID and the return value from `fork()`. Everything else (almost everything else — see the POSIX specification of [`fork()`](http://pubs.opengroup.org/onlinepubs/9699919799/functions/fork.html) for the details) is the same. So, if the code leaks, that's a bug introduced by the programmer — it is not the fault of either `malloc()` or `fork()`. Note that if the child uses one of the `exec*()` family of functions, all the allocated memory from the original process is released. The new process gets new memory allocated. Similarly, if the child exits, then the memory will be released. There isn't a long-term risk of the O/S losing track of memory.
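To make that concrete, here is a minimal C sketch (error handling omitted) showing that after `fork()` each process simply owns its own copy: the child freeing - or ignoring - its copy has no effect on the parent's block, and each process's memory is reclaimed by the OS when it exits anyway.

```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    char *block = malloc(32);      /* allocated before the fork */
    strcpy(block, "hello");

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: owns its own (copy-on-write) copy of the block.
           Freeing it here affects only the child's copy. */
        free(block);
        _exit(0);
    }

    waitpid(pid, NULL, 0);
    /* Parent: its copy is untouched by anything the child did. */
    printf("parent still sees: %s\n", block);
    free(block);                   /* release the parent's copy */
    return 0;
}
```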
Replace element in discriminated union Is it possible to do a 'search and replace' on a discriminated union, for example replacing `Foo` with `Bar` in an instance of, e.g.:

```
type Expression =
    | Foo of Expression list
    | Bar of Expression list
```

The highly nested definition of expressions can be of any depth.
There is no built-in feature in the language that would let you do this automatically. The basic method is to write a recursive function - if you want to switch `Foo` for `Bar`, this is quite easy: ``` let rec switch = function | Foo es -> Bar(List.map switch es) | Bar es -> Foo(List.map switch es) ``` You could try to abstract the part that walks over the tree from the bit that specifies what should be transformed how. This doesn't really help with this simple problem, but it can be useful for more complex transformations. For example, the following takes a function and calls it on all nodes. If the function returns `Some`, the node is replaced: ``` let rec transform f e = match f e with | Some n -> n | None -> match e with | Foo es -> Foo(List.map (transform f) es) | Bar es -> Bar(List.map (transform f) es) ``` Now you can, for example, easily replace `Bar []` with `Foo []` and keep all other expressions unchanged: ``` Foo [ Bar []; Bar[] ] |> transform (fun e -> match e with | Bar [] -> Some(Foo []) | _ -> None) ```
How to use Google Drive API from Google App Engine? I have an app in GAE (Python 2.7), and now need access to Google Drive to display a (shared) list of folders and documents. Searching usually results in pointers to DrEdit, including [App Engine and Google Drive API](https://stackoverflow.com/questions/12185504/app-engine-and-google-drive-api), which asks the same question but accepts an answer I don't agree with, as DrEdit is an example app for Google Drive, not GAE. The files list from the Drive API is what I'd like to be able to use from GAE: <https://developers.google.com/drive/v2/reference/files/list>
Although Google App Engine and Google Drive are both Google products, unfortunately they are not directly linked. The Google Drive APIs can be accessed via the `google-api-python-client` library, which you have to install. The process can be found in the [Python Google Drive API Quickstart Guide](https://developers.google.com/drive/web/quickstart/python), and the summarized form is as follows:

1. On Google's side: allow Drive API access for your GAE program
   - [Activate Drive API](https://console.developers.google.com/flows/enableapi?apiid=drive). Click the **Go to credentials** button to continue...
   - **Create your consent screen:** set up your **OAuth Consent Screen**, as Google will throw weird errors if this has not been set up:
     - Click on the **OAuth Consent Screen** tab
     - Select an **Email address** and enter a **Product name**.
   - **Get credentials:**
     - Click on the **Credentials** tab
     - Select **Add credentials** and then **OAuth 2.0 client ID**. Choose your application type, and enter the relevant details. You can change them later!
     - Back on the Credentials tab, download the JSON credentials (all the way to the right in the table, the download button only appears when you hover near it). Rename it `client_secret.json` and place it in your root code directory. You will need this to request credentials from users.
2. On your side: [download the `google-api-python-client` library](https://pypi.python.org/pypi/google-api-python-client/), unzip it in your code directory and run `python setup.py install`. This will install the library, which holds many Google products' APIs.
3. Now you are ready to use the Drive API. You can test your access using the [sample code](https://developers.google.com/drive/web/quickstart/python#step_3_set_up_the_sample). Read it because it's a good guide for writing your own code! If you are accessing user data, you will need to request user credentials when they log in and most probably store them. Then, to use the API, the easiest way would be to get the `service` object:

```
import httplib2
from apiclient import discovery

credentials = get_credentials()  # Your function to request / access stored credentials

# Authorize access to Drive using the user's credentials
http = credentials.authorize(httplib2.Http())

# The service object is the gateway to your API functions
service = discovery.build('drive', 'v2', http=http)

# Run your requests using the service object. e.g. list first 10 files:
results = service.files().list(maxResults=10).execute()

# ... etc ... Do something with results
```

The above code snippet is modified from the [sample code](https://developers.google.com/drive/web/quickstart/python#step_3_set_up_the_sample). The API reference for Google Drive can be [found here](https://developers.google.com/drive/v2/reference/). The same general procedure is required to link GAE to other Google products' APIs as well, e.g. Calendar. All the best writing your program!
How to add action listener that listens to multiple buttons I'm trying to figure out what I am doing wrong with action listeners. I'm following multiple tutorials and yet NetBeans and Eclipse are giving me errors when I try to use an action listener. Below is a simple program that I'm trying to get a button working in. What am I doing wrong?

```
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
import javax.swing.JFrame;

public class calc extends JFrame implements ActionListener {

    public static void main(String[] args) {
        JFrame calcFrame = new JFrame();

        calcFrame.setSize(100, 100);
        calcFrame.setVisible(true);

        JButton button1 = new JButton("1");
        button1.addActionListener(this);

        calcFrame.add(button1);
    }

    public void actionPerformed(ActionEvent e) {
        if(e.getSource() == button1)
    }
}
```

The action listener is never registered, because with the `if(e.getSource() == button1)` it can't see `button1`; I get errors saying "cannot find symbol".
There is no `this` pointer in a static method. (I don't believe this code will even compile.) You shouldn't be doing these things in a static method like `main()`; set things up in a constructor. I didn't compile or run this to see if it actually works, but give it a try.

```
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
import javax.swing.JFrame;

public class Calc extends JFrame implements ActionListener {

    private JButton button1;   // must be JButton, not java.awt.Button

    public Calc() {
        super();
        this.setSize(100, 100);
        this.setVisible(true);

        this.button1 = new JButton("1");
        this.button1.addActionListener(this);

        this.add(button1);
    }

    public static void main(String[] args) {
        Calc calc = new Calc();
        calc.setVisible(true);
    }

    public void actionPerformed(ActionEvent e) {
        if (e.getSource() == button1) {
            // handle the button click here
        }
    }
}
```
Unable to show UIAlertView in my app I am using validation keys to download content from a server using Wi-Fi. I need to show a UIAlertView if the licence keys are wrong or if the Wi-Fi is not available. I have written the code for displaying the alert view, but the alert is not being displayed... This is sucking the blood out of my head... Can anyone help please? The control is going over this line, but still the alert is not being displayed.

```
-(void)connectionDidFinishLoading:(NSURLConnection *)connection{
    NSFileManager *fileManager = [NSFileManager defaultManager];
    NSString *documentsDirectory= [[[UIApplication sharedApplication] delegate] applicationDocumentsDirectory]; //[pathToStore objectAtIndex:0];
    NSString *path = [documentsDirectory stringByAppendingFormat:@"packages"];
    NSString *packagePath = [NSString stringWithFormat:@"%@/%@", path,isbnTemp];
    [recievedData writeToFile:[documentsDirectory stringByAppendingPathComponent:@"file.zip"] atomically:YES];
    NSString *zipPath=[documentsDirectory stringByAppendingPathComponent:@"file.zip"];
    [fileManager createDirectoryAtPath:documentsDirectory withIntermediateDirectories:NO attributes:nil error:nil];
    ZipArchive *zipArchive = [[ZipArchive alloc]init];

    if([zipArchive UnzipOpenFile:zipPath]){
        if([zipArchive UnzipFileTo:packagePath overWrite:YES]){
            [self loadContent];
        }
        else{
            NSLog(@"Unable to UnArchieve the packages");
        }
    }
    else {
        NSLog(@"Failure To Open Archive");
        UIAlertView *alert=[[UIAlertView alloc]initWithTitle:@"Your ISBN and/or Licence Key are incorrect" message:Nil delegate:self cancelButtonTitle:@"ok" otherButtonTitles:nil];
        [alert show];
        [alert release];
    }
}
```
Are you trying to show the UIAlertView in a method that is being called from a thread other than the main thread? For example, if you are trying to show the UIAlertView in an asynchronous callback, it could be running on a separate thread. If so, you need to move the code that shows the UIAlertView to a separate selector, and call it on the main thread using one of the `performSelectorOnMainThread:` methods. For example, add the following method to your class: ``` -(void)showAlert { UIAlertView *alert=[[UIAlertView alloc]initWithTitle:@"Your ISBN and/or Licence Key are incorrect" message:Nil delegate:self cancelButtonTitle:@"ok" otherButtonTitles:nil]; [alert show]; [alert release]; } ``` And then change the last else clause in your current code so that it uses: ``` [self performSelectorOnMainThread:@selector(showAlert) withObject:nil waitUntilDone:NO]; ``` See the [NSObject class reference](http://developer.apple.com/library/mac/#documentation/Cocoa/Reference/Foundation/Classes/NSObject_Class/Reference/Reference.html) for more information on the `performSelectorOnMainThread:` methods.
HTML to PDF conversion in iPhone SDK I want to convert an HTML page into PDF format. Is it possible using the iPhone SDK? Are there any APIs or 3rd-party libraries available to do so? I have googled around for a solution but was not able to find any substantial material. Cheers
I created a class based on every good advice I found around. I've been digging a lot and I hope my class will offer some good start for anyone trying to create multi-page PDF directly out of some HTML source. You'll find the whole code here with some basic sample code : <https://github.com/iclems/iOS-htmltopdf> I had just the same issue as you and my requirements were: - full PDF (real text, no bitmap) - smart multi-pages (compared to cutting a full height webview every X pixels...) Thus, the solution I use is pretty nice as it resorts to the same tools iOS uses to split pages for print. Let me explain, I setup a UIPrintPageRenderer based on the web view print formatter (first tip) : ``` UIPrintPageRenderer *render = [[UIPrintPageRenderer alloc] init]; [render addPrintFormatter:webView.viewPrintFormatter startingAtPageAtIndex:0]; CGRect printableRect = CGRectMake(self.pageMargins.left, self.pageMargins.top, self.pageSize.width - self.pageMargins.left - self.pageMargins.right, self.pageSize.height - self.pageMargins.top - self.pageMargins.bottom); CGRect paperRect = CGRectMake(0, 0, self.pageSize.width, self.pageSize.height); [render setValue:[NSValue valueWithCGRect:paperRect] forKey:@"paperRect"]; [render setValue:[NSValue valueWithCGRect:printableRect] forKey:@"printableRect"]; NSData *pdfData = [render printToPDF]; [pdfData writeToFile: self.PDFpath atomically: YES]; ``` In the meantime, I have created a category on UIPrintPageRenderer to support: ``` -(NSData*) printToPDF { NSMutableData *pdfData = [NSMutableData data]; UIGraphicsBeginPDFContextToData( pdfData, CGRectZero, nil ); [self prepareForDrawingPages: NSMakeRange(0, self.numberOfPages)]; CGRect bounds = UIGraphicsGetPDFContextBounds(); for ( int i = 0 ; i < self.numberOfPages ; i++ ) { UIGraphicsBeginPDFPage(); [self drawPageAtIndex: i inRect: bounds]; } UIGraphicsEndPDFContext(); return pdfData; } ```
C# GetHashCode/Equals override not called I'm facing a problem with GetHashCode and Equals, which I have overridden for a class. I am using the operator == to verify that two instances are equal, and I'd expect it to call both GetHashCode and Equals (if the hash codes are the same) in order to validate that they are indeed equal. But to my surprise, neither gets called and the result of the equality test is false (while it should in fact be true). Override code:

```
public class User : ActiveRecordBase<User>
{
    [...]

    public override int GetHashCode()
    {
        return Id;
    }

    public override bool Equals(object obj)
    {
        User user = (User)obj;
        if (user == null)
        {
            return false;
        }
        return user.Id == Id;
    }
}
```

Equality check:

```
if (x == y) // x and y are both of the same User class
            // I'd expect this test to call both GetHashCode and Equals
```
Operator `==` is completely separate from either `.GetHashCode()` or `.Equals()`. You might be interested in the Microsoft [Guidelines for Overloading Equals() and Operator ==](http://msdn.microsoft.com/en-us/library/ms173147.aspx). The short version is: Use `.Equals()` to implement **equality** comparisons. Use operator `==` for **identity** comparisons, or if you are creating an immutable type (where every equal instance can be considered to be effectively identical). Also, `.Equals()` is a virtual method and can be overridden by subclasses, but operator `==` depends on the compile-time type of the expression where it is used. Finally, to be consistent, implement `.GetHashCode()` any time you implement `.Equals()`. Overload operator `!=` any time you overload operator `==`.
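For reference, here is a minimal, self-contained sketch of what those guidelines boil down to for a `User`-like class (the ActiveRecord base class is left out, and the members are only illustrative). The thing to notice is that `x == y` only reaches your code because `operator ==` is explicitly overloaded to delegate to `Equals`; without that overload, `==` on two references compiles to a reference-identity check and never calls `Equals` or `GetHashCode`:

```
using System;

public class User
{
    public int Id { get; set; }

    public override bool Equals(object obj)
    {
        var other = obj as User;
        return other != null && other.Id == Id;
    }

    public override int GetHashCode()
    {
        return Id;
    }

    // Without these overloads, == on two User references is a plain
    // reference-identity check and never calls Equals or GetHashCode.
    public static bool operator ==(User left, User right)
    {
        if (ReferenceEquals(left, right)) return true;
        if (ReferenceEquals(left, null)) return false;
        return left.Equals(right);
    }

    public static bool operator !=(User left, User right)
    {
        return !(left == right);
    }
}

class Demo
{
    static void Main()
    {
        var x = new User { Id = 1 };
        var y = new User { Id = 1 };
        Console.WriteLine(x == y);      // True, but only because == is overloaded
        Console.WriteLine(x.Equals(y)); // True via the override
    }
}
```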
Highlight all values from a group on hover Assume data ``` library(ggplot2) library(plotly) set.seed(357) xy <- data.frame(letters = rep(c("a", "b", "c"), times = 3), values = runif(9), groups = rep(c("group1", "group2", "group3"), each = 3)) letters values groups 1 a 0.9913409 group1 2 b 0.6245529 group1 3 c 0.5245744 group1 4 a 0.4601817 group2 5 b 0.2254525 group2 6 c 0.5898001 group2 7 a 0.1716801 group3 8 b 0.3195294 group3 9 c 0.8953055 group3 ggplotly( ggplot(xy, aes(x = letters, y = values, group = groups)) + theme_bw() + geom_point() ) ``` My goal is to, on hover, highlight all points that belong to the same group. E.g. on hover over the point in the upper right corner, all points from this group (circles) would turn red. Something similar can be achieved using `layout(hovermode = "x")` but only if one is interested in highlighting all points on one of the axes. I would like the same behavior for custom variable other than `x`, `y` or `closest` (which are modes of `hovermode`). [![enter image description here](https://i.stack.imgur.com/tbwcl.jpg)](https://i.stack.imgur.com/tbwcl.jpg)
this will probably suit your needs **sample data** ``` set.seed(357) xy <- data.frame(letters = rep(c("a", "b", "c"), times = 3), values = runif(9), groups = rep(c("group1", "group2", "group3"), each = 3)) ``` **plotting** ``` #create a SharedData object for use in the ggplot below, group by 'groups' d <- highlight_key(xy, ~groups ) #create a normal ggplot to fit your needs, but use the SharedData object as data for the chart p <- ggplot( d, aes(x = letters, y = values, group = groups)) + theme_bw() + geom_point() #now ggplotly the newly created ggplot, and add text for the tooltips as needed gg <- ggplotly( p, tooltip = "groups" ) #set the highlight-options to your liking, and plot... highlight( gg, on = "plotly_hover", off = "plotly_deselect", color = "red" ) ``` **plot results** [![enter image description here](https://i.stack.imgur.com/o4NfE.gif)](https://i.stack.imgur.com/o4NfE.gif)
Why are unparenthesized tuples in generators not allowed in the expression field?

```
# why is the following invalid
x = (k, v for k, v in some_dict.items())

# but if we wrap the expression part in parentheses it works
x = ((k, v) for k, v in some_dict.items())
```

After reviewing the documentation, I couldn't find any information on this issue. What could be causing enough confusion for the parser that the syntax is not permitted? This seems strange, since more complex syntax works just fine:

```
# k, v somehow confuses the parser but this doesn't???
x = ('%s:%s:%s' % (k, v, k) for k, v in some_dict.items())
```

If there is actually ambiguity, how come we don't also need to wrap `'%s:%s:%s' % (k, v, k)` in surrounding parentheses too?
Look at `x = (k, v for k, v in some_dict.items())`: ``` x = (k, v for k, v in some_dict.items()) x = ((k, v) for k, v in some_dict.items()) x = (k, (v for k, v in some_dict.items())) ``` Parentheses are needed to remove the ambiguity. `x = ('%s:%s:%s' % (k, v, k) for k, v in some_dict.items())` requires parentheses too: ``` x = ('%s:%s:%s' % k, v, k for k, v in some_dict.items()) x = ('%s:%s:%s' % k, (v, k) for k, v in some_dict.items()) x = ('%s:%s:%s' % (k, v, k) for k, v in some_dict.items()) ``` It just so happens that you already had enough parentheses to resolve the ambiguity there in a way that allowed it to run in the expected manner.
Vue.js: Collapse/expand all elements from parent I need to add "expand/collapse all" functionality for my Vue component(some collapsible panel). If user clicks collapse button then clicks on some panel and expand it then clicking on collapse button **will do nothing** because watched parameter will not change. So how to implement this functionality properly (buttons must collapse and expand components always)? I prepared simple example(sorry for bad formatting, it looks nice in editor :( ): ``` var collapsible = { template: "#collapsible", props: ["collapseAll"], data: function () { return { collapsed: true } }, watch: { collapseAll: function(value) { this.collapsed = value } } } var app = new Vue({ template: "#app", el: "#foo", data: { collapseAll: true }, components: { collapsible: collapsible } }); ``` ``` .wrapper { width: 100%; } .wrapper + .wrapper { margin-top: 10px; } .header { height: 20px; width: 100%; background: #ccc; } .collapsible { height: 100px; width: 100%; background: #aaa; } ``` ``` <script src="https://cdnjs.cloudflare.com/ajax/libs/vue/2.3.3/vue.min.js"></script> <div id="foo"></div> <script type="text/x-template" id="collapsible"> <div class="wrapper"> <div class="header" v-on:click="collapsed = !collapsed"></div> <div class="collapsible" v-show="!collapsed"></div> </div> </script> <script type="text/x-template" id="app"> <div> <button v-on:click="collapseAll = true">Collapse All</button> <button v-on:click="collapseAll = false">Expand All</button> <collapsible v-for="a in 10" v-bind:collapseAll="collapseAll" v-bind:key="a"></collapsible> </div> </script> ``` Thanks!
This is a case where I might use a `ref`. ``` <button v-on:click="collapseAll">Collapse All</button> <button v-on:click="expandAll">Expand All</button> <collapsible ref="collapsible" v-for="a in 10" v-bind:key="a"></collapsible> ``` And add methods to your Vue. ``` var app = new Vue({ template: "#app", el: "#foo", methods:{ collapseAll(){ this.$refs.collapsible.map(c => c.collapsed = true) }, expandAll(){ this.$refs.collapsible.map(c => c.collapsed = false) } }, components: { collapsible: collapsible } }); ``` [Example](https://codepen.io/Kradek/pen/MmzzvY?editors=1010).
Specific rules for writing an SDK for Android Are there any specific rules to follow if I want to write an SDK for Android? Think the Paypal SDK or the Facebook SDK.
I like to think of an SDK as a way of creating a specific DSL for the problem your SDK is trying to solve. As such, creating an SDK (or an API, in the broader sense) is similar to creating a new language. If this happens to be the way you look at the problem, check out the [keynote](http://www.youtube.com/watch?v=aAb7hSCtvGw) that Josh Bloch gave about API design. A lot of his advice applies to SDK design as well. --- When it comes to the specific case of Android, there is one thing you have to keep in mind: **deployment**. [Deployment on Android sucks](http://jakewharton.com/the-android-build-system-is-broken/). Yes, we are in the phase of transitioning into gradle-based build system, but most projects are still either using Maven or are simply Eclipse projects. If you are to provide an SDK for Android, you pretty much have to support: - People who want to use gradle: in such a case, you need to export your project in the **aar** format. More information about it [here](http://tools.android.com/tech-docs/new-build-system/user-guide#TOC-Differences-between-a-Project-and-a-Library-Project) - People who are still using Maven. In this case, you have to deploy your project on the Maven repository, using the Maven android plugin. More information [here](http://code.google.com/p/maven-android-plugin/wiki/DeploymentInstructions) - People who want to include your SDK as a library in Eclipse. This is similar to how Facebook describes it on their [Getting Started](https://developers.facebook.com/docs/android/getting-started/#import) Page. There are ways to avoid so much duplication of deployment channels (providing the apklib from Maven unzipped as an Eclipse project, using Maven's aar support), but, until the build system matures, you will probably have to juggle between all three deployment paths. **Hint**: You probably don't need to cover all three cases. To find out which deployment paths can be dropped, look at how your competitors are offering their SDK. As for documentation, the minimum Javadocs are required. You could, however, be creative, and try [Parse.com's style of API guide](https://parse.com/docs/android_guide), which reads very nicely.
Fail2ban on Debian Buster - the right way to configure? Fail2ban can be configured in so many places. ``` $ fail2ban-client -i Fail2Ban v0.10.2 reads log file that contains password failure report and bans the corresponding IP addresses using firewall rules. ``` On Debian Buster I can edit my settings in several config files: ``` /etc/fail2ban/jail.d/defaults-debian.conf /etc/fail2ban/fail2ban.conf /etc/fail2ban/jail.conf /etc/fail2ban/action.d/ ``` And - last but not least - some tutorials recommend: ``` cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local ``` The [Documentation](https://www.fail2ban.org/wiki/index.php/MANUAL_0_8) of fail2ban says: *Modifications should take place in the .local and not in the .conf. This avoids merging problem when upgrading. These files are well documented and detailed information should be available there.* Does that mean, that every .conf File I want to edit should exist as a .local file? I am confused! Can someone shed some light on this please?
You only need to edit one file. To avoid problems during system upgrades, you should always copy `jail.conf` to `jail.local` and modify the latter only. The same goes for all other fail2ban config files.

```
cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
```

Then edit this file and scroll down to the filters you want to use. In those filters, add `enabled = true`. I recommend not enabling too many filters at the beginning. One or two is enough. Be careful with the SSH filter if you're connected via SSH. You might lock yourself out.

**Filters**

Look in the `filter.d` directory to see all available filters. Choose one or two suitable ones. Be sure to understand what their regexes match and what log files you need. Fail2ban works with log files, so the filters must match the appropriate log files. You can test this with `fail2ban-regex <logfile> <filter>`

For example `fail2ban-regex /var/log/nginx/default_access.log /etc/fail2ban/filter.d/nginx-botsearch.conf`

This filter - for example - looks for `404` errors in your NGINX `access.log` and blocks the offending IPs if the conditions match. For the conditions, see below. Restart fail2ban after you have finished editing:

```
systemctl restart fail2ban
```

**Other settings in your `jail.local` file:**

All settings can be made global as well as filter specific.

```
bantime  = 7200
findtime = 10m
maxretry = 10
```

means 10 errors in 10 minutes will result in a 2 hour ban. If you don't want to use iptables for the banning, you can change your `banaction`. The default banaction uses iptables, which should work on all systems I know, but you might not see the bans in your familiar firewall interface.

```
banaction = ufw
```

See the actions in `action.d`. With this setting, fail2ban will use ufw to block IPs. Then you can see the bans via `ufw status`. Especially for SSH, be sure to exclude your local IP range from banning, so you can't ban yourself:

```
ignoreip = 127.0.0.1/8 ::1 192.168.178.0/24
```

I would suggest you **not** create or modify filters or actions yourself. Use the included ones and be happy. It's not easy to build your own regex patterns, and the log file format changes from time to time - which will break your filters. Your system won't be secured then. You should not edit Apache's default log format either.
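Putting the pieces above together, a minimal override in `jail.local` could look roughly like this - the jail names and values are only an illustration, so adjust them to the filters and log paths on your own system:

```
[DEFAULT]
ignoreip = 127.0.0.1/8 ::1 192.168.178.0/24
bantime  = 7200
findtime = 10m
maxretry = 10

[sshd]
enabled = true

[nginx-botsearch]
enabled = true
```

Everything not overridden here is still read from `jail.conf`, which is exactly why the `.local` mechanism survives package upgrades.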
dplyr: how to reference columns by column index rather than column name using mutate? Using dplyr, you can do something like this: ``` iris %>% head %>% mutate(sum=Sepal.Length + Sepal.Width) Sepal.Length Sepal.Width Petal.Length Petal.Width Species sum 1 5.1 3.5 1.4 0.2 setosa 8.6 2 4.9 3.0 1.4 0.2 setosa 7.9 3 4.7 3.2 1.3 0.2 setosa 7.9 4 4.6 3.1 1.5 0.2 setosa 7.7 5 5.0 3.6 1.4 0.2 setosa 8.6 6 5.4 3.9 1.7 0.4 setosa 9.3 ``` But above, I referenced the columns by their column names. How can I use `1` and `2` , which are the column indices to achieve the same result? Here I have the following, but I feel it's not as elegant. ``` iris %>% head %>% mutate(sum=apply(select(.,1,2),1,sum)) Sepal.Length Sepal.Width Petal.Length Petal.Width Species sum 1 5.1 3.5 1.4 0.2 setosa 8.6 2 4.9 3.0 1.4 0.2 setosa 7.9 3 4.7 3.2 1.3 0.2 setosa 7.9 4 4.6 3.1 1.5 0.2 setosa 7.7 5 5.0 3.6 1.4 0.2 setosa 8.6 6 5.4 3.9 1.7 0.4 setosa 9.3 ```
You can try: ``` iris %>% head %>% mutate(sum = .[[1]] + .[[2]]) Sepal.Length Sepal.Width Petal.Length Petal.Width Species sum 1 5.1 3.5 1.4 0.2 setosa 8.6 2 4.9 3.0 1.4 0.2 setosa 7.9 3 4.7 3.2 1.3 0.2 setosa 7.9 4 4.6 3.1 1.5 0.2 setosa 7.7 5 5.0 3.6 1.4 0.2 setosa 8.6 6 5.4 3.9 1.7 0.4 setosa 9.3 ```
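If you ever need more than a couple of columns by position, a slightly more general variant (assuming the selected columns are all numeric) is to apply `rowSums` to a positional slice of the piped data frame:

```r
iris %>% head %>% mutate(sum = rowSums(.[1:2]))
```

Here `.` refers to the data frame being piped in, so `.[1:2]` is just its first two columns; the result matches the output above.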
Overlay text over imageview in framelayout programmatically - Android I am trying to implement a textview over an image in a framelayout at the center and bottom of the layout as it is seen here: <http://developer.android.com/resources/articles/layout-tricks-merge.html> ``` <FrameLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent"> <ImageView android:layout_width="fill_parent" android:layout_height="fill_parent" android:scaleType="center" android:src="@drawable/golden_gate" /> <TextView android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_marginBottom="20dip" android:layout_gravity="center_horizontal|bottom" android:padding="12dip" android:background="#AA000000" android:textColor="#ffffffff" android:text="Golden Gate" /> </FrameLayout> ``` I am trying to implement this programmatically but with no luck.. I always get the textview on the top left corner..can anyone help? Here is my code: ``` FrameLayout frameLay = new FrameLayout(MainScreen.this); LayoutParams layoutParamsFrame = new LayoutParams(LayoutParams.FILL_PARENT, LayoutParams.FILL_PARENT); frameLay.setLayoutParams(layoutParamsFrame); LinearLayout.LayoutParams layoutParamsImage= new LinearLayout.LayoutParams(LayoutParams.FILL_PARENT, LayoutParams.FILL_PARENT); ImageView imageView= new ImageView(MainScreen.this); imageView.setImageResource(R.drawable.movie); imageView.setLayoutParams(layoutParamsImage); TextView theText=new TextView(MainScreen.this); theText.setText("GOLDEN Gate"); theText.setTextColor(Color.WHITE); theText.setTypeface(Typeface.DEFAULT_BOLD); LayoutParams layoutParamsText= new LayoutParams(LayoutParams.WRAP_CONTENT, LayoutParams.WRAP_CONTENT); theText.setLayoutParams(layoutParamsText); theText.setGravity(Gravity.CENTER_HORIZONTAL|Gravity.BOTTOM); frameLay.addView(theText); frameLay.addView(imageView); ```
All you need to do is make the textview fill the parent, like

```
LayoutParams layoutParamsText = new LayoutParams(LayoutParams.FILL_PARENT, LayoutParams.FILL_PARENT);
```

When you set gravity on the textview, it means you are telling the textview where to position its children. But since your textview is only the size of your text, the gravity won't show any difference. So just make the textview fill the parent.

But I think RelativeLayout is a lot more suitable for this than the FrameLayout. Using the RelativeLayout, this is how it would look

```
RelativeLayout rLayout = new RelativeLayout(this);
LayoutParams rlParams = new LayoutParams(LayoutParams.FILL_PARENT ,LayoutParams.FILL_PARENT);
rLayout.setLayoutParams(rlParams);

ImageView image= new ImageView(this);
image.setImageResource(R.drawable.icon);
image.setLayoutParams(rlParams);

RelativeLayout.LayoutParams tParams = new RelativeLayout.LayoutParams
     (LayoutParams.WRAP_CONTENT,LayoutParams.WRAP_CONTENT);
tParams.addRule(RelativeLayout.CENTER_HORIZONTAL, RelativeLayout.TRUE);
tParams.addRule(RelativeLayout.ALIGN_PARENT_BOTTOM, RelativeLayout.TRUE);

TextView text=new TextView(this);
text.setText("GOLDEN Gate");
text.setTextColor(Color.WHITE);
text.setTypeface(Typeface.DEFAULT_BOLD);
text.setLayoutParams(tParams);

rLayout.addView(image);
rLayout.addView(text);
setContentView(rLayout);
```
EF Core 2.0.0 Query Filter is Caching TenantId (Updated for 2.0.1+) I'm building a multi-tenant application, and am running into difficulties with what I think is EF Core caching the tenant id across requests. The only thing that seems to help is constantly rebuilding the application as I sign in and out of tenants. I thought it may have something to do with the `IHttpContextAccessor` instance being a singleton, but it can't be scoped, and when I sign in and out without rebuilding I can see the tenant's name change at the top of the page, so it's not the issue. The only other thing I can think of is that EF Core is doing some sort of query caching. I'm not sure why it would be considering that it's a scoped instance and it should be getting rebuild on every request, unless I'm wrong, which I probably am. I was hoping it would behave like a scoped instance so I could simply inject the tenant id at model build time on each instance. I'd really appreciate it if someone could point me in the right direction. Here's my current code: **TenantProvider.cs** ``` public sealed class TenantProvider : ITenantProvider { private readonly IHttpContextAccessor _accessor; public TenantProvider( IHttpContextAccessor accessor) { _accessor = accessor; } public int GetId() { return _accessor.HttpContext.User.GetTenantId(); } } ``` ...which is injected into **TenantEntityConfigurationBase.cs** where I use it to setup a global query filter. ``` internal abstract class TenantEntityConfigurationBase<TEntity, TKey> : EntityConfigurationBase<TEntity, TKey> where TEntity : TenantEntityBase<TKey> where TKey : IEquatable<TKey> { protected readonly ITenantProvider TenantProvider; protected TenantEntityConfigurationBase( string table, string schema, ITenantProvider tenantProvider) : base(table, schema) { TenantProvider = tenantProvider; } protected override void ConfigureFilters( EntityTypeBuilder<TEntity> builder) { base.ConfigureFilters(builder); builder.HasQueryFilter( e => e.TenantId == TenantProvider.GetId()); } protected override void ConfigureRelationships( EntityTypeBuilder<TEntity> builder) { base.ConfigureRelationships(builder); builder.HasOne( t => t.Tenant).WithMany().HasForeignKey( k => k.TenantId); } } ``` ...which is then inherited by all other tenant entity configurations. Unfortunately it doesn't seem to work as I had planned. I have verified that the tenant id being returned by the user principal is changing depending on what tenant user is logged in, so that's not the issue. Thanks in advance for any help! **Update** For a solution when using EF Core 2.0.1+, look at the not-accepted answer from me. **Update 2** Also look at Ivan's update for 2.0.1+, it proxies in the filter expression from the DbContext which restores the ability to define it once in a base configuration class. Both solutions have their pros and cons. I've opted for Ivan's again because I just want to leverage my base configurations as much as possible.
Currently (as of EF Core 2.0.0) the dynamic global query filtering is quite limited. It works *only* if the dynamic part is provided by *direct property* of the target `DbContext` derived class (or one of its base `DbContext` derived classes). Exactly as in the [**Model-level query filters**](https://learn.microsoft.com/en-us/ef/core/what-is-new/) example from the documentation. Exactly that way - no method calls, no nested property accessors - just property of the context. It's sort of explained in the link: > > Note the use of a `DbContext` instance level property: `TenantId`. Model-level filters will use the value from the correct context instance. i.e. the one that is executing the query. > > > To make it work in your scenario, you have to create a base class like this: ``` public abstract class TenantDbContext : DbContext { protected ITenantProvider TenantProvider; internal int TenantId => TenantProvider.GetId(); } ``` derive your context class from it and somehow inject the `TenantProvider` instance into it. Then modify the `TenantEntityConfigurationBase` class to receive `TenantDbContext`: ``` internal abstract class TenantEntityConfigurationBase<TEntity, TKey> : EntityConfigurationBase<TEntity, TKey> where TEntity : TenantEntityBase<TKey> where TKey : IEquatable<TKey> { protected readonly TenantDbContext Context; protected TenantEntityConfigurationBase( string table, string schema, TenantDbContext context) : base(table, schema) { Context = context; } protected override void ConfigureFilters( EntityTypeBuilder<TEntity> builder) { base.ConfigureFilters(builder); builder.HasQueryFilter( e => e.TenantId == Context.TenantId); } protected override void ConfigureRelationships( EntityTypeBuilder<TEntity> builder) { base.ConfigureRelationships(builder); builder.HasOne( t => t.Tenant).WithMany().HasForeignKey( k => k.TenantId); } } ``` and everything will work as expected. And remember, the `Context` variable type must be a `DbContext` derived *class* - replacing it with *interface* won't work. **Update for 2.0.1**: As @Smit pointed out in the comments, v2.0.1 removed most of the limitations - now you can use methods and sub properties. However, it introduced another requirement - the dynamic expression *must* be *rooted* at the `DbContext`. This requirement breaks the above solution, since the expression root is `TenantEntityConfigurationBase<TEntity, TKey>` class, and it's not so easy to create such expression outside the `DbContext` due to lack of compile time support for generating constant expressions. It could be solved with some low level expression manipulation methods, but the easier in your case would be to move the filter creation in *generic instance* method of the `TenantDbContext` and call it from the entity configuration class. Here are the modifications: *TenantDbContext class*: ``` internal Expression<Func<TEntity, bool>> CreateFilter<TEntity, TKey>() where TEntity : TenantEntityBase<TKey> where TKey : IEquatable<TKey> { return e => e.TenantId == TenantId; } ``` *TenantEntityConfigurationBase<TEntity, TKey> class*: ``` builder.HasQueryFilter(Context.CreateFilter<TEntity, TKey>()); ```
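One piece the snippets above leave open is how the `TenantProvider` instance actually gets into the context. A minimal sketch, assuming a standard ASP.NET Core dependency-injection setup (the concrete names `AppDbContext` and the SQL Server provider are just placeholders for whatever you use):

```csharp
// give the abstract base a constructor that forwards the options to DbContext
public abstract class TenantDbContext : DbContext
{
    protected ITenantProvider TenantProvider;

    protected TenantDbContext(DbContextOptions options) : base(options) { }

    internal int TenantId => TenantProvider.GetId();
}

// the concrete context receives the tenant provider from DI
public class AppDbContext : TenantDbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options, ITenantProvider tenantProvider)
        : base(options)
    {
        TenantProvider = tenantProvider;
    }
}

// Startup.ConfigureServices
services.AddSingleton<IHttpContextAccessor, HttpContextAccessor>();
services.AddScoped<ITenantProvider, TenantProvider>();
services.AddDbContext<AppDbContext>(options => options.UseSqlServer(connectionString));
```

Because the context is scoped per request, the filter always sees the tenant id of the currently signed-in user.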
How can I host a large list in my 8G DDR3 RAM? I am new to python and just wondering how memory allocation works there. It turns out that one way to measure the size of a variable stored is to use `sys.getsizeof(x)` and it will return the number of bytes that are occupied by `x` in the memory, right? The following is an example code: ``` import struct import sys x = struct.pack('<L', 0xffffffff) print(len(x)) print(sys.getsizeof(x)) ``` which gives: ``` 4 37 ``` The variable `x` that I have just created is a 4-byte string and the first question rises here. Why is the memory allocated to a 4-byte string is `37` bytes? Is not that too much extra space? The story gets more complicated when I start to create a list of 2 \* 4-byte strings. Bellow you will find another few lines: ``` import struct import sys k = 2 rng = range(0, k) x = [b''] * k for i in rng: x[i] = struct.pack('<L', 0xffffffff) print(len(x)) print(len(x[0])) print(sys.getsizeof(x)) print(sys.getsizeof(x[0])) ``` from which I get: ``` 2 4 80 37 ``` Another question is that why when I store two 4-byte strings in a list the total sum of the memory allocated to them is not equal to the sum of their solo sizes?! That is `37 + 37 != 80`. What are those extra 6 bytes for? Lets enlarge `k` to `10000`, the previous code gives: ``` 10000 4 80064 37 ``` Here the difference rises dramatically when comparing the solo size to the whole: `37 * 10000 = 370000 != 80064`. It looks like that each item in the list is now occupying `80064/10000 = 8.0064` bytes. Sounds feasible but I still cannot address previously shown conflicts. After all, the main question of mine is that when I rise `k` to `0xffffffff` and expect to get a list of size `~ 8 * 0xffffffff = 34359738360` I actually encounter an exception of MemoryError. Is there any way to eliminate non-critical memory spaces so that my 8G DDR3 RAM can host this variable `x`?
> > Why is the memory allocated to a 4-byte string is 37 bytes? Is not that too much extra space? > > > *All* objects in Python have some amount of "slop" on a per-object basis. Note that in the case of `bytes` and probably all immutable stdlib types, this padding (here 33 bytes) is *independent* of the length of the object: ``` from sys import getsizeof as gso print(gso(b'x'*1000) - gso(b'')) # 1000 ``` Note, this ***is not*** the same as: ``` print(gso([b'x']*1000) - gso(b'')) # 8031 ``` In the former, you're making a `bytes` object of 1000 x's. In the latter, you're making a list of 1000 bytes objects. The important distinction is that in the latter, you're (a) replicating the bytes object 1000 times, *and* incorporating the size of the list container. (The reason for the difference being only ~8,000 and not ~34,000 (i.e. 8 bytes per element instead of 34 bytes per element (=`sizeof(b'x')`) comes next.) Lets talk about containers: ``` print(gso([b'x'*100,]) - gso([])) ``` Here we print the difference between the `getsizeof` of a one element list (of a 100 byte long `byte` object) and an empty list. We're effectively [taring](https://en.wikipedia.org/wiki/Tare_weight) out the size of the container. We might expect that this is equal to `getsizeof(b'x' * 100)`. It is not. The result of `print(gso([b'x'*100,]) - gso([]))` is 8 bytes (on my machine) and is because the list contains just references/pointers to underlying objects and those 8 bytes are just that -- the pointer to the single element of the list. > > That is 37 + 37 != 80. What are those extra 6 bytes for? > > > Lets do the same thing and look at the net size, by subtracting the size of the container: ``` x = [b'\xff\xff\xff\xff', b'\xff\xff\xff\xff'] print(gso(x[0]) - gso(b'')) # 4 print(gso(x) - gso([])) # 16 ``` In the first, the 4 returned is just as the 1000 returned in the first example I provided, one per byte. (`len(x[0])` is 4). In the second, it's 8 bytes per reference-to-sublist. It has nothing to do with the *contents* of those sublists: ``` N = 1000 x = [b'x'] * N y = [b'xxxx'] * N print(gso(x) == gso(y)) # True ``` But while mutable containers don't seem to have a fixed "slop": ``` lst = [] for _ in range(100): lst.append('-') x = list(lst) slop = gso(x) - (8 * len(x)) print({"len": len(x), "slop": slop}) ``` Output: ``` {'len': 1, 'slop': 88} {'len': 2, 'slop': 88} {'len': 3, 'slop': 88} {'len': 4, 'slop': 88} {'len': 5, 'slop': 88} {'len': 6, 'slop': 88} {'len': 7, 'slop': 88} {'len': 8, 'slop': 96} {'len': 9, 'slop': 120} {'len': 10, 'slop': 120} {'len': 11, 'slop': 120} {'len': 12, 'slop': 120} {'len': 13, 'slop': 120} {'len': 14, 'slop': 120} {'len': 15, 'slop': 120} {'len': 16, 'slop': 128} {'len': 17, 'slop': 128} {'len': 18, 'slop': 128} {'len': 19, 'slop': 128} {'len': 20, 'slop': 128} {'len': 21, 'slop': 128} {'len': 22, 'slop': 128} {'len': 23, 'slop': 128} {'len': 24, 'slop': 136} ... ``` ...*Immutable* containers do: ``` lst = [] for _ in range(100): lst.append('-') x = tuple(lst) slop = gso(x) - (8 * len(x)) print({"len": len(x), "slop": slop}) ``` ``` {'len': 1, 'slop': 48} {'len': 2, 'slop': 48} {'len': 3, 'slop': 48} {'len': 4, 'slop': 48} {'len': 5, 'slop': 48} {'len': 6, 'slop': 48} {'len': 7, 'slop': 48} {'len': 8, 'slop': 48} {'len': 9, 'slop': 48} {'len': 10, 'slop': 48} {'len': 11, 'slop': 48} {'len': 12, 'slop': 48} {'len': 13, 'slop': 48} {'len': 14, 'slop': 48} ... ``` > > Is there any way to eliminate non-critical memory spaces so that my 8G DDR3 RAM can host this variable x? 
> > > First, recall that the sizeof a container will not reflect the entire amount of memory used by Python. The ~8 bytes per element is *the size of the pointer*, each of those elements will consume an additional 37 (or whatever) bytes (sans interning or similar optimization). But the good news is that it's unlikely you probably don't need the *entire* list at the same time. If you're just building a list to iterate over, then generate it one element at a time, with a for loop or generator function. Or generate it a chunk at a time, process it, and then continue, letting the garbage collector clean up the no-longer-used memory. --- One other interesting thing to point out ``` N = 1000 x = [b'x' for _ in range(N)] y = [b'x'] * N print(x == y) # True print(gso(x) == gso(y)) # False ``` (This is likely due to the size of `y` being known *a priori*, while the size of `x` is not and has been resized as it grew).
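Coming back to the original memory question: instead of materialising the whole list up front, a generator produces the packed values lazily, one at a time, so only the element currently being processed has to live in memory:

```python
import struct

def packed_values(n):
    """Yield n packed 4-byte values one at a time instead of storing them all."""
    for _ in range(n):
        yield struct.pack('<L', 0xffffffff)

total = 0
for chunk in packed_values(10000000):
    total += len(chunk)   # process each value, then let it be garbage collected
print(total)
```

This processes ten million values while keeping memory usage essentially flat, which is usually what you want when the full list would not fit in RAM anyway.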
Why we have to specify data type again after the arrow symbol ( -> ) `auto` can deduce the return type then why do we need trailing arrow symbol (->) to deduce the return type ``` #include <iostream> auto add (int i, int j)->int { return i+j; } int main() { int x=10,y=20; std::cout<<add(x,y); } ```
In C++11, there is no return type deduction for functions. `auto` is not a placeholder type to be deduced here. You could say its meaning is overloaded. For functions, `auto` simply means the return type will be specified as a trailing return type. You cannot omit the trailing return, or your program will be ill-formed. This feature was added to the language to allow return type specification to depend on the functions parameters, or enclosing class for members. Those are considered "seen" by the time the trailing return type is reached. For instance, in this class: ``` namespace baz { struct foo { enum bar {SOMETHING}; bar func(); }; } ``` If we implement that member function out of line in C++03, it would have to look like this: ``` baz::foo::bar baz::foo::func() { return SOMETHING; } ``` We must specify the fully qualified name for the return type. Which can quickly become unreadable. But with trailing return types: ``` auto baz::foo::func() -> bar { return SOMETHING; } ``` The full enclosing namespace is already seen, and `bar` can be specified using an unqualified id.
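The parameter-dependent case is where the trailing syntax really pays off. Here is a small sketch (not from the question) using `decltype` on the parameters, which only works because they have already been seen by the time the return type is read:

```cpp
#include <iostream>

template <typename T, typename U>
auto add(T t, U u) -> decltype(t + u)   // t and u are in scope here
{
    return t + u;
}

int main()
{
    std::cout << add(1, 2.5) << '\n';   // deduces double, prints 3.5
}
```

Writing `decltype(t + u) add(T t, U u)` would not compile in C++11, because `t` and `u` are not visible yet at that point.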
C# reflection - load assembly and invoke a method if it exists I want to load an assembly (its name is stored in a string), use reflection to check if it has a method called "CustomType MyMethod(byte[] a, int b)" and call it or throw an exception otherwise. I guess I should do something like this, but would appreciate if someone could offer same advice on how best to do it: ``` Assembly asm = Assembly.Load("myAssembly"); /* 1. does it matter if write myAssembly or myAssembly.dll? */ Type t = asm.GetType("myAssembly.ClassName"); // specify parameters byte[] a = GetParamA(); int b = GetParamB(); object[] params = new object[2]; params[0] = a; params[1] = b; /* 2. invoke method MyMethod() which returns object "CustomType" - how do I check if it exists? */ /* 3. what's the meaning of 4th parameter (t in this case); MSDN says this is "the Object on which to invoke the specified member", but isn't this already accounted for by using t.InvokeMember()? */ CustomType result = t.InvokeMember("MyMethod", BindingFlags.InvokeMethod, null, t, params); ``` Is this good enough, or are there better/faster/shorter ways? What about constructors, given that these methods are not static - can they simply be ignored? When invoking void Methods(), is it ok to just write t.InvokeMember(...) or should you always do Object obj = t.InvokeMember(...)? Thanks in advance. --- **EDIT** I have provided a working example as a separate answer below.
> use reflection to check if it has a method called "CustomType MyMethod(byte[] a, int b)" and call it or throw an exception otherwise

Your current code isn't fulfilling that requirement. But you can do it pretty easily with something like this:

```
var methodInfo = t.GetMethod("MyMethod", new Type[] { typeof(byte[]), typeof(int) });
if (methodInfo == null) // the method doesn't exist
{
    // throw some exception
}

var o = Activator.CreateInstance(t);

// note: "params" is a reserved word in C#, so give your argument array a different name
var result = methodInfo.Invoke(o, args);
```

> Is this good enough, or are there better/faster/shorter ways?

As far as I'm concerned this is the best way and there isn't really anything faster per se.

> What about constructors, given that these methods are not static - can they simply be ignored?

You are still going to have to create an instance of `t` as shown in my example. This will use the default constructor with no arguments. If you need to pass arguments you can, just see the [MSDN documentation](http://msdn.microsoft.com/en-us/library/wcxyzt4d.aspx) and modify it as such.
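Putting the pieces together, a small helper along these lines would load the assembly, verify the method exists and invoke it, throwing otherwise (the helper name `InvokeMyMethod` is of course just illustrative):

```csharp
public static object InvokeMyMethod(string assemblyName, string typeName, byte[] a, int b)
{
    var asm = Assembly.Load(assemblyName);                 // e.g. "myAssembly"
    var type = asm.GetType(typeName, throwOnError: true);  // e.g. "myAssembly.ClassName"

    var method = type.GetMethod("MyMethod", new[] { typeof(byte[]), typeof(int) });
    if (method == null)
        throw new MissingMethodException(typeName, "MyMethod");

    var instance = Activator.CreateInstance(type);         // uses the parameterless constructor
    return method.Invoke(instance, new object[] { a, b });
}
```

For a void method you can simply ignore the return value of `Invoke`, which will be `null`.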
GraphRbacManagementClient.applications.create() returns Access Token missing or malformed We're trying to create an Azure application registration using the Python SDK (v2.0) and the current user's CLI credentials. ``` from azure.common.credentials import get_azure_cli_credentials from azure.graphrbac import GraphRbacManagementClient credentials, subscription_id = get_azure_cli_credentials() client = GraphRbacManagementClient(credentials, 'my-tenant-id') app_parameters = { 'available_to_other_tenants': False, 'display_name': 'my-app-name', 'identifier_uris': ['http://my-app-name.com'] } app = client.applications.create(app_parameters) ``` But this returns ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/my-app-code/.venv/lib/python3.6/site-packages/azure/graphrbac/operations/applications_operations.py", line 86, in create raise models.GraphErrorException(self._deserialize, response) azure.graphrbac.models.graph_error.GraphErrorException: Access Token missing or malformed. ``` We noted that we can avoid this error when using `ServicePrincipalCredentials` by including `resource='https://graph.windows.net'` in the constructor, but there doesn't seem to be an equivalent way to do this when using `get_azure_cli_credentials()`. Are we doing something wrong, or should this work? Please do not reply that we should be using `ServicePrincipalCredentials`. Our use case is explicitly that the interactive user can create/register an Azure application using the Python SDK.
`get_azure_cli_credentials` is indeed not able yet to provide you a Credentials class with a "resource" definition different than ARM for now (now being: azure-common 1.1.10 and below) You can workaround by doing: ``` from azure.common.credentials import get_cli_profile profile = get_cli_profile() cred, subscription_id, _ = profile.get_login_credentials(resource='https://graph.windows.net') ``` Please create an issue on <https://github.com/Azure/azure-sdk-for-python>, with a link to this SO, and I will try to do it for the next release of azure-common. (I work at MS and own this code) **Edit:** Released part of 1.1.11 <https://pypi.org/project/azure-common/1.1.11/> ``` from azure.common.credentials import get_azure_cli_credentials cred, subscription_id = get_azure_cli_credentials(resource='https://graph.windows.net') ```
UILabel sizeToFit and constraints Is there any simple way which can help me to change position of dependent views dynamically using their content size? I want to show several views in column which all have varying content. And I want them to be placed one after another (I've created layout using constraints which looks like this) ![initial layout](https://i.stack.imgur.com/ehmMe.png) But whenever I change content of labels and call `sizeToFit`, system seems to ignore layout. ![after size to fit call](https://i.stack.imgur.com/vub1F.png) At the moment I'm interested only in height property, I know that constraining rect can be used too and in the past I wrote many categories on UIView to change sizes dynamically (I guess everyone did). But maybe there is a simple way which I don't know?
`-sizeToFit` should not be called if you are using auto-layout. That's part of the 'old' system. It looks like IB has inserted explicit heights into your constraints (the vertical bars next to the labels indicate this). Try selecting the labels and hitting Cmd+= to clear these. For multiline labels you will also need to do the following in your view controller to make everything work correctly when rotating/resizing the view: ``` - (void)updateLabelPreferredMaxLayoutWidthToCurrentWidth:(UILabel *)label { label.preferredMaxLayoutWidth = [label alignmentRectForFrame:label.frame].size.width; } - (void)viewDidLayoutSubviews { [super viewDidLayoutSubviews]; [self updateLabelPreferredMaxLayoutWidthToCurrentWidth:self.label1]; [self updateLabelPreferredMaxLayoutWidthToCurrentWidth:self.label2]; [self updateLabelPreferredMaxLayoutWidthToCurrentWidth:self.label3]; [self.view layoutSubviews]; } ``` Multiline labels expose one of the weaknesses of auto-layout. We have to update `preferredMaxLayoutWidth` to force the label to reflow and adjust its height, otherwise if the view is resized/rotated, auto-layout does not realize the label needs to be reflowed and resized.
Powershell GetEnvironmentVariable vs $Env I have run into a couple cases where I am trying to use a command via command line, but the command is not recognized. I have narrowed it down to an issue with environment variables. In each case, the variable is present when I retrieve the variable with the underlying C# method, but not with the shorthand, $env:myVariable For example, if I retrieve the variable like this, I will get a value. ``` [Environment]::GetEnvironmentVariable('ChocolateyInstall', 'Machine') ``` But, if I retrieve the variable like this, nothing is returned ``` $env:ChocolateyInstall ``` I then have to do something like this to to get my command to work. ``` $env:ChocolateyInstall = [Environment]::GetEnvironmentVariable('ChocolateyInstall', 'Machine') ``` I have not been able to find a good explanation as to why I have to do this. I've looked at [this documentation](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_environment_variables?view=powershell-7), but nothing stands out to me. Ideally, I would like to install a CLI and then not have to deal with checking for and assigning environment variables for the command to work.
When opening a PowerShell session, all permanently stored environment variables1 will be loaded into the Environment drive (`Env:`) of this current session ([source](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_environment_provider?view=powershell-7#detailed-description)): > > The Environment drive is a flat namespace containing the environment > variables specific to the current user's session. > > > The [documentation](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_environment_variables?view=powershell-7#changing-environment-variables) you linked states: > > When you change environment variables in PowerShell, the change > affects only the current session. This behavior resembles the behavior > of the `Set` command in the Windows Command Shell and the `Setenv` command > in UNIX-based environments. To change values in the Machine or User > scopes, you must use the methods of the **System.Environment** class. > > > So defining/changing an environment variable like this: ``` $env:ChocolateyInstall = [Environment]::GetEnvironmentVariable('ChocolateyInstall', 'Machine') ``` Will change it for the current session, thus being immediately effective, but will also only be valid for the current session. The methods of `[System.Environment]` are more fine grained. There you can choose which environment variable scope to address. There are three scopes available: - Machine - User - Process The `Process` scope is equivalent to the Environment drive and covers the environment variables available in your current session. The `Machine` and the `User` scope address the permanently stored environment variables1. You can get variables from a particular scope like this: ``` [Environment]::GetEnvironmentVariable('ChocolateyInstall', 'Machine') ``` And set them with: ``` [Environment]::SetEnvironmentVariable('ChocolateyInstall', 'any/path/to/somewhere', 'Machine') ``` If you want to have new variables from the `Machine` or `User` scope available in your current PowerShell session, you have to create a new one. But don't open a new PowerShell session from your current PowerShell session, as it will then inherit all environment variables from your current PowerShell session ([source](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_environment_variables?view=powershell-7#long-description)): > > Environment variables, unlike other types of variables in PowerShell, > are inherited by child processes, such as local background jobs and > the sessions in which module members run. This makes environment > variables well suited to storing values that are needed in both parent > and child processes. > > > So, to address the problem you described, you most probably changed your permanently stored environment variables1, while already having an open PowerShell session. If so, you just need to open a new (really new, see above) session and you will be able to access your environment variables via the Environment drive. Just to be clear, opening a new session will even reload environment variables of the `Machine` scope. There is no reboot required. --- 1 That are the environment variables you see in the GUI when going to the *System Control Panel*, selecting *Advanced System Settings* and on the *Advanced* tab, clicking on *Environment Variable*. Those variables cover the `User` and the `Machine` scope. 
Alternatively, you can open this GUI directly by executing: ``` rundll32 sysdm.cpl,EditEnvironmentVariables ```
c++ library to make tar files Is there any c++ library available which can create tar files ? I have stream of text which I need to break into parts and make small files which should all reside in a tar ball ! Regards, Lalith
A quick Google search uncovers the **[Chilkat C/C++ TAR Library](http://www.chilkatsoft.com/tar-c++.asp)**. A sample of its use is available here: [Chilkat C++ Examples: Create TAR Archive](http://www.example-code.com/vcpp/tar_create.asp)

Another possible option is [libtar](https://github.com/tklauser/libtar) (note, however, that it is a C library).

And, of course, you could read the TAR file spec (all of what you need is linked through [Wikipedia](http://en.wikipedia.org/wiki/Tar_(file_format))) and implement your own library. It looks like someone has already done that [here](http://plindenbaum.blogspot.com/2010/08/creating-tar-file-in-c.html), with the source available.
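If you do decide to roll your own, the format is simpler than it looks: every entry is a 512-byte header followed by the file data padded to a multiple of 512 bytes, and the archive ends with two all-zero blocks. A rough sketch of the POSIX ustar header layout (all numeric fields are ASCII octal; the checksum is the byte sum of the header computed with the `chksum` field treated as eight spaces):

```cpp
// 512-byte POSIX ustar header (the field sizes add up to exactly 512)
struct TarHeader {
    char name[100];
    char mode[8];
    char uid[8];
    char gid[8];
    char size[12];      // file size, ASCII octal
    char mtime[12];     // modification time, ASCII octal
    char chksum[8];     // header checksum, ASCII octal
    char typeflag;      // '0' = regular file, '5' = directory, ...
    char linkname[100];
    char magic[6];      // "ustar" followed by NUL
    char version[2];    // "00"
    char uname[32];
    char gname[32];
    char devmajor[8];
    char devminor[8];
    char prefix[155];
    char padding[12];   // pads the struct to 512 bytes
};
static_assert(sizeof(TarHeader) == 512, "header must be exactly 512 bytes");
```

Filling one of these per file and streaming the padded contents after it is enough to produce an archive that `tar -tvf` will happily list.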
The object is null, however checking if it's null returning false I am suffering a weird problem in C# 4.5. I have this in my model: ``` private DataMatrix<T> _matrix; public DataMatrix<T> Matrix { get { return _matrix; } set { _matrix = value; } } ``` And I have a property which uses this: ``` public object SingleElement { get { if (Matrix == null) return String.Empty; if (Matrix.ColumnCount >= 1 && Matrix.RowCount >= 1) { return Matrix[0, 0]; } return null; } } ``` When I run it, before calling `SingleElement`, the Matrix property is null. But it doesn't return `String.Empty`, it goes to the second if-statement. That's my Immediate window says: ![Immediate window](https://i.stack.imgur.com/j2Sv7.png) I'm a bit confused. What did I do wrong?
This is most likely a broken equality operator (`==`), which can be reproduced with the following code:

```
class Foo
{
    public static bool operator == (Foo x, Foo y)
    {
        return false; // probably more complex stuff here in the real code
    }
    public static bool operator != (Foo x, Foo y)
    {
        return !(x == y);
    }
    static void Main()
    {
        Foo obj = null;
        System.Diagnostics.Debugger.Break();
    }
    // note there are two compiler warnings here about GetHashCode/Equals;
    // I am ignoring those for brevity
}
```

now at the breakpoint in the immediate window:

```
?obj
null
?(obj==null)
false
```

Two fixes:

- The preferred fix would be to correct the operator, perhaps adding this before anything else:

```
if(ReferenceEquals(x,y)) return true;
if((object)x == null || (object)y == null) return false;
// the rest of the code...
```

- An alternative, if you can't edit that type, is to avoid using the operator; consider using `ReferenceEquals` explicitly in your code, or performing `object`-based `null` checks; for example:

```
if(ReferenceEquals(Matrix, null)) ...
```

or

```
if((object)Matrix == null) ...
```
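For completeness, a null-safe version of the operator pair might look like this (kept minimal; a real implementation would also override `Equals` and `GetHashCode`):

```csharp
public static bool operator ==(Foo x, Foo y)
{
    if (ReferenceEquals(x, y)) return true;               // same instance, or both null
    if ((object)x == null || (object)y == null) return false;
    return true; // TODO: the real field-by-field comparison goes here
}

public static bool operator !=(Foo x, Foo y)
{
    return !(x == y);
}
```

With that in place, `Matrix == null` behaves as expected again and the guard clause in `SingleElement` returns `String.Empty` as intended.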
Asp.NET MVC: validating age > 18 having 3 combo box I've 3 combo box showing the day, month and year of birth in MVC. I'd like to calculate the age and disallow registration for guys which are younger than 18 dinamically. Though JS. Something similar to what is shown in this image: ![enter image description here](https://i.stack.imgur.com/bR1C7.png) This is done by using DataAnnotations and EditorFor. The actual source code is similar to what follows. How should I modify to validate 3 controls together? ``` [Required(ErrorMessageResourceType = typeof (Resources), ErrorMessageResourceName = "RequiredField")] [Range(1, 31)] [LocalizedDisplayName(typeof (RA.Resources), "RegistrationDayOfBirth")] public int BirthDay { get; set; } [Required(ErrorMessageResourceType = typeof (Resources), ErrorMessageResourceName = "RequiredField")] [Range(1, 12)] [LocalizedDisplayName(typeof (RA.Resources), "RegistrationMonthOfBirth")] public int BirthMonth { get; set; } [Required(ErrorMessageResourceType = typeof (Resources), ErrorMessageResourceName = "RequiredField")] [Range(1800, int.MaxValue, ErrorMessageResourceType = typeof (Resources), ErrorMessageResourceName = "MoreThanFieldRequired")] [LocalizedDisplayName(typeof (RA.Resources), "RegistrationYearOfBirth")] public int BirthYear { get; set; } [LocalizedDisplayName(typeof (RA.Resources), "RegistrationDateOfBirth")] public DateTime DateOfBirth { get; set; } ```
If you want to stick with the 3 field approach, as well as have dynamic validation (i.e. deny me access today if my 18th birthday is tomorrow, but let me in tomorrow) you're going to need to get creative. You'll then need to create a custom validator, and some custom attributes to go with it. How you go about this depends on the amount of work you want to do, and where you want to apply the validation logic. # Server-side only validation The simplest option is to define this on the class itself - however this will limit you to server-side validation only. Create a custom attribute that is applied at the class level that expects there to be three fields on the class (I've added an interface to make this simpler and not require reflection) and validate this as required: ``` // Interface to define individual fields: public interface IHasIndividualDateOfBirth { int BirthDay { get; set; } int BirthMonth { get; set; } int BirthYear { get; set; } } // Note new class level attribute, and interface declaration: [MinAge(AgeInYears = 18)] public class Birthday: IHasIndividualDateOfBirth { [Required] [Range(1, 31)] public int BirthDay { get; set; } [Required] [Range(1, 12)] public int BirthMonth { get; set; } [Required] [Range(1800, 2200)] public int BirthYear { get; set; } public DateTime BirthDate { get; set; } } // Declare a new ValidationAttribute that can be used at the class level: [AttributeUsage(AttributeTargets.Class)] public class MinAgeAttribute : ValidationAttribute { public int AgeInYears { get; set; } // Implement IsValid method: protected override ValidationResult IsValid(object value, ValidationContext validationContext) { // Retrieve the object that was passed in as our DateOfBirth type var objectWithDob = validationContext.ObjectInstance as IHasIndividualDateOfBirth; if (null != objectWithDob) { // TODO: Handle invalid dates from the front-end (30 Feb for example) DateTime dateOfBirth = new DateTime(objectWithDob.BirthYear, objectWithDob.BirthMonth, objectWithDob.BirthDay); // Check that the age is more than the minimum requested if (DateTime.Now >= dateOfBirth.AddYears(AgeInYears)) { return ValidationResult.Success; } return new ValidationResult("You are not yet 18 years old"); } return new ValidationResult("Class doesn't implement IHasIndividualBirthday"); } } ``` While implementing `IValidatableObject` may seem simpler still, it isn't as flexible as using an attribute and also (like the class-based validation above) doesn't provide a way to perform client-side validation. Other options would be to create a validator that depends on a number of other fields (in which case you would probably need to use reflection to look for other fields, and work out which ones go where) and you'd need to ensure you're only firing the validator once (rather than on every field), or to write a custom validator and editor for the DateTime property that instead of rendering a single field that you could drop a calendar control onto creates the three separate fields you're after. # Client- and server-side validation To get client-side validation working, you'll need to do this at the property level, which will require you to do some additional work - you could for example use the DateTime field you have on the model as a hidden field that is populated via JS as the user fills in the individual fields and then validate that. 
Your attribute would then need to implement `IClientValidatable` which would enable you to hook into the client-side validation options and also render out some metadata on the elements to expose the age requirement: ``` [AttributeUsage(AttributeTargets.Property)] public class MinAgeAttribute : ValidationAttribute, IClientValidatable { public int AgeInYears { get; set; } protected override ValidationResult IsValid(object value, ValidationContext validationContext) { // [Similar to before] } public IEnumerable<ModelClientValidationRule> GetClientValidationRules( ModelMetadata metadata, ControllerContext context) { return new[] { new ModelClientValidationMinAgeRule(ErrorMessage, AgeInYears) }; } } public class ModelClientValidationMinAgeRule : ModelClientValidationRule { public ModelClientValidationMinAgeRule(string errorMessage, int ageInYears) { ErrorMessage = errorMessage; // Validation Type and Parameters must be lowercase ValidationType = "minage"; ValidationParameters.Add("ageinyears", ageInYears); } } ``` Then for the client side you need to register some custom validators into jQuery.Validate or similar (I recommend your own JS file included in the `jqueryval` bundle): ``` $(function ($) { $.validator.addMethod("minage", function(value, element, params) { if ($(element).val() != '') { var ageInYears = params; // take date from BirthDate element and compare with ageInYears. return false; } }); $.validator.unobtrusive.adapters.addSingleVal("minage", "ageinyears"); }(jQuery)); ```
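Usage then becomes a one-liner on the model - for the property-level variant you would decorate the combined date property (the error message text is just an example):

```csharp
[MinAge(AgeInYears = 18, ErrorMessage = "You must be at least 18 years old to register")]
public DateTime DateOfBirth { get; set; }
```

Just remember to include your custom validation script in the `jqueryval` bundle so the client-side rule is actually wired up alongside the unobtrusive validation scripts.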
Typescript interface for elliptic I'm trying to create a typescript interface for the elliptic library. <https://www.npmjs.com/package/elliptic> I've read the docs over here, but apparently I'm just getting it. <https://www.typescriptlang.org/docs/handbook/declaration-files/library-structures.html> The JS is normally used as follows: ``` var EC = require('elliptic').ec; var ec = new EC('secp256k1'); var key = ec.genKeyPair(); var key = ec.keyFromPublic(pub, 'hex'); ``` I feel like I've tried so many combinations at this point that I'd just be confusing this post to show you what I've tried. The two functions shown are the only ones that I need to call genKeyPair() & keyFromPublic(). Could someone please get me started with a definition file?
Put this in a file called `elliptic-types.ts`: ``` declare module "elliptic" { type CurvePreset = 'secp256k1' | 'p192' | 'p224' | 'p256' | 'p384' | 'p521' | 'curve25519' | 'ed25519' ; class EllipticCurve { constructor(preset: CurvePreset); genKeyPair(): any; keyFromPublic(publicKey: string, type: 'hex'): any; } export { EllipticCurve as ec } } ``` And then use it like this: ``` import "./elliptic-types"; import * as elliptic from "elliptic"; var EC = elliptic.ec; var ec = new EC('secp256k1'); var key = ec.genKeyPair(); var key = ec.keyFromPublic(pub, 'hex'); ``` Note that I just glanced at the README.md for the repo, which is how I got the curve presets. I didn't see what either function was actually returning, so I returned `any`. You'll probably want to improve on that, since `any` is generally bad.
How to calculate a packet checksum without sending it? I'm using scapy, and I want to create a packet and calculate its' checksum without sending it. Is there a way to do it? Thanks.
You need to delete the `.chksum` value from the packet after you create it; then call `.show2()` ``` >>> from scapy.layers.inet import IP >>> from scapy.layers.inet import ICMP >>> from scapy.layers.inet import TCP >>> target = "10.9.8.7" >>> ttl = 64 >>> id = 32711 >>> sport = 2927 >>> dport = 80 >>> pak = IP(dst=target, src = "100.99.98.97", ttl=ttl, flags="DF", id=id, len=1200, chksum = 0)/TCP(flags="S", sport=sport, dport=int(dport), options=[('Timestamp',(0,0))], chksum = 0) >>> del pak[IP].chksum >>> del pak[TCP].chksum >>> pak.show2() ###[ IP ]### version = 4L ihl = 5L tos = 0x0 len = 1200 id = 32711 flags = DF frag = 0L ttl = 64 proto = tcp chksum = 0x9afd src = 100.99.98.97 dst = 10.9.8.7 \options \ ###[ TCP ]### sport = 2927 dport = www seq = 0 ack = 0 dataofs = 8L reserved = 0L flags = S window = 8192 chksum = 0x2c0e urgptr = 0 options = [('Timestamp', (0, 0)), ('EOL', None)] >>> ```
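If you want the computed values programmatically rather than just printed, you can rebuild the packet from its own bytes - building forces the checksum calculation, and dissecting the result gives you the filled-in fields. This assumes a recent scapy where `raw()` is available; on older versions use `str(pak)` (Python 2) or `bytes(pak)` instead:

```python
>>> built = IP(raw(pak))      # building computes the checksums, dissecting reads them back
>>> hex(built[IP].chksum)
'0x9afd'
>>> hex(built[TCP].chksum)
'0x2c0e'
```

The values match what `show2()` displays, but now they are ordinary fields you can use in your script without ever sending the packet.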
How can I toggle the touchpad depending on whether a mouse is connected? I want to have my touchpad disabled automatically when an external mouse is connected and enabled when there is none. I have tried using `touchpad-indicator` but that fails in cases when the computer has been put to sleep with a mouse connected and awoken with the mouse disconnected. I have tried to make the following script into a daemon to solve this issue but I can't get it to work: ``` #!/bin/bash declare -i TID declare -i MID TID=`xinput list | grep -Eo 'Touchpad\s*id\=[0-9]{1,2}' | grep -Eo '[0-9]{1,2}'` MID=`xinput list | grep -Eo 'Mouse\s*id\=[0-9]{1,2}' | grep -Eo '[0-9]{1,2}'` if [ $MID -gt 0 ] then xinput disable $TID else xinput enable $TID fi ``` I tried `start-stop-daemon -S -x ./myscript.sh -b` and `setsid ./myscript.sh >/dev/null 2>&1 < /dev/null &` and `nohup ./myscript 0<&- &>/dev/null &` and even `./myscript.sh &` All of these return some 4-digit number, which, I guess, should be PID of the started process but when I launch lxtask **there are no processes with this PID**, even if I tick "view all processes". And, of course, it doesn't work!
The basic script you need is simply:

```
#!/usr/bin/env bash

## Get the touchpad id. The -P means perl regular expressions (for \K)
## the -i makes it case insensitive (better portability) and the -o
## means print only the matched portion. The \K discards anything matched
## before it so this command will print the numeric id only.
TID=$(xinput list | grep -iPo 'touchpad.*id=\K\d+')

## Run every second
while :
do
    ## Disable the touchpad if there is a mouse connected
    ## and enable it if there is none.
    xinput list | grep -iq mouse && xinput disable "$TID" || xinput enable "$TID"
    ## wait one second to avoid spamming your CPU
    sleep 1
done
```

The script above will toggle the touchpad depending on whether a mouse is connected. When launched, it will run forever and will check for a mouse every second, disabling or enabling the touchpad accordingly.

Now, save the script as `~/touchpad.sh`, make it executable (`chmod +x ~/touchpad.sh`) and add it to your GUI session startup programs. You have not specified which desktop environment you are using but since you mentioned `lxtask`, I will assume you are using `LXDE`. In any case, here are instructions for both `LXDE` and `Unity`:

1. Add the script to LXDE's autostart file

```
echo "@$HOME/touchpad.sh" >> ~/.config/lxsession/PROFILE/autostart
```

**Make sure you replace "PROFILE" with the actual name of your LXDE profile**, you can find out what it is by running `ls ~/.config/lxsession/`.

2. Add the script to Unity's autostart files

Open `Startup Applications` (search in the dashboard for "Startup")

![enter image description here](https://i.stack.imgur.com/XwmkG.png)

Click on "Add" and then paste the path to your script in the command field:

![enter image description here](https://i.stack.imgur.com/DzcQM.png)
Change the complex number output format There is the `complex<>` template in C++ standard library, and it has an overloaded << operator so that it outputs complex numbers in the (real\_part, im\_part) format. I need to change the behavior of that operator for complex numbers so that the output format is changed to something completely different. Specifically, I need the output to be in the form `real_part\tim_part`. How do I do that?
There's no direct way to replace `operator <<`, but you do have a few options. First, you could just write your own function to print complex numbers: ``` template <typename T> void PrintComplex(const complex<T>& c) { /* ... */ } ``` If you want to still use the nice stream syntax, then one trick you could do would be to make a wrapper class that wraps a `complex` and then defines its own `operator <<` that prints it out in a different way. For example: ``` template <typename T> class ComplexPrinter { public: /* Conversion constructor allows for implicit conversions from * complex<T> to ComplexPrinter<T>. */ ComplexPrinter(const complex<T>& value) : c(value) { // Handled in initializer list } /* Output the complex in your own format. */ friend ostream& operator<< (ostream& out, const ComplexPrinter& cp) { /* ... print in your own format ... */ } private: complex<T> c; }; ``` Once you have this, you could write something like ``` cout << ComplexPrinter<double>(myComplex) << endl; ``` You can make this even cleaner by writing a function like this one to wrap the object for you: ``` template <typename T> ComplexPrinter<T> wrap(const complex<T>& c) { return ComplexPrinter<T>(c); } ``` This then lets you write ``` cout << wrap(myComplex) << endl; ``` Which isn't perfect, but is pretty good. One thing to note about the above wrapper is that it has an implicit conversion constructor set up to let you convert `complex<T>`s to `ComplexPrinter<T>`s. This means that if you have a `vector< complex<T> >`, you can print it out using your custom code by calling ``` vector< complex<double> > v = /* ... */ copy (v.begin(), v.end(), ostream_iterator< ComplexPrinter<double> >(cout, " ")); ``` On output, the implicit conversion constructor will transform your `complex<double>`s into the wrappers, and your custom code will do the printing for you. If you want to be very adventurous and cast caution to the wind, you could even write the class so that it just stores a reference to the original `complex`, as shown here: ``` template <typename T> class ComplexPrinter { public: /* Conversion constructor allows for implicit conversions from * complex<T> to ComplexPrinter<T>. */ ComplexPrinter(const complex<T>& value) : c(value) { // Handled in initializer list } /* Output the complex in your own format. */ friend ostream& operator<< (ostream& out, const ComplexPrinter& cp) { /* ... print in your own format ... */ } private: const complex<T>& c; }; ``` This completely eliminates any copying and just makes the wrapper a thin veneer around a real `complex`. (No pun intended). You'd have to be very careful if you did this not to pass these objects around across scope boundaries where the original objects go out of scope, but if it's what you want it might work out just great. Hope this helps!
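To get the tab-separated `real_part\tim_part` format from the question, the body of the friend `operator <<` in either wrapper can be a simple sketch like this:

```cpp
friend ostream& operator<< (ostream& out, const ComplexPrinter& cp) {
    return out << cp.c.real() << '\t' << cp.c.imag();
}
```

`std::complex` exposes the parts through its `real()` and `imag()` members, so no extra machinery is needed beyond the wrapper itself.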
Is there a way to use a custom selected image for UITabBarItem? I like to have a custom selected image when a user selects an item on the tab bar, by default it selects as blue like but would like to have a green color instead. something like below any thoughts? [![alt text](https://i.stack.imgur.com/JLwKt.png)](https://i.stack.imgur.com/JLwKt.png)
Just found my solution. Basically, I subclassed UITabBarItem and set this in the navigation controller:

```
-(void) viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];

    CustomTabBarItem *tabItem = [[CustomTabBarItem alloc] initWithTitle:@"Events"
                                    image:[UIImage imageNamed:@"tabIcon.png"] tag:0];
    tabItem.customHighlightedImage=[UIImage imageNamed:@"tabIconSelected.png"];

    self.tabBarItem = tabItem;
    [tabItem release];
    tabItem=nil;
}
```

Here's what the CustomTabBarItem class looks like:

```
@interface CustomTabBarItem : UITabBarItem  {
    UIImage *customHighlightedImage;
}

@property (nonatomic, retain) UIImage *customHighlightedImage;

@end
```

implementation:

```
#import "CustomTabBarItem.h"

@implementation CustomTabBarItem

@synthesize customHighlightedImage;

- (void)dealloc {
    [customHighlightedImage release];
    customHighlightedImage=nil;
    [super dealloc];
}

-(UIImage *)selectedImage {
    return self.customHighlightedImage;
}

@end
```
How to make C language context-free? I know that **C** is not a context-free language, a famous example is: ``` int foo; typedef int foo; foo x; ``` In this case the lexer doesn't know, whether `foo` in the 3rd line, is an identifier, or `typedef`. My question is, is this the only reason that makes **C** a *Context-Sensitive Language*? I mean, if we get rid of `typedef`, would it become context-free language? Or there are other reasons (examples) that prevent it from being so?
The post-preprocessing C *syntax* can be parsed with a classical lex + yacc combo. The lexer definition and the yacc grammar are freely available at <http://www.quut.com/c/ANSI-C-grammar-l-2011.html> and <http://www.quut.com/c/ANSI-C-grammar-y-2011.html>

As you can see from the lex file, it's straightforward except for the context-sensitive `check_type()` (and `comment()`, but comment processing technically belongs to the preprocessor), which makes `typedef` the only source of context-sensitivity there. Since the yacc file doesn't contain any context-sensitivity-introducing tricks either, a `typedef`-less C would have a perfectly context-free syntax.

The subsequent typechecking of C (matching declarations with use sites) is context sensitive, so you could say that overall, C is context sensitive.
MathJS in ionic 2 Can someone please help me how to use mathjs in ionic 2? I just don't get it imported to use, really don't know what to do. In ionic 1 it was easy to load the library and use it, but not so in ionic 2. Thanks!
Try this:

```
npm install mathjs --save
npm install @types/mathjs --save-dev
```

in Component:

```
import * as math from 'mathjs'; // don't name it Math, as that would conflict with the built-in Math in JS
```

in some method:

```
let rs = math.eval('cos(45 deg)');
```

Or you could use a CDN: add this line to `index.html`

```
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjs/3.9.1/math.min.js"></script>
```

in Component:

```
// other import ....;

declare const math: any;
```

in some method:

```
let rs = math.eval('cos(45 deg)');
```
TableView Cell reuse and unwanted checkmarks - this is killing me Apple's iOS `TableView` and cell reuse is killing me. I searched and searched and studied, but can't find good docs or good answers. The problem is that when the `TableView` reuses cells things like Checkmarks (cell accessory) set on a selected Cell are repeated in the cells further down in the table view. I understand that cell reuse is by design, due to memory constraints, but if you have a list with say 50 items, and it starts setting extra checkmarks where they're not wanted, this makes whole endeavor useless. All I want to do is set a checkmark on a cell I've selected. I've tried this using my own custom cell class, and standard cells generated by a boiler plate TableView class, but it always ends up the same. Apple even have an example project called TouchCell you can download from the dev center, that is supposed to show a different way of setting a checkmark using a custom cell with an image control on the left. The project uses a dictionary object for a data source instead of a muteable array, so for each item there is a string value and bool checked value. This bool checked value is supposed to set the checkmark so it can track selected items. This sample project also displays this goofy behavior as soon as you populate the TableView with 15+ cells. The reuse of cells starts setting unwanted check marks. I've even tried experimenting with using a truely unique Cell Identifier for each cell. So instead of each cell having something like @"Acell" I used a static int, cast to a string so the cells got @"cell1", @"cell2" etc. During testing though, I could see that hundreds of new cells where generated during scrolling, even if the table only had 30 items. It did fix the checkmark repeat problem, but I suspect the memory usage was going way too high. It's as though the cells that are not currently in the viewable area of the table are created all over again when they are scrolled back into view. Has anyone come up with an elegant solution to this irritating behavior?
Cell reusing can be tricky, but you have to keep 2 things in mind:

- Use **one** identifier for one **type** of cell
  - Using multiple identifiers is really only needed when you use different UITableViewCell-subclasses in one table view and you have to rely on their different behaviour for different cells
- The cell you reuse can be in **any state**, which means you have to configure every aspect of the cell again - especially checkmarks / images / text / accessoryViews / accessoryTypes and more

What you need to do is to create a storage for your checkmark states - a simple array containing bools (or NSArray containing boolean NSNumber objects respectively) should do it. Then when you have to create/reuse a cell, use the following logic:

```
-(UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    static NSString *reuseIdentifier = @"MyCellType";

    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:reuseIdentifier];
    if(cell == nil) {
        /* create cell here */
    }

    // Configure cell now
    cell.textLabel.text = @"Cell text"; // load from datasource

    if([[stateArray objectAtIndex:indexPath.row] boolValue]) {
        cell.accessoryType = UITableViewCellAccessoryCheckmark;
    }
    else {
        cell.accessoryType = UITableViewCellAccessoryNone;
    }

    return cell;
}
```

Then you will have to react to taps:

```
- (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath
{
    [stateArray replaceObjectAtIndex:indexPath.row withObject:[NSNumber numberWithBool:![[stateArray objectAtIndex:indexPath.row] boolValue]]];
    [tableView reloadRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade];
}
```

Just remember to use `NSMutableArray` for your data store ;)
Is it best practice to use a static database connection across multiple threads? Is there any consensus out there about whether it's best practice for a multi-threaded application to **(1)** use a single, shared, static connection to a SQL database, or **(2)** for each BackgroundWorker to open its own unique connection to the database? I'm obviously assuming that each thread needs to connect to the same database. And does the type of usage impact the answer? For instance, what if each thread is only running SELECT statements? Or if some threads may do UPDATE statements as well? Or does the usage not really make a difference, and you should always/never share a static connection regardless?
As discussed in the comments on your question, the best practice is to leave connection handling to ADO.NET, since it provides connection pooling. All you should do is open a connection every time you need to execute a bit of SQL and then close it (a minimal sketch of this pattern is at the end of this answer). The connection pool will not immediately close the underlying connection; it leaves it open for a configurable time so it can be handed over to other threads requesting a new connection.

Additionally, connections are not thread safe, so each thread should have its own connection - but again, ADO.NET will deal with that.

If you want to learn more about the connection pool I suggest the following MSDN article: <http://msdn.microsoft.com/en-us/library/8xx3tyca(v=vs.110).aspx>

I also highly recommend that you read Microsoft's best practices for ADO.NET here: <http://msdn.microsoft.com/en-us/library/ms971481.aspx>

Some other articles:

- ADO.NET best practices <http://www.codemag.com/Article/0311051>
- A good read: the Enterprise Patterns and Practices guidance for improving .NET application performance has a great section on ADO.NET: <http://msdn.microsoft.com/en-us/library/ff649152.aspx>
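To make that "open late, close early" pattern concrete, here is a minimal sketch (the query, table name and connection string are placeholders, not taken from your code): each worker thread creates its own `SqlConnection` inside a `using` block and disposes it as soon as the work is done, which simply hands the physical connection back to the pool.

```csharp
using System.Data.SqlClient;

static int GetOrderCount(string connectionString)
{
    // Each call (from any BackgroundWorker/thread) opens its own connection.
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("SELECT COUNT(*) FROM Orders", connection))
    {
        connection.Open();                    // borrows a connection from the pool
        return (int)command.ExecuteScalar();
    }                                         // Dispose() returns it to the pool
}
```

The same pattern applies whether the statement is a SELECT or an UPDATE; the type of usage doesn't change the "one connection per unit of work" rule.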
Spring ORM or Hibernate I'm just wondering why the combination of Spring and Hibernate is so popular, when it would be possible to leave Hibernate out and use just Spring ORM?
Spring is popular because it takes care of the 'boilerplate' cut and paste code you have with any ORM framework. Think `try ... finally` blocks, dealing with the session object (Hibernate or otherwise) and `commit / rollback` (transactions). Transaction management is also Spring's strength. You can define transactions using annotations or in the Spring xml config file. In the config file, the benefit is that you can use wildcards to specify that, for example, all find methods in some set of packages should support transactions (PROPAGATION\_SUPPORTS) but all insert, update, delete methods should require transactions (PROPAGATION\_REQUIRED). So, I would always use Spring, regardless of the ORM framework. If you have simple requirements or not that much JDBC code, Spring's JDBC templates may be enough for you. And, Spring makes it easy to upgrade to Hibernate when needed.
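For illustration, here is a minimal sketch of the annotation-driven style mentioned above (the class, method and domain names are made up, not from any particular project):

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical domain type, only here so the example is self-contained.
class Account { long id; }

@Service
public class AccountService {

    // Read path: joins an existing transaction if there is one (PROPAGATION_SUPPORTS).
    @Transactional(propagation = Propagation.SUPPORTS, readOnly = true)
    public Account findAccount(long id) {
        return new Account(); // would normally delegate to a DAO / Hibernate session
    }

    // Write path: a transaction is required (PROPAGATION_REQUIRED); Spring opens it
    // and commits or rolls it back for you - no hand-written try/finally blocks.
    @Transactional(propagation = Propagation.REQUIRED)
    public void updateAccount(Account account) {
        // delegate to a DAO / Hibernate session here
    }
}
```

The XML equivalent uses `<tx:advice>` with method-name patterns such as `find*` and `update*` to apply the same propagation settings by wildcard, as described above.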
Should web browsers include popular web framework libraries? What reasons are there for web browsers to not have a library of popular web frameworks. For example if a web page included jQuery, why shouldn't the browser have it's own static version, which is separate from the normal cached items? I would imagine if most libraries were preloaded into a browsers cache, before any request was made with a document that included a library, then this would remove the initial request to a CDN and have a potential performance improvement. But if the browser 'preloaded cache' didn't include the library then the browser should request the resource that is included in the document.
Because within a month, the browser's built-in version of jQuery would be out of date, negating the whole reason for using a third-party library in the first place. Within 6 months, developers would want to use features from the latest release of jQuery that weren't in what was built into the browser, but they couldn't without including the whole new version of jQuery anyway - negating the whole point of bundling it in the first place. Browsers should follow and implement standards; developers will benefit years later when the older browsers stop being used, or sooner by writing browser-specific code to take advantage of the newest features. The path for browsers to get more jQuery-like functionality is via that standards process. The whole point of a cross-browser framework is that you can specify one specific version of the framework and know that you get the exact same behavior everywhere right now, because that version of the framework takes care of it for you. That would never be the case with built-in frameworks (because the web developer would be stuck with a different version of jQuery built into each targeted browser), so developers would just go on including the latest version of the framework anyway - and then you'd not be leveraging what was built in at all.
What is the difference between rdf:resource and rdfs:Resource? In RDF 1.1 XML Syntax documentation **`rdf:resource`** is used as a shortened form when defining [Empty Property Elements](https://www.w3.org/TR/rdf-syntax-grammar/#section-Syntax-empty-property-elements): > > When a predicate arc in an RDF graph points to an object node which has no further predicate arcs, which appears in RDF/XML as an empty node element (or ) this form can be shortened. This is done by using the IRI of the object node as the value of an XML attribute `rdf:resource` on the containing property element and making the property element empty. > > > In RDF Schema 1.1 **`rdfs:Resource`** is defined as a [class](https://www.w3.org/TR/rdf-schema/#ch_resource): > > All things described by RDF are called resources, and are instances of the class `rdfs:Resource`. This is the class of everything. All other classes are subclasses of this class. `rdfs:Resource` is an instance of `rdfs:Class`. > > > How are the two related? Does an `rdf:resource` value always belong to `rdfs:Resource` class and the other way around?
They are not related, at all. They just happen to share a name because they both have *something* to do with resources. The term "resource" is central to the RDF data model (it's *Resource* Description Framework, after all). A resource in RDF is, very generally speaking, anything that can be identified by a URI (there's heaps of technical details regarding how things like blank nodes and literals fall under this definition, but for simplicity's sake we'll ignore that here). `rdf:resource` is just a syntax element in the RDF/XML syntax, namely an attribute to identify the resource that is the property value. For example, here's a simple RDF model (1 triple), in RDF/XML: ``` <rdf:Description rdf:about="http://example.org/Bob"> <foaf:address rdf:resource="http://example.org/address1"/> </rdf:Description> ``` Here, `http://example.org/Bob` is the subject resource, and `foaf:address` is a property of that subject (used to link the subject resource to a value). The property value in this case is also a resource (`http://example.org/address1`), so in the RDF/XML syntax we use `rdf:resource` attribute to link it. If you were to write the same RDF model in a different syntax though (for example, Turtle), you wouldn't see `rdf:resource` appear at all: ``` <http://example.org/Bob> foaf:address <http://example.org/address1> . ``` In RDF Schema, the class `rdfs:Resource` is the class of all resources. It is a concept, not a syntax-specific mechanism. Since pretty much anything in RDF is a resource, it is the 'top-level' class of things. All things are resources, so if you introduce a new class, for example "Person", it will (automatically) be a subclass of `rdfs:Resource`. ``` <http://example.org/Bob> rdf:type <http://example.org/Person> . <http://example.org/Bob> rdf:type rdfs:Resource . ``` Note that the second triple is a logical consequence of the first triple. Therefore, in practice, the fact that bob is a Resource is almost never explicitly written down in RDF models - if needed, it can be inferred.
What's the difference between feed-forward and recurrent neural networks? What is the difference between a [feed-forward](http://en.wikipedia.org/wiki/Feedforward_neural_network) and [recurrent](http://en.wikipedia.org/wiki/Recurrent_neural_networks) neural network? Why would you use one over the other? Do other network topologies exist?
[**Feed-forward**](http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.multil.jpg) ANNs allow signals to travel one way only: from input to output. There is no feedback (no loops); *i.e.*, the output of any layer does not affect that same layer. Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs. They are extensively used in pattern recognition. This type of organisation is also referred to as bottom-up or top-down.

![A feed-forward network](https://i.stack.imgur.com/fH8kp.gif)

[**Feedback**](http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.neural2.jpg) (or recurrent or interactive) networks can have signals traveling in both directions by introducing loops in the network. Feedback networks are powerful and can get extremely complicated. Computations derived from earlier input are fed back into the network, which gives them a kind of memory. Feedback networks are dynamic; their 'state' is changing continuously until they reach an equilibrium point. They remain at the equilibrium point until the input changes and a new equilibrium needs to be found.

![feedback architecture](https://i.stack.imgur.com/bSO3r.gif)

Feedforward neural networks are ideally suited for modeling relationships between a set of predictor or input variables and one or more response or output variables. In other words, they are appropriate for any functional mapping problem where we want to know how a number of input variables affect the output variable. The multilayer feedforward neural networks, also called [multi-layer perceptrons](http://en.wikipedia.org/wiki/Multilayer_perceptron) (MLP), are the most widely studied and used neural network model in practice.

As an example of a feedback network, I can recall [Hopfield’s network](http://en.wikipedia.org/wiki/Hopfield_net). The main use of Hopfield’s network is as associative memory. An associative memory is a device which accepts an input pattern and generates an output as the stored pattern which is most closely associated with the input. The function of the associative memory is to recall the corresponding stored pattern, and then produce a clear version of the pattern at the output. Hopfield networks are typically used for problems with binary pattern vectors, where the input pattern may be a noisy version of one of the stored patterns. In the Hopfield network, the stored patterns are encoded as the weights of the network.

**Kohonen’s self-organizing maps** (SOM) represent another neural network type that is markedly different from the feedforward multilayer networks. Unlike training in the feedforward MLP, the SOM training or learning is often called unsupervised because there are no known target outputs associated with each input pattern in SOM; during the training process, the SOM processes the input patterns and learns to cluster or segment the data through adjustment of weights (that makes it an important neural network model for dimension reduction and data clustering). A two-dimensional map is typically created in such a way that the orders of the interrelationships among inputs are preserved. The number and composition of clusters can be visually determined based on the output distribution generated by the training process. With only input variables in the training sample, SOM aims to learn or discover the underlying structure of the data.

(The diagrams are from Dana Vrajitoru's [C463 / B551 Artificial Intelligence web site](http://www.cs.iusb.edu/~danav/teach/c463/12_nn.html).)
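To make the structural difference concrete, here is a tiny NumPy sketch (the layer sizes and the tanh nonlinearity are arbitrary choices, not tied to any of the networks mentioned above): the feed-forward pass maps each input straight to an output, while the recurrent pass also feeds the previous hidden state back in at every step.

```python
import numpy as np

rng = np.random.default_rng(0)
W_in, W_out = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))
W_rec = rng.normal(size=(4, 4))          # only the recurrent net has this

def feedforward(x):
    # input -> hidden -> output, no state carried between calls
    return W_out @ np.tanh(W_in @ x)

def recurrent(xs):
    # the hidden state h is fed back at every time step (the "loop")
    h = np.zeros(4)
    outputs = []
    for x in xs:
        h = np.tanh(W_in @ x + W_rec @ h)
        outputs.append(W_out @ h)
    return outputs

print(feedforward(np.ones(3)))
print(recurrent([np.ones(3), np.zeros(3)]))
```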
How to display a back button on the action bar in another Android activity I want to show a back button on the action bar when I move to another activity from the main activity. Please help me with how to do this. I am new to Android development, so please explain in some detail. Thank you.
Just add this code in the `onCreate` method of your `[CurrentActivity].java` file.

```
getSupportActionBar().setDisplayHomeAsUpEnabled(true);
```

This line of code just adds a back button to your `Action Bar`; nothing will happen yet when you tap it.

Now add this to your `[CurrentActivity].java`; it makes the button actually work:

```
@Override
public boolean onOptionsItemSelected(MenuItem item) {
    switch (item.getItemId()) {
        case android.R.id.home:
            Intent intent = new Intent(CurrentActivity.this, MainActivity.class);
            startActivity(intent);
            finish();
            return true;
        default:
            return super.onOptionsItemSelected(item);
    }
}
```

Finally, replace `CurrentActivity` with your activity's name and `MainActivity` with the activity you want to send the user to after pressing the back button.
Directive priority in Angular not working I have this element: `<div ace-editor dl-editor></div>` And these directives: ``` angular.module('DLApp') .directive 'aceEditor', () -> restrict: 'A' priority: 10 scope: false link: linkFunc1 .directive 'dlEditor', (Graph) -> restrict: 'A' priority: 0 scope: false link: linkFunc2 ``` (I'm aware that `0` is the default) `dlEditor` always gets executed first, then `aceEditor`. What am I doing wrong?
According to **[the docs](http://docs.angularjs.org/api/ng/service/$compile)**: emphasis mine > > **priority** > > When there are multiple directives defined on a single DOM element, sometimes it is necessary to specify the order in which the directives are applied. The priority is used to sort the directives before their compile functions get called. Priority is defined as a number. **Directives with greater numerical priority are compiled first. Pre-link functions are also run in priority order, *but post-link functions are run in reverse order*.** The order of directives with the same priority is undefined. The default priority is 0. > > > So, the directive with the higher priority (`aceEditor`) is compiled first, but it's post-link function (which seems to be the one of interest to you) is run last. You should either move the logic in the pre-link function (if that is applicable in your case) or reverse the priorities.
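If `aceEditor`'s logic really must run before `dlEditor`'s, one option is to register it as a pre-link function, since pre-link functions do respect priority order. A sketch in the same CoffeeScript style as the question (whether `linkFunc1` can safely run before the child links is an assumption you'd have to verify):

```coffeescript
angular.module('DLApp')
  .directive 'aceEditor', () ->
    restrict: 'A'
    priority: 10
    scope: false
    # Pre-link functions run in descending priority order,
    # so this now executes before dlEditor's link function.
    link:
      pre: linkFunc1
```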
jquery click not firing ``` <td><input name="SeeUsers" id="SeeUsers" class="SeeUsersButton" value="See Users" type="button" title="#qPages.id#" onclick="ListUsers();"></td> <script type="text/javascript"> ListUsers=function(){ var userid = $(this).title; $('.userslistdiv').text(userid); $('.userslistdiv').show; }; </script> ``` I've been trying to bind this input to a jquery click event but couldn't get it to fire. So, I dumped the jquery click function and just used onclick=. Neither one fires the event. The problem may be that the main page has a cfdiv that dynamically loads content that has the input with the onclick=. But I do this on several pages using jquery datepicker without a problem. In dev tools I can see the script and it is in the head. Edited code: ``` ListUsers=function(el){ var userid = el.title; $('.userslistdiv').text(userid); $('.userslistdiv').show; }; ``` ``` <input name="SeeUsers" id="SeeUsers" class="SeeUsersButton" value="See Users" type="button" title="#qPages.id#" onclick="ListUsers(this);"></td> ```
If you are trying to fire an event on a dynamically added element, you have to bind the handler to an element that already existed and that encloses the dynamically added element. This could be a div to which you have appended the new element, or you can use the document object if you don't know ahead of time where the element will be added. JavaScript needed (an alert is added to let you know the event fires): Code Pen example: <http://codepen.io/larryjoelane/pen/zrEWvL>

```
/* You can replace the document object in the parentheses below with the
 id or class name of a div or container that you have appended the td tag */
$(document).on("click","#SeeUsers",function(){//begin on click event

 var userid = $(this).attr("title"); // read the title attribute of the clicked button

 $('.userslistdiv').text(userid);
 $('.userslistdiv').show(); // show() is a method call, so it needs the parentheses

 //test to see if event fired!
 alert("it worked");

});//end on click event
```
How to work with Camera API in Firefox OS I want to work with the Camera API in the Firefox OS simulator. The docs suggest that it is only available to certified apps. If I want to take a picture using the camera in my app, how can I proceed with developing it? Thanks in advance
You have to use the [Web Activities API](https://wiki.mozilla.org/WebAPI/WebActivities) to take pictures. Simply put, it's the equivalent of Android's `Intents` for the Open Web. I'd write a lot about it, but there are good code examples out there, [like this one](https://github.com/robnyman/Firefox-OS-Boilerplate-App/blob/gh-pages/js/webapp.js#L68), implementing just that. You have to do a few things:

Create a Web Activity:

```
var recordActivity = new MozActivity({
    name: "record"
});
```

Set an `onsuccess` callback, and do whatever you want with the result in it:

```
recordActivity.onsuccess = function () {
    console.log(this);
}
```

There are a few more details, and all of them are listed in [this post on Hacks](https://hacks.mozilla.org/2013/01/introducing-web-activities/).
Strange error with CreateCompatibleDC Maybe this is a foolish question, I can't see why I can not get a DC created in the following code : ``` HBITMAP COcrDlg::LoadClippedBitmap(LPCTSTR pathName,UINT maxWidth,UINT maxHeight) { HBITMAP hBmp = (HBITMAP)::LoadImage(NULL, pathName, IMAGE_BITMAP, 0, 0, LR_LOADFROMFILE | LR_CREATEDIBSECTION); if (!hBmp) return NULL; HDC hdc = (HDC)GetDC(); HDC hdcMem = CreateCompatibleDC(hdc); if (!hdcMem) { DWORD err = GetLastError(); } ... ... ... ``` The bitmap hBmp is loaded fine and hdc has a valid value. But the call to CreateCompatibleDC() returns a NULL pointer. Then, GetLastError() returns 0 ! Anybody can guess what's going on here , please ? PS : There are no memory allocations or GDI routines called before this one...so I think memory leaks should be ruled out.
You are improperly casting the result of `GetDC()` to an `HDC`. `GetDC()` returns a pointer to a `CDC` object. To do what you want you can do either of the following. The first choice fits more into how MFC likes to do things, but both work just fine: ``` CDC *pDC = GetDC(); // Option 1 CDC memDC; memDC.CreateCompatibleDC(pDC); // Option 2 HDC hMemDC = CreateCompatibleDC((HDC)(*pDC)); ``` It is important to note that option 2 does not do the same thing that you're currently doing wrong. The `CDC` class has an `operator HDC()` member that allows it to be converted to an HDC, but this does NOT apply to the pointer. You must dereference it first.
Guard Executing all specs causes Capybara + JS test to return: 'Rack application timed out during boot' I've got a Capybara script with RSpec that includes `:js => true`, a script that works fine when I execute it in isolation. Yay! Here's the script:

```
# spec/requests/capybara_and_js_spec.rb
require 'spec_helper'

describe "Associating Articles with Menus" do
  it "should include javascript", js: true do
    visit root_path
    page.should have_selector('script')
  end
end
```

When I execute the script I get:

```
.

Finished in 4.22 seconds
1 example, 0 failures
```

However, when I execute the same script with all of my specs via `Guard Run all`, I get this (I've omitted a couple thousand tests):

```
........................*...*..............Rack application timed out during boot
F.....................................
```
I've spend quite a bit of time researching this issue, and found some interesting blog posts on the issue, but none of the solutions worked for me. Here are the options I tried: I switched from the default js driver for Capybara Selenium to Webkit and Poltergeist, like so: ``` # Gemfile gem "capybara-webkit" # spec/spec_helper.rb Spork.prefork do Capybara.javascript_driver = :webkit end ``` and ``` # Gemfile gem "poltergeist" # spec/spec_helper.rb Spork.prefork do require 'capybara/poltergeist' Capybara.javascript_driver = :poltergeist end ``` but no luck with either. Per [this thread](https://groups.google.com/forum/?fromgroups=#!topic/ruby-capybara/XyqV890tsKU) and [this article](http://pullmonkey.com/2011/03/03/rack-application-timed-out-during-boot-capybara/) I tried: ``` # spec/spec_helper.rb Spork.prefork do Capybara.server_boot_timeout = 600 # Default is 10 my entire suite end # takes ~550s to run, that's why I # attempted such a large boot timeout in # case the time was from beginning of suite # execution. ``` To no avail. Then I found [this article](http://docs.tddium.com/troubleshooting/browser-based-integration-tests), so I attempted: ``` # spec/spec_helper.rb # initial advice was for cucumber, and thus recommended this to be placed in # the features/env.rb file def find_available_port server = TCPServer.new('127.0.0.1', 0) server.addr[1] ensure server.close if server end if ENV['TDDIUM'] then Capybara.server_port = find_available_port end ``` But no luck with that. I also checked my database\_cleaner settings to ensure DatabaseCleaner was playing nicely with my factories from FactoryGirl, [per this issue on StackOverflow.](https://stackoverflow.com/questions/8178120/capybara-with-js-true-causes-test-to-fail) Still no luck. Next I tried to parse out my capybara tests from my lower level rspec tests in my Guardfile, like so: ``` group 'integration tests' do # Capybara Tests guard 'rspec', spec_paths: ['spec/requests'] do watch(%r{^spec/requests/.+_spec\.rb}) end # Cucumber Feature Tests guard 'cucumber', bundler: true do watch(%r{^features/.+\.feature$}) end end group 'unit tests' do rspec_paths = ['spec/controllers', 'spec/helpers', 'spec/models', 'spec/views'] # RSpec Unit Tests guard 'rspec', spec_paths: rspec_paths do watch(%r{^spec/.+_spec\.rb$}) end # Jasmine JS Unit Tests guard 'jasmine', all_on_start: false, all_after_pass: false do watch(%r{spec/javascripts/.+_spec\.(js\.coffee|js|coffee)$}) end end ``` and SUCCESS! Finally!
All datetimes stored in utc I am building a c# calendar application and have stored all the datetimes in Microsoft SQL-Server DateTime2 type. This data type is searchable using operators such as ">",">=" etc.. I have now read more on the subject for example these posts: [Daylight saving time and time zone best practices](https://stackoverflow.com/questions/2532729/daylight-saving-time-and-time-zone-best-practices?lq=1) [How to store repeating dates keeping in mind Daylight Savings Time](https://stackoverflow.com/questions/19626177/how-to-store-repeating-dates-keeping-in-mind-daylight-savings-time/19627330#19627330) [Is it always a good idea to store time in UTC or is this the case where storing in local time is better?](https://stackoverflow.com/questions/11537106/is-it-always-a-good-idea-to-store-time-in-utc-or-is-this-the-case-where-storing) I believe I have made an error when using UTC and dealing with different DST values and especially when related to future repeated events. My current implementation works fine for events, until a repeating series goes over a DST time change. I believe I will now need to store local times, the local timezone and possibly UTC time How should I structure my database and what data types should I use to store my data in the database that will support different client timezones and DST values whilst also allowing me to query for matches within specified start and stop datetime ranges?
For *repeated* events, you definitely need to store the time zone, yes, and I'd store the local date/time. You *might* also want to store the UTC value of the first occurrence, if that would be useful for comparison purposes. In theory you could just store the UTC occurrence of the first date/time, as that can be unambiguously converted to the local time (if you have the time zone) - but it's likely that you'll only ever need the local time, in which case performing those conversions may be pointless. You should also consider how you want to handle changes in time zone data - because time zone rules *do* change, reasonably frequently. (It depends on the country, admittedly.) For example, for efficiency you may want to generate a certain number of occurrences and store the UTC date/time of each occurrence (having worked out what to do with skipped and ambiguous local times due to DST transitions) - but if the time zone data changes, you'll need to perform that generation step again for all repeated events.
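As a sketch of what that could look like in SQL Server (the table and column names are made up, and the recurrence-rule format and how far ahead you pre-generate occurrences are up to you):

```sql
-- Series definition: local wall-clock time plus a time zone id,
-- with the optional UTC instant of the first occurrence for comparisons.
CREATE TABLE EventSeries (
    EventSeriesId  int IDENTITY PRIMARY KEY,
    LocalStart     datetime2(0) NOT NULL,   -- e.g. 09:00 local, every Monday
    TimeZoneId     varchar(64)  NOT NULL,   -- e.g. 'Europe/London'
    RecurrenceRule varchar(255) NOT NULL,   -- e.g. an iCalendar RRULE string
    FirstUtcStart  datetime2(0) NULL
);

-- Pre-generated occurrences in UTC for fast range queries; these are the
-- rows you would regenerate if the time zone rules ever change.
CREATE TABLE EventOccurrence (
    EventSeriesId int          NOT NULL REFERENCES EventSeries(EventSeriesId),
    UtcStart      datetime2(0) NOT NULL,
    UtcEnd        datetime2(0) NOT NULL
);
```

A calendar view then becomes a plain range query, e.g. `SELECT ... FROM EventOccurrence WHERE UtcStart < @rangeEndUtc AND UtcEnd > @rangeStartUtc`.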
How to modify a function in a compiled DLL I want to know if it is possible to "edit" the code inside an already compiled DLL. I.e., imagine that there is a function called `sum(a,b)` inside `Math.dll` which adds the two numbers `a` and `b`. Let's say I've lost the source code of my DLL, so the only thing I have is the binary DLL file. Is there a way I could open that binary file, locate where my function resides and replace the `sum(a,b)` routine with, for example, another routine that returns the multiplication of `a` and `b` (instead of the sum)? In summary, is it possible to edit binary code files, maybe using reverse engineering tools like OllyDbg?
Yes it is definitely possible (as long as the DLL isn't cryptographically signed), but it is challenging. You can do it with a simple Hex editor, though depending on the size of the DLL you may have to update a lot of sections. Don't try to read the raw binary, but rather run it through a disassembler. Inside the compiled binary you will see a bunch of esoteric bytes. All of the opcodes that are normally written in assembly as instructions like "call," "jmp," etc. will be translated to the machine architecture dependent byte equivalent. If you use a disassembler, the disassembler will replace these binary values with assembly instructions so that it is much easier to understand what is happening. Inside the compiled binary you will also see a lot of references to hard coded locations. For example, instead of seeing "call add()" it will be "call 0xFFFFF." The value here is typically a reference to an instruction sitting at a particular offset in the file. Usually this is the first instruction belonging to the function being called. Other times it is stack setup/cleanup code. This varies by compiler. As long as the instructions you replace are the exact same size as the original instructions, your offsets will still be correct and you won't need to update the rest of the file. However if you change the size of the instructions you replace, you'll need to manually update all references to locations (this is really tedious btw). Hint: If the instructions you're adding are *smaller* than what you replaced, you can pad the rest with NOPs to keep the locations from getting off. Hope that helps, and happy hacking :-)
What is the expected behavior when "!important" is used with the pseudo-class ":not" I'm cleaning up someone else's code and it basically looks like this: ``` a {color:red;} #contentdiv :not(h4) > a {color:green!important} ``` So all links are green, except for those under an h4 which are red. Suppose all links under "li"s need to be red too. Better yet, suppose the links under "li"s need to inherit whatever is input by the user in this particular CMS, so the color can change without declarations in a stylesheet. In other words... ``` #contentdiv ul li a {color:red!important} ``` ...wouldn't work because when the global style on "a" tags changes, the "a" tags under the "li"s would remain red. However trying to negate the "!important" like... ``` a {color:red;} #contentdiv :not(h4) > a, #contentdiv :not(li) > a {color:green!important} ``` ...just seems to cancel out both negations. I know the ":not" pseudo should work like "(!A and !B)" but adding the important seems to make it target A or B complement "(A/\* or B/\*)" which means everything turns green, even the "a" tags under the "h4"s and "li"s. Here's a JSFiddle: <https://jsfiddle.net/qebz4bbx/2/> Just looking for some clarification on this. Thanks!
You misunderstood the comma `,`. The comma is a logical OR, not a logical AND. Therefore, ``` #contentdiv :not(h4) > a, #contentdiv :not(li) > a ``` matches every element which - Is an `a` element, its parent is not a `h4` element, and that parent is a descendant of an element with `id="contentdiv"`. - OR - Is an `a` element, its parent is not a `li` element, and that parent is a descendant of an element with `id="contentdiv"`. Therefore, your selector is equivalent to `#contentdiv * > a`. Instead, you should use ``` #contentdiv :not(h4):not(li) > a ``` That matches every `a` element whose parent is neither a `h4` element nor a `li` element, and that parent is a descendant of an element with `id="contentdiv"`.
Can I store a Parquet file with a dictionary column having mixed types in their values? I am trying to store a Python Pandas DataFrame as a Parquet file, but I am experiencing some issues. One of the columns of my Pandas DF contains dictionaries as such:

```
import pandas as pd

df = pd.DataFrame({
    "ColA": [1, 2, 3],
    "ColB": ["X", "Y", "Z"],
    "ColC": [
        { "Field": "Value" },
        { "Field": "Value2" },
        { "Field": "Value3" }
    ]
})
df.to_parquet("test.parquet")
```

Now, that works perfectly fine, the problem is when one of the nested values of the dictionary has a different type than the rest. For instance:

```
import pandas as pd

df = pd.DataFrame({
    "ColA": [1, 2, 3],
    "ColB": ["X", "Y", "Z"],
    "ColC": [
        { "Field": "Value" },
        { "Field": "Value2" },
        { "Field": ["Value3"] }
    ]
})
df.to_parquet("test.parquet")
```

This throws the following error:

`ArrowInvalid: ('cannot mix list and non-list, non-null values', 'Conversion failed for column ColC with type object')`

Notice how, for the last row of the DF, the `Field` property of the `ColC` dictionary is a list instead of a string. Is there any workaround to be able to store this DF as a Parquet file?
`ColC` is a UDT (user defined type) with one field called `Field` of type `Union of String, List of String`. In theory arrow supports it, but in practice it has a hard time figuring out what the type of `ColC` is. Even if you were providing the schema of your data frame explicitly, it wouldn't work because this type of conversion (converting unions from pandas to arrow/parquet) isn't supported yet. ``` union_type = pa.union( [pa.field("0",pa.string()), pa.field("1", pa.list_(pa.string()))], 'dense' ) col_c_type = pa.struct( [ pa.field('Field', union_type) ] ) schema=pa.schema( [ pa.field('ColA', pa.int32()), pa.field('ColB', pa.string()), pa.field('ColC', col_c_type), ] ) df = pd.DataFrame({ "ColA": [1, 2, 3], "ColB": ["X", "Y", "Z"], "ColC": [ { "Field": "Value" }, { "Field": "Value2" }, { "Field": ["Value3"] } ] }) pa.Table.from_pandas(df, schema) ``` This gives you this error: `('Sequence converter for type union[dense]<0: string=0, 1: list<item: string>=1> not implemented', 'Conversion failed for column ColC with type object'` Even if you create the arrow table manually it won't be able to convert it to parquet (again, union are not supported). ``` import io import pyarrow.parquet as pq col_a = pa.array([1, 2, 3], pa.int32()) col_b = pa.array(["X", "Y", "Z"], pa.string()) xs = pa.array(["Value", "Value2", None], type=pa.string()) ys = pa.array([None, None, ["value3"]], type=pa.list_(pa.string())) types = pa.array([0, 0, 1], type=pa.int8()) col_c = pa.UnionArray.from_sparse(types, [xs, ys]) table = pa.Table.from_arrays( [col_a, col_b, col_c], schema=pa.schema([ pa.field('ColA', col_a.type), pa.field('ColB', col_b.type), pa.field('ColC', col_c.type), ]) ) with io.BytesIO() as buffer: pq.write_table(table, buffer) ``` ``` Unhandled type for Arrow to Parquet schema conversion: sparse_union<0: string=0, 1: list<item: string>=1> ``` I think your only option for now it to use a struct where fields have got different names for string value and list of string values. ``` df = pd.DataFrame({ "ColA": [1, 2, 3], "ColB": ["X", "Y", "Z"], "ColC": [ { "Field1": "Value" }, { "Field1": "Value2" }, { "Field2": ["Value3"] } ] }) df.to_parquet('/tmp/hello') ```
Don't understand the proof that unbiased sample variance is unbiased Wikipedia gives the [following proof why to use Bessel's correction for the unbiased sample variance](http://en.wikipedia.org/wiki/Variance#Population_variance_and_sample_variance): \begin{align} E[\sigma\_y^2] & = E\left[ \frac 1n \sum\_{i=1}^n \left(y\_i - \frac 1n \sum\_{j=1}^n y\_j \right)^2 \right] \\ & = \frac 1n \sum\_{i=1}^n E\left[ y\_i^2 - \frac 2n y\_i \sum\_{j=1}^n y\_j + \frac{1}{n^2} \sum\_{j=1}^n y\_j \sum\_{k=1}^n y\_k \right] \\ & = \frac 1n \sum\_{i=1}^n \left[ \frac{n-2}{n} E[y\_i^2] - \frac 2n \sum\_{j \neq i} E[y\_i y\_j] + \frac{1}{n^2} \sum\_{j=1}^n \sum\_{k \neq j}^n E[y\_j y\_k] +\frac{1}{n^2} \sum\_{j=1}^n E[y\_j^2] \right] \\ & = \frac 1n \sum\_{i=1}^n \left[ \frac{n-2}{n} (\sigma^2+\mu^2) - \frac 2n (n-1) \mu^2 + \frac{1}{n^2} n (n-1) \mu^2 + \frac 1n (\sigma^2+\mu^2) \right] \\ & = \frac{n-1}{n} \sigma^2. \end{align} The proof is clear so far. The only part that I don't understand is the following identity which is used in the penultimate step: \begin{align} &\sum\_{j \neq i} E[y\_i y\_j] = (n-1) \mu^2\\ \end{align} This would only make sense if $y\_i$ and $y\_j$ were independent - but they are not because $i$ has to be unequal to $j$! To give the simplest possible example: A coin toss which gives $-1$ for heads and $1$ for tails. When you take two independent coin tosses and multiply the results the expected value is indeed $0^2=0$. But if you are only allowed to take the the opposite coin toss as the other result (so you *have to* take $1$ if $-1$ and $-1$ if $1$) your expectation becomes clearly unequal to $0$ and therefore $\mu^2$ cannot be right! **My question** Could you please explain the identity and where my potential fallacy lies?
The independence of $Y\_i$ and $Y\_j$ (whenever $i \neq j$) is an assumption, i.e. you are assuming you are dealing with an i.i.d. sample $Y\_1, Y\_2, \ldots, Y\_n$ from some distribution -- and you are trying to estimate that distribution's variance. When the Ys are iid, $\mathbb{E}\left[Y\_1\;Y\_2\right]=\mu^2$, while $\mathbb{E}\left[Y\_1\;Y\_1\right]=\mathbb{E}\left[Y\_1^2\right]=\mu^2 + \mathbb{V}\left[Y\right]$. That's why the $i \neq j$ condition matters for the value of $\mathbb{E}\left[Y\_i\;Y\_j\right]$. The example you give at the bottom of your question is a poor analogy: taking "the opposite coin toss" would indeed induce correlation between $X\_1$ (the first coin toss) and $X\_2$ (the second coin toss, if we defined $X\_2 \;|\; X\_1$ to be "the opposite" of $X\_1$). But that is not at all the meaning of $i \neq j$ in the context of the Ys.
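Spelling out the identity from the question under that i.i.d. assumption:

\begin{align}
\sum\_{j \neq i} E[Y\_i Y\_j] = \sum\_{j \neq i} E[Y\_i]\,E[Y\_j] = \sum\_{j \neq i} \mu \cdot \mu = (n-1)\mu^2,
\end{align}

since the sum runs over the $n-1$ indices $j \neq i$, and independence is exactly what lets $E[Y\_i Y\_j]$ factor into $E[Y\_i]\,E[Y\_j]$. For $j = i$ you would instead get $E[Y\_i^2] = \sigma^2 + \mu^2$, which is why those terms are kept separate in the proof.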
Why is $E[Z|Z>c] = \int\_c^\infty z\_i \phi({z\_i})\mathrm{d}z\_i $ ($Z$ is censored) In a problem set I proved this "lemma," whose result is not intuitive to me. $Z$ is a standard normal distribution in a censored model. Formally, $Z^\* \sim Norm(0, \sigma^2)$, and $Z = max(Z^\*, c)$. Then, \begin{align} E[Z|Z>c] &= \int\_c^\infty z\_i \phi({z\_i})\mathrm{d}z\_i \\ &= \frac{1}{\sqrt{2\pi}}\int\_c^\infty z\_i \exp\!\bigg(\frac{-1}{2}z\_i^2\bigg)\mathrm{d}z\_i \\ &= \frac{1}{\sqrt{2\pi}} \exp\!\bigg(\frac{-1}{2}c^2\bigg) \quad\quad\quad\quad\text{ (Integration by substitution)}\\ &= \phi(c) \end{align} So there is some sort of connection between the expectation formula over a truncated domain and the density at the point of truncation $(c)$. Could anyone explain the intuition behind this?
Would the Fundamental Theorem of Calculus work for you as intuition? Let $\phi(x)$ denote the density function $\frac{1}{\sqrt{2\pi}}e^{-x^2/2}$ of a standard normal random variable. Then, the derivative is $\frac{\mathrm d}{\mathrm dx}\phi(x) = -x\phi(x)$. The Fundamental Theorem of Calculus then gives us that $$\phi(x) = \int\_{-\infty}^x -t\phi(t)\,\mathrm dt = \int\_{-x}^\infty u\phi(u)\,\mathrm du = \int\_x^\infty u\phi(u)\,\mathrm du$$ where the second integral is obtained on substituting $u = -t$ and using the fact that $\phi(-u) = \phi(u)$ and the third upon noting that $\phi(-x) = \phi(x)$. Alternatively, write the second integral as the integral from $-x$ to $+x$ plus the integral from $+x$ to $\infty$, and note that integrating an odd function from $-x$ to $+x$ results in $0$.
How to decode a JSFuck script? I have this code in JavaScript: ``` [(![]+[])[+[]]+([![]]+[][[]])[+!+[]+[+[]]]+ (![]+[])[!+[]+!+[]]+(!![]+[])[+[]]+(!![]+[])[!+[]+!+[]+!+[]]+ (!![]+[])[+!+[]]] ``` In the console, it will return ``` Array [ "filter" ] ``` And how can I decode a lot of text that’s similar to the text above? E.g.: ``` [][(![]+[])[+[]]+([![]]+[][[]])[+!+[]+[+[]]]+(![]+[])[!+[]+!+[]]+ (!![]+[])[+[]]+(!![]+[])[!+[]+!+[]+!+[]]+(!![]+[])[+!+[]]][([][(![]+[])[+[]]+ ([![]]+[][[]])[+!+[]+[+[]]]+(![]+[])[!+[]+!+[]]+(!![]+[])[+[]]+ (!![]+[])[!+[]+!+[]+!+[]]+(!![]+[])[+!+[]]]+[])[!+[]+!+[]+!+[]]+ (!![]+[][(![]+[])[+[]]+([![]]+[][[]])[+!+[]+[+[]]]+(![]+[])[!+[]+!+[]]+ (!![]+[])[+[]]+(!![]+[])[!+[]+!+[]+!+[]]+(!![]+[])[+!+[]]])[+!+[]+[+[]]]+ ([][[]]+[])[+!+[]]+(![]+[])[!+[]+!+[]+!+[]]+(!![]+[])[+[]]+ (!![]+[])[+!+[]]+([][[]]+[])[+[]]+([][(![]+[])[+[]]+([![]]+[][[]])[+!+[]+[+[]]]+ (![]+[])[!+[]+!+[]]+(!![]+[])[+[]]+(!![]+[])[!+[]+!+[]+!+[]]+ (!![]+[])[+!+[]]]+[])[!+[]+!+[]+!+[]]+(!![]+[])[+[]]+(!![]+[][(![]+[])[+[]]+ ([![]]+[][[]])[+!+[]+[+[]]]+(![]+[])[!+[]+!+[]]+(!![]+[])[+[]]+ (!![]+[])[!+[]+!+[]+!+[]]+(!![]+[])[+!+[]]])[+!+[]+[+[]]]+(!![]+[])[+!+[]]] ``` I want to see the plain script.
I have seen many decoding attempts around, but none that work reliably. The easiest way I have found to decode Non Alphanumeric Javascript is with Chrome. Open Chrome > Go to jsfuck.com > paste the code you would like to decode in the window > hit Run This. Then open the Console, in the case of your [specific code from PasteBin](http://pastebin.com/HZrmFcUQ) there will be an error: ``` Uncaught TypeError: Cannot read property 'innerHTML' of null ``` To the right of the error, click the line number link, and the code will be revealed. The result is: ``` (function(){ window.false=document.getElementById('sc').innerHTML; }) ``` Which explains why you get the error trying to just decode it using JSFuck itself. There is no element with the id `sc` on their site.
how can I call Unix system calls interactively? I'd like to play with Unix system calls, ideally from Ruby. How can I do so? I've heard about Fiddle, but I don't know where to begin / which C library should I attach it to?
I assume by "interactively" you mean via `irb`. A high-level language like Ruby is going to provide wrappers for most kernel syscalls, of varying thickness. Occasionally these wrappers will be very thin, as with `sysread()` and `syswrite()`. These are more or less equivalent to `read(2)` and `write(2)`, respectively. Other syscalls will be hidden behind thicker layers, such as with the [socket I/O stuff](http://www.ruby-doc.org/stdlib-1.9.3/libdoc/socket/rdoc/BasicSocket.html). I don't know if calling `UNIXSocket.recv()` counts as "calling a syscall" precisely. At some level, that's exactly what happens, but who knows how much Ruby and C code stands between you and the actual system call. Then there are those syscalls that aren't in the standard Ruby API at all, most likely because they don't make a great amount of sense to be, like `mmap(2)`. That syscall is all about raw pointers to memory, something you've chosen to avoid by using a language like Ruby in the first place. There happens to be [a third party Ruby mmap module](http://rubyforge.org/projects/mmap/), but it's really not going to give you all the power you can tap from C. The `syscall()` interface Mat pointed out in the comment above is a similar story: in theory, it lets you call any system call in the kernel. But, if you don't have the ability to deal with pointers, lay out data precisely in memory for structures, etc., your ability to make useful calls is going to be quite limited. If you want to play with system calls, learn C. There is no shortcut.
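That said, as a small illustration of the "thin wrapper" end of that spectrum, here is a hedged Ruby sketch you can paste into irb. Note the assumptions: `Kernel#syscall` is not implemented on every platform, and the raw syscall number 39 is `getpid(2)` on x86-64 Linux only - elsewhere the number differs.

```ruby
# Thin wrapper over write(2), straight from irb:
$stdout.syswrite("hello from write(2)\n")

# Kernel#syscall takes raw, platform-specific syscall numbers.
# 39 is getpid(2) on x86-64 Linux; on other platforms the number differs
# or the method may not be available at all.
pid = syscall(39)
puts "getpid(2) returned #{pid}, Process.pid says #{Process.pid}"
```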
Passing a predicate as a parameter c# I recently did an assessment from a company that had a case where they wanted to set up a predicate as an input parameter to a method. Have little to no experience with this I've been researching it on my own. The code looks like: ``` using System; public interface IBird { Egg Lay(); } public class Chicken : IBird { public Chicken() { } public void EggLay() { } public Egg Lay() { return new Egg(); } } public class Egg { public Egg(Func<IBird> createBird) { throw new NotImplementedException("Waiting to be implemented."); } public IBird Hatch() { throw new NotImplementedException("Waiting to be implemented."); } } public class Program { public static void Main(string[] args) { // var chicken1 = new Chicken(); // var egg = chicken1.Lay(); // var childChicken = egg.Hatch(); } } ``` My question is what is the Egg function expecting and why? I've already seen [this answer](https://stackoverflow.com/questions/1594802/passing-func-inline-as-a-parameter-eg-a-a) and [this answer](https://stackoverflow.com/questions/2082615/pass-method-as-parameter-using-c-sharp) and [this answer](https://stackoverflow.com/questions/44776126/how-pass-funct-to-method-parameter) but it's not making any sense still. It's academic at this point but I really want to understand.
`public Egg(Func<IBird> createBird)` is not a function, it's the [constructor](https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/constructors) of the `Egg` class. Since the `Egg` class must `Hatch` birds, it needs to create birds. `Func<IBird>` is a delegate, i.e., a value representing a reference to a method. In this specific case it is representing a [factory method](https://en.wikipedia.org/wiki/Factory_method_pattern). A predicate would be a method or delegate returning a Boolean. Through this parameter you can pass any method creating `IBird`s. Since the `IBird` interface does not specify an explicit implementation of a bird, you could initialize `Egg` with different methods creating different bird types. Some requiring constructor parameters, some not. You would implement `Egg` like this ``` public class Egg { private readonly Func<IBird> _createBird; public Egg(Func<IBird> createBird) { _createBird = createBird; // No "()". createBird is not called, just assigned. } public IBird Hatch() { return _createBird(); // Here createBird is called, therefore the "()". } } ``` Now, the `Hatch` method can create birds, without having the knowledge about how or which type of bird to create, through the intermediate of the `_createBird` delegate. How would you create an egg? Well, first you need some bird implementation e.g.: ``` public class BlackBird : IBird { ... your implementation goes here } ``` Then you need a method creating and returning a `IBird`. E.g.: ``` IBird CreateBlackBird() { return new BlackBird(); } ``` You can then create an egg with ``` var egg = new Egg(CreateBlackBird); // No "()". CreateBlackBird is not called but referenced. IBird newBird = egg.Hatch(); ``` Make sure to pass the method without parameter list, i.e. without parentheses, because you don't want to call the `CreateBlackBird` method at this point, you want to pass it over to the constructor, where it is stored in the private field `_createBird` to be used later. A lambda expression creates an anonymous delegate on the fly: ``` var egg = new Egg(() => new BlackBird()); ``` `() => new BlackBird()` is a lambda expression. It is equivalent to the `CreateBlackBird` method. The return type is not specified and is inferred from the parameter type of the `Egg` constructor. It has no name. Only the parameter braces are remaining from the method header. `=>` replaces the `return` keyword. After having implemented an additional bird class with a color as constructor parameter, you can write ``` var egg = new Egg(() => new ColoredBird(Color.Blue)); ``` See also: - [Lambda expressions (C# Programming Guide)](https://learn.microsoft.com/en-us/dotnet/csharp/programming-guide/statements-expressions-operators/lambda-expressions) - [Beginner's Guide to Delegates, Funcs and Actions in C#](https://www.codementor.io/aydindev/delegates-func-act-in-c-sharp-du107s5mj) - [What is constructor injection?](https://softwareengineering.stackexchange.com/questions/177649/what-is-constructor-injection)
Sorting a list based upon values in an external list (Python) This query is somewhat related to this earlier one on [sorting](https://stackoverflow.com/questions/11219770/how-to-sort-the-outer-and-inner-sublist-of-a-nested-list-in-python "sorting") where, it was required to sort the following list, `data = [[1, .45, 0], [2, .49, 2], [3, .98, 0], [4, .82, 1], [5, .77, 1], [6, .98, 2] ]` first by the values of the last member of the inner list like this way, `[[1, .45, 0], [3, .98, 0],[4, .82, 1], [5, .77, 1], [2, .49, 2], [6, .98, 2]]` and then sort within the sub-lists i.e. first sort the list with its last member as `'0'` using the middle member as the key, in descending order, then the sub-list with last member as `'1'` and so on. Now, instead of first sorting by the values of the last member, I would like like to sort based upon the order of these elements present in an external list. i.e. if the external list is List\_1 ``` `List_1 = [2, 0, 1]` ``` Sorting should produce ``` [[2, .49, 2], [6, .98, 2] [1, .45, 0], [3, .98, 0], [4, .82, 1], [5, .77, 1]] ``` Finally, sorting the sub-lists based upon the middle element in descending order should produce: ``` [ [6, .98, 2],[2, .49, 2], [3, .98, 0], [1, .45, 0], [4, .82, 1], [5, .77, 1]] ``` Any suggestions on how to go about this ?
Use [`list.index`](http://docs.python.org/tutorial/datastructures.html#more-on-lists) in the key: ``` >>> data = [[1, .45, 0], [2, .49, 2], [3, .98, 0], [4, .82, 1], [5, .77, 1], [6, .98, 2]] >>> List_1 = [2, 0, 1] >>> sorted(data, key=lambda e: (List_1.index(e[2]), -e[1])) [[6, 0.98, 2], [2, 0.49, 2], [3, 0.98, 0], [1, 0.45, 0], [4, 0.82, 1], [5, 0.77, 1]] ```
What does "++=" mean in Scala This is the [implementation](https://github.com/scala/scala/blob/v2.10.3/src/library/scala/collection/TraversableLike.scala) of `flatMap` in Scala ``` def flatMap[B, That](f: A => GenTraversableOnce[B])(implicit bf: CanBuildFrom[Repr, B, That]): That = { def builder = bf(repr) // ... val b = builder for (x <- this) b ++= f(x).seq b.result } ``` What does `++=` mean here ?
`++=` can mean two different things in Scala: 1: Invoke the `++=` method In your example with `flatMap`, the `++=` method of `Builder` takes another collection and adds its elements into the builder. Many of the other mutable collections in the Scala collections library define a similiar `++=` method. 2: Invoke the `++` method and replace the contents of a `var` `++=` can also be used to invoke the `++` method of an object in a `var` and replace the value of the `var` with the result: ``` var l = List(1, 2) l ++= List(3, 4) // l is now List(1, 2, 3, 4) ``` The line `l ++= List(3, 4)` is equivalent to `l = l ++ List(3, 4)`.
How do I show the Maven POM hierarchy? I'm doing some scripting and I need to get a list of all the parent poms for any given pom. The dependency plugin seems to be only interested in the dependencies that are listed in the dependency section of the pom, but there doesn't seem to be a way to show the parent poms, which are also required dependencies for Maven to work. Am I missing something basic?
There is no simple Maven command that will show you the chain of parent POMs for a pom.xml. The reason for this is that it is not a common question one would typically ask (more on that below). For your script, you'll just have to parse the pom.xml file, get the parent artifact coordinates, get hold of the parent artifact's pom.xml file and then parse it in turn (and repeat); a rough sketch of such a script is at the end of this answer. Sorry, there is no shortcut I know of, but [other folks have solved similar problems](https://github.com/umut/parent-checker). You are right that *technically* the parent pom is a dependency of your project, but it is not a literal Maven Dependency and is handled completely differently. The chain of parent poms, along with active profiles, your `settings.xml` file, and the Maven super pom from the installation directory are all combined together to create your project's **effective pom**. The effective POM is what Maven really uses to do its work. So basically, the parent pom inheritance chain is already resolved and combined before the dependency plugin (or any other plugin) is even activated. The questions most people typically ask are 'What does my REAL pom.xml *really* look like when Maven is done combining everything?' or 'What is the result of my inheritance chain of parent poms?' or 'How are my pom.xml properties affected by an active profile?' The effective pom will tell you all of this. I know you didn't ask, but for others reading this, if you want to see your parent pom.xml, simply open up the pom.xml in the M2Eclipse POM editor and click on the parent artifact link on the overview tab. In this way you can quickly move up the chain of pom.xml files with just a single click per pom. It would be a strange project that had more than 3 or 4 parent poms of inheritance. If you want to see your effective pom, you can run the command `mvn help:effective-pom`. Alternatively, you can click on the Effective POM tab in M2Eclipse's POM editor.
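For the scripting part, here is a rough Python sketch of that parse-and-repeat loop. It assumes the parent POMs are already present in the local repository under `~/.m2/repository` (e.g. from a previous `mvn` run), that the files declare the usual POM 4.0.0 namespace, and it ignores `<relativePath>` and remote resolution entirely:

```python
import os
import xml.etree.ElementTree as ET

NS = {"m": "http://maven.apache.org/POM/4.0.0"}

def parent_coords(pom_path):
    root = ET.parse(pom_path).getroot()
    parent = root.find("m:parent", NS)
    if parent is None:
        return None
    return tuple(parent.find(f"m:{tag}", NS).text
                 for tag in ("groupId", "artifactId", "version"))

def pom_chain(pom_path, repo=os.path.expanduser("~/.m2/repository")):
    chain = [pom_path]
    coords = parent_coords(pom_path)
    while coords:
        group, artifact, version = coords
        # Local repository layout: group/as/dirs/artifact/version/artifact-version.pom
        pom_path = os.path.join(repo, *group.split("."), artifact, version,
                                f"{artifact}-{version}.pom")
        chain.append(pom_path)
        coords = parent_coords(pom_path)
    return chain

if __name__ == "__main__":
    for pom in pom_chain("pom.xml"):
        print(pom)
```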
How to review a specific commit on Git I sent a commit (named "A commit") to review (Gerrit) using the `git review` command. Now I have made a new commit (named "B commit") and I want to send it to review as well, but I don't want to re-send the "A commit". There are no dependencies between them. How do I send a review to Gerrit for a specific commit?

**UPDATE**:

```
$ git add --all
$ git status
# On branch delete_role
# Changes to be committed:
#   (use "git reset HEAD <file>..." to unstage)
#
#       modified:   path/to/file.ext

$ git status
# On branch delete_role
nothing to commit (working directory clean)

$ git branch
*delete_role
 master

$ git log --graph --decorate --oneline -n13
* 531bd84 (HEAD, delete_role) commit 3
* df68f1a (master) commit 2
* 4ce2d8d commit 1
* 6751c7d (origin/master, origin/HEAD, gerrit/master)
```

Commits "**df68f1a**" and "**4ce2d8d**" are dependent and they have been sent in a previous `git review` command, but commit "**531bd84**" belongs to a new branch (delete\_role) because it is a new issue.

```
$ git review
You have more than one commit that you are about to submit.
The outstanding commits are:

531bd84 (HEAD, delete_role) commit 3
df68f1a (master) commit 2
4ce2d8d commit 1
```

I want to **send to Gerrit** only the "**531bd84**" commit, not the other ones.
Create the B commit in a new branch. While on this branch, use `git review` and it will only push the content of this branch to Gerrit. This way, Gerrit won't consider that your commit B needs your commit A, and if you want, you can merge your commit B to the working branch before commit A.

If your history is like this:

```
...-old(merged)-A(waiting for review)
```

what you want to do is:

```
...-old(merged)-A(waiting for review) <-master branch
 \B(new commit) <-new branch
```

Then, if you're on branch B and use `git review`, it won't push anything other than commit B.

If you're in this situation:

```
...-old(merged)-A(waiting for review)-B
```

what you want to do to achieve the configuration we want is:

```
git log (note the SHA1 of your B commit)
git reset HEAD^^^ (you go back in a detached state three commits before, before the two you don't want to send)
git checkout -b Breview (you create a new branch there)
git cherry-pick <the SHA1 you noted> (you copy your B commit onto your new branch)
git checkout master (you return to your branch with the two commits)
git reset --hard HEAD^ (you delete the B commit from this branch where you don't need it)
```

Now you have achieved the wanted configuration, and to push your B commit you just need to do:

```
git checkout Breview
git review
```

and it will only submit your B commit.
Changing python version without breaking software center I have been fiddling with my Python installation on Ubuntu 12.04 (I was having trouble installing a python library), and at one point my `python` command wasn't working. It turned out the symlink was missing (I must have removed it by accident lol), so I [made a new one](https://stackoverflow.com/questions/24451417/broken-python-command) pointing to Python 3.2 (originally pointed to 2.7): ``` sudo rm /usr/bin/python sudo ln -s /usr/bin/python3.2 /usr/bin/python ``` Problem: the software center and update manager weren't working: ``` ~$ software-center File "/usr/bin/software-center", line 152 print time.time() ^ SyntaxError: invalid syntax ``` I guessed this was because I had changed the default python version (2.7->3.2), so I changed it back to 2.7. Now they work fine, but I'd still like to change the 'default' python version (i.e. the one called with `python` in the terminal). Is it possible to do that in Ubuntu 12.04? Thanks!
You shouldn't change the symlink for `python` to point to Python 3 as you have already seen its consequences. And I would recommend you to get into the habit of calling Python 3 programs with `python3` as that would involve the least amount of trouble later on. But if you insist on calling Python 3 on your Terminal using `python`, you may create an alias for it. Remember, alias is different than symlink. Edit `~/.bash_aliases` file (create it if it doesn't exist) to add the following in it: ``` alias python='python3.2' ``` Then restart your terminal and you would be able to execute Python 3 by calling `python`. This wouldn't break anything as changing the symlink does. You may even add aliases like `alias py3='python3.2'` and then call `py3` to run Python 3. This is even shorter and less confusing.
How to create a table with fixed length inside a scrollView in Appcelerator Titanium? I'm trying to add some imageViews and a tableView into scrollView in Titanium. I want the scrollView to be scrollable but not the tableView inside, so I set tableView.scrollable to false. However, even if the height of the scrollView exceeds the height of the screen, it is not scrollable. Since it's not encouraged to put a tableView inside a scrollView, I'm wondering if there is a better way to create a table with fixed length inside a scrollView in Titanium? The following is my code: ``` var view = Ti.UI.createScrollView({ contentWidth:'auto', contentHeight:'auto', top:0, showVerticalScrollIndicator:true, showHorizontalScrollIndicator:true, }); var imageview1 = Ti.UI.createImageView({ image: "../images/headers/gscs_logo.png", height: 80, left: 10, right: 10, top: 10, }); var imageview2 = Ti.UI.createImageView({ image: "../images/headers/wellness_logo.png", height: 80, left: 10, right: 10, top: 90, }); view.add(imageview1); view.add(imageview2); var tableview = Ti.UI.createTableView({ data: [{title:'a'}, {title:'b'}, {title:'c'}, {title:'d'}, {title:'e'}, {title:'f'}, {title:'g'}], top: 180, scrollable: false, }); view.add(tableview); Ti.UI.currentWindow.add(view); ``` [This](https://lh6.googleusercontent.com/-FoDP-e80lXQ/TsVHCQFHVsI/AAAAAAAAAHQ/g8Gv1elbswk/s800/3.png) is the window I got (StackOverflow does not allow new users to post images, sorry). [This](https://lh6.googleusercontent.com/-UZd9jAt3UpI/TsVHB3Ut3tI/AAAAAAAAAHQ/aa0A1nXI020/s800/1.png) is the window I want. The table has fixed number of rows and its parent view can be scrolled. I have also tried to set currentWindow.layout to "vertical", but that failed since neither the scrollView nor the tableView would show up. Thank you for your patience and help!
After looking at Kitchen Sink, Titanium's demo app, I figured out how to do this: just set tableview.style to Titanium.UI.iPhone.TableViewStyle.GROUPED, and set the imageView as tableview.headerView.

```
var imageview = Ti.UI.createImageView({
    image: "../images/headers/bakerinstitute_logo.png",
    height: 100,
    left: 40,
    right: 40,
    top: 10,
});

var tableview = Ti.UI.createTableView({
    data: [{title:'a', header:'first one'}, {title:'b'}, {title:'c'}, {title:'d'}, {title:'e'}, {title:'f'}, {title:'g'}, {title:'h'}, {title:'i'}, {title:'j'}, {title:'k', header:'last one'}],
    style:Titanium.UI.iPhone.TableViewStyle.GROUPED,
    backgroundColor:'transparent',
    rowBackgroundColor:'white',
    headerView: imageview
});

Ti.UI.currentWindow.add(tableview);
```
Remove lines with specific line number specified in a file I have a text file `A` which contains line numbers which I want to remove from text file `B`. For example, file `A.txt` contains lines ``` 1 4 5 ``` and file `B.txt` contains lines ``` A B C D E ``` The resulting file should be: ``` B C ``` Of course, this can be done manually with ``` sed '1d;4d;5d' B.txt ``` but I wonder how to do it without specifying line numbers manually.
You can use `awk` as well:

```
awk 'NR==FNR { nums[$0]; next } !(FNR in nums)' linenum infile
```

In the specific case where the 'linenum' file is empty, awk skips it, so `NR==FNR` also matches the lines of 'infile' and nothing is printed. To fix that, use the command below:

```
awk 'NR==FNR && FILENAME==ARGV[1]{ nums[$0]; next } !(FNR in nums)' linenum infile
```

or even better (thanks to [Stéphane Chazelas](/users/22565)):

```
awk '!firstfile_proceed { nums[$0]; next } !(FNR in nums)' linenum firstfile_proceed=1 infile
```
How to remove the Vuetify append-icon from the sequential keyboard navigation In a Vue.js app with Vuetify, I have a set of password fields defined with a `v-text-field` and which have an `append-icon` in order to switch the text visibility, as follows: ``` <v-text-field v-model="password" :append-icon="show1 ? 'mdi-eye' : 'mdi-eye-off'" :type="show1 ? 'text' : 'password'" @click:append="show1 = !show1" ></v-text-field> ``` It is exactly similar to the [documentation example for password input](https://vuetifyjs.com/en/components/text-fields/#password-input) (See also the corresponding [codepen](https://codepen.io/pen/?&editable=true&editors=101=https%3A%2F%2Fvuetifyjs.com%2Fen%2Fcomponents%2Ftext-fields%2F)). With this set-up, if a user uses the `Tab` key to navigate across the different fields (sequential keyboard navigation), the `append-icon`s are included in the sequential keyboard navigation. I would like to exclude these icons from this sequential keyboard navigation (and be able to jump from one password field to the other without navigating to the `append-icon`). Standard way to do that is to assign a "negative value (usually `tabindex="-1"`)" which "means that the element is not reachable via sequential keyboard navigation", as explained [here](https://developer.mozilla.org/en-US/docs/Web/HTML/Global_attributes/tabindex). But I don't find how to assign a `tab-index` value only to the `append-icon` and not to the `v-text-field` itself.
You could use `v-slot:append` and place the icon there.

```
<v-text-field v-model="password" :type="show1 ? 'text' : 'password'">
  <template v-slot:append>
    <v-btn @click="show1 = !show1" tabindex="-1">
      <v-icon v-if="show1">mdi-eye</v-icon>
      <v-icon v-else>mdi-eye-off</v-icon>
    </v-btn>
  </template>
</v-text-field>
```

However, just because you can do this doesn't mean you should. If you place this button out of reach of tabindex, someone using a screen reader might not be able to toggle it. From an accessibility standpoint, this button is an interactive element and thus should have `tabindex="0"`.
Why does g++ still require -latomic In *29.5 Atomic types* of the C++ Standard November 2014 working draft it states: > > 1. There is a generic class template atomic. The type of the template argument T shall be trivially copyable (3.9). [ Note: Type arguments that are not also statically initializable may be difficult to use. —end note ] > > > So - as far as I can tell - this: ``` #include <atomic> struct Message { unsigned long int a; unsigned long int b; }; std::atomic<Message> sharedState; int main() { Message tmp{1,2}; sharedState.store(tmp); Message tmp2=sharedState.load(); } ``` should be perfectly valid standard c++14 (and also c++11) code. However, if I don't link `libatomic` manually, the command ``` g++ -std=c++14 <filename> ``` gives - at least on Fedora 22 (gcc 5.1) - the following linking error: ``` /tmp/ccdiWWQi.o: In function `std::atomic<Message>::store(Message, std::memory_order)': main.cpp:(.text._ZNSt6atomicI7MessageE5storeES0_St12memory_order[_ZNSt6atomicI7MessageE5storeES0_St12memory_order]+0x3f): undefined reference to `__atomic_store_16' /tmp/ccdiWWQi.o: In function `std::atomic<Message>::load(std::memory_order) const': main.cpp:(.text._ZNKSt6atomicI7MessageE4loadESt12memory_order[_ZNKSt6atomicI7MessageE4loadESt12memory_order]+0x1c): undefined reference to `__atomic_load_16' collect2: error: ld returned 1 exit status ``` If I write ``` g++ -std=c++14 -latomic <filename> ``` everything is fine. I know that the standard doesn't say anything about compiler flags or libraries that have to be included, but so far I thought that any standard conformant, single file code can be compiled via the first command. So why doesn't that apply to my example code? Is there a rational why `-latomic` is still necessary, or is it just something that hasn't been addressed by the compiler maintainers, yet?
[Relevant reading](https://gcc.gnu.org/wiki/Atomic/GCCMM/LIbrary) on the GCC homepage on how and why GCC makes library calls in certain cases regarding `<atomic>` in the first place.

GCC and libstdc++ are only loosely coupled. `libatomic` is the domain of the library, not the compiler -- and you can use GCC with a different library (which might provide the necessary definitions for `<atomic>` in its main library proper, or under a different name), so GCC cannot just *assume* `-latomic`.

[Also](https://gcc.gnu.org/wiki/Atomic/GCCMM):

> GCC 4.7 does not include a library implementation as the API has not been firmly established.

The same page claims that GCC 4.8 shall provide such a library implementation, but plans are the first victims of war. I'd guess the reason for `-latomic` still being necessary can be found in that vicinity.

Besides...

> ...so far I thought that any standard conformant, single file code can be compiled via the first command.

...`-lm` has been around for quite some time if you're using math functions.
Android Support Library 27.1.0 new methods requireActivity(), requireContext() According to the support library changelog and the Fragment class documentation (<https://developer.android.com/reference/android/support/v4/app/Fragment.html>), there are now new methods like requireActivity() and requireContext(). What is the purpose of these methods compared to getActivity() and getContext(), since they can still throw IllegalStateExceptions? Is this preferable to returning null when an activity or context cannot be found? And should I simply replace every getActivity() with requireActivity()?
It is basically to have a method that always returns a non-null object or throws an exception. That is all.

From the docs:

> Fragments now have requireContext(), requireActivity(), requireHost(), and requireFragmentManager() methods, which return a NonNull object of the equivalent get methods or throw an IllegalStateException.

<https://developer.android.com/topic/libraries/support-library/revisions.html#27-1-0>

This SO question also references the reasons behind this:

> "The getActivity and getContext methods return nullable types because when the Fragment is not attached to an Activity, these methods already returned null. There's no change in behaviour, it's just explicitly marked now, so you can safely handle it."

<https://stackoverflow.com/a/47253335/3268303>

From reddit:

> "I updated from support v26 to support v27, and had to add a bunch of !!s to activity/context methods in Fragments where I obviously don't expect it to be null. Nice to have require\* methods that do this for me without the ugly !!s."

<https://www.reddit.com/r/androiddev/comments/80ork8/support_library_2710_has_been_released/duxp75h/>
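To illustrate the difference in practice, here is a minimal sketch of the two styles in a support-library Fragment (the fragment and method names are made up for the example):

```java
import android.support.v4.app.Fragment;
import android.support.v4.app.FragmentActivity;

public class ExampleFragment extends Fragment {

    private void updateTitle() {
        // getActivity() is nullable: it returns null when the fragment
        // is not attached, so a null check is needed before using it.
        FragmentActivity activity = getActivity();
        if (activity != null) {
            activity.setTitle("Example");
        }

        // requireActivity() is non-null: it either returns the activity
        // or throws an IllegalStateException if the fragment is detached.
        requireActivity().setTitle("Example");
    }
}
```

In other words, requireActivity() is a good fit where being detached would be a programming error you want to fail fast on; where null is a legitimate state, keep the null check rather than blindly replacing every getActivity().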
How to delay the display of the background image in a div Here's my [fiddle.](http://jsfiddle.net/2wx9rt7t/3/) I just want the `background-image:` in the CSS to load fully and display after 3 seconds with a quick fade-in effect; until then the entire div must be black. How is this possible in `css` or `javascript`?

```
.initials {
    position:absolute;
    background:black;
    background-image: url("http://static.tumblr.com/lxn1yld/Hnxnxaqya/space.gif");
    color:white;
    margin-top:20px;
    padding-top:20px;
    padding-bottom:20px;
    width:60px;
    text-align:center;
    margin-left:20px;
    font-size:17px;
    letter-spacing:5px;
    box-shadow:0px 2px 3px rgba(0,0,0,.15);
    overflow: hidden;
    white-space: nowrap;
}
```

```
<div class="initials">A</div>
```
With some minor changes, I might have achieved what you want with only CSS3. Check the fiddle: <http://jsfiddle.net/w11r4o3u/>

CSS:

```
.initials {
    position:relative;
    background:black;
    color:white;
    margin-top:20px;
    padding-top:20px;
    padding-bottom:20px;
    width:60px;
    text-align:center;
    margin-left:20px;
    font-size:17px;
    letter-spacing:5px;
    box-shadow:0px 2px 3px rgba(0,0,0,.15);
    overflow: hidden;
    white-space: nowrap;
}

.initials .text {
    position: relative;
}

@-webkit-keyframes test {
    0% { opacity: 0; }
    100% { opacity: 1 }
}

.initials:before{
    content: "";
    background-image: url("http://static.tumblr.com/lxn1yld/Hnxnxaqya/space.gif");
    position: absolute;
    top: 0;
    left: 0;
    width: 100%;
    height: 100%;
    -webkit-animation-name: test;
    -webkit-animation-duration: 3s;
    -webkit-animation-fill-mode: forwards;
    -webkit-animation-timing-function: ease-out;
}
```

HTML:

```
<div class="initials"><div class="text">A</div></div>
```

**Edited**: Now the animation starts after 3 seconds and takes .3s to complete. Here is the fiddle: <http://jsfiddle.net/w11r4o3u/1/>

- To adjust how quickly the fade-in occurs, edit `-webkit-animation-duration: .3s;`
- To adjust the delay before the animation starts, edit `-webkit-animation-delay: 3s;`
How to find the point where two normal distributions intersect? I have two normal distributions. Say for example, one with a mean of .76 and one with a mean of .62. Both standard deviations = .05. How do I find which point on the x-axis is where the two distributions cross? In other words, x-coordinate at the deepest part of the valley in between the two distributions? P.S. I am using R, but non-R related answers would be welcome and appreciated too.
When the standard deviations are the same, the densities intersect at the midpoint of the means. To answer the more general question in the title, presuming the distributions aren't identical, there may be either one or two intersection points (typically two, unless the means differ but the standard deviations don't, as discussed above). The two intersections are easiest to find on the log-density scale. Keeping in mind that normal densities are everywhere positive, $$\eqalign{ &f\_1(x) = f\_2(x)\\ &\implies \log f\_1(x) = \log f\_2(x)\\ &\implies \log f\_1(x) - \log f\_2(x)=0. }$$ If $f\_i$ has mean $\mu\_i$ and standard deviation $\sigma\_i$ ($i=1,2$) then $$\log(f\_i(x))= -\frac12 \log(2\pi) - \frac12 \log(\sigma\_i^2) -\frac12(x- \mu\_i)^2/\sigma\_i^2.$$ So $$\eqalign{ \log(f\_1(x))-\log(f\_2(x))&= \frac12 [\log(\sigma\_2^2)-\log(\sigma\_1^2) +(x- \mu\_2)^2/\sigma\_2^2-(x- \mu\_1)^2/\sigma\_1^2]\\ &=\frac12( Ax^2+Bx+C) }$$ where $$\eqalign{ A &= -1/\sigma\_1^2+1/\sigma\_2^2\\ B &= 2(-\mu\_2/\sigma\_2^2 + \mu\_1/\sigma\_1^2)\\ C &= \mu\_2^2/\sigma\_2^2 - \mu\_1^2/\sigma\_1^2+ \log(\sigma\_2^2/\sigma\_1^2). }$$ When $\sigma\_1\ne\sigma\_2$ we can simply apply the quadratic formula to find the (real) roots of the quadratic, which will give the x-values for the intersection points. Since $\sigma\_1^2-\sigma\_2^2$ and $\log(\sigma\_1^2/\sigma\_2^2)$ have the same sign, the discriminant $$\Delta = B^2 - 4 AC = \frac{4}{\sigma\_1^2\sigma\_2^2}\left((\mu\_1-\mu\_2)^2 + (\sigma\_1^2-\sigma\_2^2)\log\left(\frac{\sigma\_1^2}{\sigma\_2^2}\right)\right)$$ is nonnegative and equals zero only when $\mu\_1=\mu\_2$ and $\sigma\_1=\sigma\_2.$ Therefore, when $A\ne 0$ (that is, $\sigma\_1\ne \sigma\_2$) there are always *two* points of intersection (and, trivially, infinitely many when $\Delta=0$ which is the case of identical distributions).
When a faces message is added during validation error, will JSF skip the invoke application phase? I need your help to understand this better. This is my case. I have a custom validator for each of my input controls in the form. So when there is any validation error,I add a corresponding FacesMessage in the validate method. My understanding was that when there is any validation error - or when there are any FacesMessages added in the validate method of the Custom Validator, it would skip the INVOKE APPLICATION phase and would directly call the RENDER RESPONSE PHASE - showing the FacesMessage that was added in the PROCESS VALIDATION Phase - Is this correct? The problem I'm facing is - I add a FacesMessage in the PROCESS VALIDATION Phase - because of a validation error - and I add a confirmation message for the action that was taken by the user in the INVOKE APPLICATION PHASE - Now both are shown in the page in the RENDER RESPONSE Phase ? - If my understanding is correct in the above question - is it the best practice to conditionally add a confirmation FacesMessage after confirming that there are no FacesMessages in the currect FacesContext ? This is how the message is added : ``` FacesMessage facesMessage = new FacesMessage(FacesMessage.SEVERITY_ERROR,Constants.invalidMessageDetail,null); FacesContext.getCurrentInstance().addMessage(null, facesMessage); throw new ValidatorException(facesMessage); ``` This is how it is shown: ``` <h:messages errorClass="ErrorMsg" warnClass="WarningMsg" infoClass="InfoMsg" layout="table" /> ``` Appreciate your help.
> > *My understanding was that when there is any validation error - or when there are any FacesMessages added in the validate method of the Custom Validator, it would skip the INVOKE APPLICATION phase and would directly call the RENDER RESPONSE PHASE - showing the FacesMessage that was added in the PROCESS VALIDATION Phase - Is this correct?* > > > Partly true. It will only skip the update model values and invoke application phases when a `ValidatorException` is been **thrown**, not when simply a `FacesMessage` is been added to the `FacesContext`. > > *If my understanding is correct in the above question - is it the best practice to conditionally add a confirmation FacesMessage after confirming that there are no FacesMessages in the currect FacesContext ?* > > > You need to throw the `ValidatorException` as follows: ``` public void validate(FacesContext context, UIComponent component, Object value) throws ValidatorException { if (value does not meet conditions) { throw new ValidatorException(new FacesMessage("value does not meet conditions")); } } ``` Then it will skip the update model values and invoke application phases as desired. Please note that you don't need to manually add the faces message to the context. JSF will do it all by itself when it has caught a `ValidatorException`.
Java.lang.reflect.Proxy returning another proxy from invocation results in ClassCastException on assignment So I'm playing with geotools and I thought I'd proxy one of their data-access classes and trace how it was being used in their code. I coded up a dynamic proxy and wrapped a FeatureSource (interface) in it and off it went happily. Then I wanted to look at some of the transitive objects returned by the featureSource as well, since the main thing a FeatureSource does is return a FeatureCollection (FeatureSource is analogous to a sql DataSource and featurecollection to an sql statement). In my invocation handler I just passed the call through to the underlying object, printing out the target class/method/args and result as I went, but for calls that returned a FeatureCollection (another interface), I wrapped that object in my proxy (the same class but a new instance, shouldn't matter should it?) and returned it. BAM! Classcast exception:

```
java.lang.ClassCastException: $Proxy5 cannot be cast to org.geotools.feature.FeatureCollection
 at $Proxy4.getFeatures(Unknown Source)
 at MyClass.myTestMethod(MyClass.java:295)
```

the calling code:

```
FeatureSource<SimpleFeatureType, SimpleFeature> featureSource = ... // create the FS

featureSource = (FeatureSource<SimpleFeatureType, SimpleFeature>) FeatureSourceProxy.newInstance(featureSource, features);
featureSource.getBounds();// ok
featureSource.getSupportedHints();// ok
DefaultQuery query1 = new DefaultQuery(DefaultQuery.ALL);
FeatureCollection<SimpleFeatureType, SimpleFeature> results = featureSource.getFeatures(query1); //<- explosion here
```

the Proxy:

```
public class FeatureSourceProxy implements java.lang.reflect.InvocationHandler {
    private Object target;
    private List<SimpleFeature> features;

    public static Object newInstance(Object obj, List<SimpleFeature> features) {
        return java.lang.reflect.Proxy.newProxyInstance(
            obj.getClass().getClassLoader(),
            obj.getClass().getInterfaces(),
            new FeatureSourceProxy(obj, features)
        );
    }

    private FeatureSourceProxy(Object obj, List<SimpleFeature> features) {
        this.target = obj;
        this.features = features;
    }

    public Object invoke(Object proxy, Method m, Object[] args) throws Throwable {
        Object result = null;
        try {
            if("getFeatures".equals(m.getName())){
                result = interceptGetFeatures(m, args);
            }
            else{
                result = m.invoke(target, args);
            }
        } catch (Exception e) {
            throw new RuntimeException("unexpected invocation exception: " + e.getMessage(), e);
        }
        return result;
    }

    private Object interceptGetFeatures(Method m, Object[] args) throws Exception{
        return newInstance(m.invoke(target, args), features);
    }
}
```

Is it possible to dynamically return proxies of interfaces *from* a proxied interface or am I doing something wrong? cheers!
Class.getInterfaces() returns only the interfaces DIRECTLY implemented by the class. You need a transitive closure to obtain all the interfaces.

UPDATE

Example:

```
private static Class<?>[] getInterfaces(Class<?> c) {
    List<Class<?>> result = new ArrayList<Class<?>>();
    if (c.isInterface()) {
        result.add(c);
    } else {
        do {
            addInterfaces(c, result);
            c = c.getSuperclass();
        } while (c != null);
    }
    for (int i = 0; i < result.size(); ++i) {
        addInterfaces(result.get(i), result);
    }
    return result.toArray(new Class<?>[result.size()]);
}

private static void addInterfaces(Class<?> c, List<Class<?>> list) {
    for (Class<?> intf: c.getInterfaces()) {
        if (!list.contains(intf)) {
            list.add(intf);
        }
    }
}
```

You may also need to "unwrap" the proxies that are passed as arguments.
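For illustration, here is a rough sketch of how the factory method from the question could use this helper instead of `obj.getClass().getInterfaces()`:

```java
public static Object newInstance(Object obj, List<SimpleFeature> features) {
    return java.lang.reflect.Proxy.newProxyInstance(
        obj.getClass().getClassLoader(),
        getInterfaces(obj.getClass()),   // all interfaces, including inherited ones
        new FeatureSourceProxy(obj, features)
    );
}
```

With that change, the proxy returned from `getFeatures` should also implement `FeatureCollection` (picked up transitively), so the cast in the calling code should succeed.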
Create Feature detection for One Javascript Feature (intersectionObserver) Is there a way of storing a built-in javascript method in a variable to set different behaviour for when this method isn't available in certain browsers? My specific case is for intersectionObserver which isn't available in Safari or older MS browsers. I have some animations triggered by this and would like to turn them off if intersectionObserver isn't available. What I want to do is essentially this:

```
var iO = intersectionObserver;

if ( !iO ) {
  // set other defaults
}
```

I don't really want to load a polyfill or library for just one feature. Many thanks Emily
The **in Operator** is widely used to detect features supported by the browser. JavaScript features are globally available as a property to `window` object. So we can check if `window` element has the property. ``` if("IntersectionObserver" in window){ /* work with IntersectionObserver */ } if("FileReader" in window){ /* work with FileReader */ } ``` > > The `in` operator returns `true` if the specified property is in the > specified object or its prototype chain. > > **Syntax:** prop in object > > *[source: [developer.mozilla.org](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/in)]* > > > So you can also save the result in a `boolean` variable and use it later in your code. ``` var iO = "IntersectionObserver" in window; /* true if supported */ if ( !iO ) { // set other defaults } ```
List of records not fetching updated records in Django REST framework..? In a Django REST Framework API, the list of database table records is not updated until the API is restarted or there is a code change in Python files like the model, serializer or view. I've tried a transaction commit but it didn't work. Below is my view:

```
class ServiceViewSet(viewsets.ModelViewSet):
    #authentication_classes = APIAuthentication,
    queryset = Service.objects.all()
    serializer_class = ServiceSerializer

    def get_queryset(self):
        queryset = self.queryset
        parent_id = self.request.QUERY_PARAMS.get('parent_id', None)
        if parent_id is not None:
            queryset = queryset.filter(parent_id=parent_id)
        return queryset

    # Make Service readable only
    def update(self, request, *args, **kwargs):
        return Response(status=status.HTTP_400_BAD_REQUEST)

    def destroy(self, request, *args, **kwargs):
        return Response(status=status.HTTP_400_BAD_REQUEST)
```

Serializer looks like this:

```
class ServiceSerializer(serializers.ModelSerializer):
    class Meta:
        model = Service
        fields = ('id', 'category_name', 'parent_id')
        read_only_fields = ('category_name', 'parent_id')
```

and model looks like this:

```
class Service(models.Model):
    class Meta:
        db_table = 'service_category'
        app_label = 'api'

    category_name = models.CharField(max_length=100)
    parent_id = models.IntegerField(default=0)

    def __unicode__(self):
        return '{"id":%d,"category_name":"%s"}' %(self.id,self.category_name)
```

This problem occurs only with this service; the rest of the APIs work perfectly fine. Any help will be appreciated.
Because you are setting up the queryset on `self.queryset`, which is a class attribute, it is being cached. This is why you are not getting an updated queryset for each request, and it's also why Django REST Framework [calls `.all()` on querysets in the default `get_queryset`](https://github.com/tomchristie/django-rest-framework/blob/79e18a2a06178e8c00dfafc1cfd062f2528ec2c1/rest_framework/generics.py#L218-L220). By calling `.all()` on the queryset, it will no longer use the cached results and will force a new evaluation, which is what you are looking for. ``` class ServiceViewSet(viewsets.ModelViewSet): queryset = Service.objects.all() def get_queryset(self): queryset = self.queryset.all() parent_id = self.request.QUERY_PARAMS.get('parent_id', None) if parent_id is not None: queryset = queryset.filter(parent_id=parent_id) return queryset ```
python - how to compare a key of the dictionary with a string character I am new to python and I need to compare a character in string with a key in a dictionary. But I am not able to figure out a way to compare that character with a key. I am only able to compare it with the value at dict[key] I am trying to implement something like this: ``` score = {"a": 1, "c": 3, "b": 3, "e": 1, "d": 2, "g": 2, "f": 4, "i": 1, "h": 4, "k": 5, "j": 8, "m": 3, "l": 1, "o": 1, "n": 1, "q": 10, "p": 3, "s": 1, "r": 1, "u": 1, "t": 1, "w": 4, "v": 4, "y": 4, "x": 8, "z": 10} def compare(word): res = 0 for letter in word: if score[**What should i put in here**] == letter: res += score[letter] return res ``` where score[key] represents the value at that particular key as a whole. Is there a way to compare a key to the letter, instead of the value? My aim is to compare the "letter" in "word" with the keys in dictionary and add the values against the characters and return the result.
Looks like you're thinking about this strangely. All you need to do is check if the letter is in your `score` dict, and if it is, to add that number to your total. ``` def compare(word): res = 0 for letter in word: if letter in score: res += score[letter] return res ``` However there's an easier way to do this. Since you're just using `res` as an accumulator, you can add `score[letter]` if it exists or `0` if it doesn't. This is easy using the `dict.get` method. ``` def compare(word): res = 0 for letter in word: res += score.get(letter, 0) # dict.get(key, defaultvalue) return res ``` In fact you can even make it into an ugly `lambda`. ``` compare = lambda word: sum([scores.get(letter,0) for letter in word]) ```
How to sync songs in Rhythmbox with an iPod I can sync them with Banshee, but in Rhythmbox (I'm starting to use it again since it will be the default in 12.04) I can see that Rhythmbox detects the iPod and shows the songs. When I drag a song to the iPod, how can I tell it to sync so I can unplug the iPod and listen to it?
If you are using iOS 5.x, the library that Rhythmbox (and others) use to do this does not yet support the newer database version. If your iOS device has recently been updated, then you simply need to wait for libimobiledevice to be updated (watch the [website](http://www.libimobiledevice.org/) under Music/Video Synchronization Status for updates). [edit: 3/11/13 e.m.fields - **It seems that the libimobiledevices is supporting up to iOS 6.12 as of this release.** ] from [www.libimobiledevice.org](http://www.libimobiledevice.org): > > Latest Release: 1.0.7 > Development/Unstable Release: 1.1.4 > > > Tested with iPod Touch 1G/2G/3G/4G, iPhone 1G/2G/3G/3GS/4/4S/5, iPad 1/2/3rd Gen and Apple TV running up to firmware 6.1.2 on Linux, Mac OS X and Windows. > > >
Office COM add-in, dialogs don't have the luna/aero theme (they look like Windows 98 classic look) I have an addin for Office 2007 which adds a toolbar with some buttons and handlers for their OnClick events. When the user clicks on the buttons, some dialogs show. But those dialogs have the dated Windows 98 look and feel with rectangular buttons, even on Windows 7. I know that you need a manifest file to enable theming, but this addin is created with Visual Studio 2008 and it adds a manifest automatically which looks like this: ``` <?xml version="1.0" encoding="UTF-8" standalone="yes"?> <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0"> <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3"> <security> <requestedPrivileges> <requestedExecutionLevel level="asInvoker" uiAccess="false"></requestedExecutionLevel> </requestedPrivileges> </security> </trustInfo> <dependency> <dependentAssembly> <assemblyIdentity type="win32" name="Microsoft.VC90.CRT" version="9.0.21022.8" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b"></assemblyIdentity> </dependentAssembly> </dependency> </assembly> ``` What should I change to make my dialog use the current theme instead of Windows 98 look? I've read about isolation awareness, but that didn't seem to work either.
As you've discovered, adding the manifest is only half of the battle. You also need to call the **[`InitCommonControlsEx` function](http://msdn.microsoft.com/en-us/library/bb775697.aspx)** when your add-in initializes, before any controls are created. This is necessary to register the window classes for the common controls you use. And just in case you're skeptical of the documentation, Microsoft's Raymond Chen posted an [article on his blog](http://blogs.msdn.com/b/oldnewthing/archive/2005/07/18/439939.aspx) a while back addressing precisely this issue. If you have any other issues, check this article: [How to apply Windows XP themes to Office COM add-ins](http://support.microsoft.com/kb/830033)
Run "cd" command as superuser in Linux Is there a way to run the `cd` command with superuser privileges to gain access to directories that are owned by root? When I run `sudo cd <path>`, I get `sudo: cd: command not found`.
The other answer isn't wrong. Possibly a better answer is: The sudo tool is intended to take actions as a superuser, and you're describing something that is more of a state change that would precede actions such as 'ls' or 'vi' or others to make them simpler. I suggest, e.g. if you wanted to edit a file in /root/private/: ``` sudo ls /root sudo ls /root/private sudoedit /root/private/<name from previous ls command> ``` This is definitely more typing, and a little harder than just changing directories. However, it is *far* more audit-able, and much more in-line with the principles behind sudo than running some variant of 'sudo bash.' If you are working in a secure environment, your IA team will thank you. If you find yourself wondering: "What change did I make the other day?," then *you* will thank you, because you won't have to wonder what file or files you edited. All of this said, enabling and executing some form of 'sudo bash' is definitely easier. If you were looking for easier, why are you using 'sudo' in the first place instead of just logging in as root?
How to get N easily distinguishable colors with Matplotlib I need to make different amounts of line plots with Matplotlib, but I have not been able to find a colormap that makes it easy to distinguish between the line plots. I have used the brg colormap like this: ``` colors=brg(np.linspace(0,1,num_plots)) ``` with ``` for i in range(num_plots): ax.step(x,y,c=colors[i]) ``` With four plots, this could look like this: [![](https://i.stack.imgur.com/hwdhZ.png)](https://i.stack.imgur.com/hwdhZ.png) Notice how hard it is to distinguish the colors of the top and bottom plots, which is especially bad if a legend is used. I've tried a lot of different colormaps like rainbow and jet, but with this setup, brg seems to give the best result for `num_plots` between 1 and 12. I did find this [How to get 10 different colors that are easily recognizable](https://graphicdesign.stackexchange.com/questions/108733/how-to-get-10-different-colors-that-are-easily-recognizable) and this Wiki page [Help:Distinguishable colors](https://en.wikipedia.org/wiki/Help:Distinguishable_colors), but I don't know if this can be used in any way.. Is there an easy fix to this, or will I have to make do with this?
I would use the `tab10` or `tab20` colormaps. See [Colormap reference](https://matplotlib.org/3.1.1/gallery/color/colormap_reference.html) [![enter image description here](https://i.stack.imgur.com/CTVkV.png)](https://i.stack.imgur.com/CTVkV.png) However, I believe you will always have trouble distinguishing hues when the number of lines becomes large (I would say >5 and certainly >10). In this case, you should combine hues with other distinguishing features like different markers or linestyles. ``` colors = matplotlib.cm.tab20(range(20)) markers = matplotlib.lines.Line2D.markers.keys() x = np.linspace(0,1,100) fig, axs = plt.subplots(2,4, figsize=(4*4,4*2)) for nlines,ax0 in zip(np.arange(5,21,5), axs.T): ax0[0].set_title('{:d} lines'.format(nlines)) for n,c,m in zip(range(nlines),colors,markers): y = x*np.random.random()+np.random.random() ax0[0].plot(x,y) ax0[1].plot(x,y, marker=m, markevery=10) axs[0,0].set_ylabel('only hues', fontsize=16, fontweight='bold') axs[1,0].set_ylabel('hues+markers', fontsize=16, fontweight='bold') fig.tight_layout() ``` [![enter image description here](https://i.stack.imgur.com/AGdJj.png)](https://i.stack.imgur.com/AGdJj.png)
Why do I get different IP addresses when pinging the same host name multiple times? I got three different IP addresses when trying to resolve a domain name three times. All three of these pings happened within seconds of one another. Is this normal? I wouldn't think DNS servers could be updated so quickly, so my guess is that this is something to do with load-balancing. So far, I've only noticed this with bestbuy.com, and I tried it because a friend was reporting that he was redirected to Best Buy Turkey, and Best Buy Korea when visiting bestbuy.com last night. ``` G:\>ping bestbuy.com Pinging bestbuy.com [77.67.19.107] with 32 bytes of data:... G:\>ping bestbuy.com Pinging bestbuy.com [69.31.49.74] with 32 bytes of data:... G:\>ping bestbuy.com Pinging bestbuy.com [69.31.49.73] with 32 bytes of data:... ```
(Probably a better question for serverfault, but...) If you do an nslookup on bestbuy.com: ``` localhost /home/me > nslookup Default Server: #############.com Address: 192.168.252.11 > bestbuy.com Server: ####################.com Address: 192.168.252.11 Non-authoritative answer: Name: bestbuy.com Addresses: 77.67.19.120, 77.67.19.107 > ``` Note that there are multiple ip addresses that correspond to this ip name. The primary purpose of this is to provide a failover. If one of their web farms goes DOA, then the other will continue to accept inbound traffic. The positive side effect of this is that when everything is operating normally, they can share the traffic load between two farms.
Are MySQL datetime and timestamp fields better for PHP apps than Unix timestamp ints? I was reading over an article that shows some really good information and benchmarks about how well the three different MySQL date/time storage options perform. [MySQL DATETIME vs TIMESTAMP vs INT performance and benchmarking with MyISAM](http://gpshumano.blogs.dri.pt/2009/07/06/mysql-datetime-vs-timestamp-vs-int-performance-and-benchmarking-with-myisam/) While reading the article you start to get the idea that using ints is just a waste and you should instead go with MySQL DATETIME or TIMESTAMP column types. However, towards the end of the article he does one more test not using MySQL functions and you suddenly see that straight INTs are *2x as fast as the two MySQL options when searching by unix timestamps*.

So it suddenly dawned on me - *duh, what do PHP apps all use?* **time()!** Almost every PHP application bases its logic on the Unix epoch. Which means that most queries for results in a certain time range start off based on time() *and then are converted to work with MySQL's fields*.

This leaves me with the following:

1. Unix timestamps stored as INTs are faster, take less space, and work natively with PHP's time() based calculations.
2. MySQL date types are more suited to operations and logic on the MySQL side.
3. For the time being both Unix and MySQL timestamps only work until 2037, which means that you must use a datetime field for larger dates in the future.
4. MySQL commands like `date = NOW()` can lag when using replication, causing data inconsistencies.

So applying this to real life: most apps that would be at the level of using DB-side logic would probably go with a better engine like PostgreSQL anyway. Which means that all the rest of us programmers only use MySQL as *a storage tank for our data* (you know it's true), which makes keeping the fields as small, fast Unix INTs seem like the best option.

So what do you guys think? **Are timestamps really more suited to PHP apps than the MySQL date fields?**
**MySQL's date format has no year 2038 problem.** MySQL's dates are reliable from the year 1000 to the year 9999, whereas Unix timestamps can screw up after 2038 or before 1902 unless everything in your system is 64-bit. If you're using PHP, however, this can be moot: PHP uses unix timestamps for dates and times throughout most of its date and time functions and unless you are using a 64-bit build it will have the same limitation. **You'd be using the field type that was intended for this purpose.** If you care. Putting date into an INT field as a unix timestamp is not as self-describing; you can't look at the data without converting it in the appropriate way. But that may make no difference to you. The flip side of this, given that you're using PHP, is that once you get the time into PHP you'd have to convert it back to a Unix timestamp anyway to do anything useful with it, because to PHP, Unix timestamps are native. Edit: Back when I wrote this answer, I didn't use PHP's DateTime class. Using the DateTime class removes any need to use Unix timestamps, and removes the 32-/64-bit issues. Thanks to Charles' comment below for pointing out a good way to use this.
What's the difference between clang and clang++ when building a library? I was compiling a c library (to be used by a c++ project) with clang. I got linker errors (specifically, undefined symbol regarding the hqxInit function) when trying to link this library. When I switch to clang++, it works. Checking with nm, clang++ munges the names further. What is going on—and is there a better way to tell the linker that a library is munged-for-c versus munged-for-c++? It seems silly to have to build a c library with c++.... // built with clang ``` $ nm libhqx.a libhqx.bak(init.c.o) 04000000 C _RGBtoYUV 00000004 C _YUV1 00000004 C _YUV2 00000000 T _hqxInit ``` // built with clang++ ``` $ nm libhqx.a libhqx.a(init.o): 00000100 S _RGBtoYUV 04000100 S _YUV1 04000104 S _YUV2 00000000 T __Z7hqxInitv ```
`clang` and `clang++` on most systems are the same executable. One is merely a symbolic link to the other. The program checks to see what name it is invoked under, and: - for `clang`, compiles code as C - for `clang++`, compiles code as C++ In C++, the compiler generates names for functions differently than C - this is because you can have multiple functions with the same name (but different) parameters. This is called "name mangling" - and that's what you are seeing. You can use a tool called `c++filt` to "demangle" the names. Example: ``` $ c++filt __Z7hqxInitv hqxInit() ``` More information here: [why clang++ behaves differently from clang since the former is a symbol link of the latter?](https://stackoverflow.com/questions/10359624/why-clang-behaves-differently-from-clang-since-the-former-is-a-symbol-link-of?rq=1)
How to intersect two arrays of ranges? Let a *range* be an array of two integers: the `start` and the `end` (e.g. `[40, 42]`). Having two arrays of ranges (which are sorted), I want to find the optimal way to calculate their intersection (which will result into another array of ranges): ``` A = [[1, 3], [7, 9], [12, 18]] B = [[2, 3], [4,5], [6,8], [13, 14], [16, 17]] ``` Intersection: ``` [[2, 3], [7, 8], [13, 14], [16, 17]] ``` What is the optimal algorithm for this? --- The naive way would be to check each one with all the other ones, but that's obviously not optimal. I found a similar question asking for the same thing in VBA: [Intersection of two arrays of ranges](https://stackoverflow.com/q/23683939/1420197)
Since the input arrays are sorted, this should be fairly straightforward to work out. I'm assuming that the ranges in any one input array don't intersect one another (otherwise, "which are sorted" would be ambiguous). Consider one range from each array (defined by "current range" indexes `a` and `b`). There are several cases (each case other than "full overlap" has a mirror image where `A` and `B` are reversed): No intersection: ``` A[a]: |------| B[b]: |---| ``` Because the arrays are sorted, `A[a]` cannot intersect anything in `B`, so it can be skipped (increment `a`). Partial overlap (`B[b]` extends beyond `A[a]`): ``` A[a]: |-------| B[b]: |-------| ``` In this case, add the intersection to the output and then increment `a` because `A[a]` cannot intersect anything else in `B`. Containment (possibly with coinciding ends): ``` A[a]: |------| B[b]: |--| ``` Again add the intersection to the output and this time increment `b`. Note that a further slight optimization is that if `A[a]` and `B[b]` end at the same value, then you can increment `b` as well, since `B[b]` also cannot intersect anything else in `A`. (The case of coinciding ends could have been lumped into the partial overlap case. This case could then have been called "strict containment".) Full overlap: ``` A[a]: |------| B[b]: |------| ``` Add the intersection to the output and increment both `a` and `b` (neither range can intersect anything else in the other array). Continue iterating the above until either `a` or `b` runs off the end of the corresponding array and you're done. It should be trivial straightforward to translate the above to code. EDIT: To back up that last sentence (okay, it wasn't trivial), here's my version of the above in code. It's a little tedious because of all the cases, but each branch is quite straightforward. ``` const A = [[1, 3], [7, 9], [12, 18]]; const B = [[2, 3], [4, 5], [6, 8], [13, 14], [16, 17]]; const merged = []; var i_a = 0, i_b = 0; while (i_a < A.length && i_b < B.length) { const a = A[i_a]; const b = B[i_b]; if (a[0] < b[0]) { // a leads b if (a[1] >= b[1]) { // b contained in a merged.push([b[0], b[1]]); i_b++; if (a[1] === b[1]) { // a and b end together i_a++; } } else if (a[1] >= b[0]) { // overlap merged.push([b[0], a[1]]); i_a++; } else { // no overlap i_a++; } } else if (a[0] === b[0]) { // a and b start together if (a[1] > b[1]) { // b contained in a merged.push([a[0], b[1]]); i_b++; } else if (a[1] === b[1]) { // full overlap merged.push([a[0], a[1]]); i_a++; i_b++; } else /* a[1] < b[1] */ { // a contained in b merged.push([a[0], a[1]]); i_a++; } } else /* a[0] > b[0] */ { // b leads a if (b[1] >= a[1]) { // containment: a in b merged.push([a[0], b[1]]); i_a++; if (b[1] === a[1]) { // a and b end together i_b++; } } else if (b[1] >= a[0]) { // overlap merged.push([a[0], b[1]]); i_b++ } else { // no overlap i_b++; } } } console.log(JSON.stringify(merged)); ``` You asked for an optimal algorithm. I believe mine is very close to optimal. It runs in linear time with the number of ranges in the two arrays, since each iteration completes the processing of at least one range (and sometimes two). It requires constant memory plus the memory required to build the result. I should note that unlike the answer by CertainPerformance (the only other answer posted here at the time I'm writing this) my code works for any kind of numeric range data, not just integers. (You might want to replace `===` with `==` in the above if you're mixing numbers and string representations of numbers). 
The algorithm by CertainPerformance flattens the ranges into arrays of consecutive integers that span the ranges. If that total number of integers is n, then his algorithm runs in O(n²) time and O(n) space. (So, for instance, if one of the ranges were [1, 50000], that would require memory for 50,000 numbers and time proportional to the square of that.)
Saving S4 objects in a list of list R is giving the following message error when you want to save an S4 object into a list of list and the element was not already defined previously. `"invalid type/length (S4/0) in vector allocation"` Why is it working with a simple list, but not with a list of list? See the following code and the potential workarounds. However, I am pretty sure there is a more obvious solution. ``` # Creation of an S4 object setClass("student", slots=list(name="character", age="numeric", GPA="numeric")) s <- new("student",name="John", age=21, GPA=3.5) # Indexes for the list index1 <- "A" index2 <- "a" # Simple list (All of this works) l <- list() l[[index1]] <- s l[[index1]] <- "character" l[[index1]] <- 999 # List of list l <- list() l[[index1]][[index2]] <- s # will give an Error!! l[[index1]][[index2]] <- "character" # still working l[[index1]][[index2]] <- 999 # still working # "Workarounds" l <- list() l[[index1]][[index2]] <- rep(999, length(slotNames(s))) #define the element with a length equal to the number of slots in the s4 object l[[index1]][[index2]] <- s # this works now! l[[index1]][[index2]] <- list(s) # This works too, but that's not the same result ``` **Any suggestion on why it does not work with a list of list and how I can solve this problem? Thanks**
So when you do ``` l <- list() l[[index1]][[index2]] <- s ``` the problem is that that `l` is initialized to be a list so it makes sense to set a new named element with `l[[index1]]`, but R has no idea what's stored at `l[[index1]][[index2]]`. It could be anything. It could be a function and functions don't know what to do with a named indexing operation. For example ``` l <- list() l[[index1]] <- mean l[[index1]][[index2]] <- "character" ``` But in your case, when you try to grab a value form a list that hasn't been initialized yet, you'll get `NULL`. For example ``` l <- list() l[[index1]] # NULL ``` R happens to have special behavior when you try set a named atomic value on a NULL object. Observe ``` # NULL[["a"]] <- "character" is basically calling.... `[[<-`(NULL, "a", "character") # a # "character" ``` Note that we are getting a named vector here. Not a list. This is true for your "working" examples as well ``` l <- list() l[[index1]][[index2]] <- "character" class(l[[index1]][[index2]]) # [1] "character" ``` Also note that this doesn't have anything to do with S4 specifically. The same would happen if we tried to set a more complex objected like a function as well ``` l <- list() l[[index1]][[index2]] <- mean # Error in l[[index1]][[index2]] <- mean : # invalid type/length (closure/0) in vector allocation ``` In languages like Perl you can "magically" bring hashes to life with the correct indexing syntax via [autovivification](https://en.wikipedia.org/wiki/Autovivification), but that's not true in R. If you want a `list()` to exist at `l[[index1]]` you will need to explicitly create it. This will work ``` l <- list() l[[index1]] <- list() l[[index1]][[index2]] <- s ``` Again this is because `[[ ]]` is a bit ambiguous in R. It's a generic indexing function not used exclusively with lists.
Where the `prototype` property is defined Well, I always assumed that the prototype property exists on all objects. But after a job interview I came home and opened the Chrome console:

```
window.prototype
undefined
```

WTF?

```
Object.prototype
Object {}
```

OK

```
var a = {};
a.prototype
undefined

var a = function(){}
a.prototype
Object {}
```

I've read <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/prototype> several times but it's still not clear. Can anybody explain? Thanks

P.S. <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/prototype>

> All objects in JavaScript are descended from Object; all objects inherit methods and properties from Object.prototype, although they may be overridden (except an Object with a null prototype, i.e. Object.create(null))

window is an Object, so it must have a prototype property
The object an instance inherits from, its prototype, is stored in an internal property, [`[[Prototype]]`](http://ecma-international.org/ecma-262/5.1/#sec-8.6.2). Its value can be retrieved with [`Object.getPrototypeOf()`](http://ecma-international.org/ecma-262/5.1/#sec-15.2.3.2) (in ES5-compatible engines). ``` console.log(Object.getPrototypeOf(window)); // Window { ... } var a = {}; console.log(Object.getPrototypeOf(a)); // Object { ... } ``` Only `function`s explicitly have a [`prototype` property](http://ecma-international.org/ecma-262/5.1/#sec-15.3.5.2): ``` console.log(typeof Object); // 'function' ``` And, the value of the `prototype` is just used to set the value of the instance's `[[Prototype]]` when a `new` instance is created. ``` function Foo() {} var bar = new Foo(); console.log(Foo.prototype === Object.getPrototypeOf(bar)); // true ```
How to consume chuncks from ConcurrentQueue correctly I need to implement a queue of requests which can be populated from multiple threads. When this queue becomes larger than 1000 completed requests, this requests should be stored into database. Here is my implementation: ``` public class RequestQueue { private static BlockingCollection<VerificationRequest> _queue = new BlockingCollection<VerificationRequest>(); private static ConcurrentQueue<VerificationRequest> _storageQueue = new ConcurrentQueue<VerificationRequest>(); private static volatile bool isLoading = false; private static object _lock = new object(); public static void Launch() { Task.Factory.StartNew(execute); } public static void Add(VerificationRequest request) { _queue.Add(request); } public static void AddRange(List<VerificationRequest> requests) { Parallel.ForEach(requests, new ParallelOptions() {MaxDegreeOfParallelism = 3}, (request) => { _queue.Add(request); }); } private static void execute() { Parallel.ForEach(_queue.GetConsumingEnumerable(), new ParallelOptions {MaxDegreeOfParallelism = 5}, EnqueueSaveRequest ); } private static void EnqueueSaveRequest(VerificationRequest request) { _storageQueue.Enqueue( new RequestExecuter().ExecuteVerificationRequest( request ) ); if (_storageQueue.Count > 1000 && !isLoading) { lock ( _lock ) { if ( _storageQueue.Count > 1000 && !isLoading ) { isLoading = true; var requestChunck = new List<VerificationRequest>(); VerificationRequest req; for (var i = 0; i < 1000; i++) { if( _storageQueue.TryDequeue(out req)) requestChunck.Add(req); } new VerificationRequestRepository().InsertRange(requestChunck); isLoading = false; } } } } } ``` Is there any way to implement this without lock and isLoading?
The easiest way to do what you ask is to use the blocks in the [TPL Dataflow](https://msdn.microsoft.com/en-us/library/hh228603(v=vs.110).aspx) library. Eg ``` var batchBlock = new BatchBlock<VerificationRequest>(1000); var exportBlock = new ActionBlock<VerificationRequest[]>(records=>{ new VerificationRequestRepository().InsertRange(records); }; batchBlock.LinkTo(exportBlock , new DataflowLinkOptions { PropagateCompletion = true }); ``` That's it. You can send messages to the starting block with ``` batchBlock.Post(new VerificationRequest(...)); ``` Once you finish your work, you can take down the entire pipeline and flush any leftover messages by calling `batchBlock.Complete();` and await for the final block to finish: ``` batchBlock.Complete(); await exportBlock.Completion; ``` The [BatchBlock](https://msdn.microsoft.com/en-us/library/hh194745(v=vs.110).aspx) batches up to 1000 records into arrays of 1000 items and passes them to the next block. An [ActionBlock](https://msdn.microsoft.com/en-us/library/hh194684(v=vs.110).aspx) uses 1 task only by default, so it is thread-safe. You could use an existing instance of your repository without worrying about cross-thread access: ``` var repository=new VerificationRequestRepository(); var exportBlock = new ActionBlock<VerificationRequest[]>(records=>{ repository.InsertRange(records); }; ``` Almost all blocks have a concurrent input buffer. Each block runs on its own TPL task, so each step runs concurrently with each other. This means that you get asynchronous execution "for free" and can be important if you have multiple linked steps, eg you use a [TransformBlock](https://msdn.microsoft.com/en-us/library/hh194782(v=vs.110).aspx) to modify the messages flowing through the pipeline. I use such pipelines to create pipelines that call external services, parse responses, generate the final records, batch them and send them to the database with a block that uses SqlBulkCopy.
why there is a deadlock in multithreaded program given below I am new to multi threaded programming What is the reason for dead lock in this approach if one Thread has to print odd numbers from 0 to 1000 and other has to print even numbers? ``` using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading; namespace ConsoleApplication9 { class Program { static int count1; static int count2; static Thread t1, t2; static void MulOf2() { while (count1 < 1000) { Console.Write("Th1" + (2 * count1) + "\n"); count1++; if (t2.IsBackground) { if (!t2.IsAlive) { t2.Resume(); } } t1.Suspend(); } } static void Main(string[] args) { t1 = new Thread(MulOf2); t2 = new Thread(MulOf2Plus1); t1.Start(); t2.Start(); } static void MulOf2Plus1() { while (count2 < 1000) { Console.Write("Th2" + ((2 * count2) + 1) + "\n"); count2++; if (t1.IsBackground) { if (!t1.IsAlive) { t1.Resume(); } } t2.Suspend(); } } } } ``` I modified the code to prevent crashes
Assuming you swallow enough exceptions to even get the code running, it can deadlock when the operations execute in the following order: ``` t2.Resume() //on t1 t1.Resume() //on t2 t2.Suspend() //on t2 t1.Suspend() //on t1 ``` As a result both threads remain suspended. This is generally not the way to handle thread synchronization. Personally, I have never had to use `Resume` or `Suspend` on threads. You should read about synchronization mechanisms in .NET, starting with the `lock` statement. I recommend the [free chapters](http://www.albahari.com/threading/) from Albahari's [c# in a Nutshell](http://www.albahari.com/nutshell/):
What is the purpose of the parenthesis in this switch and case label? I am writing a function for an item service where if the user requests for all items under a certain name it will return them all. Such as all the phones that are iPhone X's etc. I got help to make one of the functions work where if there are more than 1 items it will return them all (this is the third case): ``` var itemsList = items.ToList(); switch (itemsList.Count()) { case 0: throw new Exception("No items with that model"); case 1: return itemsList; case { } n when n > 1: return itemsList; } return null; ``` What confuses me is what are the `{ }` for? I was told it was "a holding place as sub for stating the type" I am unsure of what they mean by this. How does it work too? I am not sure what `n` is for. Any help is greatly appreciated! PROGRESS: After following up with the helper, I now know that `{ }` is similar to `var`. But I am still unsure why it is only used here.
It is a capability of [pattern matching](https://learn.microsoft.com/en-us/archive/msdn-magazine/2019/may/csharp-8-0-pattern-matching-in-csharp-8-0) that was introduced in `C# 8`.

`{ }` matches any non-null value. `n` is used to declare a variable that will hold the matched value.

Here is a sample from [MSDN](https://learn.microsoft.com/en-us/dotnet/csharp/tutorials/pattern-matching#implement-the-basic-toll-calculations) that shows usage of `{ }`.

Explanation of your sample:

```
switch (itemsList.Count())
{
    case 0:
        throw new Exception("No items with that model");
    case 1:
        return itemsList;
    // If itemsList.Count() != 0 && itemsList.Count() != 1 then it will
    // be checked against this case statement.
    // Because itemsList.Count() is a non-null value, its value will
    // be assigned to n and then the condition against n will be checked.
    // If the condition against n returns true, then this case statement is
    // considered satisfied and its body will be executed.
    case { } n when n > 1:
        return itemsList;
}
```
How to flip an image without moving it In the example below, click on logo and you'll see - it is flipped but also moved How to flip it without moving ? I also tried this, without success: ``` .flip{transform:scaleX(-1) translatex(-50%);} ``` ``` $('img').on('click', function(){ $(this).addClass('flip'); }); ``` ``` .flip{transform:scaleX(-1);} .wrap{ position:relative; width:50%; height:100px; background:darkorange; } img{ position:absolute; bottom:0; width:140px; left:50%; transform:translatex(-50%); } ``` ``` <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <div class='wrap'> <img src='https://abuena.net/img/logo_01.png' alt='img'> </div> ```
You also need to include the `translate` property in the `.flip` class. Note that the order of properties is important, eg. `scale` before `translate`, and after will give you different outputs: ``` $('img').on('click', function() { $(this).toggleClass('flip'); }); ``` ``` .flip { transform: translateX(-50%) scaleX(-1); } .wrap { position: relative; width: 50%; height: 100px; background: darkorange; } img { position: absolute; bottom: 0; width: 140px; left: 50%; transform: translateX(-50%); } ``` ``` <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <div class='wrap'> <img src='https://abuena.net/img/logo_01.png' alt='img'> </div> ```
converting .wav file to .ogg in javascript I'm trying to capture user's audio input from the browser. I have done it with WAV but the files are really big. A friend of mine told me that OGG files are much smaller. Does anyone knows how to convert WAV to OGG? I also have the raw data buffer, I don't really need to convert. But I just need the OGG encoder. Here's the WAV encoder from [Matt Diamond's RecorderJS](https://github.com/mattdiamond/Recorderjs): ``` function encodeWAV(samples){ var buffer = new ArrayBuffer(44 + samples.length * 2); var view = new DataView(buffer); /* RIFF identifier */ writeString(view, 0, 'RIFF'); /* file length */ view.setUint32(4, 32 + samples.length * 2, true); /* RIFF type */ writeString(view, 8, 'WAVE'); /* format chunk identifier */ writeString(view, 12, 'fmt '); /* format chunk length */ view.setUint32(16, 16, true); /* sample format (raw) */ view.setUint16(20, 1, true); /* channel count */ view.setUint16(22, 2, true); /* sample rate */ view.setUint32(24, sampleRate, true); /* byte rate (sample rate * block align) */ view.setUint32(28, sampleRate * 4, true); /* block align (channel count * bytes per sample) */ view.setUint16(32, 4, true); /* bits per sample */ view.setUint16(34, 16, true); /* data chunk identifier */ writeString(view, 36, 'data'); /* data chunk length */ view.setUint32(40, samples.length * 2, true); floatTo16BitPCM(view, 44, samples); return view; } ``` is there one for OGG?
The [Web Audio spec](http://www.w3.org/TR/webaudio/) is actually intended to allow exactly this kind of functionality, but is just not close to fulfilling that purpose yet: > > This specification describes a high-level JavaScript API for processing and synthesizing audio in web applications. The primary paradigm is of an audio routing graph, where a number of AudioNode objects are connected together to define the overall audio rendering. The actual processing will primarily take place in the underlying implementation (typically optimized Assembly / C / C++ code), but direct ***JavaScript processing and synthesis is also supported***. > > > Here's a statement on the current w3c [audio spec draft](http://www.w3.org/TR/webaudio/#JavaScriptPerformance-section), which makes the following points: - While processing audio in JavaScript, it is extremely challenging to get reliable, glitch-free audio while achieving a reasonably low-latency, especially under heavy processor load. - JavaScript is very much slower than heavily optimized C++ code and is not able to take advantage of SSE optimizations and multi-threading which is critical for getting good performance on today's processors. Optimized native code can be on the order of twenty times faster for processing FFTs as compared with JavaScript. It is not efficient enough for heavy-duty processing of audio such as convolution and 3D spatialization of large numbers of audio sources. - setInterval() and XHR handling will steal time from the audio processing. In a reasonably complex game, some JavaScript resources will be needed for game physics and graphics. This creates challenges because audio rendering is deadline driven (to avoid glitches and get low enough latency). JavaScript does not run in a real-time processing thread and thus can be pre-empted by many other threads running on the system. - Garbage Collection (and autorelease pools on Mac OS X) can cause unpredictable delay on a JavaScript thread. - Multiple JavaScript contexts can be running on the main thread, stealing time from the context doing the processing. - Other code (other than JavaScript) such as page rendering runs on the main thread. - Locks can be taken and memory is allocated on the JavaScript thread. This can cause additional thread preemption. - The problems are even more difficult with today's generation of mobile devices which have processors with relatively poor performance and power consumption / battery-life issues. ECMAScript (js) is really fast for a lot of things, and is getting faster all the time depending on what engine is interpreting the code. For something as intensive as audio processing however, you would be much better off using a low-level tool that's compiled to optimize resources specific to the task. I'm currently using [ffmpeg on the server](https://stackoverflow.com/q/17643261/418584) side to accomplish something similar. I know that it is really inefficient to have to send a wav file across an internet connection just to obtain a more compact .ogg file, but that's the current state of things with the web audio api. To do any client-side processing the user would have to explicitly give access to the local file system and execution privileges for the file to make the conversion. Edit: You could also use Google's [native-client](https://developers.google.com/native-client/quick-start) if you don't mind limiting your users to Chrome. 
It seems like a very promising technology that loads in a sandbox and achieves speeds nearly as good as natively executed code. I'm assuming that there will be similar implementations in other browsers at some point.
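Going back to the ffmpeg route mentioned above, here is a rough sketch of what the server-side conversion step could look like. This assumes a Node.js server and an ffmpeg binary installed on that server, neither of which the answer above prescribes; the file names are made up:

```
// Sketch only: convert an uploaded WAV file to Ogg/Vorbis by shelling out to ffmpeg.
var execFile = require('child_process').execFile;

function wavToOgg(wavPath, oggPath, done) {
  // -y                : overwrite the output file if it already exists
  // -i <input>        : the uploaded WAV file
  // -acodec libvorbis : encode the audio as Vorbis in an Ogg container
  execFile('ffmpeg', ['-y', '-i', wavPath, '-acodec', 'libvorbis', oggPath], function (err) {
    done(err, oggPath);
  });
}

wavToOgg('upload.wav', 'upload.ogg', function (err, path) {
  if (err) { console.error(err); return; }
  console.log('converted to ' + path);
});
```

The browser still has to upload the (large) WAV blob; the conversion itself happens entirely on the server.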
Raise an exception if class attribute is set to a certain value after instantiation I have a class which gets created through a constructor at runtime. Is there a way that I can prevent some of its attributes from being set to certain values after construction? For example, in the code below, I'd like to raise an exception if currRating is set to a value greater than maxRating or lower than minRating. I can do that easily during instantiation; however, I am looking to do this after the class has already been constructed.

```
class Machine(object):
    def __init__(self, name, maxRating, minRating, currRating=1):
        self.name = name
        self.maxRating = maxRating
        self.minRating = minRating
        self.currRating = currRating
```

*I read through some of the previous replies on StackOverflow and understand that this might be a touchy topic for Python veterans. I am just interested in knowing if there is a non-hackish way of achieving this in Python, or if this is one of those things that is not meant to be done in Python.*
You can achieve this using the built-in function [property](https://docs.python.org/2/library/functions.html#property). Essentially, you define a 'setter' for `cur_rating`, so that whenever the attribute is assigned (`obj.cur_rating = ...`) you can check that the new value lies between min and max. For example:

```
class Machine(object):
    def __init__(self, name, max_rating, min_rating, cur_rating=1):
        self.name = name
        self.max_rating = max_rating
        self.min_rating = min_rating
        self._cur_rating = cur_rating

    @property
    def cur_rating(self):
        return self._cur_rating

    @cur_rating.setter
    def cur_rating(self, value):
        if value > self.max_rating or value < self.min_rating:
            raise Exception()
        self._cur_rating = value
```

You still set the attribute the usual way, e.g. `obj.cur_rating = 10`; the setter runs automatically.
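For example, with the class above the range check now runs on every assignment. A minimal usage sketch (the machine name is just illustrative):

```
m = Machine("press", 10, 1)

m.cur_rating = 5       # accepted: 1 <= 5 <= 10
print(m.cur_rating)    # -> 5

try:
    m.cur_rating = 42  # rejected: outside [1, 10]
except Exception:
    print("out-of-range rating rejected")
```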
Are WPF and MVC the same concepts? I am new to both concepts. 1) Are MVC and WPF the same concept, just with WPF for desktop applications while the other is for the web? 2) Will it be easy to learn the other one if I learn one of them?
I think you probably meant "ASP.NET MVC" technology when you were talking about "MVC" (based on the tags of your question). Anyway, here are a few points that may clarify what is going on:

- **ASP.NET MVC** is a technology for developing web applications based on the [model-view-controller (MVC)](http://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller) pattern. You can twist it a little bit, but the framework is specifically designed to work with this pattern.
- **WPF** is a technology for developing Windows desktop applications. You can use various design patterns when writing WPF applications, but the most popular one these days is called [model-view-viewmodel (MVVM)](http://en.wikipedia.org/wiki/Model_View_ViewModel); a small view-model sketch follows below. You could also use the MVC pattern (organization of components) when writing WPF applications, but that wouldn't work as nicely as more native approaches.

So, regarding your questions:

- They are not the same concepts - the technologies are different and the usual patterns (ways of organizing code) used with the two technologies also differ.
- Learning one technology may make it slightly easier to understand the other one, because they are both .NET GUI frameworks and share some concepts. However, I don't think this will help a lot.
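To make the MVVM idea a bit more concrete, a view-model in WPF is typically just a plain class that exposes bindable properties and raises change notifications. A minimal sketch (the class and property names are made up, not from any particular project):

```
using System.ComponentModel;

// Minimal WPF view-model sketch. A view would bind to Title in XAML,
// e.g. <TextBlock Text="{Binding Title}" />.
public class MainViewModel : INotifyPropertyChanged
{
    private string _title = "Hello MVVM";

    public string Title
    {
        get { return _title; }
        set
        {
            if (_title == value) return;
            _title = value;
            OnPropertyChanged("Title");   // tell the bound view the value changed
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

The point is that the view-model knows nothing about the window that displays it, which is what makes the pattern testable and quite different from how an ASP.NET MVC controller is wired up.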
Using JUnit categories vs simply organizing tests in separate classes I have two logical categories of tests: plain functional unit tests (pass/fail) and benchmark performance tests that are just for metrics/diagnostics. Currently, I have all test methods in a single class, call it `MyTests`:

```
public class MyTests {

    @Test
    public void testUnit1() {
        ...
        assertTrue(someBool);
    }

    @Test
    public void testUnit2() {
        ...
        assertFalse(someBool);
    }

    @Test
    @Category(PerformanceTest.class)
    public void bmrkPerfTest1() {
        ...
    }

    @Test
    @Category(PerformanceTest.class)
    public void bmrkPerfTest2() {
        ...
    }
}
```

Then I have a `UnitTestSuite` defined as

```
@RunWith(Categories.class)
@Categories.ExcludeCategory(PerformanceTest.class)
@SuiteClasses({ MyTests.class })
public class UnitTestSuite {}
```

and a `PerformanceTestSuite`

```
@RunWith(Categories.class)
@Categories.IncludeCategory(PerformanceTest.class)
@SuiteClasses({ MyTests.class })
public class PerformanceTestSuite {}
```

so that I can run the unit tests in `Ant` separately from the performance tests (I don't think including the Ant code is necessary). This means I have a total of FOUR classes (MyTests, PerformanceTest, PerformanceTestSuite, and UnitTestSuite). I realize I could have just put all the unit tests in one class and the benchmark tests in another class and be done with it, without the additional complexity of categories and extra annotations. I call tests by class name in Ant, i.e. I don't run all tests in a package. Does it make sense to keep it organized by category with the annotations (and what are the reasons to do so), or would it be better if I just refactored it into two simple test classes?
To the question of whether to split the tests in two classes: As they are clearly *very different kinds of tests* (unit tests and performance tests), I would put them in different classes in any case, for that reason alone. Some further musings: I don't think using `@Category` annotations is a bad idea however. What I'd do, in a more typical project with tens or hundreds of classes containing tests, is annotate the test *classes* (instead of methods) with `@Category`, then [use the ClassPathSuite library](https://stackoverflow.com/a/2176791/56285) to avoid duplicated efforts of categorising the tests. (And maybe [run the tests by category using Ant](https://stackoverflow.com/questions/6226026/how-to-run-all-junit-tests-in-a-category-suite-with-ant?lq=1).) If you will only ever have the two test classes, it of course doesn't matter much. You can keep the Categories and Suites, or throw them away (as you said the tests are run by class name in Ant) if having the extra classes bugs you. I'd keep them, and move towards the scenario described above, as usually (in a healthy project) more tests will accumulate over time. :-)
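As an illustration of the class-level tagging (a sketch only; the class name is made up, and it reuses the `PerformanceTest` marker interface you already have):

```
import org.junit.Test;
import org.junit.experimental.categories.Category;

// The whole class is tagged, so no per-method @Category annotations are needed.
@Category(PerformanceTest.class)
public class LicensePerformanceTest {

    @Test
    public void bmrkPerfTest1() {
        // benchmark code ...
    }

    @Test
    public void bmrkPerfTest2() {
        // benchmark code ...
    }
}
```

Your existing `PerformanceTestSuite` and `UnitTestSuite` keep working unchanged, since the `Categories` runner honours class-level annotations as well; with ClassPathSuite you could additionally drop the explicit `@SuiteClasses` list and let the suite discover the tagged classes from the classpath.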
word\_tokenize TypeError: expected string or buffer When calling `word_tokenize` I get the following error: ``` File "C:\Python34\lib\site-packages\nltk\tokenize\punkt.py", line 1322, in _slices_from_text for match in self._lang_vars.period_context_re().finditer(text): TypeError: expected string or buffer ``` I have a large text file (1500.txt) from which I want to remove stop words. My code is as follows: ``` from nltk.corpus import stopwords from nltk.tokenize import word_tokenize with open('E:\\Book\\1500.txt', "r", encoding='ISO-8859-1') as File_1500: stop_words = set(stopwords.words("english")) words = word_tokenize(File_1500) filtered_sentence = [w for w in words if not w in stop_words] print(filtered_sentence) ```
The input to `word_tokenize` is a string containing a sentence, e.g. `"this is sentence 1."`, not a file object. `File_1500` is a `File` object, not a string, which is why it's not working. To get the sentence strings, first read the file into a single string with `fin.read()`, then use `sent_tokenize` to split it into sentences (I'm assuming that your input file is not already sentence-tokenized, just a raw text file), and tokenize each sentence with `word_tokenize`. Also, it's better / more idiomatic to tokenize a file this way with NLTK:

```
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize, sent_tokenize

stop_words = set(stopwords.words("english"))

with open('E:\\Book\\1500.txt', "r", encoding='ISO-8859-1') as fin:
    for sent in sent_tokenize(fin.read()):
        words = word_tokenize(sent)
        filtered_sentence = [w for w in words if w not in stop_words]
        print(filtered_sentence)
```
Data warehouse - Dimension Modeling I am new to BI/Datawarehousing, and after building some easy samples, I have the need to build a more complex structure. My project initially involved product licenses, and I was measuring how many were sold, by month/year and by program, just counting the number of licenses. Now the requirement is to introduce jump-offs from those metrics. That is, when you come to a certain group of licenses, they want to see a whole different set of metrics for those. For example, if 100 licenses were sold in March 2011, how many of them installed, activated and cancelled the product (we track that info, but not in the DW). So, I am looking for the best way to do this... I assume the first thing I have to do is add three dimensions for installed, activated and cancelled - and have three fact tables? Or have one fact table with each license, and have a row for cancelled, installed or activated? (so one license may be repeated). Or have one fact table, with different fields for installed, cancelled, activated? Also, how do you relate one fact table to another? Is it through dimensions, or can they be related in some other way? Any help would be much appreciated! EDIT: Thanks for the post... I was also thinking the second option is probably the correct one. But in this implementation, I have a unique problem. One of the facts that is measured is the number of licenses that are sold - by date, of course. Let's say I add a row for cancelled, installed, activated. The requirement is for them to be able to see a connected fact. For example, if I add individual rows, given a timeframe, I can tell how many were sold and how many were installed. But they want to see, given a timeframe, how many were bought and, out of those, how many were installed. E.g., if the timeframe is March and 100 were sold in March, out of those 100, how many were installed - even though they could have been installed much later than March, and therefore the row date would not be in the timeframe they are looking at... Is this a common problem? How is it solved?
> 
> I assume the first thing I have to do is add three dimensions for installed, activated and cancelled - and have three fact tables?
> 
> 

Not really. A license sale is a fact. It has a price. A license sale has dimensions like date, product, customer and program. An "installation" or "activation" is a state-change event of a license. You have "events" for each license (sale, install, activate, etc.), so a license has a "sale" fact, an "installation" fact and an "activation" fact, each of which is (minimally) a relationship with time.

> 
> Or have one fact table with each license, and have a row for cancelled, installed or activated? (so one license may be repeated).
> 
> 

This gives the most flexibility, because each event can be rich with multiple dimensions. A sequence of events can then be organized to provide the history of a license. This works out very well. You will often want to create summary tables for simple counts and sums to save having to traverse all events for the most common dashboard metrics.

> 
> The requirement is for them to be able to see a connected fact.
> 
> 

Right. You're joining several rows from the fact table together. A row where the event was sold, outer joined with a row where the event was installed, outer joined with a row where the event was activated, etc. It's just outer joins among the facts.

So. Count of sales in March is easy. Event = "Sale". Time is all the rows where time.month = "march". Easy.

Count of sales in March which became installs. Same "March sales" where clause outer joined with all "install" events for those licenses. Count of "sales" is the same as count(*). Count of installs may be smaller because the outer join puts in some nulls.

Count of sales in March which became activations. The "March sales" where clause outer joined with all "activation" events. Note that the activation has no date constraint.

> 
> Or have one fact table, with different fields for installed, cancelled, activated?
> 
> 

This doesn't work out as well because the table's columns dictate a business process. That business process might change and you'll be endlessly tweaking the columns in the fact table.

Saying it doesn't work out "as well" means it doesn't give ultimate flexibility. In some cases, you don't need ultimate flexibility. In some cases, the industry (or regulations) may define a structure that's quite fixed.

> 
> Also, how do you relate one fact table to another? Is it through dimensions, or can they be related in some other way?
> 
> 

Dimensions, by definition. A fact table only has two things -- measurements and FKs to dimensions. Some dimensions (like "license instance") are degenerate because the dimension may have almost no usable attributes other than a PK. So you have a "sold" fact that ties to a license, an optional "installed" fact that ties to a license and an optional "activate" fact that ties to a license. The license is an object ID (the database surrogate key) and -- perhaps -- the license identifier itself (maybe a license serial number or something outside the database).

Please buy Ralph Kimball's Data Warehouse Toolkit before doing anything more.
Hiding the process window, why isn't it working? I have tried several things now to hide the window of a new process (in this case it's just notepad.exe for testing), but it just won't work regardless of what I try. I have read many posts now all saying the same, so why isn't it working for me? I have a console app that is supposed to launch other processes without showing their windows. I have tried to make my console app launch notepad.exe without a window, but it just won't work.

```
ProcessStartInfo info = new ProcessStartInfo("path to notepad.exe");
info.RedirectStandardOutput = true;
info.RedirectStandardError = true;
info.CreateNoWindow = true;
info.UseShellExecute = false;

Process proc = Process.Start(info);
```

I have also tried using various settings for info.WindowStyle, and I have tried to configure my console app to be a Windows application, but it doesn't really matter what I do: the child process always opens a window. Is this not allowed from a console app, or what is the problem here - can anyone shed some light on this? I'm using .NET 4.0 on Windows 7 x64
In my experience, the following works whenever I fire up "cmd.exe". ``` info.CreateNoWindow = true; info.UseShellExecute = false; ``` It doesn't seem to work with "notepad.exe". It fails with other apps too, like "excel.exe" and "winword.exe". This works, however: ``` ProcessStartInfo info = new ProcessStartInfo("notepad.exe"); info.WindowStyle = ProcessWindowStyle.Hidden; Process proc = Process.Start(info); ``` From [MSDN](http://msdn.microsoft.com/en-us/library/system.diagnostics.processwindowstyle%28v=vs.110%29.aspx): > > A window can be either visible or hidden. The system displays a hidden window by not drawing it. If a window is hidden, it is effectively disabled. A hidden window can process messages from the system or from other windows, but it cannot process input from the user or display output. Frequently, an application may keep a new window hidden while it customizes the window's appearance, and then make the window style **Normal**. To use ProcessWindowStyle.Hidden, the [ProcessStartInfo.UseShellExecute](http://msdn.microsoft.com/en-us/library/system.diagnostics.processstartinfo.useshellexecute%28v=vs.110%29.aspx) property must be **false**. > > > When I tested it, I didn't have to set `UseShellExecute = false`.
Is there a difference between Run As: Spring Boot App and Run As: Java Application? If I am using Spring Tool Suite or the Spring IDE plugin for Eclipse, I can run a Spring Boot app in 2 ways:

```
Run As:
    Spring Boot App
    Java Application
```

[![enter image description here](https://i.stack.imgur.com/H2l72.png)](https://i.stack.imgur.com/H2l72.png)

Both of these commands work and can fire up my Spring Boot app without an issue. However, I wanted to understand the difference between the two processes. Is there actually a difference between them, or do they work identically?
There are a couple of differences, as someone already hinted in a comment. [This article](https://spring.io/blog/2015/03/18/spring-boot-support-in-spring-tool-suite-3-6-4) explains that you get some extra 'bells and whistles' in the launch configuration editor.

A second and perhaps more important difference is that since Boot 1.3 there is a JMX bean provided by the Spring Boot app that allows STS to ask the app nicely to shut down. When you terminate the app from the IDE, for example by clicking the stop / restart button, STS uses this JMX bean to ask the Boot app to shut down. This is a feature implemented in the "Run As Boot App" launcher, and so it doesn't take effect if you use "Run As Java App". The Java launcher simply terminates the process associated with the launch using Java's [Process.destroy()](https://docs.oracle.com/javase/7/docs/api/java/lang/Process.html#destroy%28%29) method. This is a more 'aggressive' way to kill the associated process and may not allow the app to clean up properly, for example by cleanly closing database connections.

So... in summary you get two things:

1. Some extra Boot-specific UI in the launch configuration editor
2. Graceful process termination for Boot 1.3 and later.