[ "stackoverflow", "0028081692.txt" ]
Q: Enable button only if both TextViews are non-empty in Android

I'm doing a sign-in page with two TextViews and one Button. By default, the sign-in button (btnSignin) is disabled. The button should be enabled only if both TextViews are non-empty. I tried this code, and it works partially, but the button reacts only to the first field (txtId). If the second field is changed after the first, nothing happens. (I omitted two @Override methods here for convenience.)

```java
usernameTxt = (EditText) findViewById(R.id.txtId);
passwordTxt = (EditText) findViewById(R.id.txtPassword);
final Button signinBtn = (Button) findViewById(R.id.btnSignin);

usernameTxt.addTextChangedListener(new TextWatcher() {
    @Override
    public void onTextChanged(CharSequence s, int start, int before, int count) {
        signinBtn.setEnabled(
            (!usernameTxt.getText().toString().trim().isEmpty())
            & (!passwordTxt.getText().toString().trim().isEmpty()));
    }
});
```

A: Use the logical `&&` instead of the bitwise `&`, and attach the check to both fields:

```java
usernameTxt = (EditText) findViewById(R.id.txtId);
passwordTxt = (EditText) findViewById(R.id.txtPassword);
final Button signinBtn = (Button) findViewById(R.id.btnSignin);

usernameTxt.addTextChangedListener(new TextWatcher() {
    @Override
    public void onTextChanged(CharSequence s, int start, int before, int count) {
        signinBtn.setEnabled(
            (!usernameTxt.getText().toString().trim().isEmpty())
            && (!passwordTxt.getText().toString().trim().isEmpty()));
    }
});
```

And also for the password field:

```java
passwordTxt.addTextChangedListener(new TextWatcher() {
    @Override
    public void onTextChanged(CharSequence s, int start, int before, int count) {
        signinBtn.setEnabled(
            (!usernameTxt.getText().toString().trim().isEmpty())
            && (!passwordTxt.getText().toString().trim().isEmpty()));
    }
});
```
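Since both listeners run the identical check, the duplication can be avoided by sharing a single watcher. A minimal sketch (the TextWatcher interface also requires the two callbacks the post omitted, so they are stubbed here):

```java
TextWatcher watcher = new TextWatcher() {
    @Override
    public void beforeTextChanged(CharSequence s, int start, int count, int after) {}

    @Override
    public void onTextChanged(CharSequence s, int start, int before, int count) {
        // enable the button only when neither field is blank
        signinBtn.setEnabled(
            !usernameTxt.getText().toString().trim().isEmpty()
            && !passwordTxt.getText().toString().trim().isEmpty());
    }

    @Override
    public void afterTextChanged(Editable s) {}
};
usernameTxt.addTextChangedListener(watcher);
passwordTxt.addTextChangedListener(watcher);
```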
[ "stackoverflow", "0005445711.txt" ]
Q: Flex - error 1046 - some .as files don't get imported

I received a Flex project, and when trying to compile it I get a few 1046 errors that say the type was not found or was not a compile-time constant: MyClass. However, the respective files are listed at the top of the file in an import clause like this:

```actionscript
import com.folder1.folder2.folder3.MyClass;
```

and if I check the folder structure, MyClass.as is there. However, if I type this same line and check at each `.` what the autocompletion suggests, I see only a subset of the .as classes that are actually there on the hard disk. What determines which classes and folders are suggested by the autocompletion function? I don't get any compile error on the corresponding import statements that import MyClass.

Edit: Screenshot 1 shows the file in which the error occurs, which tries to import the class in question (Updater): http://neo.cycovery.com/flex_problem.gif
Screenshot 2 shows the file Updater.as: http://neo.cycovery.com/flex_problem2.gif
The censored part of the path matches in both cases (folder structure and package statement in Updater.as).
Screenshot 3 shows where the error actually happens: http://neo.cycovery.com/flex_problem3.gif
Interestingly, the variable declaration `private var _updater:Updater = new Updater();` further up in the file does not give an error.

A: This project is set up wrong. It's obvious your application cannot find the classes. Move your "com" folder and all of its contents into your "src" folder. Or perhaps include the files in your source path: right-click on the project name -> Properties -> Flex Build Path -> add folder.
[ "stackoverflow", "0061263032.txt" ]
Q: Triangularizing a list in Haskell

I'm interested in writing an efficient Haskell function `triangularize :: [a] -> [[a]]` that takes a (perhaps infinite) list and "triangularizes" it into a list of lists. For example, `triangularize [1..19]` should return

```haskell
[[1, 3, 6, 10, 15]
,[2, 5, 9, 14]
,[4, 8, 13, 19]
,[7, 12, 18]
,[11, 17]
,[16]]
```

By efficient, I mean that I want it to run in O(n) time where n is the length of the list. Note that this is quite easy to do in a language like Python, because appending to the end of a list (array) is a constant-time operation. A very imperative Python function which accomplishes this is:

```python
def triangularize(elements):
    row_index = 0
    column_index = 0
    diagonal_array = []
    for a in elements:
        if row_index == len(diagonal_array):
            diagonal_array.append([a])
        else:
            diagonal_array[row_index].append(a)
        if row_index == 0:
            (row_index, column_index) = (column_index + 1, 0)
        else:
            row_index -= 1
            column_index += 1
    return diagonal_array
```

This came up because I have been using Haskell to write some "tabl" sequences in the On-Line Encyclopedia of Integer Sequences (OEIS), and I want to be able to transform an ordinary (1-dimensional) sequence into a (2-dimensional) sequence of sequences in exactly this way. Perhaps there's some clever (or not-so-clever) way to foldr over the input list, but I haven't been able to sort it out.

A: Make increasing-size chunks:

```haskell
chunks :: [a] -> [[a]]
chunks = go 0 where
  go n [] = []
  go n as = b : go (n+1) e where (b,e) = splitAt n as
```

Then just transpose twice:

```haskell
diagonalize :: [a] -> [[a]]
diagonalize = transpose . transpose . chunks
```

Try it in ghci:

```
> diagonalize [1..19]
[[1,3,6,10,15],[2,5,9,14],[4,8,13,19],[7,12,18],[11,17],[16]]
```

A: This appears to be directly related to the set-theory argument proving that the set of integer pairs is in one-to-one correspondence with the set of integers (denumerable). The argument involves a so-called Cantor pairing function. So, out of curiosity, let's see if we can get a diagonalize function that way. Define the infinite list of Cantor pairs recursively in Haskell:

```haskell
auxCantorPairList :: (Integer, Integer) -> [(Integer, Integer)]
auxCantorPairList (x,y) =
  let nextPair = if (x > 0) then (x-1,y+1) else (x+y+1, 0)
  in (x,y) : auxCantorPairList nextPair

cantorPairList :: [(Integer, Integer)]
cantorPairList = auxCantorPairList (0,0)
```

And try that inside ghci:

```
λ> take 15 cantorPairList
[(0,0),(1,0),(0,1),(2,0),(1,1),(0,2),(3,0),(2,1),(1,2),(0,3),(4,0),(3,1),(2,2),(1,3),(0,4)]
```

We can number the pairs, and for example extract the numbers for those pairs which have a zero x coordinate:

```
λ> xs = [1..]
λ> take 5 $ map fst $ filter (\(n,(x,y)) -> (x==0)) $ zip xs cantorPairList
[1,3,6,10,15]
```

We recognize this is the top row from the OP's result in the text of the question. Similarly for the next two rows:

```
λ> makeRow xs row = map fst $ filter (\(n,(x,y)) -> (x==row)) $ zip xs cantorPairList
λ> take 5 $ makeRow xs 1
[2,5,9,14,20]
λ> take 5 $ makeRow xs 2
[4,8,13,19,26]
```

From there, we can write our first draft of a diagonalize function:

```
λ> printAsLines xs = mapM_ (putStrLn . show) xs
λ> diagonalize xs = takeWhile (not . null) $ map (makeRow xs) [0..]
λ> printAsLines $ diagonalize [1..19]
[1,3,6,10,15]
[2,5,9,14]
[4,8,13,19]
[7,12,18]
[11,17]
[16]
```

EDIT: performance update

For a list of 1 million items, the runtime is 18 seconds, and 145 seconds for 4 million items. As mentioned by Redu, this seems like O(n√n) complexity.

Distributing the pairs among the various target sublists is inefficient, as most filter operations fail. To improve performance, we can use a Data.Map structure for the target sublists.

```haskell
{-# LANGUAGE ExplicitForAll #-}
{-# LANGUAGE ScopedTypeVariables #-}

import qualified Data.List as L
import qualified Data.Map as M

type MIL a = M.Map Integer [a]

buildCantorMap :: forall a. [a] -> MIL a
buildCantorMap xs =
  let ts = zip xs cantorPairList -- triplets (a,(x,y))
      m0 = (M.fromList [])::MIL a
      redOp m (n,(x,y)) = let afn as = case as of
                                         Nothing  -> Just [n]
                                         Just jas -> Just (n:jas)
                          in M.alter afn x m
      m1r = L.foldl' redOp m0 ts
  in fmap reverse m1r

diagonalize :: [a] -> [[a]]
diagonalize xs = let cm = buildCantorMap xs
                 in map snd $ M.toAscList cm
```

With that second version, performance appears to be much better: 568 msec for the 1 million item list, 2669 msec for the 4 million item list. So it is close to the O(n*log(n)) complexity we could have hoped for.

A: It might be a good idea to create a comb filter. So what does a comb filter do? It's like splitAt, but instead of splitting at a single index it sort of zips the given infinite list with the given comb to separate the items corresponding to True and False in the comb. Such that:

```haskell
comb :: [Bool] -- yields [True,False,True,False,False,True,False,False,False,True...]
comb = iterate (False:) [True] >>= id

combWith :: [Bool] -> [a] -> ([a],[a])
combWith _ []          = ([],[])
combWith (c:cs) (x:xs) = let (f,s) = combWith cs xs
                         in if c then (x:f,s) else (f,x:s)
```

```
λ> combWith comb [1..19]
([1,3,6,10,15],[2,4,5,7,8,9,11,12,13,14,16,17,18,19])
```

Now all we need to do is to comb our infinite list, take the fst as the first row, and carry on combing the snd with the same comb. Let's do it:

```haskell
diags :: [a] -> [[a]]
diags [] = []
diags xs = let (h,t) = combWith comb xs
           in h : diags t
```

```
λ> diags [1..19]
[ [1,3,6,10,15]
, [2,5,9,14]
, [4,8,13,19]
, [7,12,18]
, [11,17]
, [16]
]
```

It also seems to be lazy too :)

```
λ> take 5 . map (take 5) $ diags [1..]
[ [1,3,6,10,15]
, [2,5,9,14,20]
, [4,8,13,19,26]
, [7,12,18,25,33]
, [11,17,24,32,41]
]
```

I think the complexity could be like O(n√n), but I cannot make sure. Any ideas?
[ "stackoverflow", "0049230333.txt" ]
Q: How to create my own custom domain name for a local network

I want to host a website on the local network (it is not accessed from outside the network). I installed the Apache server. How can I create my own domain name for it (I don't want to buy a domain name)? This is my virtual host file:

```apache
ServerName testing.com
ServerAlias www.testing.com
DocumentRoot /var/www/testing.com/public_html
```

It only works when I put localhost as the ServerName.

A: Add the domain to your hosts file:

```
127.0.0.1   somedomain.com
```
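The 127.0.0.1 entry only works on the server itself. For other machines on the LAN to resolve the name, each client needs the same mapping pointed at the server's LAN address (or you can run a small local DNS server such as dnsmasq). A sketch, assuming the server sits at 192.168.1.10; the address is a placeholder for your own:

```
# /etc/hosts on each client (C:\Windows\System32\drivers\etc\hosts on Windows)
192.168.1.10   testing.com www.testing.com
```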
[ "stackoverflow", "0021068256.txt" ]
Q: Replacing zeros (or NaNs) in a matrix with the previous element row-wise or column-wise in a fully vectorized way

I need to replace the zeros (or NaNs) in a matrix with the previous element row-wise, so basically I need this matrix X

```matlab
[0,1,2,2,1,0;
 5,6,3,0,0,2;
 0,0,1,1,0,1]
```

to become like this:

```matlab
[0,1,2,2,1,1;
 5,6,3,3,3,2;
 0,0,1,1,1,1]
```

Please note that if the first row element is zero, it will stay like that. I know that this has been solved for a single row or column vector in a vectorized way, and this is one of the nicest ways of doing that:

```matlab
id = find(X);
X(id(2:end)) = diff(X(id));
Y = cumsum(X)
```

The problem is that the indexing of a matrix in Matlab/Octave is consecutive and increments column-wise, so it works for a single row or column, but the same exact concept cannot be applied directly and needs to be modified for multiple rows, because each row/column starts fresh and must be regarded as independent. I've tried my best and googled the whole of Google but couldn't find a way out. If I apply that same idea in a loop it gets too slow, because my matrices contain at least 3000 rows. Can anyone help me out of this, please?

A: Modified version of Eitan's answer to avoid propagating values across rows:

```matlab
Y = X'; %'
tf = Y > 0;
tf(1,:) = true;
idx = find(tf);
Y(idx(2:end)) = diff(Y(idx));
Y = reshape(cumsum(Y(:)),fliplr(size(X)))';
```
[ "stackoverflow", "0039779027.txt" ]
Q: Using Get-Content inside Get-Service

I have a list of servers (server.txt), and I want to get the service status of "Nimsoft Robot Watcher" for all the servers. I tried this code:

```powershell
$text1 = Get-Content server.txt
foreach ($text1 in Get-Content server.txt) {
    $text = $text1
}
Get-Service -ComputerName $text -DisplayName "Nimsoft Robot Watcher"
```

But only the status for the first server gets displayed.

A: The code you currently have will replace the value of $text with each iteration, leaving you with just the last name after the loop completes. Either put Get-Service inside the loop:

```powershell
foreach ($text in Get-Content server.txt) {
    Get-Service -ComputerName $text -DisplayName "Nimsoft Robot Watcher"
}
```

or collect the hosts in an array and pass that to Get-Service:

```powershell
$text = Get-Content server.txt
Get-Service -ComputerName $text -DisplayName "Nimsoft Robot Watcher"
```
[ "stackoverflow", "0030381427.txt" ]
Q: How do I drop rows from a Pandas dataframe based on data in multiple columns?

I know how to delete rows based on simple criteria like in this Stack Overflow question; however, I need to delete rows using more complex criteria. My situation: I have rows of data where each row has four columns containing numeric codes. I need to drop all rows that don't have at least one code with a leading digit of less than 5. I've currently got a function that I can use with dataframe.apply that creates a new column, 'keep', and populates it with 1 if it is a row to keep. I then do a second pass using that simple keep column to delete unwanted rows. What I'm looking for is a way to do this in a single pass without having to create a new column.

Example data:

```
    a  |  b  |  c  |  d
0  145 | 567 | 999 | 876
1  999 | 876 | 543 | 543
```

In that data I would like to keep the first row because in column 'a' the leading digit is less than 5. The second row has no columns with a leading digit of less than 5, so that row needs to be dropped.

A: This should work:

```python
In [31]: df[(df.apply(lambda x: x.str[0].astype(int))).lt(5).any(axis=1)]
Out[31]:
     a    b    c    d
0  145  567  999  876
```

So basically this takes the first character of each column using the vectorised str method, we cast this to an int, we then call lt (less than) row-wise to produce a boolean df, we then call any on the df row-wise to produce a boolean mask on the index which we use to mask the df. So breaking the above down:

```python
In [34]: df.apply(lambda x: x.str[0].astype(int))
Out[34]:
   a  b  c  d
0  1  5  9  8
1  9  8  5  5

In [35]: df.apply(lambda x: x.str[0].astype(int)).lt(5)
Out[35]:
       a      b      c      d
0   True  False  False  False
1  False  False  False  False

In [37]: df.apply(lambda x: x.str[0].astype(int)).lt(5).any(axis=1)
Out[37]:
0     True
1    False
dtype: bool
```

EDIT

To handle NaN values you add a call to dropna:

```python
In [39]: t="""a,b,c,d
0,145,567,999,876
1,999,876,543,543
2,,324,344"""
df = pd.read_csv(io.StringIO(t), dtype=str)
df
Out[39]:
     a    b    c    d
0  145  567  999  876
1  999  876  543  543
2  NaN  324  344  NaN

In [44]: df[(df.apply(lambda x: x.dropna().str[0].astype(int))).lt(5,axis=0).any(axis=1)]
Out[44]:
     a    b    c    d
0  145  567  999  876
2  NaN  324  344  NaN
```
[ "stackoverflow", "0007594740.txt" ]
Q: Convert objects to JSON and send via jQuery ajax

I have an object:

```javascript
var myobject = {first: 1, second: {test: 90}, third: [10, 20]};
```

and I want to send it as a JSON string via jQuery ajax. How can I do it? (I tested JSON.stringify(), but it doesn't work in IE.) Thanks.

A: If you specify your myobject as the data parameter to the jQuery .ajax() method, it will automatically convert it to a query string, which I believe is what you want. e.g.

```javascript
$.ajax({
    url: /* ... */,
    data: myobject,
    /* other settings/callbacks */
})
```

From the docs:

data: Data to be sent to the server. It is converted to a query string, if not already a string. It's appended to the url for GET-requests. See the processData option to prevent this automatic processing. Object must be Key/Value pairs.
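If an actual JSON body (rather than a query string) is required, JSON.stringify can still be used on old IE versions by loading Douglas Crockford's json2.js polyfill first. A sketch; the URL is a placeholder:

```javascript
// json2.js defines JSON.stringify/JSON.parse on browsers that lack them
$.ajax({
    url: '/your/endpoint',            // assumption: your server URL
    type: 'POST',
    contentType: 'application/json',  // tell the server the body is JSON
    data: JSON.stringify(myobject),
    success: function (response) { /* ... */ }
});
```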
[ "sound.stackexchange", "0000033050.txt" ]
Q: Do studio monitors need an audio interface?

I bought 2 Pioneer studio monitors (sdj-80X) but I don't know if I need an audio interface to get the best sound. I was told that I could connect them with just a jack-to-mini-jack connector. Is that a good idea, or should I use an audio interface?

A: Several points are to be considered.

PC output vs dedicated audio interface output: A PC will output consumer line level. A dedicated interface will output either consumer line level or professional line level, or both. The quality of a dedicated interface will probably be better than the default sound chipset of a consumer PC. The difference might not be easy to hear, depending on what you're listening to and your ear training (besides the quality of the loudspeaker).

Jack 3.5 vs XLR vs RCA: Your monitors have balanced (XLR or TRS/jack) and unbalanced (RCA) inputs. The output of a PC will be unbalanced, so you should use a stereo 3.5 mm jack to two RCAs in this context. If you use an external sound card, it might have balanced outputs. Balanced outputs offer better protection against induced interference and allow for longer cables between the output and the monitors. 3.5 mm jacks are not locked in the connector; XLRs are. If you often move/install your setup, the female 3.5 mm jack on your computer might eventually get loose and you might have connection issues.

In conclusion, I would suggest that if you are satisfied with the sound quality of your PC output, you might give it a try for a while, and upgrade if you feel it necessary.

A: It is worth using an audio interface if you want to maximize sound quality; however, you can also get by just fine with your built-in sound card for a while if you want to save for a better interface. The #1 difference between a professional audio interface and a built-in consumer sound card is the quality of the DAC, or digital-to-analog converter. Computers work with digital audio, but you can't play sound files made of ones and zeros through a speaker. You need circuits that can take that series of ones and zeros and make it into an actual waveform that can drive a speaker. This is the purpose of a DAC, and the better the quality of the DAC, the smoother and cleaner the generated analog audio signal will be.

Professional audio interfaces also have a few additional advantages. They tend to use pro-level line outputs (which provide more signal, thus increasing the signal-to-noise ratio, which means cleaner audio) or even balanced outputs (which use some wiring fanciness to greatly reduce picking up noise on the audio cables). Additionally, they use a different audio path called ASIO on Windows boxes. This allows for drastically streamlined drivers which are optimized to pass sound to the speakers quicker, reducing the latency between the time the computer starts trying to play a sound and the time it is actually produced at the speakers. This doesn't matter in every case: if you are just listening to music, it won't matter a bit, but if you are trying to record anything while you listen, you will need to correct for latency because what you are recording will lag behind what you are hearing due to the delay.

The inputs on an audio interface are also considerably better quality than consumer sound cards and often support more professional features like XLR connections and phantom power. They also have better analog-to-digital converters, which are the opposite of the DACs and take the analog input and produce a stream of 1s and 0s that represents it. They often not only produce cleaner audio, but also can work at higher bit depths and sampling rates than consumer sound cards, which results (to a point) in better quality audio as well. Most of these advantages are relatively minor unless you are trying to do recording or are an experienced sound guy who has trained to notice such things, but they do make an audio interface a very nice item to have; I wouldn't classify it as critical to your needs, though.
[ "spanish.stackexchange", "0000015424.txt" ]
Q: What is the difference among "perdón", "disculpa" and "lo siento"?

According to an online dictionary (spanishdict.com), one can say "I'm sorry" in three ways:

- "perdón" to apologize (Perdón por...)
- "lo siento" on more formal occasions (Lo siento mucho por su pérdida)
- "disculpa", which is also more formal than "perdón"

If I want to say "I'm sorry, but I can't help you", which verb would be best to use?

A: "Disculpe" is used more when you want to ask something. If you want to be polite when talking to someone you don't know, you can say:

- Disculpe, me podría decir la hora, por favor.
- Sorry, could you tell me, please, what time it is?

You can say it with "Perdón" or "Perdone" instead of "Disculpe" too. "Lo siento" is always used to apologize.

A: Adding more information: you can say "disculpa" (excuse me) or "perdón" (pardon) when interrupting a conversation, but not "lo siento" (because you don't feel sorry for interrupting that conversation). On the contrary, you can say "lo siento" at a funeral, but not "disculpa" or "perdón" (because the death of that person is not your fault).
[ "math.stackexchange", "0002707189.txt" ]
Q: Show that if n is a power of 3, then $\sum_{i=0}^{\log_3 n} 3^i = \frac{3n-1}{2}$

$$\sum_{i=0}^{\log_3 n} 3^i = 3^0+3^1+3^2+3^3+\dots+3^{\log_3 n}$$

This is a geometric sequence, so I used the geometric sum formula $S= \frac{a_1 \cdot (q^n-1)}{q-1}$. The first element is $3^0 = 1$, the ratio is $q=3$, and the number of elements (my n) is $\log_3 n$. If n is a power of 3, we get $\log_3 n= 3^x,\ x\in \mathbb N$.

$$\sum_{i=0}^{\log_3 n} 3^i = \frac{3^0\cdot (3^{\log_3 n}-1)}{3-1}= \frac{3^{\log_3 n}-1}{2}=\frac {3^{\bbox[yellow]{3^x}}-1}{2}$$

I can see that my $n$ needs to be $\log_3 n +1$, but I can't find a way to do so. I thought that because I am using a geometric sum formula and I'm summing from $0$ to $\log_3 n$ instead of summing from $1$, I have to sum $\log_3 n +1$ terms, but it doesn't sound like a valid argument. Any help is greatly appreciated.

A: Make a change: $n=3^m$. Then:
$$S=\sum_{i=0}^{m} 3^i=\frac{1\cdot (3^{m+1}-1)}{3-1}=\frac{3\cdot 3^m-1}{2}=\frac{3n-1}{2}.$$
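A quick numeric check of the closed form, with $n = 9 = 3^2$ (so $m = 2$, i.e. three terms, not two):
$$\sum_{i=0}^{2} 3^i = 1 + 3 + 9 = 13 = \frac{3\cdot 9-1}{2}.$$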
[ "askubuntu", "0000547243.txt" ]
Q: Can't open or ping my ISP's authentication page, only from my Linux machine

SCENARIO: I just got a new internet connection. It's a fiber-optic connection connected to a modem, to which my Linux computer is connected through an ethernet cable. To get internet access I have to connect the ethernet cable to the machine and go to x.xxx.xxx.xxx to log in (authentication by my ISP, I guess).

PROBLEM: The above authentication address (x.xxx.xxx.xxx) won't open, and pinging 8.8.8.8 or the authentication page returns "connect: Network is unreachable". Initially I felt that my ethernet cable was not getting detected, but that's not the case.

Without the cable connected, `ip link show up`:

```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN mode DEFAULT group default qlen 1000
    link/ether <<my mac addr>> brd ff:ff:ff:ff:ff:ff
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DORMANT group default qlen 1000
    link/ether <<my wifi mac addr>> brd ff:ff:ff:ff:ff:ff
```

With the cable connected, `ip link show up`:

```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
    link/ether 5c:f9:dd:5a:2c:5a brd ff:ff:ff:ff:ff:ff
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DORMANT group default qlen 1000
    link/ether 88:53:2e:0a:af:73 brd ff:ff:ff:ff:ff:ff
```

I FACE ABSOLUTELY NO SUCH PROBLEM WHEN I TRY IT THROUGH MY WINDOWS AND MAC COMPUTERS.

A: It was a problem with DHCP. I added the following line to /etc/network/interfaces and it started to work after some time:

```
iface eth1 inet dhcp
```

Basically, no IP address was getting assigned to me (IPv4; an IPv6 address was present, as it's just a modification of the MAC address and not assigned by the ISP) and the above addition fixed the problem.
[ "stackoverflow", "0002558530.txt" ]
Q: How to execute a program on the PostBuild event in parallel?

I managed to set the compiler to execute another program when the project is built/run, with the following directive in project options:

```
call program.exe param1 param2
```

The problem is that the compiler executes "program.exe" and waits for it to terminate, and THEN the project executable is run. What I ask: how do I set the compiler to run both executables in parallel, without waiting for the one in the PostBuild event to terminate? Thanks in advance.

A: I'd have no clue how the IDE manages to wait for the termination of processes initiated by "start", but calling "CreateProcess" at its simplest in your own program starter seems to serve. Compile sth. like:

```pascal
program starter;

{$APPTYPE CONSOLE}

uses
  sysutils, windows;

var
  i: Integer;
  CmdLine: string;
  StartInfo: TStartupInfo;
  ProcInfo: TProcessInformation;
begin
  try
    if ParamCount > 0 then begin
      CmdLine := '';
      for i := 1 to ParamCount do
        CmdLine := CmdLine + ParamStr(i) + ' ';
      ZeroMemory(@StartInfo, SizeOf(StartInfo));
      StartInfo.cb := SizeOf(StartInfo);
      ZeroMemory(@ProcInfo, SizeOf(ProcInfo));
      if not CreateProcess(nil, PChar(CmdLine), nil, nil, False,
          NORMAL_PRIORITY_CLASS, nil, nil, StartInfo, ProcInfo) then
        raise Exception.Create(Format('Failed to run: %s'#13#10'Error: %s'#13#10,
            [CmdLine, SysErrorMessage(GetLastError)]));
    end;
  except
    on E: Exception do begin
      Writeln(E.ClassName + ', ' + E.Message);
      Writeln('... [Enter] to dismiss ...');
      Readln(Input);
    end;
  end;
end.
```

and then on PostBuild put:

```
"X:\...\starter.exe" "X:\...\program.exe" param1 param2
```
[ "stackoverflow", "0055735895.txt" ]
Q: NoMethodError: undefined method `current_sign_in_at' for #User:0x000055ce01dcf0a8 when using the devise_token_auth Rails gem

NoMethodError: undefined method `current_sign_in_at' for #User:0x000055ce01dcf0a8 — I think it is a session method error of some sort. I have an Angular 6 app for the frontend and Rails for the backend, so the best option for me was to opt for devise_token_auth and ng-token-auth for user authentication. I installed the devise_token_auth gem, then ran "rails generate devise_token_auth:install User auth" in the terminal, and on migration there was an error. I solved that issue by adding "extend Devise::Models" to the User model, and then the migration worked. Then I created a user in the backend and tried to call sign_in using Postman, and the error "NoMethodError: undefined method `current_sign_in_at' for #User:0x000055ce01dcf0a8" came up. I want the user to get authenticated using this gem, or some other gem if one exists.

A: I had this issue recently, and it turns out I didn't have the trackable fields in my migration. There are two ways to fix this.

Option one: add a new migration that adds the trackable fields to User:

```ruby
## Trackable
t.integer  :sign_in_count, default: 0, null: false
t.datetime :current_sign_in_at
t.datetime :last_sign_in_at
t.inet     :current_sign_in_ip
t.inet     :last_sign_in_ip
```

Then run rake db:migrate.

Option two: run a down migration. Start with this command, using your migration's version number:

```
rake db:migrate:down VERSION=xxxxxxxxxxxxxx
```

You should then be able to add the trackable fields to the migration file and then run:

```
rake db:migrate:up VERSION=xxxxxxxxxxxxxx
```

Then run rake db:migrate.
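Wrapped in a standalone migration class, option one might look like the following sketch (the class name is my own; note that the `inet` column type is PostgreSQL-specific, so on MySQL you would use `string` instead):

```ruby
class AddTrackableToUsers < ActiveRecord::Migration
  def change
    add_column :users, :sign_in_count, :integer, default: 0, null: false
    add_column :users, :current_sign_in_at, :datetime
    add_column :users, :last_sign_in_at, :datetime
    add_column :users, :current_sign_in_ip, :inet
    add_column :users, :last_sign_in_ip, :inet
  end
end
```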
[ "stackoverflow", "0032435828.txt" ]
Q: Symfony 2: sorting an array by DateTime

Arrays of data:

```php
$allEvents = [];
foreach ($events as $event) {
    $allEvents[] = $event;
}
foreach ($user->getEvents() as $event) {
    $allEvents[] = $event;
}
```

Result to Twig:

```php
return $this->render(
    'MyBundle:User:dashboard.html.twig',
    [
        'allEvents' => $allEvents,
        'user' => $user
    ]
);
```

var_dump allEvents result:

```
array:5 [
  0 => Event {
    id: 26
    title: "test action"
    category: Category
    creator: User
    ***schedule***: DateTime {
      "date": "2015-12-24 17:10:00.000000"
      "timezone_type": ...
      "timezone": ...
    }
  }
  1 => Event {...}
  2 => Event {...}
  3 => Event {...}
  4 => Event {...}
]
```

Question: how can I sort the array by schedule (something like sorting by ASC or DESC)?

A: You could use the usort function of PHP as below (I've edited the answer so you can see it more clearly). Note that usort sorts the array in place and returns a boolean, so you keep using the same variable afterwards:

EDIT:

```php
usort($allEvents, function ($a, $b) {
    if ($a->getSchedule() == $b->getSchedule()) {
        return 0;
    } else {
        return ($a->getSchedule() < $b->getSchedule()) ? -1 : 1;
    }
});
```

And send to Twig as:

```php
return $this->render(
    'MyBundle:User:dashboard.html.twig',
    [
        'allEvents' => $allEvents,
        'user' => $user
    ]
);
```
[ "stackoverflow", "0052115576.txt" ]
Q: Destructor is called twice for std::vector with optimization level 0

I am trying to understand the assembly code generated for std::vector and its emplace_back (or push_back) function using Compiler Explorer. Note: optimization level 0 is used, i.e. -O0. One thing that I couldn't understand is why two destructors are being called instead of one (as you can see, only one vector is being created; if I assume that a temporary object is being created internally, then at least I should see a call to the std::vector constructor). This is the same with the Clang compiler as well. Can someone please explain what is happening here?

Edit 1:

```cpp
#include <vector>

int main()
{
    std::vector<int> vec;
    vec.emplace_back(10);
}
```

Edit 2: Removed the screenshot as it is hard to read it.

A: There's a clue at line 34: call _Unwind_Resume. That block of code, from lines 28 through 34, is for stack unwinding when an exception is thrown. The normal code path goes through the destructor call at line 25, then, at line 27, jumps past the exception code to line 35, and from there it returns from the function. Just to clarify, there's magic here: the call to _Unwind_Resume does not return to the caller. It's a trick to get the address of the block that was being executed, so that the exception handling code can figure out where it was and continue up the stack.
[ "stackoverflow", "0034270337.txt" ]
Q: MinMax algorithm not working as expected

I'm building the tic-tac-toe game (a project on Free Code Camp), and have implemented a minmax algorithm to decide which square the computer player should choose next. It is working as expected in all cases I have tested, except for the following:

```javascript
var userIs = 'o'
var computerIs = 'x'

function countInArray(array, what) {
  var count = 0;
  for (var i = 0; i < array.length; i++) {
    if (array[i] === what) {
      count++;
    }
  }
  return count;
}

function nextMove(board, player) {
  var nextPlayer;
  if (computerIs !== player) {
    nextPlayer = userIs;
  } else {
    nextPlayer = computerIs;
  }
  if (isGameOver(board)) {
    if (player === userIs) {
      return { "willWin": -1, "nextMove": -1 };
    } else {
      return { "willWin": 1, "nextMove": -1 };
    }
  }
  var listOfResults = [];
  if (countInArray(board, '-') === 0) {
    return { "willWin": 0, "nextMove": -1 };
  }
  var _list = []; // keeping track of available moves
  for (var i = 0; i < board.length; i++) {
    if (board[i] === '-') {
      _list.push(i);
    }
  }
  for (var j = 0; j < _list.length; j++) {
    board[_list[j]] = player;
    var nextTry = nextMove(board, nextPlayer);
    listOfResults.push(nextTry.willWin);
    board[_list[j]] = '-';
  }
  if (player === computerIs) {
    var maxele = Math.max.apply(Math, listOfResults);
    return { "willWin": maxele, "nextMove": _list[listOfResults.indexOf(maxele)] };
  } else {
    var minele = Math.min.apply(Math, listOfResults);
    return { "willWin": minele, "nextMove": _list[listOfResults.indexOf(minele)] };
  }
}

function isGameOver(board) {
  // horizontal wins
  var gameOver = false;
  var rowOffset = [0, 3, 6];
  rowOffset.forEach(function (row) {
    if (board[row] === board[row + 1] && board[row + 1] === board[row + 2] && board[row] !== "-") {
      gameOver = true;
    }
  });
  // vertical wins
  var colOffset = [0, 1, 2];
  colOffset.forEach(function (col) {
    if (board[col] === board[col + 3] && board[col + 3] === board[col + 6] && board[col] !== "-") {
      gameOver = true;
    }
  });
  // diag wins
  if (board[0] === board[4] && board[4] === board[8] && board[8] !== "-") {
    gameOver = true;
  }
  if (board[2] === board[4] && board[4] === board[6] && board[6] !== "-") {
    gameOver = true;
  }
  return gameOver;
}

nextMove(["x", "x", "o", "o", "x", "-", "-", "-", "o"], computerIs)
```

It is returning {willWin: 1, nextMove: 5} when I would expect {willWin: 1, nextMove: 7}. I was working off an example implementation in Python (https://gist.github.com/SudhagarS/3942029) that returns the expected result. Can you see something that would cause this behavior?

A: I added some logging, and discovered that I wasn't flipping between players correctly. Switching:

```javascript
if (computerIs !== player) {
  nextPlayer = userIs;
} else {
  nextPlayer = computerIs;
}
```

to:

```javascript
if (computerIs === player) {
  nextPlayer = userIs;
} else {
  nextPlayer = computerIs;
}
```

and reversing the scores (-1 to 1 and 1 to -1) returned for the winner when the game is over seems to have done the trick.
[ "stackoverflow", "0032666576.txt" ]
Q: Bluemix Monitoring and Analytics: response time not correct

I have a Node.js application that I'm monitoring using the "Monitoring and Analytics" service. I'm using JMeter (or SoapUI) to run a load test against my Node.js application. The average response time I'm getting with the load-test tools is 900 ms, but the Monitoring and Analytics graph shows 111 ms. On the other hand, the throughput is correct (110 tx/sec in JMeter, 6600 tx/min in Bluemix). Am I misunderstanding something in the Bluemix graphs? Why this 800 ms difference?

EDIT: The 800 ms overhead could be caused by my computer / tools / network, but while I'm running the test with 100 concurrent threads in my local environment, a single invocation of the service from another location (network/place) also takes 900 ms. If my environment were the cause, the response time from that other location should be a bit more than 111 ms, not 900 ms, so I think it is not related to my infrastructure. Thank you.

A: The Monitoring and Analytics service measures the time it takes for the Node.js application to build the HTTP response from the point that the request is handled. Here's a simple HelloWorld example annotated with the measurement points:

```javascript
var http = require('http');
var port = (process.env.VCAP_APP_PORT || 3000);
var host = (process.env.VCAP_APP_HOST || 'localhost');

http.createServer(function handler(req, res) {
    // TIMER STARTS
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('Hello World');
    // TIMER ENDS
}).listen(port, host);
```

This means that two (potentially major) latencies are not measured:

1. The network latency, i.e. the time taken for the network to send the request from browser to Node.js server, and for the response to be transmitted back.
2. The HTTP request queue latency/depth, i.e. the time it takes from Node.js receiving the HTTP request to the point that it can be handled.

As Node.js has an event loop that runs on a single thread, only a single "blocking" action can be handled at a time. If the application's handling of the HTTP request is purely blocking, as it is in the HelloWorld example (no asynchronous calls occur), then each HTTP request is handled in serial. It's the second of these that's likely to be causing your delay: the HTTP requests are running in serial, so the Monitoring and Analytics service is telling you that it takes 111 ms to build the response to the request, but there's an additional 800 ms where the request is queued waiting to be run.
[ "stackoverflow", "0063028853.txt" ]
Q: C++ program not running properly from outside Code::Blocks

I am using Code::Blocks and MinGW to build C++ programs. When I run the compiled program in Code::Blocks, it works perfectly. But when I try to run the same exe from outside Code::Blocks, the program does not run and gives two errors:

- The program can't run because libgcc_s_dw2-1.dll is missing from your computer.
- The program can't run because libstdc++-6.dll is missing from your computer.

I have seen answers to other SO questions, but none of the answers worked for me. What I have tried:

1. Copy and paste the two files from C:/MinGW/bin to the folder where the exe is located. This works fine, but it becomes awkward to copy and paste these files to all your projects again and again.
2. Set the PATH variable to C:/MinGW/bin.
3. In the compiler and debugger settings dialog, go to linker settings >> other linker options and add the line -static-libgcc -static-libstdc++ there.

Edit: I am adding images of the compiler flags I can find. I have set the path in the following way (note that I have the MinGW installation directory in the D drive, under the name MinG).

A: Possible solutions:

- build a completely static version (the -static compiler flag)
- build using -static-libgcc -static-libstdc++ as linker flags
- copy the files libgcc_s_dw2-1.dll and libstdc++-6.dll (and any other dependencies, if any) to the same folder as your .exe file
[ "portuguese.stackexchange", "0000003955.txt" ]
Q: Which is the best expression: "pesquisa", "estudo" or "trabalho acadêmico"?

I often read technical and academic texts that use any of these expressions with no distinction in usage, frequently written in the same text and treated as synonyms. This causes me a certain discomfort, perhaps out of my own ignorance, but anyway. For texts produced by academia, I would like to know in which cases I should (or should not) use the expressions:

- Pesquisa acadêmica
- Estudo acadêmico
- Trabalho acadêmico

A: They can be synonyms, but each one evokes a different situation:

- Pesquisa acadêmica: sounds more formal; it suggests a more complex process, involving several groups or people.
- Estudo acadêmico: less formal. Same as the previous case, but with less importance attached. It gives the impression that a single person is involved.
- Trabalho acadêmico: less formal than the previous ones; it gives the impression that some students are doing a simple assignment.
[ "stackoverflow", "0020602037.txt" ]
Q: Netty 4 proxy: how to stop reading after the first PDU until the next read() call?

The situation: I reused the proxy from the Netty 4 examples to create my own. The key difference between the example and my implementation is that my proxy only connects to its remote peer after the first protocol data unit is processed. The relevant parts of my front-end handler:

```java
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
    ctx.channel().read(); // read the first message to trigger "channelRead0(...)"
}

@Override
protected void channelRead0(final ChannelHandlerContext ctx, UCPPacket ucpPacket) throws Exception {
    if (!this.authenticated) { // authenticate the client then forward this packet
        this.authenticateAndForwardPacket(ctx, ucpPacket);
    } else { // forward the packet
        this.forwardPacket(ctx, ucpPacket);
    }
}

private void forwardPacket(final ChannelHandlerContext ctx, UCPPacket ucpPacket) {
    if (outboundChannel.isActive()) {
        outboundChannel.writeAndFlush(ucpPacket).addListener(new ChannelFutureListener() {
            @Override
            public void operationComplete(ChannelFuture future) throws Exception {
                if (future.isSuccess()) { // forwarding the packet succeeded so read the next one
                    ctx.channel().read();
                } else {
                    future.channel().close();
                }
            }
        });
    } else {
        // this (often) happens when you don't set setSingleDecode(true) on the DelimiterBasedFrameDecoder
        LOGGER.error("FIXME: avoid else"); // FIXME: ...
    }
}
```

The pipeline: DelimiterBasedFrameDecoder ==> UcpDecoder ==> FrontendHandler

The problem: the first read() on the Channel will often read the bytes of multiple protocol data units, which means that even with AUTO_READ set to false, two or more UCPPackets will often have to be processed. Can I somehow tell Netty that I'm done with the ByteBuf after the first UCPPacket is decoded (until I call read() again)? Or what else can I do? Blocking the subsequent UCPPackets in the channelRead0 method of the front-end handler until this.authenticated == true is obviously a no-go (as this would block an I/O thread).

What I tried: I tried setSingleDecode(true) on DelimiterBasedFrameDecoder, but that didn't work well. The first frame gets decoded correctly, but even after the proxy has forwarded that PDU and has called read() again, the DelimiterBasedFrameDecoder doesn't do anything. I can only assume that this is because the read() call judged that it would be "pointless" to call the handlers on the pipeline when no new inbound bytes were read. The thing is, Netty has already read the bytes for the second (and last) UCPPacket, so it has those bytes stored somewhere. Note: when I kill the client process, the proxy does process those bytes, which proves what I said: it does have the unhandled bytes, it just doesn't trigger the handlers. I guess there's a decodeLast or something that gets called when the channel goes inactive.

A: You will need to queue them somewhere and handle them yourself, as we do not know how much data we should read. I think we may be able to provide a generic ChannelInboundHandler which will queue messages. Would you mind also opening an issue so we can provide such a handler? https://github.com/netty/netty/issues
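A rough sketch of the kind of queueing handler the answer suggests: it buffers decoded messages and only passes them down the pipeline when the application explicitly asks for them. This is an assumption about the intended design, not a handler Netty ships; the class and method names are made up:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

public class BufferingInboundHandler extends ChannelInboundHandlerAdapter {
    // no extra locking needed: Netty invokes a channel's handlers on a single event-loop thread
    private final Queue<Object> pending = new ArrayDeque<Object>();
    private boolean passThrough;

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (passThrough) {
            ctx.fireChannelRead(msg); // forward immediately
        } else {
            pending.add(msg);         // hold messages decoded beyond the first PDU
        }
    }

    /** Call when ready for the buffered message(s), e.g. after authentication. */
    public void releasePending(ChannelHandlerContext ctx) {
        passThrough = true;
        Object msg;
        while ((msg = pending.poll()) != null) {
            ctx.fireChannelRead(msg);
        }
    }
}
```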
[ "stackoverflow", "0051747885.txt" ]
Q: How do I define a component in a slot which has a prop that is defined in the child component?

Let's say I have three components: Alpha, Bravo and Charlie, which look like this:

Alpha.vue:

```html
<template>
  <div class="alpha">
    <bravo>
      <template slot="card">
        <charlie></charlie>
      </template>
    </bravo>
  </div>
</template>
```

Bravo.vue:

```html
<template>
  <div class="bravo">
    <slot name="card" v-for="result in results" :result="result"></slot>
  </div>
</template>

<script>
export default {
  data: function() {
    return {
      results: [1, 2, 3]
    }
  }
}
</script>
```

Charlie.vue:

```html
<template>
  <h1>{{ result }}</h1>
</template>

<script>
export default {
  props: [
    'result'
  ]
}
</script>
```

How can I pass the result prop to Charlie while defining it in a slot in Alpha? The idea behind this is that Bravo contains a lot of shared logic. There are different variations of Alpha which may contain a different card for the slot (but it will always have a result prop). At the moment, when running that, the result prop is not being passed to the Charlie component and an undefined error occurs (there are probably several things wrong with the example, but I hope it demonstrates what I'm trying to achieve).

A: I think this is the solution for your case.

Alpha.vue:

```html
<template>
  <bravo>
    <template slot="card" slot-scope="{ result }">
      <charlie :result="result"></charlie>
    </template>
  </bravo>
</template>
```

And you should wrap your slots in Bravo.vue. Documentation
[ "stackoverflow", "0059277308.txt" ]
Q: Rails models not being set on create or update

I'm on Rails v4.2.11, Ruby v2.4.4, MySQL 5.7. I have the following Rails model:

```ruby
# app/models/UnsupportedInstitution.rb
class UnsupportedInstitution < ActiveRecord::Base
  attr_accessor :bank_name, :reason
  validates :bank_name, presence: true
  validates :reason, presence: true
  has_many :unsupported_institution_routing_numbers
end
```

Here is the migration for the model:

```ruby
class CreateUnsupportedInstitutions < ActiveRecord::Migration
  def change
    create_table :unsupported_institutions do |t|
      t.string :bank_name, null: false
      t.string :reason, null: false
      t.timestamps null: false
    end
  end
end
```

which creates the following schema:

```ruby
create_table "unsupported_institutions", force: :cascade do |t|
  t.string "bank_name", limit: 255, null: false
  t.string "reason", limit: 255, null: false
  t.datetime "created_at", null: false
  t.datetime "updated_at", null: false
end
```

This all seems pretty basic. However, when I enter the Rails console and try to do:

```ruby
UnsupportedInstitution.create!(bank_name: 'Hello', reason: 'World')
```

I get the following error:

```
INSERT INTO `unsupported_institutions` (`created_at`, `updated_at`) VALUES ('2019-12-11 00:24:22', '2019-12-11 00:24:22')
ActiveRecord::StatementInvalid: Mysql2::Error: Field 'bank_name' doesn't have a default value: INSERT INTO `unsupported_institutions` (`created_at`, `updated_at`) VALUES ('2019-12-11 00:24:22', '2019-12-11 00:24:22')
        from /usr/local/bundle/gems/mysql2-0.5.3/lib/mysql2/client.rb:131:in `_query'
```

This also happens when I try to use the update method, and when I try to create a new instance, set the values, and then save the model. It's obvious the bank_name and reason attributes are not being set, but I can't understand why. I suspect it has something to do with the "null: false" in the migration, but I can't remove that since there may be sources other than my API accessing this particular endpoint. Can anyone shed light on why this may be happening and/or what I can do to fix this?

A: You're creating accessor methods which impede access to the methods ActiveRecord already generates for you. Your definition should be:

```ruby
class UnsupportedInstitution < ActiveRecord::Base
  validates :bank_name, presence: true
  validates :reason, presence: true
  has_many :unsupported_institution_routing_numbers
end
```

bank_name will be auto-generated as a method if there is a corresponding field in the database, which there is if your migration ran successfully.
[ "japanese.stackexchange", "0000069630.txt" ]
Q: Meaning of 行かしてもらうから

In 火垂るの墓 there is a scene where the mother says: ほな、ひと足先に壕に行かしてもらうからね。 What does 行かしてもらうから mean? I know もらう usually means to receive / get someone to do something. I've never seen this conjugation of 行く before. Is it a casual form of 行かせて, where せ → し?

A: ほな、ひと足先に壕に行かしてもらうからね。 I think it's Kansai dialect; 「ほな」 is Kansai dialect, too. Here in Kyoto (and in Osaka and probably in Kobe as well), we often say:

- 行かせてもらう (in Standard Japanese) ⇒ 行かしてもらう (in Kansai-ben)
- 食べさせてもらう ⇒ 食べさしてもらう
- 言わせてもらう ⇒ 言わしてもらう
- 飲ませてもらう ⇒ 飲ましてもらう
- 見せて ⇒ 見して
- させて ⇒ さして
- やらせて ⇒ やらして

etc. In Kansai dialect we often use the short form of causative verbs, e.g.:

- 行かす (cf. 行かせる)
- 食べさす (cf. 食べさせる)
- 言わす (cf. 言わせる)
- 飲ます (cf. 飲ませる)
- さす (cf. させる)
- やらす (cf. やらせる)

etc.
[ "stackoverflow", "0051133242.txt" ]
Q: Scaling and drawing a BufferedImage

I have a customized JPanel with an @Override on the paintComponent method, which takes a BufferedImage from a member variable and draws it. It works fine until I try to scale the image. I understand from reading that there are two different approaches: one is to use Image.getScaledInstance, and the other is to create a Graphics2D with the dimensions of the scaled image. However, when I try to use either of these methods, I either get a completely white rectangle or I get nothing. I am not sure what I am doing wrong. Code for the overridden method is below. Any advice would be appreciated. I am sure this is trivial, but I can't see the issue.

```java
protected void paintComponent(Graphics g) {
    super.paintComponent(g);
    if (img != null) {
        int imageWidth = img.getWidth();
        int imageHeight = img.getHeight();
        int panelWidth = getWidth();
        int panelHeight = getHeight();
        if (imageWidth > panelWidth || imageHeight > panelHeight) {
            double aspectRatio = (double) imageWidth / (double) imageHeight;
            int newWidth, newHeight;
            // rescale the height then change the width to pre
            if (imageWidth > panelWidth) {
                double widthScaleFactor = (double) panelWidth / (double) imageWidth;
                newWidth = (int) (widthScaleFactor * imageWidth);
                newHeight = (int) (widthScaleFactor * imageWidth / aspectRatio);
            } else {
                double heightScaleFactor = (double) panelHeight / (double) imageHeight;
                newHeight = (int) (heightScaleFactor * imageHeight);
                newWidth = (int) (heightScaleFactor * imageHeight * aspectRatio);
            }
            //BufferedImage scaledImage = (BufferedImage) img.getScaledInstance(newWidth, newHeight, BufferedImage.SCALE_DEFAULT);
            BufferedImage scaledImage = new BufferedImage(newWidth, newHeight, img.getType());
            int x = (panelWidth - newWidth) / 2;
            int y = (panelHeight - newHeight) / 2;
            //Graphics2D g2d = (Graphics2D) g.create();
            Graphics2D g2d = scaledImage.createGraphics();
            //g2d.drawImage(scaledImage, x, y, this);
            g2d.drawImage(img, 0, 0, newWidth, newHeight, null);
            g2d.dispose();
        } else {
            int x = (getWidth() - img.getWidth()) / 2;
            int y = (getHeight() - img.getHeight()) / 2;
            Graphics2D g2d = (Graphics2D) g.create();
            g2d.drawImage(img, x, y, this);
            g2d.dispose();
        }
    }
}
```

A: Not sure whether it helps, because you haven't posted an SSCCE, but probably this will work better than your code:

```java
Image scaledImage = img.getScaledInstance(newWidth, newHeight, Image.SCALE_SMOOTH);
int x = (panelWidth - newWidth) / 2;
int y = (panelHeight - newHeight) / 2;
g.drawImage(scaledImage, x, y, this);
```
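An alternative sketch that skips getScaledInstance and lets Graphics2D scale while painting; the interpolation hint is optional but usually improves quality (this assumes the same x, y, newWidth and newHeight computed above):

```java
Graphics2D g2d = (Graphics2D) g.create();
g2d.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
        RenderingHints.VALUE_INTERPOLATION_BILINEAR);
g2d.drawImage(img, x, y, newWidth, newHeight, this); // scales on the fly
g2d.dispose();
```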
[ "stackoverflow", "0020501564.txt" ]
Q: Normalize embedded records with ember-data

I am trying to normalize data from a REST API. I will not be changing the JSON response. How do I generically munge this JSON response to pull out embedded records, so that they are in a side-loaded format? The response from the server looks like this:

```json
{
  "objects": [
    {
      "active": true,
      "admin": true,
      "created_at": "2013-11-21T15:12:37.894390",
      "email": "me@example.com",
      "first_name": "Joe",
      "id": 1,
      "last_name": "Joeson",
      "projects": [
        {
          "created_at": "2013-11-21T15:13:13.150572",
          "id": 1,
          "name": "Super awesome project",
          "updated_at": "2013-11-21T15:13:13.150606",
          "user_id": 1
        }
      ],
      "updated_at": "2013-12-06T19:50:17.035881"
    },
    {
      "active": true,
      "admin": false,
      "created_at": "2013-11-21T17:53:17.155700",
      "email": "craig@example.com",
      "first_name": "Craig",
      "id": 2,
      "last_name": "Craigson",
      "projects": [
        {
          "created_at": "2013-11-21T17:54:05.527790",
          "id": 2,
          "name": "Craig's project",
          "updated_at": "2013-11-21T17:54:05.527808",
          "user_id": 2
        },
        {
          "created_at": "2013-11-21T17:54:29.557801",
          "id": 3,
          "name": "Future ideas",
          "updated_at": "2013-11-21T17:54:29.557816",
          "user_id": 2
        }
      ],
      "updated_at": "2013-11-21T17:53:17.155717"
    }
  ]
}
```

I want to change the JSON payload so it looks like the response ember-data is expecting:

```json
{
  "objects": [
    {
      "active": true,
      "admin": true,
      "created_at": "2013-11-21T15:12:37.894390",
      "email": "me@example.com",
      "first_name": "Joe",
      "id": 1,
      "last_name": "Joeson",
      "updated_at": "2013-12-06T19:50:17.035881",
      "projects": [1]
    },
    {
      "active": true,
      "admin": false,
      "created_at": "2013-11-21T17:53:17.155700",
      "email": "craig@example.com",
      "first_name": "Craig",
      "id": 2,
      "last_name": "Craigson",
      "updated_at": "2013-11-21T17:53:17.155717",
      "projects": [2, 3]
    }
  ],
  "projects": [
    {
      "created_at": "2013-11-21T15:13:13.150572",
      "id": 1,
      "name": "Super awesome project",
      "updated_at": "2013-11-21T15:13:13.150606",
      "user_id": 1
    },
    {
      "created_at": "2013-11-21T17:54:05.527790",
      "id": 2,
      "name": "Craig's project",
      "updated_at": "2013-11-21T17:54:05.527808",
      "user_id": 2
    },
    {
      "created_at": "2013-11-21T17:54:29.557801",
      "id": 3,
      "name": "Future ideas",
      "updated_at": "2013-11-21T17:54:29.557816",
      "user_id": 2
    }
  ]
}
```

So far I am extending DS.RESTSerializer:

```javascript
App.ApplicationSerializer = DS.RESTSerializer.extend({
  extractArray: function(store, type, payload, id, requestType) {
    var result = {};
    result[type.typeKey] = payload.objects;
    payload = result;
    return this._super(store, type, payload, id, requestType);
  },
  extractSingle: function(store, type, payload, id, requestType) {
    var result;
    var model = type.typeKey;
    if (payload.object) {
      result = payload.object;
    } else {
      result = payload;
    }
    var embedObjs, embedKey;
    type.eachRelationship(function(key, relationship) {
      if (relationship.kind === 'hasMany') {
        embedKey = key;
        for (var i = 0; i < result[key].length; i++) {
          result.key.push(result[key][i].id);
        }
        embedObjs = result[key].pop();
      }
    });
    payload[model] = result;
    if (!payload[embedKey]) payload[embedKey] = [];
    payload[embedKey].push(embedObjs);
    return this._super(store, type, payload, id, requestType);
  }
});
```

My models look like this, where a project belongs to a user:

```javascript
App.User = DS.Model.extend({
  active: DS.attr(),
  admin: DS.attr(),
  email: DS.attr(),
  firstName: DS.attr(),
  lastName: DS.attr(),
  password: DS.attr(),
  createdAt: DS.attr(),
  updatedAt: DS.attr(),
  projects: DS.hasMany('project')
});

App.Project = DS.Model.extend({
  createdAt: DS.attr(),
  name: DS.attr(),
  updatedAt: DS.attr(),
  userId: DS.belongsTo('user')
});
```

I am making a mistake somewhere, but I really don't know where, other than that it's in extractSingle. I get the following error in the JavaScript console: "Assertion failed: Error while loading route: TypeError: Cannot call method 'toString' of undefined." My app is working without the relations.

A: Upfront, just had surgery, so I'm one handed and on a lot of oxycodone, so the code probably needs refactoring; I'll leave that up to you. http://emberjs.jsbin.com/OxIDiVU/9/edit

```javascript
App.ApplicationSerializer = DS.RESTSerializer.extend({
  extractArray: function(store, type, payload, id, requestType) {
    var result = {};
    result[Ember.String.pluralize(type.typeKey)] = payload.objects;
    payload = result;
    // debugger;
    return this._super(store, type, payload, id, requestType);
  },
  normalizePayload: function(type, payload) {
    var result;
    var typeKey = Ember.String.pluralize(type.typeKey);
    if (payload.object) {
      result = payload.object;
    } else {
      result = payload;
    }
    var typeArr = result[typeKey];
    type.eachRelationship(function(key, relationship) {
      if (relationship.kind === 'hasMany') {
        var arr = result[key] = [];
        for (var j = 0, jlen = typeArr.length; j < jlen; j++) {
          var obj = typeArr[j];      // user
          var collection = obj[key]; // projects
          var ids = [];
          for (var i = 0, len = collection.length; i < len; i++) {
            var item = collection[i];
            arr.push(item);
            ids.push(item.id);
          }
          obj[key] = ids;
        }
      }
    });
    return this._super(type, result);
  },
  normalizeAttributes: function(type, hash) {
    var payloadKey, key;
    if (this.keyForAttribute) {
      type.eachAttribute(function(key) {
        payloadKey = this.keyForAttribute(key);
        if (key === payloadKey) { return; }
        hash[key] = hash[payloadKey];
        delete hash[payloadKey];
      }, this);
    }
  },
  keyForAttribute: function(attr) {
    return Ember.String.decamelize(attr);
  }
});
```
[ "math.stackexchange", "0003360298.txt" ]
Q: The workings of trig substitution

Use a trig substitution to eliminate the root in $(x^2-8x+21)^\frac32$.

This is the work for this problem. Complete the square:
$$x^2-8x+16+21-16 = (x-4)^2+5$$
$$((x-4)^2+5)^\frac32$$
Use the substitution $x-4=\sqrt5\tan\theta$:
$$\sqrt{((\sqrt5\tan\theta)^2+5)^3}$$
$$=\sqrt{(5(\tan^2\theta+1))^3}$$
$$=[\sqrt5\sqrt{\sec^2\theta}]^3$$
$$=5^\frac32|\sec^3\theta|$$
I believe all this is correct, but how does $(x^2-8x+21)^\frac32 = 5^\frac32|\sec^3\theta|$? Shouldn't there be another step of resubstituting my $x-4=\sqrt5\tan\theta$? How am I supposed to do that when my last step has no tangents remaining?

A: You can resubstitute, but you'll end up with the same thing you started with. The idea of trig substitution is that by performing a change of variables, we can change to a variable in which integration is possible, then change back after integrating. It has the same purpose as the regular substitutions you've probably already learnt to do. As in those substitutions, resubstituting without integrating defeats the purpose and would just undo what you've done.
[ "ru.stackoverflow", "0000287600.txt" ]
Q: Redrawing the interface within a single form

I need to redraw the interface within a single form. What options are there for implementing this? Creating interface elements from code seems somehow clumsy, and I don't know of other options. It is a Windows Forms application. I'm new to C#, so don't throw tomatoes at me.

A: One option (albeit a rather hacky one) is to emulate frames with UserControls. Create a base UserControl with the required size and other properties, then inherit all the others from it and fill them with the necessary components. An example implementation is here (.NET 4.0, C#, Visual Studio 2010 project). As you can see, this approach lets you implement inheritance of frames, keep them mutually independent (interacting through events), and use other advantages of frames. But it is indeed better to look towards WPF, as was written in the comments to this answer.
[ "stackoverflow", "0004001085.txt" ]
Q: Using malloc() for double pointers

A malloc question. First, I generate some strings. Second, I allocate no space for copying the pointers which point to those strings. At this moment the program should probably crash, since I've tried to copy those pointers to nowhere, but it doesn't crash. How is this possible? Any comments would be appreciated.

```c
#include <stdio.h>

int main(int argc, char *argv[])
{
    int i, iJ, iCharCount = 0, iCharVal, iStrLen = 0, iStrNum = 50, iStrCount = 0, iAllocSize = 0;
    char *pcStr, *pcStr_CurPos, **ppcStr, **ppcStr_CurPos;

    // suppose, an average length of string is * N bytes.
    iAllocSize = iStrNum * 6;
    iStrCount = 0;

    // allocate ...
    pcStr = pcStr_CurPos = (char*) malloc (iAllocSize);
    if (pcStr==NULL){printf("NULL == malloc()\n"); exit (1);}

    for (i=0; i < iStrNum; i++)
    {
        iStrCount++;
        iStrLen = rand() % 7 + 2; // is in the range 2 to 8
        printf("Len of Str=%d; str=[", iStrLen);
        for (iJ = 0; iJ < iStrLen-1; iJ++)
        {
            // A-Z a-z
            iCharVal = rand() % 58 + 65;
            if (iCharVal > 90 && iCharVal < 97) {iJ--; continue;}
            if (pcStr_CurPos < pcStr + iAllocSize )
            {
                printf ("%c", iCharVal);
                *pcStr_CurPos++ = iCharVal;
                iCharCount ++;
            }
            else
            {
                *pcStr_CurPos++ = 0;
                iCharCount ++;
                printf ("]\n");
                goto exit;
            }
        }
        printf ("]\n");
        *pcStr_CurPos++ = 0;
        iCharCount ++;
    }

exit:
    // I allocate NOTHING, ...
    ppcStr = ppcStr_CurPos = (char**) malloc (0); // ZERO !

    // Copying pointers ...
    pcStr_CurPos = pcStr;
    while(pcStr_CurPos < pcStr + iCharCount)
    {
        //... BUT IT WORKS AND DOESN'T CRASH.
        // HOW IS IT POSSIBLE ???
        *ppcStr_CurPos++ = pcStr_CurPos;
        while (*pcStr_CurPos++) ;
    }

    ppcStr_CurPos = ppcStr;
    iStrNum = iStrCount;
    printf ("\n Output ppcStr:\n", iCharCount );
    while(iStrNum--)
    {
        printf("[%d][%s]\n", iStrNum, *(ppcStr_CurPos++));
    }

    printf ("Press Enter key or sth\n");
    getchar();
    return 0;
}
```

A: In general, C is not guaranteed to crash if you access uninitialized memory. The behavior is undefined, which means anything could happen. The compiler could make demons fly out of your nose, as they say.
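To make the write actually valid, the pointer table needs room for one pointer per string. A sketch of the corrected allocation (writing past a malloc(0) result is undefined behavior; it often appears to work only because allocators round requests up, but it can silently corrupt the heap):

```c
/* allocate room for iStrNum pointers instead of zero bytes */
ppcStr = ppcStr_CurPos = (char**) malloc(iStrNum * sizeof *ppcStr);
if (ppcStr == NULL) { printf("NULL == malloc()\n"); exit(1); }
```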
[ "academia.stackexchange", "0000141207.txt" ]
Q: Are Online First articles considered "In Press"? I'm a little bit confused about this point. If an article has appeared as an Online First article for a print journal, is it still considered "In Press" or is it considered published, and so no longer "In Press"? Or is it only articles that have not cleared the Production Stage yet that are considered "In Press"? I'm thinking some journals take ages to assign an issue number to accepted articles, so they might stay on the journal website for quite a while until they have one. Some funders want you to attach publications to the grant application that are "In Press", but not "fully" published publications. Thanks a lot for your help! A: My understanding is, online first articles are completely published except for the fact they aren't assigned to an issue. There's literally no production work left. They will also have a DOI. Therefore, although you can't give the full reference yet, you can list it in the application giving the journal name & DOI.
[ "askubuntu", "0001011815.txt" ]
Q: Eclipse app icon not showing in switcher (alt+tab) I installed the latest Eclipse as per the instructions here, and the icon shows fine in the launcher, but shows as a blank icon with an exclamation mark when using ALT+TAB to switch between applications. Should I copy the eclipse.xpm file to another folder other than usr/share/pixmaps/ so the switcher can see it, or what should I do? Image: Switching between apps with ALT+TAB: versus launcher icons: A: Open the .desktop file associated with Eclipse. Add the following line to the file and save the file: StartupWMClass=Eclipse
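For illustration, a complete desktop entry with that line might look like the following (the Exec and Icon paths are assumptions that depend on where Eclipse was unpacked; the actual WM class can be verified with the xprop utility by clicking the running Eclipse window):

[Desktop Entry]
Version=1.0
Type=Application
Name=Eclipse
Exec=/opt/eclipse/eclipse
Icon=/opt/eclipse/icon.xpm
Terminal=false
StartupWMClass=Eclipse
Categories=Development;IDE;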
[ "stackoverflow", "0057995974.txt" ]
Q: Destructure an Object Like An Array in JS? Given the following code:

const civic = {make: 'Honda', model: 'Civic'};

function logArgs(make, model) {
  console.log(make);
  console.log(model)
}

I want to do this:

logArgs(...civic);

instead of:

logArgs(civic.make, civic.model);

I get:

(index):39 Uncaught TypeError: Found non-callable @@iterator

Is there some way to destructure objects like arrays, or is what I am trying to do impossible?

A: Use destructuring in arguments

const civic = {make: 'Honda', model: 'Civic'};

function logArgs({make, model}) {
  console.log(make);
  console.log(model)
}

logArgs(civic)

A: logArgs(...Object.values(civic))

Note that this would rely on the order of objects, which can be tricky
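For comparison, the same two options sketched in Python (an analogy, not JavaScript; plain dicts are not positionally iterable either, so you either unpack by key or take the values explicitly):

civic = {"make": "Honda", "model": "Civic"}

def log_args(make, model):
    print(make)
    print(model)

log_args(**civic)            # unpack by key, like destructuring in the parameter list
log_args(*civic.values())    # positional, like ...Object.values(civic); order-dependent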
[ "stackoverflow", "0052809163.txt" ]
Q: Canceling file dialog prompted the excel to open excel log file I hope all of you are having a good day. I have a problem with my code here. The code will display a file dialog and ask the user to choose the file, and it worked out great. My problem is that, when it displays the file dialog, instead of choosing the folder that I want, I want to click cancel. But when I click on cancel, there will be a runtime error saying, "The subscript is out of range." and it will open an excel file with title ts-event.log So, I tried to overcome this problem by using error handling, On Error GoTo. So instead of the default message box from VBA, I will get a msgbox that says, "You cancelled the action." but I still get that ts-event.log excel file open. How do I avoid this? Can someone help me. Thank you in advance.

Sub UploadData()
    Dim SummWb As Workbook
    Dim SceWb As Workbook

    'Get folder containing files
    With Application.FileDialog(msoFileDialogFolderPicker)
        .AllowMultiSelect = False
        .Show
        On Error Resume Next
        myFolderName = .SelectedItems(1)
        'Err.Clear
        On Error GoTo Error_handler
    End With
    If Right(myFolderName, 1) <> "\" Then myFolderName = myFolderName & "\"

    'Settings
    Application.ScreenUpdating = False
    oldStatusBar = Application.DisplayStatusBar
    Application.DisplayStatusBar = True
    Set SummWb = ActiveWorkbook

    'Get source files and append to output file
    mySceFileName = Dir(myFolderName & "*.*")
    Do While mySceFileName <> "" 'Stop once all files found
        Application.StatusBar = "Processing: " & mySceFileName
        Set SceWb = Workbooks.Open(myFolderName & mySceFileName) 'Open file found
        With SummWb.Sheets("Master List")
            .Cells(.Rows.Count, "A").End(xlUp).Offset(1, 0).Value = SceWb.Sheets("Survey").Range("B1").Value
            .Cells(.Rows.Count, "B").End(xlUp).Offset(1, 0).Value = SceWb.Sheets("Survey").Range("B2").Value
            .Cells(.Rows.Count, "C").End(xlUp).Offset(1, 0).Value = SceWb.Sheets("Survey").Range("B3").Value
            .Cells(.Rows.Count, "D").End(xlUp).Offset(1, 0).Value = SceWb.Sheets("Survey").Range("B4").Value
            .Cells(.Rows.Count, "H").End(xlUp).Offset(1, 0).Value = SceWb.Sheets("Survey").Range("C7").Value
            .Cells(.Rows.Count, "I").End(xlUp).Offset(1, 0).Value = SceWb.Sheets("Survey").Range("D7").Value
            .Cells(.Rows.Count, "J").End(xlUp).Offset(1, 0).Value = SceWb.Sheets("Survey").Range("C8").Value
            .Cells(.Rows.Count, "K").End(xlUp).Offset(1, 0).Value = SceWb.Sheets("Survey").Range("D8").Value
            .Cells(.Rows.Count, "L").End(xlUp).Offset(1, 0).Value = SceWb.Sheets("Survey").Range("C9").Value
            .Cells(.Rows.Count, "M").End(xlUp).Offset(1, 0).Value = SceWb.Sheets("Survey").Range("D9").Value
            .Cells(.Rows.Count, "E").End(xlUp).Offset(1, 0).Value = SummWb.Sheets("Upload Survey").Range("C8").Value
        End With
        SceWb.Close (False) 'Close Workbook
        mySceFileName = Dir
    Loop

Error_handler:
    MsgBox ("You cancelled the action.")
    MsgBox ("Upload complete.")

    'Settings and save output file
    Application.StatusBar = False
    Application.DisplayStatusBar = oldStatusBar
    SummWb.Activate
    SummWb.Save 'save automatically
    Application.ScreenUpdating = True
End Sub

A: Cancel doesn't mean that it's an error

Sub UploadData()
    Dim SummWb As Workbook
    Dim SceWb As Workbook
    Dim myFolderName As String
    Dim oldstatusbar As Boolean
    Dim mySceFileName As String

    On Error GoTo Error_handler

    'Get folder containing files
    With Application.FileDialog(msoFileDialogFolderPicker)
        If .Show = -1 Then
            .AllowMultiSelect = False
            myFolderName = .SelectedItems(1)
        Else
            'You clicked cancel
            GoTo Cancel_handler
        End If
    End With
    If Right(myFolderName, 1) <> "\" Then myFolderName = myFolderName & "\"

    'Settings
    Application.ScreenUpdating = False
    oldstatusbar = Application.DisplayStatusBar
    Application.DisplayStatusBar = True
    Set SummWb = ActiveWorkbook

    'Get source files and append to output file
    mySceFileName = Dir(myFolderName & "*.*")
    Do While mySceFileName <> "" 'Stop once all files found
        Application.StatusBar = "Processing: " & mySceFileName
        Set SceWb = Workbooks.Open(myFolderName & mySceFileName) 'Open file found
        With SummWb.Sheets("Master List")
            .Cells(.Rows.Count, "A").End(xlUp).Offset(1, 0).Value = SceWb.Sheets("Survey").Range("B1").Value
            .Cells(.Rows.Count, "B").End(xlUp).Offset(1, 0).Value = SceWb.Sheets("Survey").Range("B2").Value
            .Cells(.Rows.Count, "C").End(xlUp).Offset(1, 0).Value = SceWb.Sheets("Survey").Range("B3").Value
            .Cells(.Rows.Count, "D").End(xlUp).Offset(1, 0).Value = SceWb.Sheets("Survey").Range("B4").Value
            .Cells(.Rows.Count, "H").End(xlUp).Offset(1, 0).Value = SceWb.Sheets("Survey").Range("C7").Value
            .Cells(.Rows.Count, "I").End(xlUp).Offset(1, 0).Value = SceWb.Sheets("Survey").Range("D7").Value
            .Cells(.Rows.Count, "J").End(xlUp).Offset(1, 0).Value = SceWb.Sheets("Survey").Range("C8").Value
            .Cells(.Rows.Count, "K").End(xlUp).Offset(1, 0).Value = SceWb.Sheets("Survey").Range("D8").Value
            .Cells(.Rows.Count, "L").End(xlUp).Offset(1, 0).Value = SceWb.Sheets("Survey").Range("C9").Value
            .Cells(.Rows.Count, "M").End(xlUp).Offset(1, 0).Value = SceWb.Sheets("Survey").Range("D9").Value
            .Cells(.Rows.Count, "E").End(xlUp).Offset(1, 0).Value = SummWb.Sheets("Upload Survey").Range("C8").Value
        End With
        SceWb.Close (False) 'Close Workbook
        mySceFileName = Dir
    Loop

    SummWb.Activate
    SummWb.Save 'save automatically
    MsgBox ("Upload complete.")

Finish:
    Application.StatusBar = False
    Application.DisplayStatusBar = oldstatusbar
    Application.ScreenUpdating = True
    Exit Sub

Cancel_handler:
    MsgBox "You cancelled the action."
    Exit Sub

Error_handler:
    MsgBox "An unexpected error occurred."
    GoTo Finish
End Sub

Notice the first Exit Sub: This is where the program will end if no error occurs. If the cancel button is clicked it will show the msgbox and end at the second Exit Sub. But if an error occurs you bring it back with Goto Finish where you have all the statements to bring the application back to the initial state.
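The same cancel-signalling idea, sketched in Python/tkinter for comparison (an analogy, not Excel VBA): the folder picker reports cancellation through its return value, which should be branched on rather than treated as an error.

from tkinter import Tk, filedialog

root = Tk()
root.withdraw()  # no main window needed, just the dialog

folder = filedialog.askdirectory(title="Choose the folder containing the files")
if not folder:   # an empty result means the user clicked Cancel
    print("You cancelled the action.")
else:
    print("Processing files in " + folder + "/")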
[ "stackoverflow", "0058808947.txt" ]
Q: Duplicate class com.android.volley found in modules (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0

android {
    compileSdkVersion 29
    buildToolsVersion "29.0.2"
    defaultConfig {
        applicationId "com.ariacloud.monaria"
        minSdkVersion 22
        targetSdkVersion 29
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
        vectorDrawables.useSupportLibrary = true
        configurations {
            all*.exclude module: 'volley-release'
        }
    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }
    compileOptions {
        sourceCompatibility = 1.8
        targetCompatibility = 1.8
    }
}

dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    def recyclerview_version = "1.0.0"
    implementation "androidx.recyclerview:recyclerview:$recyclerview_version"
    // For control over item selection of both touch and mouse driven selection
    implementation "androidx.recyclerview:recyclerview-selection:$recyclerview_version"
    implementation 'androidx.appcompat:appcompat:1.1.0'
    implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'androidx.test.ext:junit:1.1.1'
    androidTestImplementation 'com.android.support.test.espresso:espresso-core:3.2.0'
    implementation 'com.android.volley:volley:1.1.0'
    implementation 'dev.dworks.libs:volleyplus:0.1.4'
    implementation 'devlight.io:navigationtabbar:1.2.5'
    // implementation 'com.github.devlight.navigationtabbar:library:1.1.2'
    //compile "com.google.android.gms:play-services-location:11.6.0"
    implementation 'com.google.firebase:firebase-core:11.8.0'
    implementation 'com.google.firebase:firebase-messaging:11.8.0'
    implementation 'com.google.firebase:firebase-analytics:15.0.0'
    //implementation files('libs/volley.jar')
}

apply plugin: 'com.google.gms.google-services'

here is the error :

Duplicate class com.android.volley.BuildConfig found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.Cache found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.Cache$Entry found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.CacheDispatcher found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.CacheDispatcher$1 found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.DefaultRetryPolicy found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.ExecutorDelivery found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.ExecutorDelivery$1 found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class
com.android.volley.ExecutorDelivery$ResponseDeliveryRunnable found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.Network found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.NetworkDispatcher found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.NetworkResponse found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.Request found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.Request$1 found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.Request$Method found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.Request$Priority found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.RequestQueue found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.RequestQueue$1 found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.RequestQueue$RequestFilter found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.RequestQueue$RequestFinishedListener found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.Response found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.Response$ErrorListener found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.Response$Listener found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.ResponseDelivery found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.RetryPolicy found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.VolleyLog found in modules 
jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.VolleyLog$MarkerLog found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.VolleyLog$MarkerLog$Marker found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.toolbox.AndroidAuthenticator found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.toolbox.Authenticator found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.toolbox.BasicNetwork found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.toolbox.ByteArrayPool found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.toolbox.ByteArrayPool$1 found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.toolbox.HttpClientStack found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.toolbox.HttpClientStack$HttpPatch found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.toolbox.HttpHeaderParser found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.toolbox.HttpStack found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.toolbox.HurlStack found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.toolbox.HurlStack$UrlRewriter found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.toolbox.ImageLoader found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.toolbox.ImageLoader$1 found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.toolbox.ImageLoader$2 found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class 
com.android.volley.toolbox.ImageLoader$3 found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.toolbox.ImageLoader$4 found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.toolbox.ImageLoader$BatchedImageRequest found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.toolbox.ImageLoader$ImageContainer found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.toolbox.ImageLoader$ImageListener found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.toolbox.PoolingByteArrayOutputStream found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.toolbox.RequestFuture found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) Duplicate class com.android.volley.toolbox.Volley found in modules jetified-volleyplus-0.1.4-runtime.jar (dev.dworks.libs:volleyplus:0.1.4) and volley-1.1.0-runtime.jar (com.android.volley:volley:1.1.0) A: Seems you have two dependencies regarding volley that are colliding because both of them have same class: implementation 'com.android.volley:volley:1.1.0' implementation 'dev.dworks.libs:volleyplus:0.1.4' You will just need one as it says here: https://developer.android.com/training/volley implementation 'com.android.volley:volley:1.1.1' So, just remove implementation 'dev.dworks...'
[ "judaism.stackexchange", "0000088563.txt" ]
Q: Ayin or Aleph, blessing or curse? This site: https://www.torahmusings.com/2014/08/mispronouncing-hebrew-2/ quotes Rashi as saying: "By using an ayin sound rather than an alef in birkas kohanim, they change the blessing into a curse." What exact words are being changed by mispronunciation and what is the curse? A: Rashi says why this change turns the blessing into a curse pretty clearly in his comment on Megillah 24b. He says: מפני שקורין לאלפין עיינין ולעיינין אלפין. ואם היו עושין ברכת כהנים היו אומרים יאר יער ה׳ פניו ולשון קללה הוא כי יש פנים שיתפרשו לשון כעס כמו פני ילכו (שמות לג) את פני (ויקרא כ) ומתרגמינן ית רוגזי ומעי״ן עושין אלפי״ן ופוגמין תפלתן Because they read alef as ayin and ayin as alef. And if they perform Birkas Kohanim, they would say instead of יאר ה' פניו (may Hashem shine his countenance), יער ה' פניו (may Hashem arouse his face) and this is the language of a curse, for we see that the word פנים (face) can be conveyed as an expression of anger, as in: "My presence" (Shemos 33), "My face" (Vayikra 20), and Onkelos translates it as "My anger". Thus they turn alef into ayin and blemish their prayers. (Translation is mine)
[ "stackoverflow", "0018278010.txt" ]
Q: Java Memory usage - primitives Quoting the following from Algorithms, 4th edition: "For example, if you have 1GB of memory on your computer (1 billion bytes), you cannot fit more than about 32 million int values or 16 million double values in memory at one time."

int - 4 bytes
32 million x 4 = 128 million bytes

Help me understand why we can't fit more than about 32 million int values; the 128 million bytes above is only about an eighth of the overall memory of 1GB, or 1 billion bytes. A: That depends on how you organize your data. In Java, every object has an overhead; this is JVM dependent, but typically 8 or 16 bytes. So if you wrap every int value into an object (like Integer), then you may exceed 1GB. But if you allocate it as an int[] array, then it should fit into 1GB easily. And, this is not strictly related to the question, but reflecting on @Anony-Mousse's comment, there are JVMs for microcontrollers, and I am pretty sure that object size is below 8 bytes in those JVMs (though I didn't find exact data).
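A rough way to see the boxing overhead, sketched in Python rather than Java (an analogy; CPython's per-object cost stands in for the JVM's): a compact typed array pays 4 bytes per element, while a list of boxed integers pays per-object overhead on top of the references.

import sys
from array import array

n = 100_000
compact = array("i", range(n))   # contiguous 4-byte ints, like int[]
boxed = list(range(n))           # references to int objects, like Integer

print(sys.getsizeof(compact))    # about 400 KB of payload
print(sys.getsizeof(boxed) + sum(sys.getsizeof(x) for x in boxed))   # several times larger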
[ "stackoverflow", "0059166948.txt" ]
Q: How to stop form submit if validation fails in javascript Needing some help... I've researched solutions on Stack Overflow, but for some reason my HTML form with validation still submits when it should not.

<form class="submitForm" action="https://www.google.com" method="POST" id="form" target="_blank" onsubmit="return validateForm()">
  <span id="validationMessage" aria-live="polite"></span>
  <input class="nameInput" type="text" name="name" value="" id="name" placeholder="Name">
  <input class="urlInput" type="url" name="url" value="" id="url" placeholder="https://example.com">
  <button type="submit">go!</button>
</form>

I've purposely removed required from the input fields, as I want to write my own validation. But basically, the form should not submit if the text input and url input fields have empty values. But the form still submits! What have I done wrong? Here's my function to check if the URL is valid, or if its value is empty:

function validateForm(data) {
  console.log('validate form');
  var url = data.url;
  var pattern = /(ftp|http|https):\/\/(\w+:{0,1}\w*@)?(\S+)(:[0-9]+)?(\/|\/([\w#!:.?+=&%@!\-\/]))?/;
  if (pattern.test(url)) {
    console.log("Url is valid");
    validationMessage.innerHTML = '';
    return true;
  } else if (url.length === 0 || url === '') {
    console.log("Url is not valid!");
    validationMessage.innerHTML = 'URL format is not correct. Please check your url';
    return false;
  }
}

And here's my function to check if the text value field is empty:

function validateName(data) {
  // console.log('validate form');
  console.log('data.name', data.name.length);
  if (data.name.length > 1) {
    // validationMessage.innerHTML = 'Please input a bookmark name';
    console.log('there is a name!');
    return true;
  } else if (data.name === "" || data.name.length === 0) {
    // validationMessage.innerHTML = 'Please input a bookmark name';
    console.log('no name!');
    return false;
  }
}

They both console log the correct scenarios, but the form still submits... Updated: To answer the poster's question, validateName() gets called here:

// ADD A NEW BOOKMARK
form.addEventListener('submit', function(e){
  e.preventDefault();
  let data = {
    name: nameInput.value,
    url: urlInput.value
  };
  validateName(data);
  // validateForm(data);
  $.ajax({
    url: urlInput.value,
    dataType: 'jsonp',
    statusCode: {
      200: function() {
        console.log( "status code 200 returned" );
        bookMarksArray.push(data);
        console.log("Added bookmark #" + data);
        localStorage.setItem("bookMarksArray", JSON.stringify(bookMarksArray));
        turnLoadingStateOn(bookMarksArray);
      },
      404: function() {
        console.log( "status code 404 returned. cannot add bookmark" );
      }
    }
  });
});

A: Actually you need to return false from your submit handler if you don't want the form to submit

var namevalid = validateName(data);
if (!namevalid) return false;
[ "stackoverflow", "0045336628.txt" ]
Q: how to stack 3 fluid (responsive) columns without media queries I want to code a responsive fluid email template in which I could stack 3 columns without using media queries. I am able to use the following code to make them float and have them stack when the width is less than sum of two minimum widths of elements of table. But when the width is more than that, only the third column is stacked and the rest two are still seen inline. How can I stack them without using media queries? If it's at all possible. table { } tr{ background-color: lightblue; min-width: 160px; } td{ display:block; width:33%; background-color: green; margin-left:auto; margin-right: auto; text-align: center; padding:0px; float: left; min-width: 160px !important; } <table width="100%" bgcolor="green"> <tr> <center> <td>1</td> <td>2</td> <td>3</td> </center> </tr> </table> JsFiddle: https://jsfiddle.net/o8gov8oe/ Problem: Expected Solution: A: You can achieve this safely in every email client using a hybrid approach to reconfigure the layout for different screen sizes for email clients regardless of media query support. At its core, it uses max-width and min-width to impose baselines (allowing some movement) and imposes a fixed, wide width for Outlook who is shackled to desktop anyway. Once a mobile-friendly baseline is set, media queries can progressively enhance the email further in clients that support it, but is not required to make columns stack. Here's an example of a three column stacking with no media queries: <html> <body width="100%" bgcolor="#222222" style="margin: 0; mso-line-height-rule: exactly;"> <center style="width: 100%; background: #222222; text-align: left;"> <!-- Set the email width. Defined in two places: 1. max-width for all clients except Desktop Windows Outlook, allowing the email to squish on narrow but never go wider than 680px. 2. MSO tags for Desktop Windows Outlook enforce a 680px width. Note: The Fluid and Responsive templates have a different width (600px). The hybrid grid is more "fragile", and I've found that 680px is a good width. Change with caution. --> <div style="max-width: 680px; margin: auto;"> <!--[if mso]> <table role="presentation" cellspacing="0" cellpadding="0" border="0" width="680" align="center"> <tr> <td> <![endif]--> <!-- Email Header : BEGIN --> <table role="presentation" cellspacing="0" cellpadding="0" border="0" align="center" width="100%" style="max-width: 680px;"> <!-- 3 Even Columns : BEGIN --> <tr> <td bgcolor="#ffffff" align="center" height="100%" valign="top" width="100%" style="padding: 10px 0;"> <!--[if mso]> <table role="presentation" border="0" cellspacing="0" cellpadding="0" align="center" width="660"> <tr> <td align="center" valign="top" width="660"> <![endif]--> <table role="presentation" border="0" cellpadding="0" cellspacing="0" align="center" width="100%" style="max-width:660px;"> <tr> <td align="center" valign="top" style="font-size:0;"> <!--[if mso]> <table role="presentation" border="0" cellspacing="0" cellpadding="0" align="center" width="660"> <tr> <td align="left" valign="top" width="220"> <![endif]--> <div style="display:inline-block; margin: 0 -2px; max-width:33.33%; min-width:220px; vertical-align:top; width:100%;"> <table role="presentation" cellspacing="0" cellpadding="0" border="0" width="100%"> <tr> <td style="padding: 10px 10px;"> <p style="margin: 0; font-size: 15px;">Column 1 Maecenas sed ante pellentesque, posuere leo id, eleifend dolor. 
Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos.</p> </td> </tr> </table> </div> <!--[if mso]> </td> <td align="left" valign="top" width="220"> <![endif]--> <div style="display:inline-block; margin: 0 -2px; max-width:33.33%; min-width:220px; vertical-align:top; width:100%;"> <table role="presentation" cellspacing="0" cellpadding="0" border="0" width="100%"> <tr> <td style="padding: 10px 10px;"> <p style="margin: 0; font-size: 15px;">Column 2 Maecenas sed ante pellentesque, posuere leo id, eleifend dolor. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos.</p> </td> </tr> </table> </div> <!--[if mso]> </td> <td align="left" valign="top" width="220"> <![endif]--> <div style="display:inline-block; margin: 0 -2px; max-width:33.33%; min-width:220px; vertical-align:top; width:100%;"> <table role="presentation" cellspacing="0" cellpadding="0" border="0" width="100%"> <tr> <td style="padding: 10px 10px;"> <p style="margin: 0; font-size: 15px;">Column 3 Maecenas sed ante pellentesque, posuere leo id, eleifend dolor. Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos himenaeos.</p> </td> </tr> </table> </div> <!--[if mso]> </td> </tr> </table> <![endif]--> </td> </tr> </table> <!--[if mso]> </td> </tr> </table> <![endif]--> </td> </tr> <!-- 3 Even Columns : END --> </table> <!-- Email Footer : END --> <!--[if mso]> </td> </tr> </table> <![endif]--> </div> </center> </body> </html> You can also see a full example here. (You can also achieve this using Flexbox or CSS Grid, but support for that in email clients is spotty.)
[ "stackoverflow", "0049981992.txt" ]
Q: Scraping a website with a form I am trying to scrape data from this site: https://action.labour.org.uk/page/content/council-cuts-calculator I plan to loop through a list of postcodes and collect information on each of them. I have tried using the requests module like so:

import requests
from bs4 import BeautifulSoup

url = 'https://action.labour.org.uk/page/content/council-cuts-calculator'
payload = {'firstname': 'james',
           'email': 'myemailaddress',
           'zip': 'WS13 6QG',
           'custom_15452': 'no'}

response = requests.post(url, data=payload)
results_text = response.text
soup = BeautifulSoup(results_text, 'html.parser')
print(soup.get_text())

The code runs without an error but does not seem to pass the information to the form, or at least the printed output does not contain the same information as when I enter the same details manually. I suspect it might be because the page uses JavaScript rather than a plain request, but I don't know how to tell. Can anyone let me know what method to use to get the information I'm after? A sample result is below. Also, more generally, how can you tell whether a website form requires requests.get, requests.post, or some other method?

In LICHFIELD, your council will have £68 less to spend on your household by 2020 than they had in 2010. Under the Tories some of the most deprived areas in the country are hit the hardest, while Tory councils are given a better deal. On average, Tory councils will have £128 less to spend per household, while Labour councils are hit four times harder – losing £524.

A: It looks like when you first make the POST request, there is also another immediate GET request made to https://stats-microapi-production.herokuapp.com to fetch the data you are looking for. Turns out you could just make a GET request to https://stats-microapi-production.herokuapp.com/index.php?campaign=1&pc=WS136QG with the appropriate postcode without first having to make that POST request. Just for future reference, it is helpful to analyze the network packets your browser deals with using mitmproxy or other alternatives.
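A minimal sketch of that direct GET (untested against the live service; the endpoint and parameter names are taken from the answer above):

import requests

url = "https://stats-microapi-production.herokuapp.com/index.php"
params = {"campaign": 1, "pc": "WS136QG"}  # postcode with the space removed

response = requests.get(url, params=params)
response.raise_for_status()
print(response.text)  # inspect the body before deciding how to parse it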
[ "math.stackexchange", "0001084196.txt" ]
Q: Comparison of the Chebyshev centers and radii of a set and of its bounding box The Chebyshev center of a bounded set $Q$ having non-empty interior is defined in this question as the center of the minimal-radius ball enclosing the entire set $Q$. Let $B$ be the minimum-volume axis-aligned box containing the set $Q$, $r_B$ the radius of the minimal-radius ball enclosing $B$ and $r_Q$ the radius of the minimal-radius ball enclosing $Q$. My questions are the following: It is true that the Chebyshev center of $Q$ is always inside $B$? Given that (1) is true, is there any upper bound to $|r_B - r_Q|$? A: 1) is true. The reason is that the Chebyshev center $c$ of $Q$ is contained in its closed convex hull $K$ (the intersection of all closed halfspaces containing $Q$). Every closed convex set containing $Q$ also contains its closed convex hull. To justify $c\in K$, suppose this fails: then there is a hyperplane separating $c$ from $K$. Moving $c$ toward $K$ along the normal to this hyperplane brings $c$ strictly closer to every point of $K$ (by the Pythagorean theorem). This contradicts $c$ being the Chebyshev center of $Q$. 2) Any upper bound on $|r_B-r_Q|$ would have to scale with the size of the set, so it can't be universal. It is better to estimate $r_B/r_Q$, which is unitless. Of course, $r_Q\le r_B$ since $Q\subset B$. On the other hand, $B$ is contained in the cube circumscribed around any ball containing $Q$, which implies $r_B\le \sqrt{n}r_Q$. And this bound is optimal, because it's attained when $Q$ is a ball.
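A small numeric check of the attainment case in 2), sketched with numpy: when $Q$ is a ball of radius $r$, its bounding box is a cube of side $2r$, and the ball enclosing that cube has radius equal to the center-to-corner distance, $r\sqrt{n}$.

import numpy as np

n, r = 3, 2.0
corner = np.full(n, r)        # corner of the bounding cube, relative to the center
r_B = np.linalg.norm(corner)  # radius of the minimal ball enclosing the cube
print(r_B, r * np.sqrt(n))    # both print 3.4641..., i.e. r_B/r_Q attains sqrt(n)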
[ "stackoverflow", "0035795010.txt" ]
Q: Datatable: apply different formatStyle to each column I want to apply the formatStyle function to different columns in my table. I can easily apply the same style to all the columns, or to some subset of columns, e.g.

divBySum <- function(x) x/sum(x)

output$test <- DT::renderDataTable(
  datatable(mutate_each(mtcars, funs(divBySum)),
            options = list(searching = FALSE, paging = FALSE, dom = 't'),
            rownames = FALSE) %>%
    formatStyle(colnames(mtcars),
                background = styleColorBar(c(0, 1), 'lightgray'),
                backgroundSize = '98% 88%',
                backgroundRepeat = 'no-repeat',
                backgroundPosition = 'center') %>%
    formatPercentage(colnames(mtcars))
)

However I would like to apply a different formatStyle to each of the columns. For example, I would like to define the maximal length of the bar as styleColorBar(c(0, max(x)), 'lightgray'), where x is a column, or different colors for them. I would like to do this with some function that takes as input a vector of column names. Is there any nice, clever way to do this?

A: You could use mapply for this and loop through the columns to add whatever color you want. Here's an example:

data <- datatable(mtcars)

mapply(function(column, color){
  data <<- data %>%
    formatStyle(column,
                background = styleColorBar(c(0, max(mtcars[[column]])), color),
                backgroundSize = '98% 88%',
                backgroundRepeat = 'no-repeat',
                backgroundPosition = 'center')
}, colnames(mtcars), rep_len(c("green","red","yellow"), length.out = ncol(mtcars)))
[ "stackoverflow", "0051520442.txt" ]
Q: Getting error when running Postman Script via Newman I am trying to run the following Postman script via Newman to write the response to file:

//verify http response code
pm.test("Report Generated", function () {
    pm.response.to.have.status(200);
});

var fs = require('fs');
var outputFilename = 'C:/Users/archit.goyal/Downloads/spaceReport.csv';
fs.writeFileSync(outputFilename, pm.response.text());

The request gives a response but I get the following error when writing to file:

1. TypeError in test-script

┌─────────────────────────┬──────────┬──────────┐
│                         │ executed │   failed │
├─────────────────────────┼──────────┼──────────┤
│              iterations │        1 │        0 │
├─────────────────────────┼──────────┼──────────┤
│                requests │       20 │        0 │
├─────────────────────────┼──────────┼──────────┤
│            test-scripts │       20 │        1 │
├─────────────────────────┼──────────┼──────────┤
│      prerequest-scripts │        0 │        0 │
├─────────────────────────┼──────────┼──────────┤
│              assertions │        2 │        0 │
├─────────────────────────┴──────────┴──────────┤
│ total run duration: 1m 48.3s                  │
├───────────────────────────────────────────────┤
│ total data received: 1.24MB (approx)          │
├───────────────────────────────────────────────┤
│ average response time: 5.3s                   │
└───────────────────────────────────────────────┘

# failure detail

1. TypeError
   fs.writeFileSync is not a function
   at test-script inside "3i_BMS_Amortization_Schedule / GetReport"

Please help

A: Postman itself cannot execute scripts like that. To save the responses of all the API requests, you can create a NodeJS script which will call the API through Newman and then save the response to a local file. Here is an example -

var fs = require('fs'),
    newman = require('newman'),
    allResponse = [],
    outputFilename = 'C:/Users/archit.goyal/Downloads/spaceReport.csv';

newman.run({
    collection: '//your_collection_name.json',
    iterationCount: 1
})
.on('request', function (err, args) {
    if (!err) {
        //console.log(args); // --> args contain ALL the data newman provides to this script.
        var responseBody = args.response.stream,
            response = responseBody.toString();
        allResponse.push(JSON.parse(response));
    }
})
.on('done', function (err, summary) {
    fs.writeFileSync(outputFilename, "");
    for (var i = 0; i < allResponse.length; i++) {
        fs.appendFileSync(outputFilename, JSON.stringify(allResponse[i], null, 5));
    }
});

Note that the above code will save only the responses. Other data like the request or URL can be extracted in a similar manner. To run this, install newman in the same directory as the script, then run the script using -

node file_name.js
[ "stackoverflow", "0022764556.txt" ]
Q: Spring security OAuth2 authorization process Can anybody tell me what http GET or POST methods should I sequentially call in order to authorize to my apache cxf web services and get access to resources? I tried to call: http://localhost:8080/oauth/token?client_id=client1&client_secret=client1&grant_type=password&username=client1&password=client1 and all I can get is token response: {"access_token":"7186f8b2-9bae-48b6-90c2-033a4476c0fc","token_type":"bearer","refresh_token":"d7fe8cda-812b-4b3e-9ce7-b15067e001e4","expires_in":298653} but what is the next step after I get this token? How can I authenticate the user and get access to resource in url /resources/MyResource/getMyInfo which requires user with role ROLE_USER ? Thanks. I have the following servlet config: <http pattern="/oauth/token" create-session="stateless" authentication-manager-ref="authenticationManager" xmlns="http://www.springframework.org/schema/security"> <intercept-url pattern="/oauth/token" access="IS_AUTHENTICATED_FULLY"/> <anonymous enabled="false"/> <http-basic entry-point-ref="clientAuthenticationEntryPoint"/> <custom-filter ref="clientCredentialsTokenEndpointFilter" before="BASIC_AUTH_FILTER"/> </http> <http pattern="/resources/**" create-session="never" entry-point-ref="oauthAuthenticationEntryPoint" xmlns="http://www.springframework.org/schema/security"> <anonymous enabled="false"/> <intercept-url pattern="/resources/MyResource/getMyInfo" access="ROLE_USER" method="GET"/> <custom-filter ref="resourceServerFilter" before="PRE_AUTH_FILTER"/> <access-denied-handler ref="oauthAccessDeniedHandler"/> </http> <http pattern="/logout" create-session="never" entry-point-ref="oauthAuthenticationEntryPoint" xmlns="http://www.springframework.org/schema/security"> <anonymous enabled="false"/> <intercept-url pattern="/logout" method="GET"/> <sec:logout invalidate-session="true" logout-url="/logout" success-handler-ref="logoutSuccessHandler"/> <custom-filter ref="resourceServerFilter" before="PRE_AUTH_FILTER"/> <access-denied-handler ref="oauthAccessDeniedHandler"/> </http> <bean id="logoutSuccessHandler" class="demo.oauth2.authentication.security.LogoutImpl"> <property name="tokenstore" ref="tokenStore"/> </bean> <bean id="oauthAuthenticationEntryPoint" class="org.springframework.security.oauth2.provider.error.OAuth2AuthenticationEntryPoint"> </bean> <bean id="clientAuthenticationEntryPoint" class="org.springframework.security.oauth2.provider.error.OAuth2AuthenticationEntryPoint"> <property name="realmName" value="springsec/client"/> <property name="typeName" value="Basic"/> </bean> <bean id="oauthAccessDeniedHandler" class="org.springframework.security.oauth2.provider.error.OAuth2AccessDeniedHandler"> </bean> <bean id="clientCredentialsTokenEndpointFilter" class="org.springframework.security.oauth2.provider.client.ClientCredentialsTokenEndpointFilter"> <property name="authenticationManager" ref="authenticationManager"/> </bean> <authentication-manager alias="authenticationManager" xmlns="http://www.springframework.org/schema/security"> <authentication-provider user-service-ref="clientDetailsUserService"/> </authentication-manager> <bean id="clientDetailsUserService" class="org.springframework.security.oauth2.provider.client.ClientDetailsUserDetailsService"> <constructor-arg ref="clientDetails"/> </bean> <bean id="clientDetails" class="demo.oauth2.authentication.security.ClientDetailsServiceImpl"/> <authentication-manager id="userAuthenticationManager" xmlns="http://www.springframework.org/schema/security"> 
<authentication-provider ref="customUserAuthenticationProvider"> </authentication-provider> </authentication-manager> <bean id="customUserAuthenticationProvider" class="demo.oauth2.authentication.security.CustomUserAuthenticationProvider"> </bean> <oauth:authorization-server user-approval-handler-ref="userApprovalHandler" client-details-service-ref="clientDetails" token-services-ref="tokenServices"> <oauth:authorization-code/> <oauth:implicit/> <oauth:refresh-token/> <oauth:client-credentials /> <oauth:password authentication-manager-ref="authenticationManager"/> </oauth:authorization-server> <oauth:resource-server id="resourceServerFilter" resource-id="springsec" token-services-ref="tokenServices"/> <bean id="tokenStore" class="org.springframework.security.oauth2.provider.token.InMemoryTokenStore"/> <bean id="tokenServices" class="org.springframework.security.oauth2.provider.token.DefaultTokenServices"> <property name="tokenStore" ref="tokenStore"/> <property name="supportRefreshToken" value="true"/> <property name="accessTokenValiditySeconds" value="300000"/> <property name="clientDetailsService" ref="clientDetails"/> </bean> <bean id="userApprovalHandler" class="org.springframework.security.oauth2.provider.approval.TokenServicesUserApprovalHandler"> <property name="tokenServices" ref="tokenServices" /> </bean> <bean id="MyResource" class="demo.oauth2.authentication.resources.MyResource"/> and web.xml: <web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd" id="WebApp_ID" version="2.5"> <display-name>Spring Secure REST</display-name> <context-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/spring-servlet.xml</param-value> </context-param> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> <listener> <listener-class>org.springframework.web.context.request.RequestContextListener</listener-class> </listener> <filter> <filter-name>springSecurityFilterChain</filter-name> <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class> <init-param> <param-name>contextAttribute</param-name> <param-value>org.springframework.web.servlet.FrameworkServlet.CONTEXT.spring</param-value> </init-param> </filter> <filter-mapping> <filter-name>springSecurityFilterChain</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> <servlet> <servlet-name>REST Service</servlet-name> <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class> <init-param> <param-name>com.sun.jersey.config.property.packages</param-name> <param-value>demo.oauth2.authentication.resources</param-value> </init-param> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>REST Service</servlet-name> <url-pattern>/resources/*</url-pattern> </servlet-mapping> <servlet> <servlet-name>spring</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>spring</servlet-name> <url-pattern>/</url-pattern> </servlet-mapping> </web-app> UPDATED: Found working sample here http://software.aurorasolutions.org/how-to-oauth-2-0-with-spring-security-2/ maybe will be useful for those who have similiar problem A: You first need to create an OAuth2AccessToken which you can 
then use to build an OAuth2RestTemplate which can then be used to perform authenticated GET and POST calls. Here is an example of how you might set up an OAuth2RestTemplate:

ResourceOwnerPasswordAccessTokenProvider provider = new ResourceOwnerPasswordAccessTokenProvider();

ResourceOwnerPasswordResourceDetails resource = new ResourceOwnerPasswordResourceDetails();
resource.setClientAuthenticationScheme(AuthenticationScheme.form);
resource.setAccessTokenUri("accessTokenURI");
resource.setClientId("clientId");
resource.setGrantType("password");
resource.setClientSecret("clientSecret");
resource.setUsername("userName");
resource.setPassword("password");

OAuth2AccessToken accessToken = provider.obtainAccessToken(resource, new DefaultAccessTokenRequest());

OAuth2RestTemplate restTemplate = new OAuth2RestTemplate(resource, new DefaultOAuth2ClientContext(accessToken));
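For the raw HTTP view the question asks about, a hedged sketch in Python (the host, paths and credentials come from the question's own example and may differ in another setup; whether the client credentials go in form fields or in HTTP Basic auth depends on how the token endpoint filter is configured):

import requests

# Step 1: exchange the resource owner's credentials for a token (password grant).
token_resp = requests.post(
    "http://localhost:8080/oauth/token",
    data={
        "client_id": "client1",
        "client_secret": "client1",
        "grant_type": "password",
        "username": "client1",
        "password": "client1",
    },
)
access_token = token_resp.json()["access_token"]

# Step 2: call the protected resource with the bearer token.
resource_resp = requests.get(
    "http://localhost:8080/resources/MyResource/getMyInfo",
    headers={"Authorization": "Bearer " + access_token},
)
print(resource_resp.status_code, resource_resp.text)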
[ "stackoverflow", "0051920086.txt" ]
Q: How to copy array elements to ReadOnlyCollection? I need to create a ReadOnlyCollection from the elements of an array, but it seems like ReadOnlyCollection elements can only be defined in the declaration of the collection. Is there any other way than listing each element of the array in the declaration, as in the following sample?

[byte[]]$arr = 10,20,30
[System.Collections.ObjectModel.ReadOnlyCollection[byte]]$readOnly = $arr[0],$arr[1],$arr[2]

Thanks

A: Pass the array to the constructor instead:

$readOnly = New-Object 'System.Collections.ObjectModel.ReadOnlyCollection[byte]' -ArgumentList @(,$arr)

or (PowerShell 5.0 and up):

$readOnly = [System.Collections.ObjectModel.ReadOnlyCollection[byte]]::new($arr)

Now, your question title specifically says copy array elements - beware that while you won't be able to modify $readOnly, its contents will still reflect changes to the array that it's wrapping:

PS C:\> $arr[0] = 100
PS C:\> $arr[0]
100
PS C:\> $readOnly[0]
100

If you need a completely separate read-only collection, copy the array to another array first and then overwrite the variable reference with the read-only collection:

$readOnly = [byte[]]::new($arr.Count)
$arr.CopyTo($readOnly, 0)
$readOnly = [System.Collections.ObjectModel.ReadOnlyCollection[byte]]::new($readOnly)

Now you can modify $arr without affecting $readOnly:

PS C:\> $arr[0] = 100
PS C:\> $arr[0]
100
PS C:\> $readOnly[0]
10
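For comparison, a Python analogy (not PowerShell): bytes() makes an immutable, independent copy of a mutable bytearray in one step, matching the copy-then-wrap pattern above.

arr = bytearray([10, 20, 30])
read_only = bytes(arr)        # copies, so later writes to arr don't show through

arr[0] = 100
print(arr[0], read_only[0])   # 100 10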
[ "stackoverflow", "0023900339.txt" ]
Q: How many threads my current machine can handle optimally? Original Question: Is there a heuristic or algorithm to programmatically find out how many threads I can open in order to obtain maximum throughput of an async operation such as writing on a socket? Further explained question: I'm assisting an algorithms professor in my college and he posted an assignment where the students are supposed to learn the basics about distributed computing, in his words: Sockets... The assignment is to create a "server" that listens on a given port, receives a string, performs a simple operation on it (I think it's supposed to count its length) and returns Ok or Rejected... The "server" must be able to handle a minimum of 60k submissions per second... My job is to create a little app to simulate 60K clients... I've managed to automate the distribution of servers and the clients across a university lab in order to test 10 servers at a time (network infrastructure became the bottleneck), the problem here is: a lab is homogeneous, 2 labs are not! If not tuned correctly the "client" usually can't simulate 60k users and report back to me, especially when the lab is an older one, AND I would like to provide the client to the students so they could test their own "server" more reliably... The ability to determine the optimal number of threads to spawn has now become vital! PS: Fire-and-Forget is not an option because the client also tests if the returned value is correct, e.g. if I send "Short sentence" I know the result will be "Rejected" and I have to check it... A class has 60 students... and there's the morning class and the night class, so each week there will be 120 "servers" to test, because as the semester moves along the "server" part will have to do more stuff; the client won't (it will always only send a string and receive "Ok"/"Rejected")... So there's enough work to be done to justify all this work I'm doing... Edit1 - Changed from Console to an async operation - I don't want the maximum number of threads, I want the number that will provide maximum throughput! I imagine that on a 6 core PC the number will be higher than on a 2 core PC Edit2 - I'm building a simple console app to perform some tests on another app... one of those is a specific kind of load test (RUDY attack) where I have to simulate a lot of clients performing a specific attack... The thing is that there's a curve between throughput and number of threads, where after a given point, opening more threads actually decreases my throughput... Edit3: Added more context to the initial question... A: The Windows console isn't really meant to be used by more than one thread, otherwise you get interleaved writes. So the thread count for maximum console output would be one. It's when you're doing computation that multiple threads make sense. Then, it's rarely useful to use more than one thread per logical processor - or one background thread plus one UI thread for UI apps on a single-core processor.
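One empirical approach, sketched in Python (a rough harness, not the asker's client app): measure throughput for a range of pool sizes and keep the best, since the optimum depends on the machine and on how much of each task is I/O wait versus computation.

import time
from concurrent.futures import ThreadPoolExecutor

def task(_):
    time.sleep(0.01)  # stand-in for one socket round-trip

def throughput(workers, jobs=500):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(task, range(jobs)))
    return jobs / (time.perf_counter() - start)

for w in (1, 2, 4, 8, 16, 32, 64, 128):
    print(w, round(throughput(w)))  # throughput rises, then flattens or dips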
[ "stackoverflow", "0006378020.txt" ]
Q: Smooth way to set all other togglebuttons to unchecked when one is checked I specifically didn't want to use a radiogroup. I know, I know... a radiogroup is perfect when you need exclusivity like this. But given that I'm working with togglebuttons, how can I disable (setChecked(false)) all (3) other toggle buttons when one of them is checked?

A: Here is my code for the group of toggle buttons. You need to put your buttons in the layout where you want them and then use a ToggleButtonsGroup object to put them together in a single group.

private class ToggleButtonsGroup implements OnCheckedChangeListener {

    private ArrayList<ToggleButton> mButtons;

    public ToggleButtonsGroup() {
        mButtons = new ArrayList<ToggleButton>();
    }

    public void addButton(ToggleButton btn) {
        btn.setOnCheckedChangeListener(this);
        mButtons.add(btn);
    }

    public ToggleButton getSelectedButton() {
        for(ToggleButton b : mButtons) {
            if(b.isChecked()) return b;
        }
        return null;
    }

    @Override
    public void onCheckedChanged(CompoundButton buttonView, boolean isChecked) {
        if(isChecked) {
            uncheckOtherButtons(buttonView.getId());
            mDataChanged = true;
        } else if (!anyButtonChecked()){
            buttonView.setChecked(true);
        }
    }

    private boolean anyButtonChecked() {
        for(ToggleButton b : mButtons) {
            if(b.isChecked()) return true;
        }
        return false;
    }

    private void uncheckOtherButtons(int current_button_id) {
        for(ToggleButton b : mButtons) {
            if(b.getId() != current_button_id)
                b.setChecked(false);
        }
    }
}

A: I think you'll need to implement your own radiogroup logic for that. It will need to register itself as the OnCheckedChangeListener for each button it manages. I wish the API wasn't designed with RadioGroup as a LinearLayout. It would have been much better to separate the radio group management from layout.
[ "stackoverflow", "0021817053.txt" ]
Q: Is there a way to make a "Not a number" if statement in Python? Forgive me if this is a stupid question, but I searched everywhere and couldn't find an answer due to the wording of the question. Is there a way to have a "Not a number" value in an if statement in Python? Say for example you have a menu like this:

For free examples - Press 1
For worse free examples - Press 2

And I wanted to write an if statement, or elif statement, that says something along the lines of:

elif menu_Choice != #:
    print("This menu only accepts numerical input. Please try again.\n")
    menu_Choice = input("\n")

Can I do so? If so, how?

A: Why not say something like:

if menu_choice == "1":
    do this
elif menu_choice == "2":
    do this
else:
    print("Invalid input...!")

You could also do:

menu_choice = ""
while menu_choice not in ["1", "2"]:  # not ideal with lots of choices
    print("This menu only accepts numerical input. Please try again.\n")
    menu_choice = input("\n")

Then go to your if statements

EDIT: with regards to your comment, you could do:

# you could make range as big as you like..
choices = [str(i) for i in range(1, 11)]
# ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10']

while menu_choice not in choices:
    print("That input wasn't valid. Please try again.\n")
    menu_choice = input("\n")

I made the list choices with a list comprehension

A: There is the .isdigit() string method.

elif not menu_Choice.isdigit():
    print("This menu only accepts numerical input. Please try again.\n")
    menu_Choice = input("\n")

Keep in mind this doesn't check if it's in the right range, just if it's numeric. But by the time you've gotten this far, maybe you could just use else?
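Putting the two answers together, a small sketch (the prompt and the option set are illustrative) that keeps asking until the input is both numeric and on the menu:

valid_choices = {"1", "2"}

menu_choice = input("Press 1 or 2: ")
while menu_choice not in valid_choices:
    if menu_choice.isdigit():
        print("That number isn't one of the options. Please try again.")
    else:
        print("This menu only accepts numerical input. Please try again.")
    menu_choice = input("\n")
print("You chose", menu_choice)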
[ "stackoverflow", "0021871344.txt" ]
Q: Android app restarts on orientation change My app contains fragments, and it's mainly a fragment application. I have only 3 activities, others are all fragments. But from any point of the app when I rotate the phone in landscape mode or portrait the application restarts and continues from the Start Point of the app, from the beginning. I put the android:configChanges="orientation|keyboardHidden|keyboard" in the Manifest, but it seems it has no effect at all. I'm populating the fragments only with one web view. But I also change the actionbar color. That's all I do in the app. It's a static app, there is no user input. Here's a sample code of how my fragments look like: public class CLASSNAME extends Fragment { @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); this.setRetainInstance(true); } @Override public View onCreateView(final LayoutInflater inflater, final ViewGroup container, Bundle savedInstanceState) { View view = inflater.inflate(R.layout.LAYOUT, container, false); ActionBar bar = getActivity().getActionBar(); bar.setBackgroundDrawable(new ColorDrawable(getResources().getColor(R.color.COLOR))); bar.setTitle("TITLE"); final WebView wFirst = (WebView) view.findViewById(R.id.WEBVIEWID); wFirst.getSettings().setJavaScriptEnabled(true); wFirst.setBackgroundColor(0x00000000); wFirst.loadDataWithBaseURL(LOADING DATA); wFirst.setWebViewClient(new WebViewClient() {SETTING UP WEBVIEW CLIENT}); return view; } } I read lots of posts that got answered with Save Instance State and those other methods, but in my case I have no idea what to save and how to keep my activity rolling even if the orientation is changed. This is the MainActivity.class code: public class MainActivity extends Activity implements NavigationDrawerFragment.NavigationDrawerCallbacks { /** * Fragment managing the behaviors, interactions and presentation of the navigation drawer. */ private NavigationDrawerFragment mNavigationDrawerFragment; private static CharSequence mTitle; public static Context context; public static Context getContext() { return context; } @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); context = getApplicationContext(); mNavigationDrawerFragment = (NavigationDrawerFragment) getFragmentManager().findFragmentById(R.id.navigation_drawer); mTitle = getTitle(); // Set up the drawer. 
mNavigationDrawerFragment.setUp( R.id.navigation_drawer, (DrawerLayout) findViewById(R.id.drawer_layout)); } @Override protected void onDestroy() { super.onDestroy(); android.os.Process.killProcess(android.os.Process.myPid()); super.onDestroy(); } @Override public void onBackPressed() { super.onBackPressed(); finish(); } @Override public void onNavigationDrawerItemSelected(int position) { // update the main content by replacing fragments FragmentManager fragmentManager = getFragmentManager(); switch (position) { case 1: fragmentManager.beginTransaction() .replace(R.id.container, new AlgebraicGraphs()) .commit(); break; case 2: fragmentManager.beginTransaction() .replace(R.id.container, new Logarithms()) .commit(); break; case 3: fragmentManager.beginTransaction() .replace(R.id.container, new Polynomials()) .commit(); break; case 4: fragmentManager.beginTransaction() .replace(R.id.container, new Powers()) .commit(); break; case 6: fragmentManager.beginTransaction() .replace(R.id.container, new AreaFormulas()) .commit(); break; case 7: fragmentManager.beginTransaction() .replace(R.id.container, new SurfaceAreaFormulas()) .commit(); break; case 8: fragmentManager.beginTransaction() .replace(R.id.container, new PerimeterFormulas()) .commit(); break; case 9: fragmentManager.beginTransaction() .replace(R.id.container, new VolumeFormulas()) .commit(); break; case 11: fragmentManager.beginTransaction() .replace(R.id.container, new TrigonometryGraphsFormulas()) .commit(); break; case 12: fragmentManager.beginTransaction() .replace(R.id.container, new HyperbolicIdentities()) .commit(); break; case 13: fragmentManager.beginTransaction() .replace(R.id.container, new TrigonometricIdentities()) .commit(); break; case 16: fragmentManager.beginTransaction() .replace(R.id.container, new IntegralIdentities()) .commit(); break; case 17: fragmentManager.beginTransaction() .replace(R.id.container, new IntegralSpecialFunctions()) .commit(); break; case 18: fragmentManager.beginTransaction() .replace(R.id.container, new TableOfIntegrals()) .commit(); break; case 20: fragmentManager.beginTransaction() .replace(R.id.container, new DerivativeIdentities()) .commit(); break; case 21: fragmentManager.beginTransaction() .replace(R.id.container, new TableOfDerivatives()) .commit(); break; default: fragmentManager.beginTransaction() .replace(R.id.container, new DefaultLayout()) .commit(); break; } } @Override public boolean onCreateOptionsMenu(Menu menu) { if (!mNavigationDrawerFragment.isDrawerOpen()) { // Only show items in the action bar relevant to this screen // if the drawer is not showing. Otherwise, let the drawer // decide what to show in the action bar. getMenuInflater().inflate(R.menu.main, menu); return true; } return super.onCreateOptionsMenu(menu); } @Override public boolean onOptionsItemSelected(MenuItem item) { // Handle action bar item clicks here. The action bar will // automatically handle clicks on the Home/Up button, so long // as you specify a parent activity in AndroidManifest.xml. int id = item.getItemId(); if (id == R.id.action_about) { About about = new About(); about.show(getFragmentManager(), "about"); } return false; } /** * A placeholder fragment containing a simple view. */ public static class PlaceholderFragment extends Fragment { /** * The fragment argument representing the section number for this * fragment. */ private static final String ARG_SECTION_NUMBER = "section_number"; /** * Returns a new instance of this fragment for the given section * number. 
*/ public static PlaceholderFragment newInstance(int sectionNumber) { PlaceholderFragment fragment = new PlaceholderFragment(); Bundle args = new Bundle(); args.putInt(ARG_SECTION_NUMBER, sectionNumber); fragment.setArguments(args); return fragment; } @Override public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) { View rootView = inflater.inflate(R.layout.fragment_main, container, false); return rootView; } } } This is the log: 02-19 06:41:24.849 9788-9788/l.pocketformulas E/AndroidRuntime﹕ FATAL EXCEPTION: main java.lang.IllegalStateException: Fragment Logarithms{4132f5d0} not attached to Activity at android.app.Fragment.getResources(Fragment.java:828) at l.pocketformulas.Logarithms$1.onPageFinished(Logarithms.java:71) at android.webkit.CallbackProxy.handleMessage(CallbackProxy.java:444) at android.os.Handler.dispatchMessage(Handler.java:99) at android.os.Looper.loop(Looper.java:155) at android.app.ActivityThread.main(ActivityThread.java:5520) at java.lang.reflect.Method.invokeNative(Native Method) at java.lang.reflect.Method.invoke(Method.java:511) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1029) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:796) at dalvik.system.NativeStart.main(Native Method) A: Use this in your AndroidManifest.xml <activity android:name="MyActivity" android:configChanges="orientation|keyboard|keyboardHidden" android:screenOrientation="sensor" /> Android restarts the activities whenever the orientation changes by default. You will need to save your data/state by calling onSaveInstanceState() before Android destroys the activities. Have a look here: Handling Runtime Changes You could prevent this by adding android:configChanges="orientation" to your activity in the AndroidManifest file. Updates: When a config change occurs the old Fragment isn't destroyed -- it adds itself back to the Activity when it's recreated. You can stop errors occurring by using the same Fragment rather than recreating a new one. Simply add this code: if(savedInstanceState == null) { mFragmentManager = getFragmentManager(); // **update** FragmentTransaction ft = mFragmentManager.beginTransaction(); MyFragment fragment = new MyFragment(); ft.add(R.id.container, fragment); ft.commit(); } Be warned though: problems will occur if you try and access Activity Views from inside the Fragment as the lifecycles will subtly change. (Getting Views from a parent Activity from a Fragment isn't easy).
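A minimal sketch of the save/restore pattern mentioned above might look like this (the score field is a made-up example, not taken from the question):
// Hypothetical example: preserving a simple value across rotation
private int score;  // some state worth keeping (assumed)

@Override
protected void onSaveInstanceState(Bundle outState) {
    super.onSaveInstanceState(outState);
    outState.putInt("score", score);  // stash it before the Activity is destroyed
}

// back in onCreate, after setContentView(...):
if (savedInstanceState != null) {
    score = savedInstanceState.getInt("score");  // restore after rotation
}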
[ "stackoverflow", "0059158552.txt" ]
Q: AngularJS $rootScope.$on alternative in context of migration to Angular2 Our AngularJS project has started its long way to modern Angular. The ngMigration util recommends that I remove all the $rootScope dependencies because Angular doesn't contain a concept similar to $rootScope. It is pretty simple in some cases, but I don't know what to do with event subscription mechanisms. For example, I have some kind of Idle watchdog: angular .module('myModule') //... .run(run) //... function run($rootScope, $transitions, Idle) { $transitions.onSuccess({}, function(transition) { //... Idle.watch(); // starts watching for idleness }); $rootScope.$on('IdleStart', function() { //... }); $rootScope.$on('IdleTimeout', function() { logout(); }); } On which object, instead of $rootScope, do I have to call the $on function if I want to get rid of $rootScope? UPD The question was not about "how to migrate to the Angular2 event system". It was about how to remove the $rootScope dependencies but keep an event system. Well, it seems to be impossible. A: I don't know what to do with event subscription mechanisms. Angular 2+ frameworks replace the $scope/$rootScope event bus with observables. From the Docs: Transmitting data between components Angular provides an EventEmitter class that is used when publishing values from a component. EventEmitter extends RxJS Subject, adding an emit() method so it can send arbitrary values. When you call emit(), it passes the emitted value to the next() method of any subscribed observer. A good example of usage can be found in the EventEmitter documentation. For more information, see Angular Developer Guide - Observables in Angular
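For illustration, a shared service built on an RxJS Subject can stand in for the $rootScope event bus. This sketch is hypothetical: the service and event names are invented, and it assumes Angular 6+ style providedIn registration:
// Hypothetical event-bus service replacing $rootScope.$on / $broadcast
import { Injectable } from '@angular/core';
import { Subject } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class IdleEventsService {
  private idleTimeoutSubject = new Subject<void>();
  idleTimeout$ = this.idleTimeoutSubject.asObservable();

  emitIdleTimeout(): void {
    this.idleTimeoutSubject.next(); // analogous to $rootScope.$broadcast('IdleTimeout')
  }
}

// A consumer subscribes instead of calling $rootScope.$on('IdleTimeout', ...):
// this.idleEvents.idleTimeout$.subscribe(() => this.logout());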
[ "electronics.stackexchange", "0000201803.txt" ]
Q: Is it sensible to always use larger diameter conductors for carrying smaller signals? This question as originally written sounds a little bit insane: it was originally asked to me by a colleague as a joke. I am an experimental NMR physicist. I frequently want to perform physical experiments which ultimately boil down to measuring small AC voltages (~µV) at about 100-300 MHz, and draw the smallest current possible. We do this with resonant cavities and impedance-matched (50 Ω) coaxial conductors. Because we sometimes want to blast our samples with a kW of RF, these conductors are often quite "beefy" -- 10 mm diameter coax with high quality N-type connectors and a low insertion loss at the frequency of interest. However, I think this question is of interest, for the reasons I'll outline below. The DC resistance of modern coax conductor assemblies is frequently measured in ~1 Ω/km, and can be neglected for the 2 m of cable I typically use. At 300 MHz, however, the cable has a skin depth given by $$ \delta=\sqrt{\frac{2\rho}{\omega\mu}} $$ of about four microns. If one assumes that the centre of my coax is a solid wire (and therefore neglects proximity effects), the total AC resistance is effectively $$ R_\text{AC}\approx\frac{L\rho}{\pi D\delta}, $$ where D is the total diameter of the cable. For my system, this is about 0.2 Ω. However, holding everything else constant, this naïve approximation implies that your AC losses scale as 1/D, which would tend to imply that one would want conductors as large as possible. However, the above discussion completely neglects noise. I understand that there are at least three main sources of noise I should consider: (1) thermal (Johnson-Nyquist) noise, induced in the conductor itself and in the matching capacitors in my network, (2) induced noise arising from RF radiation elsewhere in the universe, and (3) shot noise and 1/f noise arising from fundamental sources. I am not sure how the interaction of these three sources (and any I may have missed!) will change the conclusion reached above. In particular, the expression for the expected Johnson noise voltage, $$ v_n=\sqrt{4 k_B T R \Delta f}, $$ is essentially independent of the mass of the conductor, which I naïvely find rather odd -- one may expect that the larger thermal mass of a real material would provide more opportunity for (at least transiently) induced noise currents. Additionally, everything I work with is RF shielded, but I can't help but think that the shielding (and the rest of the room) will radiate as a black body at 300 K...and therefore emit some RF that it is otherwise designed to stop. At some point, my gut feeling is that these noise processes would conspire to make any increase in the diameter of the conductor used pointless, or downright deleterious. Naïvely, I think that this has clearly got to be true, or labs would be filled with absolutely huge cables to be used with sensitive experiments. Am I right? What is the optimum coaxial conductor diameter to use when carrying information consisting of a potential difference of some small magnitude v at an AC frequency f? Is everything so dominated by the limitations of the (GaAs FET) preamplifier that this question is entirely pointless? A: You're substantially correct on everything you've mentioned. Bigger cable has lower losses. Low loss is important in two areas. 1) Noise The attenuation of a feeder is what adds Johnson noise corresponding to its temperature onto the signal. 
A feeder of near zero length has near zero attenuation and so near zero noise figure. Up to a meter or several (depending on frequency), the noise figure of a typical cable tends to be dominated by the noise figure of the input amplifier you are using, even for cables of pencil diameter (you can get really thin cables, sub-mm even, and in these you do have to worry about meter lengths). To get signals down off your roof into the lab, any feasible cable will be so lossy, even unusually thick ones, that the solution is almost always an LNA on the roof, straight after the antenna. That's why you tend not to see really fat cables in labs: they're not needed for short hops, and they're not sufficient for long drags. 2) High power handling In a transmitter station, you tend to have the amplifier in the building, and the antenna 'out there' somewhere. Putting the amplifier 'out there' as well is usually not an option, so here you do have fat cables, as fat as possible given that they have to remain TEM, without moding. That means <3.5mm for 26GHz, <350mm for 260MHz etc. The impedance of the cable also matters, as well as the size. Have a look at this cable manufacturer's tutorial on why we have different cable impedances, so 75\$\Omega\$ for lowest loss, and 50\$\Omega\$ as a compromise that has settled itself in as a standard. A: For most folks posting answers on this particular stack, the answer to optimum cable size generally has a lot to do with economics, service life, ease of use and such. Each individual problem has its own set of defining parameters, which in turn will be used to create a specification that will be met or exceeded. This is an important step to take, because premature optimization is a real problem. I can absolutely guarantee several things about electronic design that are always true. Larger diameter cables experience less heat waste due to improved conductivity, higher voltages allow more power to be transmitted per unit current, and larger batteries have more capacity. But the solution must actually fit the problem, so frequently you'll find yourself using the specification to choose just exactly what is acceptable for the particular problem you are having at the moment. You have demonstrated a more than adequate understanding of the issues at hand, and I humbly submit that you are likely better suited to the details than I am at the moment. You also seem to be engaged in research, rather than design. That being the case, I would offer this advice - having a firm understanding of the noise terms and how they are affected by increasing temperatures over time, decide on a firm, non-zero value of Johnson noise that is currently acceptable for your work, and design around that as a specification. Set conductor sizes and types, and if necessary consider active cooling (provided, of course, that it doesn't interfere with or invalidate your research).
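As a quick sanity check on the numbers in the question, the Johnson noise formula can be evaluated directly; this sketch is only illustrative, and the 1 MHz detection bandwidth is an assumption:
# Rough evaluation of v_n = sqrt(4 k_B T R df) for the cable resistance above
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K
R = 0.2             # AC resistance from the question, ohms
df = 1e6            # assumed 1 MHz detection bandwidth

v_n = math.sqrt(4 * k_B * T * R * df)
print(f"Johnson noise ~ {v_n * 1e9:.0f} nV rms")  # ~58 nV, below the ~uV signal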
[ "stackoverflow", "0034904699.txt" ]
Q: Twig (Timber) + Wordpress - showing categories of a post I am imagining support for Timber is pretty thin, but perhaps my issue may be more twig related, or even just best php practices? I am using ACF to list a group of posts on a page. As well as the usual details (title, content etc...) I want to display the categories that are assigned to the posts. TIMBER CONTROLLER (jobs.php) $context = Timber::get_context(); $post = new TimberPost(); $context['post'] = $post; $context['categories'] = Timber::get_terms('category', array('parent' => 0)); $args = array( 'numberposts' => -1, 'post_type' => 'jobs' ); $context['job'] = Timber::get_posts($args); Timber::render( 'page-templates/job.twig', $context ); TWIG FILE <section id="job-feed"> <h3>{{ term.name }}</h3> {% for post in job %} <div class="job mix"> <h2>{{ post.title }}</h2> <p>{{ post.content }}</p> <p>{{ function('the_category', '') }}</p> </div> {% endfor %} </section> You will see I've tried reverting to using a WP function as I cannot find a way to do this in Timber, or any other way. A: I assume you want the categories assigned to the particular job? In that case... <section id="job-feed"> {% for post in job %} <div class="job mix"> <h2>{{ post.title }}</h2> <p>{{ post.content }}</p> <p>{{ post.terms('category') | join(', ') }}</p> </div> {% endfor %} </section> What's going on? With {{ post.terms('category') }} you're getting all the terms from the category taxonomy (attached to the post) returned to you as an array. Then Twig's join filter: http://twig.sensiolabs.org/doc/filters/join.html ... will convert those to a string with commas. The result should be: <p>Technical, On-Site, Design, Leadership</p>
[ "unix.stackexchange", "0000015055.txt" ]
Q: Grub2 Error Loading Kernel I have been trying to start a small home server, using Ubuntu 10.04 Server edition. The installation process finished, and I got an error from Grub saying that it was "out of disk". After a bit of debugging, I created and ran Grub from a CD, but the best I could do was get to a Grub shell, where using the boot command gave the error message error: no loaded kernel. After more playing around, I decided to try re-installing Ubuntu, and booted it up to find a Grub terminal (not splash menu, but not recovery mode) telling me that it had an error, no loaded kernel again. The same thing happens when trying to follow instructions on loading an OS from grub, at the linux /vmlinux root=/dev/sda1 command. After many searches, all of the information I can find is this: The error has been reported when upgrading in Ubuntu 9, and can be solved by installing a later version of Grub. The Grub shell will load without selection if Grub can't find a configuration file. The first doesn't seem to be applicable, but the second, along with the exact commands that fail, seem to point to the problem being getting info off of the hard drive. The operating system is Ubuntu 10.04.2 Server LTS, running on the internal hard drive of a Compaq Armada m700 (very old, very slow, but I just want a text-based/LAMP server). Any suggestions on how to get the kernel to load, or another solution? Again, I have tried re-installing the OS, booting multiple times, and running Grub off of a cd. A: You can try installing grub at /dev/sda For manually loading the kernel, you can try the following: set root=(hd0,1) linux /vmlinuz root=/dev/sda1 initrd /initrd.img Here please note that you need to put in your kernel version. For example, my kernel version is 3.0.0-12 (initrd.img-3.0.0-12-generic & vmlinuz-3.0.0-12-generic). To load this kernel, you have to try the following: set root=(hd0,1) linux /vmlinuz-3.0.0-12-generic root=/dev/sda1 initrd /initrd.img-3.0.0-12-generic You will find your available versions by pressing Tab after typing the linux or initrd command. Another thing is, make sure your root resides on /dev/sda1 Best luck :)
[ "math.stackexchange", "0002249268.txt" ]
Q: Folland Chapter 6 Problem 23b Let $(X,\mathcal{M},\mu)$ be a measure space. A set $E \in \mathcal{M}$ is called "locally null" if $\mu(E\cap F) = 0$ for every $F \in \mathcal{M}$ such that $\mu(F) < \infty$. For $f: X \to \mathbb{C}$ measurable, define $\|f\|_* = \inf\{a : \{x : |f(x)| > a\} \text{ is locally null}\}$. Problem 23b asks to show $\|\cdot\|_*$ is a norm on the set of $f$ such that $\|f\|_* < \infty$ (modded out by $f=g$ iff $f=g$ a.e.). In particular, we must have $\|f\|_* \ge 0$. I claim this does not always hold. For example, consider $X = \mathbb{R}, \mathcal{M} = \mathcal{P}(X)$, and $\mu(A) = 0$ if $A$ is at most countable, and $\mu(A) = \infty$ otherwise. Then $X$ itself is locally null so any $a < 0$ satisfies $\{x : |f(x)| > a\}$ is locally null. The author gives no restriction on the measure space. Is the problem flawed? A: It's flawed, but only superficially. Changing $\inf\{a : \{x : |f(x)| > a\} \text{ is locally null}\}$ to $\inf\{a > 0 : \{x : |f(x)| > a\} \text{ is locally null}\}$ makes the problem correct.
[ "stackoverflow", "0054197528.txt" ]
Q: set parameters in EventInput in Dialogflow V2 API I have been desperately trying to set parameters in a dialogflow.types.EventInput in python. This doc says the parameters need to be of type Struct. I read here that the parameters need to be a google.protobuf.Struct. But it does not work for me. Is there another Struct type equivalent in python? If I send the EventInput without parameters, the intent is detected correctly. I tried this so far: import dialogflow_v2 as dialogflow session_client = dialogflow.SessionsClient() session = session_client.session_path(project_id, session_id) parameters = struct_pb2.Struct() parameters['given-name'] = 'Jeff' parameters['last-name'] = 'Bridges' event_input = dialogflow.types.EventInput( name='greetPerson', language_code='de', parameters=parameters) query_input = dialogflow.types.QueryInput(event=event_input) response = session_client.detect_intent( session=session, query_input=query_input) Does anybody have experience with this use case? Things I also tried: Pass a class named p yields: Parameter to MergeFrom() must be instance of same class: expected Struct got p. for field EventInput.parameters Pass a dict: parameters = { 'given-name': 'Jeff', 'last-name': 'Bridges'} yields: Protocol message Struct has no "given-name" field. Generate Struct with constructor: from google.protobuf.struct_pb2 import Struct, Value parameters = Struct(fields={ 'given-name':Value(string_value='Jeff'), 'last-name':Value(string_value='Bidges') }) yields sometimes: Exception in thread ptvsd.stopping (most likely raised during interpreter shutdown): /EventInput A: This is how I did this: import dialogflow from google.protobuf import struct_pb2 session_client = dialogflow.SessionsClient() session = session_client.session_path(project_id, session_id) parameters = struct_pb2.Struct() parameters["given-name"] = 'Jeff' parameters["last-name"] = 'Bridges' query_input = { 'event': { "name": "greetPerson", "parameters": parameters, "language_code": "de" } } response = session_client.detect_intent( session=session, query_input=query_input) Note: In the dialogflow console, you must give default values of parameters as #event_name.parameter_name. In this case, for parameter given-name it would be #greetPerson.given-name and for last-name it would be #greetPerson.last-name. Docs Reference: We are using DetectIntent, in which we are using QueryInput, in which finally we are using EventInput Hope it helps.
[ "stackoverflow", "0030857949.txt" ]
Q: Is it possible to use SSI and Apache to make a cms? I've been thinking about using Apache httpd's advanced features and SSI to build a dynamic site with a simple CMS (lists the most recent stories and uses a CSV file to do lookups). Is it possible to abuse the advanced features of Apache and SSI in this way, and is it feasible? I know that SSI offers a lot of power when it comes to conditionals, and that Apache has some really powerful built-in features. A: I wrote a wiki, a blog and a large custom Web site, with content provided by users, with nothing but Apache SSI and a few Unix commands, so you can certainly do it. If you're going to use a CSV file, you'll have to write one or more CGI programs to process it. They could be nothing more than shell scripts, but be VERY careful about using input from a POST or GET as variables in shell scripts. A hacker could put commands in them that sh would execute. You could use the exec SSI command, but it's a security risk in the same way as sh. It's best to write in a language that can access the content of environment variables directly, e.g. Perl, Awk, etc. At one time, SSI gained the ability to process the query string directly. Unfortunately, in the latest version of Apache SSI, URL parsing is less powerful than it once was. Nevertheless, I think SSI is one of the most under-appreciated Web tools available. Best of luck.
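For flavour, a page using SSI conditionals and includes might look like the sketch below; the expression syntax shown is the legacy mod_include style, and the file names are invented:
<!--#config timefmt="%Y-%m-%d" -->
<!--#if expr="$QUERY_STRING = /story=recent/" -->
  <!--#include virtual="recent-stories.shtml" -->
<!--#else -->
  <!--#include virtual="front-page.shtml" -->
<!--#endif -->
Last modified: <!--#echo var="LAST_MODIFIED" -->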
[ "stackoverflow", "0005596568.txt" ]
Q: how to create dynamic google map in android I checked my logcat file; the error is that the location manager is null public class MapsActivity extends MapActivity { MapView mapView; MapController mc; GeoPoint p; double latPoint, lngPoint; LocationManager myManager; /** Called when the activity is first created. */ @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); mapView = (MapView) findViewById(R.id.mapView); LinearLayout zoomLayout = (LinearLayout) findViewById(R.id.zoom); View zoomView = mapView.getZoomControls(); zoomLayout.addView(zoomView, new LinearLayout.LayoutParams( LayoutParams.WRAP_CONTENT, LayoutParams.WRAP_CONTENT)); mapView.displayZoomControls(true); mc = mapView.getController(); LocationManager myManager = (LocationManager) getSystemService(Context.LOCATION_SERVICE); myManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, 0,new myLocationListener()); p = new GeoPoint((int) (latPoint * 1E6), (int) (lngPoint * 1E6)); mc.animateTo(p); mc.setZoom(10); //---Add a location marker--- // MapOverlay mapOverlay = new MapOverlay(); // List listOfOverlays = mapView.getOverlays(); // listOfOverlays.clear(); // listOfOverlays.add(mapOverlay); mapView.invalidate(); } @Override protected boolean isRouteDisplayed() { // TODO Auto-generated method stub return false; } class myLocationListener implements LocationListener { public void ListLocationUpdater() { } @Override public void onLocationChanged(Location loc) { if (myManager != null) { // List list = myManager.getAllProviders(); String param = (String) myManager.getProviders(true).get(0); loc = myManager.getLastKnownLocation(param); if (loc != null) { latPoint = loc.getLatitude(); lngPoint = loc.getLongitude(); Log.e("RootDrawApplication",String.valueOf(latPoint)+" , "+String.valueOf(lngPoint)); } else Log.e("GoogleMaps ", "Error: Location is null"); } else Log.e("GoogleMaps ", "Error: Location Manager is null"); } @Override public void onProviderDisabled(String provider) { // TODO Auto-generated method stub } @Override public void onProviderEnabled(String provider) { // TODO Auto-generated method stub } @Override public void onStatusChanged(String provider, int status, Bundle extras) { // TODO Auto-generated method stub } } } A: You should set up a LocationListener (here). Within the LocationListener's onLocationChanged method, just have your map animate to the updated location.
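Two notes that may help. First, in the posted code the local variable LocationManager myManager declared inside onCreate shadows the field of the same name, so the field stays null inside the listener. Second, a minimal version of the answer's idea, reusing the question's mc controller inside the callback, might look like this sketch:
// Sketch: animate the map from within the listener's callback
@Override
public void onLocationChanged(Location loc) {
    if (loc != null) {
        GeoPoint point = new GeoPoint((int) (loc.getLatitude() * 1E6),
                                      (int) (loc.getLongitude() * 1E6));
        mc.animateTo(point);   // recenter the MapView on every fix
        mapView.invalidate();  // request a redraw
    }
}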
[ "stackoverflow", "0007706563.txt" ]
Q: where is the __enter__ and __exit__ defined for zipfile? Based on the with statement The context manager’s __exit__() is loaded for later use. The context manager’s __enter__() method is invoked. I have seen the with statement used with zipfile Question> I have checked the source code of zipfile located here: /usr/lib/python2.6/zipfile.py I can't find where the __enter__ and __exit__ functions are defined. Thank you A: zipfile.ZipFile is not a context manager in 2.6, this has been added in 2.7. A: I've added this as another answer because it is generally not an answer to the initial question. However, it can help to fix your problem. class MyZipFile(zipfile.ZipFile): # Create class based on zipfile.ZipFile def __init__(self, file, mode='r'): # Initial part of our module zipfile.ZipFile.__init__(self, file, mode) # Create ZipFile object def __enter__(self): # On entering... return self # Return object created in __init__ part def __exit__(self, exc_type, exc_val, exc_tb): # On exiting... self.close() # Use close method of zipfile.ZipFile Usage: with MyZipFile('new.zip', 'w') as tempzip: # Use context manager of MyZipFile tempzip.write('sbdtools.py') # Write file to our archive If you type help(MyZipFile) you can see all methods of the original zipfile.ZipFile and your own methods: init, enter and exit. You can add other functions of your own if you want. Good luck!
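If you are stuck on 2.6, contextlib.closing gives you the same with-statement behaviour without subclassing; a brief sketch:
# Python 2.6 workaround: wrap the ZipFile so it is closed automatically
import zipfile
from contextlib import closing

with closing(zipfile.ZipFile('new.zip', 'w')) as tempzip:
    tempzip.write('sbdtools.py')
# closing() calls tempzip.close() on exit, just like ZipFile's own
# __exit__ does in 2.7+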
[ "stackoverflow", "0028794502.txt" ]
Q: What's the usage of "#" in Swift Some functions contain "#" before their parameters, like this: func delay(#seconds: Double, completion:()->()) { let popTime = dispatch_time(DISPATCH_TIME_NOW, Int64( Double(NSEC_PER_SEC) * seconds )) dispatch_after(popTime, dispatch_get_main_queue()) { completion() } } What does the sign "#" mean? A: "#" makes a parameter name not only internal, but also external. For example: func doSthWith(a:String, b:String){ } you invoke the function like this: doSthWith("thing1", b: "thing2") When you add a "#": func doSthWith(#a:String, b:String) You must invoke it like this: self.doSthWith(a: "thing1", b: "thing2") In short, "#" is used to make your code more readable, especially when your parameter name has very significant meaning.
[ "stackoverflow", "0053404432.txt" ]
Q: How to count occurrence of true positives using pandas or numpy? I have two columns, Prediction and Ground Truth. I want to get a count of true positives as a series using either numpy or pandas. For example, my data is: Prediction GroundTruth True True True False True True False True False False True True I want a list that should have the following output: tp_list = [1,1,2,2,2,3] Is there a one-liner way to do this in numpy or pandas? Currently, this is my solution: tp = 0 for p, g in zip(data.Prediction, data.GroundTruth): if p and g: # TP case tp = tp + 1 tp_list.append(tp) A: To get a running count (i.e., cumulative sum) of true positives, i.e., rows where both Prediction and GroundTruth are True, the solution is a modification of @RafaelC's answer: (df['Prediction'] & df['GroundTruth']).cumsum() 0 1 1 1 2 2 3 2 4 2 5 3 (df['Prediction'] & df['GroundTruth']).cumsum().tolist() [1, 1, 2, 2, 2, 3] A: If you want to know how many True you predicted that are actually True, use (df['Prediction'] & df['GroundTruth']).cumsum() 0 1 1 1 2 2 3 2 4 2 5 3 dtype: int64 (thanks @Peter Leimbigiler for chiming in) If you want to know how many you have predicted correctly, just compare and use cumsum (df['Prediction'] == df['GroundTruth']).cumsum() which outputs 0 1 1 1 2 2 3 2 4 3 5 4 dtype: int64 You can always get a list by using .tolist() (df['Prediction'] == df['GroundTruth']).cumsum().tolist() [1, 1, 2, 2, 3, 4]
[ "stackoverflow", "0015835709.txt" ]
Q: Attach view only after all composed modules are activated in DurandalJS. I have a main view with two sub-views (using DurandalJS composition). Right now, the main view's viewAttached event is firing early while my sub-views are still dealing with ajax requests. Ideally, the main view would only attach the view to the DOM and run the transition AFTER all composed sub-views have finished activating. <h1>Hello World</h1> <div data-bind="compose: { model: 'viewmodels/subview1', activate: true }">Loading...</div> <div data-bind="compose: { model: 'viewmodels/subview2', activate: true }">Loading...</div> A: The most straight-forward way to handle this is to manually activate the sub-views during the activation of the main view. Remove the activate:true from the sub-view compose bindings, and in your main VM do something like this: //in main view model var subViewModel1 = require("viewModels/subViewModel1"); var subViewModel2 = require("viewModels/subViewModel2"); function activate() { return $.when( subViewModel1.activate(), subViewModel2.activate() ).then(function () { //finish your main view's activation here ... }); } Make sure your main VM returns a promise that only resolves when the sub-VMs have completed their activation. Likewise, the sub-VMs should ensure that their activate function returns a promise that only resolves when the AJAX request for that sub-VM is completed. There's a thread in Durandal Groups that discusses a similar concept. If you haven't already, it's definitely worth reading up a bit on the docs also, specifically these related to your question: Using Composition Composition Reference Hooking Life-Cycle Events
[ "math.stackexchange", "0001616640.txt" ]
Q: Clarification on Measure Theory My text book says that the Lebesgue measure on the Borel $\sigma$-algebra of $\mathbb{R}$ is not complete. I am looking for a Borel set which has measure $0$ but has a subset that is not a Borel set. Such a set must exist, or else it would be a complete measure space. Correct me if I am wrong. A: Yes, indeed, such sets will exist. You don't even need measure theory to show this! It's not too hard to show (caveat: you do need a small amount of the axiom of choice here, namely that the union of countably many countable sets is countable; surprisingly, this is not provable in ZF alone) that there are continuum-many Borel sets of reals, while there are $2^{2^{\aleph_0}}$-many subsets of any set of reals of size continuum. Combining these facts, a counting argument shows that there must be some non-Borel subset of the Cantor set, which is Borel (indeed, closed) and has measure zero. This doesn't produce a specific example, unfortunately. Explicit examples are possible, but they're somewhat harder to describe. My personal favorite is as follows. There is a natural way to represent an element of the Cantor set as an infinite sequence of 0s and 1s (via ternary expansion), and there is a natural way to represent a binary relation $R$ on $\mathbb{N}$ as an infinite sequence of 0s and 1s (a 1 in the $2^m3^n$th bit exactly when $R(m, n)$ holds, and zeroes elsewhere). Thus, letting $W$ be the set of points in the Cantor set which are the codes of well-ordered relations on $\mathbb{N}$, we get a reasonably definable set of reals. Perhaps surprisingly, this set is not Borel!
[ "math.stackexchange", "0000406792.txt" ]
Q: Proof that stochastic process on infinite graph ends in finitely many steps. Infinite Graph Let $G$ be an infinite graph that is constructed this way: start with two unconnected nodes $v_1$ and $u_1$. We call this "level 1". Create two more unconnected nodes $v_2$ and $u_2$. Connect $v_1$ to both of them with directed edges pointing from $v_1$ to $v_2$, and from $v_1$ to $u_2$ respectively. Then, connect $u_1$ to both of them using directed edges in the same way. This is "level 2". Repeat this process. At level $n$, nodes $v_n$ and $u_n$ are both connected to each of the next two nodes $v_{n+1}$ and $u_{n+1}$. This results in an infinite connected graph. Stochastic Process We define a discrete time process heuristically on this graph. $v_1$ and $u_1$ are initially "infected". This infection only lasts for one time period. We start at $t = 0$. At each time step, infected nodes have an independent probability $p$ of passing the infection to their neighbors (an infected node must have a directed edge to the target for the target to be a neighbor). If this happens, the neighbor is infected for one time period. E.g. $v_1$ is connected to $v_2$ and $u_2$. $v_1$ is infected initially. At time $t=0$, $v_1$ has a probability $p$ of infecting its neighbors $v_2$ and $u_2$. Suppose it successfully infects $v_2$, and $u_2$ is never infected by any of its predecessors. At $t=1$, $v_1$ and $u_1$ stop being infectious, and $v_2$ is infectious. This process is repeated for each level. If a node is infected twice, it is still just "infected"; there is no special meaning to a double infection. Conjecture This process ends with probability $1$ after a finite number of steps. How do I prove this? My attempt: The probability that the infection on one level is passed onto the next level is $q = 1 - (1-p)^4$, where $(1-p)^4$ is the probability that all $4$ attempts to infect fail. For the infection to reach the $k$th level, the probability is $q^k$ and this tends to zero as $k$ tends to infinity? A: If you direct the edges then the problem becomes trivial. Assume inductively that at time $i$ the only nodes that might be infected are $u_i$ and $v_i$. Then at time $i+1$ nodes $u_{i+1}$ and $v_{i+1}$ may become infected by $u_i$ and $v_i$, but $u_i$ and $v_i$ become cured, and there is no way of reinfecting them from below. Thus your argument is correct, and the probability that the process is alive at time $i$ is at most $q^i$. Now, for any natural number $n$, if the process does not terminate after a finite number of levels, then the process must reach level $n$. Therefore the probability that the process does not terminate is less than or equal to the probability that the process reaches level $n$. So if $A$ is the event that the process survives indefinitely we have $$0\leq\mathbb P(A) \leq q^n$$ for every $n\in\mathbb N$. Therefore we must have $\mathbb P(A) = 0$.
[ "stackoverflow", "0014677962.txt" ]
Q: formula in ios application how to create it Please, I have this formula here: =((r/100/12)*amt)/(1-((1+(r/100/12)^(-period))) Could someone help me with how to convert it into a formula for Objective-C? I need the final formula.. What I use: double int_ = ([Norma_value floatValue]/100)/12; double months = -[kohezgjatja_value floatValue]; double r1 = pow(int_, months); double pt1 = 1 + r1; double pt2 = 1- pt1; double pmt = ([shuma_value floatValue]* int_)/pt2; balanca.text = [NSString stringWithFormat:@"%.02f €",pmt]; A: The formula you've written should work in Objective-C except for the ^ part. To implement x ^ y in Objective-C you can use: pow(x,y) Take a look at this question as reference for pow: How to raise a double value by power of 12?
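Putting it together, the spreadsheet formula collapses to a couple of lines. This sketch follows the question's variable names; whether the amortization math itself is right is the asker's assumption, not mine:
// Sketch: the formula rewritten with pow() in one step
double monthlyRate = [Norma_value floatValue] / 100.0 / 12.0;
double period = [kohezgjatja_value floatValue];
double amt = [shuma_value floatValue];
double pmt = (monthlyRate * amt) / (1.0 - pow(1.0 + monthlyRate, -period));
balanca.text = [NSString stringWithFormat:@"%.02f €", pmt];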
[ "stackoverflow", "0058733568.txt" ]
Q: Django: Updating Context Value every time form is updated I've got an UpdateView I'm using to update some data from my model, but I want to give my user only a max number of times they can do it; after they have used all their "updates" it should not be possible anymore. My view looks like this: class TeamUpdate(UpdateView): model = Team template_name = 'team/team_update_form.html' context_object_name = 'team' form_class = TeamUpdateForm def get_context_data(self, **kwargs): context = super().get_context_data(**kwargs) context.update({ 'max_updates': 2 }) return context def form_valid(self, form, **kwargs): team = form.save() // update context somehow so that it has the new value for max_updates? messages.success(self.request, f'{team} was updated') return redirect(team.get_absolute_url()) My problem right now is that I don't know how I can dynamically update my context every time my form is updated. How would I do that? I assume the hardcoded "2" must be removed and I probably have to do something like 2 if not value_from_form else value_from_form but how do I get this value_from_form? And could I use this value in a dispatch to check if it's zero and then redirect my user back with a warning message like "Sorry, you've used up all your updates for this team!". Thanks to anyone who answers! A: I think there should be a field which stores the number of updates in the Team model. Like this: class Team(models.Model): updated = models.IntegerField(default=0) def save(self, *args, **kwargs): if self.pk: self.updated += 1 super().save(*args, **kwargs) Then in settings.py, have a variable MAX_UPDATES_FOR_TEAM=2. Then in context you can simply put: def get_context_data(self): context = super().get_context_data() updated = self.object.updated context.update({ 'max_updates': settings.MAX_UPDATES_FOR_TEAM - updated }) return context Finally, prevent the Form from updating if the value of updated exceeds MAX_UPDATES_FOR_TEAM class TeamForm(forms.ModelForm): ... def clean(self): if self.instance.updated >= settings.MAX_UPDATES_FOR_TEAM: raise ValidationError("Exceeded max update limit")
[ "stackoverflow", "0000108807.txt" ]
Q: Convert timestamp to alphanum I have an application where a user has to remember and insert a unix timestamp like 1221931027. In order to make the key easier to remember, I'd like to reduce the number of characters to insert by allowing the characters [a-z]. So I'm searching for an algorithm to convert the timestamp to a shorter alphanum version and do the same backwards. Any hints? A: You could just convert the timestamp into base-36.
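A sketch of the round trip in Python (the alphabet and function names are my own, not from the answer):
# Hypothetical base-36 round trip for a unix timestamp
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"

def to_base36(n):
    digits = []
    while n:
        n, rem = divmod(n, 36)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits)) or "0"

def from_base36(s):
    return int(s, 36)  # int() already understands bases up to 36

print(to_base36(1221931027))               # 'k7i80j': 6 chars instead of 10
print(from_base36(to_base36(1221931027)))  # 1221931027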
[ "stackoverflow", "0040160662.txt" ]
Q: Increase efficiency - parse integers from string C++ I have a string: 12:56:72 I need to get the 3 numbers (12, 56, and 72) individually. I am doing: int i=0,j=0,k=0,n, p[3]; // p[i] stores each ith value char x[], b[]; x="12:34:56"; n=strlen(x); while(i<n){ b[k]=x[i]; ++i; ++k; if(x[i]==':'){ p[j]=int(b); ++j; ++i; k=0; char temp[] = b; b=new char[]; delete temp; } } Can this be done more efficiently? A: To be "more efficient", you will have to profile. Here is another solution: const std::string test_data("01:23:45"); unsigned int hours; unsigned int minutes; unsigned int seconds; char separator; std::istringstream input(test_data); // Here's the parsing part input >> hours >> separator >> minutes >> separator >> seconds; Whether this is "more efficient" or not must be measured. It looks simpler and safer. Edit 1: Method 2 Processors don't like loops or branches, so we can try to minimize them. This optimization assumes perfect input as a string. static const char test_data[] = "01:23:45"; unsigned int hours; unsigned int minutes; unsigned int seconds; unsigned int index = 0; hours = test_data[index++] - '0'; if (test_data[index] != ':') { hours = hours * 10 + test_data[index++] - '0'; } ++index; // Skip ':' minutes = test_data[index++] - '0'; if (test_data[index] != ':') { minutes = minutes * 10 + test_data[index++] - '0'; } ++index; // Skip ':' seconds = test_data[index++] - '0'; if (test_data[index] != ':') { seconds = seconds * 10 + test_data[index++] - '0'; } For highest optimizations, you have to make some assumptions. Another assumption is that the character encoding is UTF8 or ASCII, e.g. '1' - '0' == 1.
[ "stackoverflow", "0056747624.txt" ]
Q: What does a state mean in Angular application? I am new to NgRx and going through its documentation. But, at the beginning, I encountered the following sentence: State is a single, immutable data structure. What does the state mean in simple words? Need some simple examples to understand this concept. Do I need to learn Flux and Redux to understand these concepts? A: Simply put, a state in ngrx (or redux, or other state management systems) is how your system is described at a single point in time. You can think about it as a plain JavaScript object that represents your entire application at one point. Let's take a simple example of a todos app, where I can mark a completed item (by a completed flag) or a selected item (by index). A possible state might look like this: { items: [ { text: 'Wash Car', completed: false}, { text: 'Write Code', completed: true} ], selectedIndex: 0 } If I decide to select the second index, my future state would look like this: { items: [ { text: 'Wash Car', completed: false}, { text: 'Write Code', completed: true} ], selectedIndex: 1 } So, a state is a representation of your app logic at a single point in time. The view implementation is up to you - angular, react, and mobile applications can share the same state and use different view layers. Some state-management systems require the state to be immutable, meaning that in the todos example I wouldn't simply change my state, but rather create an entirely new state to represent the change in the system. There are multiple reasons for that, but maybe the most obvious one is that this quality helps web systems recognize changes in the state, and change the view accordingly. NgRx is an angular-specific state management system. As described on the NgRx page: NgRx Store provides reactive state management for Angular apps inspired by Redux. So, a good point to start would be to learn redux (the rule of immutability comes from redux). You can look at NgRx as a redux-based state management system, powered with RxJS. I would suggest learning each concept separately and then moving on to NgRx. Update: These questions might be useful Why should objects in Redux be immutable? Redux: why using Object.assign if it is not perform deep clone?
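To make the immutability point concrete, here is a small hypothetical reducer for the todos state above (the type and action names are invented for illustration):
// Sketch: an immutable state transition for the todos example
interface TodosState {
  items: { text: string; completed: boolean }[];
  selectedIndex: number;
}

function todosReducer(state: TodosState, action: { type: string; index?: number }): TodosState {
  switch (action.type) {
    case 'SELECT_ITEM':
      // never mutate: return a brand-new state object instead
      return {
        ...state,
        selectedIndex: action.index !== undefined ? action.index : state.selectedIndex,
      };
    default:
      return state;
  }
}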
[ "stackoverflow", "0012574927.txt" ]
Q: Where Clause in LINQ with IN operator I have a LINQ function that returns a summary table. private DataTable CreateSummaryFocusOfEffortData() { var result = ReportData.AsEnumerable() .GroupBy(row => new { Value = row.Field<string>("Value"), Description = row.Field<string>("Description") }) .Select(g => { var row = g.First(); row.SetField("Hours", g.Sum(r => r.Field<double>("Hours"))); return row; }); return result.CopyToDataTable(); } Now I want to add a WHERE clause to this function so that it only sums up rows for fields that are in a list. Something like the IN operator in SQL. For example: Let's say I have a list with values (1,2,3) and I want to base my where clause on the values in that list. A: Just an example of how you could try it: private DataTable CreateSummaryFocusOfEffortData(List<int> yourList) { var result = ReportData.AsEnumerable() .GroupBy(row => new { Value = row.Field<string>("Value"), Description = row.Field<string>("Description") }) .Select(g => { var row = g.First(); row.SetField("Hours", g.Where(r=>yourList.Contains(r.Field<int>("Id"))) .Sum(r => r.Field<double>("Hours"))); return row; }); return result.CopyToDataTable(); }
[ "stats.stackexchange", "0000227173.txt" ]
Q: Scalable dimension reduction Taking the number of features as constant, Barnes-Hut t-SNE has a complexity of $O(n\log n)$, while random projections and PCA have a complexity of $O(n)$, making them "affordable" for very large data sets. On the other hand, methods relying on Multidimensional scaling have a $O(n^2)$ complexity. Are there other dimension reduction techniques (apart from trivial ones, like looking at the first $k$ columns, of course) whose complexity is lower than $O(n\log n)$? A: An interesting option would be exploring neural-based dimensionality reduction. The most commonly used type of network for dimensionality reduction, the autoencoder, can be trained at the cost of $\mathcal{O}(i\cdot n)$, where $i$ represents the training iterations (a hyper-parameter independent of the training data). Therefore, the training complexity simplifies to $\mathcal{O}(n)$. You can start by taking a look at the 2006 seminal work by Hinton and Salakhutdinov [1]. Since then, things have evolved a lot. Now most of the attention is drawn by Variational Autoencoders [2], but the basic idea (a network that reconstructs the input at its output layer with a bottleneck layer in-between) remains the same. Note that, as opposed to PCA and RP, autoencoders perform nonlinear dimensionality reduction. Also, as opposed to t-SNE, autoencoders can transform unseen samples without the need to retrain the whole model. On the practical side, I recommend taking a look at this post, which gives details on how to implement different types of autoencoders with the wonderful library Keras. [1] Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786), 504-507. [2] Kingma, D. P., & Welling, M. (2013). Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
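As a starting point, a minimal Keras autoencoder for reducing d-dimensional data to k dimensions could be sketched as follows; the layer sizes, epoch count and random data are arbitrary placeholders:
# Minimal autoencoder sketch: X is an (n, d) array, k the target dimension
import numpy as np
from tensorflow.keras import layers, models

d, k = 100, 10
inputs = layers.Input(shape=(d,))
encoded = layers.Dense(k, activation='relu')(inputs)     # bottleneck layer
decoded = layers.Dense(d, activation='linear')(encoded)  # reconstruction

autoencoder = models.Model(inputs, decoded)
encoder = models.Model(inputs, encoded)  # the dimensionality-reducing half

autoencoder.compile(optimizer='adam', loss='mse')
X = np.random.rand(1000, d)  # stand-in data
autoencoder.fit(X, X, epochs=10, batch_size=64, verbose=0)
X_reduced = encoder.predict(X)  # shape (1000, k); each pass costs O(n)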
[ "stackoverflow", "0023935973.txt" ]
Q: LWJGL HUD in a 2D Game So I am programming a 2D game in Java with LWJGL. I have my ortho camera in my gameloop and set it on the position I want to see. Now when I want to draw a HUD to show my money etc, I can either give it an absolute coordinate, so that it is part of my map and I can scroll away from my HUD (that definitely doesn't make sense for HUDs), or I can add my HUD vector to my camera's bottom-left vector. The problem with that last solution is that if I move my camera, my HUD does not perfectly update and you see it chasing its actual position. So my question is: Is there any way I can set a fixed position relative to my screen in a second 'layer'? I've seen some people who only use gltranslate to move the camera, but I guess it would be a lot of work to change it now, so I'd like to keep my ortho camera. edit: This is what my Graphicsloop looks like and it still does not work properly: private void updateRunning() { cam.update(); drawEntities(); drawHud(); cursor.draw(); } A: I can add my HUD vector to my camera's bottom-left vector. The problem with that last solution is that if I move my camera, my HUD does not perfectly update and you see it chasing its actual position. That's not what should happen. What's likely happening is you have something like this: gameLoop() { update(); drawPlayer(); drawHud(); updateCamera(); } However, you need to update the camera before drawing anything (or even before updating). gameLoop() { update(); updateCamera(); drawPlayer(); drawHud(); } To draw the HUD on top of other objects, just render it after. drawBackground(); drawPlayer(); drawHUD();
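If the camera is implemented with the fixed-function pipeline (which the gltranslate remark suggests), the usual trick is to reset the projection to screen coordinates just for the HUD pass. A hedged sketch against LWJGL 2's GL11/Display API:
// Sketch: draw the HUD in screen space, regardless of camera position
private void drawHud() {
    GL11.glMatrixMode(GL11.GL_PROJECTION);
    GL11.glPushMatrix();
    GL11.glLoadIdentity();
    GL11.glOrtho(0, Display.getWidth(), Display.getHeight(), 0, -1, 1);
    GL11.glMatrixMode(GL11.GL_MODELVIEW);
    GL11.glPushMatrix();
    GL11.glLoadIdentity();

    // ... draw money counter etc. at fixed pixel coordinates ...

    GL11.glPopMatrix();
    GL11.glMatrixMode(GL11.GL_PROJECTION);
    GL11.glPopMatrix();
    GL11.glMatrixMode(GL11.GL_MODELVIEW);
}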
[ "stackoverflow", "0052408818.txt" ]
Q: Numpy: proper way of getting maximum from a list of points I have a list of points in 3d coordinates system (X, Y, Z). Moreover, each of them have assigned a float value v, so a single point might be described as (x, y, z, v). This list is represented as a numpy array of shape=(N,4). For each 2d position x, y I need to get the maximum value of v. A straightforward but computationally expensive way would be: for index in range(points.shape[0]): x = points[index, 0] y = points[index, 1] v = points[index, 3] maxes[x, y] = np.max(maxes[x, y], v) Is there a more "numpy" approach, which would be able to bring some gain in terms of performance? A: Setup points = np.array([[ 0, 0, 1, 1], [ 0, 0, 2, 2], [ 1, 0, 3, 0], [ 1, 0, 4, 1], [ 0, 1, 5, 10]]) The general idea here is sorting using the first, second, and fourth columns, and reversing that result, so that when we find our unique values, the value with the maximum value in the fourth column will be above other values with similar x and y coordinates. Then we use np.unique to find the unique values in the first and second columns, and return those results, which will have the maximum v: Using lexsort and numpy.unique def max_xy(a): res = a[np.lexsort([a[:, 3], a[:, 1], a[:, 0]])[::-1]] vals, idx = np.unique(res[:, :2], 1, axis=0) maximums = res[idx] return maximums[:, [0,1,3]] array([[ 0, 0, 2], [ 0, 1, 10], [ 1, 0, 1]]) Avoiding unique for better performance def max_xy_v2(a): res = a[np.lexsort([a[:, 3], a[:, 1], a[:, 0]])[::-1]] res = res[np.append([True], np.any(np.diff(res[:, :2],axis=0),1))] return res[:, [0,1,3]] max_xy_v2(points) array([[ 1, 0, 1], [ 0, 1, 10], [ 0, 0, 2]]) Notice that while both will return correct results, they will not be sorted as the original lists were, you can simply add another lexsort at the end to fix this if you like.
[ "stackoverflow", "0047357342.txt" ]
Q: C# Linq adding to dictionary with list as value I have a dictionary like this: Dictionary<string, List<myobject>> As I am getting new items, I'm doing logic like this: mydictionary[key].add(mynewobject); Now, I'm trying to do the same with LINQ, but I'm stuck on the last line (please ignore the bit of irrelevant logic in the code): var Test = (from F in Directory.EnumerateFiles(SOURCE_FOLDER, SOURCE_EXTENSIONS, SearchOption.AllDirectories) let Key = ParenthesisGroupRegex.Replace(F.ToLower(), string.Empty).Trim() let Descriptions = (from Match Match in ParenthesisGroupRegex.Matches(F.ToLower()) let CleanedMatches = ParenthesisRegex.Replace(Match.Name, string.Empty) let MatchesList = CleanedMatches.Split(',') select new Description { Filename = F, Tag = MatchesList.ToList() }) group Descriptions by Key into DescriptionList select new KeyValuePair<string, IEnumerable<string>>(Key, DescriptionList)) If we look at the last two lines: I'm trying to get my List (a List<Description>) and on the last line, I'm attempting to build dictionary entries, but this will not compile, as it looks like neither Key nor DescriptionList is accessible at that stage. (btw, I'm currently learning the LINQ syntax, so readability and maintainability are not the focus right now) What did I miss? A: You could call ToDictionary at the end of the query you have defined: var Test = (from F in Directory.EnumerateFiles(SOURCE_FOLDER, SOURCE_EXTENSIONS, SearchOption.AllDirectories) let Key = ParenthesisGroupRegex.Replace(F.ToLower(), string.Empty).Trim() let Descriptions = (from Match Match in ParenthesisGroupRegex.Matches(F.ToLower()) let CleanedMatches = ParenthesisRegex.Replace(Match.Name, string.Empty) let MatchesList = CleanedMatches.Split(',') select new Description { Filename = F, Tag = MatchesList.ToList() }) group Descriptions by Key) .ToDictionary(x=>x.Key,x=>x.ToList()); Essentially GroupBy, as stated here: Groups the elements of a sequence according to a specified key selector function and projects the elements for each group by using a specified function. and its signature is the following: public static IEnumerable<IGrouping<TKey, TElement>> GroupBy<TSource, TKey, TElement>( this IEnumerable<TSource> source, Func<TSource, TKey> keySelector, Func<TSource, TElement> elementSelector ) Note that the return type of GroupBy is this IEnumerable<IGrouping<TKey, TElement>> The above type essentially declares a sequence of keys and collections of objects that are associated with these keys (more formally, it declares a sequence of objects of type IGrouping<TKey, TElement>, where IGrouping represents a collection of objects that have a common key). What you want is a dictionary with the keys in this sequence and, as values, the corresponding collections of objects. This can be achieved as above by calling the ToDictionary method.
[ "math.stackexchange", "0001215677.txt" ]
Q: $(a_1,\cdots a_n)\rightarrow (|a_1-a|,\cdots ,|a_n-a|)\rightarrow\cdots\rightarrow (0,\cdots ,0)$ NOTE: I only need verification of part (b) of this question. But feel free to comment on anything about this question. Given an initial sequence $a_1,\cdots a_n$ of real numbers, we perform a series of steps. At each step, we replace the current sequence $x_1,\cdots x_n$ with $|x_1-a|,\cdots |x_n-a|$ for some $a$. At each step, the value of $a$ can be different. (a) Prove that it is always possible to obtain the null sequence consisting entirely of $0$'s. (b) Determine with proof the minimum number of steps required, regardless of the initial sequence, to obtain the null sequence. -Problem-17, Advanced Problems Section, $102$ Combinatorial Problems, Titu Andreescu, Zuming Feng To answer (a), apply the following algorithm: 1) $1$st step: $A_1=\dfrac{a_1+a_2}{2}$ where $A_i$ denotes the value of $a$ in the $i$th step 2) $2$nd step: $A_2=\dfrac{|a_2-\dfrac{a_1+a_2}{2}|+a_3}{2}$ n) $n$th step: $A_i=\dfrac{|a_i-A_{i-1}|+a_{i+1}}{2}$ After the $k$th step, the number of reals that are mutually equal in the sequence is at least $k+1$. So after the $n-1$th step, all of them will be equal to some value $p$. In the final step, just let $A_n=p$ and all of them turn into $0$. If my above description was unclear, which I am sure it was, I would just tell you to consider the average of two numbers and experiment with it. Now onto part (b). Here's where I am having a problem. I claim that the answer to (b) is $n$. Claim: If the original numbers are all pairwise distinct, then we always need $n$ steps. Proof: We induct on $n$. For $n=1,2$, it is easy to prove. Suppose it is true for all $n\le k$. For $n=k+1$, After step-i, there are at least $k-i-1$ distinct numbers in the sequence for $i\le k-1$. So after step-$(k-1)$, there are at least $k-k+1-1=2$ distinct numbers. These $2$ will require at least $2$ more moves and therefore in total $k-2+2=k+1$ moves, completing the induction. However, the book gives a much more complicated solution for part (b), which makes me sure that my solution has a big loophole somewhere. Where is the loophole in my solution of part (b)? A: Your claim: "If the original numbers are all pairwise distinct, then we always need n steps." is not correct; in fact, take for example the following sequence: $$(2,14,4,12,6,10)\xrightarrow{a=8}(6,6,4,4,2,2)\xrightarrow{a=5} (1,1,1,1,3,3)\xrightarrow{a=2} (1,1,1,1,1,1)\xrightarrow{a=1} (0,0,0,0,0,0)$$ which requires only $4$ steps. This can be repeated to obtain sequences of length $n$ with distinct entries requiring only about $\log_2(n)$ steps. Your proof is not very clear, but the flaw is that after a step some elements of the sequence may already coincide; nothing guarantees that the rest remain distinct, so you cannot apply the induction hypothesis again. The idea for proving (b): you have to find a sequence which requires $n$ steps (if that answer is correct, but I think that you have it in the book); you have to make sure in your chosen sequence that the differences are large enough so that there is no intersection problem like in the example above. I hope that this answers your question (I don't really know which sequence will work)
[ "stackoverflow", "0018383204.txt" ]
Q: Extjs dynamically switch tabs position in tabpanel I have tabpanel: { xtype: 'tabpanel', tabPosition: 'top', // it's default value items: [/*tabs*/] } And some button which changes layout: { xtype: 'button', text: 'Change layout', handler: function (btn) { var layout = App.helper.Registry.get('layout'); if (layout === this.getCurrentLayout()) { return; } if (layout === 'horizontal') { newContainer = this.down('container[cls~=split-horizontal]');//hbox layout oldContainer = this.down('container[cls~=split-vertical]');//vbox layout tabPanel.tabPosition = 'top'; } else { newContainer = this.down('container[cls~=split-vertical]'); oldContainer = this.down('container[cls~=split-horizontal]'); tabPanel.tabPosition = 'bottom'; } oldContainer.remove(somePanel, false); oldContainer.remove(tabPanel, false); newContainer.insert(0, somePanel); newContainer.insert(2, tabPanel); newContainer.show(); oldContainer.hide(); } When I change the layout, I also need to change the position of the tabs. Of course, changing the config property tabPosition has no effect. How can I switch tabPosition dynamically? A: I'm afraid in the case of a tabpanel the only way is to destroy the current panel and recreate it from a config object with an altered tabPosition setting. You can use the cloneConfig() method to get a config object from the existing panel.
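A hedged sketch of that destroy-and-recreate approach with cloneConfig (the container variables follow the question; treat this as an outline rather than drop-in code):
// Sketch: recreate the tabpanel with a new tabPosition
var newTabs = tabPanel.cloneConfig({ tabPosition: 'bottom' });
oldContainer.remove(tabPanel, true); // true: destroy the old instance
newContainer.insert(2, newTabs);
tabPanel = newTabs;                  // keep the reference up to date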
[ "stackoverflow", "0023789348.txt" ]
Q: iOS swipe from left on UITableView while remaining Swipe to delete

I implemented the swipe from right to left to delete in my tableview using simple Apple code, but I also want to add my own swipe from left to right for another button. Basically, I want the same functionality as with swipe to delete, but I will have a custom button, and the custom swipe will be from the opposite side while swipe to delete remains functional.

To make it intuitive, I tried using UIPanGestureRecognizer, so that a user can see the cell moving while he is swiping (just like swipe to delete). However, I don't know how to move the actual cell, and additionally, how to add a button below it. I think I browsed the whole internet and I couldn't find it. Do you have any suggestions?

A: Here is a good article to get you started on the conceptual level: http://www.teehanlax.com/blog/reproducing-the-ios-7-mail-apps-interface/

Besides that, there are several open source solutions:

https://github.com/CEWendel/SWTableViewCell
https://github.com/designatednerd/DNSSwipeableTableCell
https://github.com/alikaragoz/MCSwipeTableViewCell (You should judge the code quality)

And this is just a teaser for (hopefully) upcoming functionality of iOS 8: https://gist.github.com/steipete/10541433
[ "stackoverflow", "0041689299.txt" ]
Q: Why is the dollar ($) sign not accepted in a password for an LDAP user?

The password is "abc$123" for an LDAP user, and it was successfully set using ldappasswd. But logging in with this password gives an Invalid Credentials error. Yet using sshpass -p ssh abc$123 user@ip_add it logged in successfully.

A: $ is a valid character and is usually accepted by LDAP servers in passwords. But it is a special character in the shell, and you should escape or quote the string to prevent the shell from transforming the password string.
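A minimal sketch of the quoting fix (the bind DN below is a made-up placeholder):

# Unquoted, the shell expands $1 (usually empty), turning abc$123 into abc23
ldapwhoami -x -D "uid=abc,dc=example,dc=com" -w 'abc$123'  # single quotes: no expansion
sshpass -p 'abc$123' ssh user@ip_add                       # same idea with sshpass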
[ "stackoverflow", "0036833974.txt" ]
Q: sql server casting some data

I've got a numeric(8,0) date column, with values like 20130101. I need to cast it to some date format and do some queries. My test query looks like this:

SELECT *
FROM hund
WHERE ISDATE(hund.hfdat) = 1
  and cast((left(convert(varchar(8),hund.hfdat),4)
      + substring(convert(varchar(8),hund.hfdat),5,2)
      + right(hund.hfdat,2)) as datetime)
      between '20050101' and '20300101'

I get this error:

Conversion failed when converting date and/or time from character string.

I guess my 'date' column has some bad data in it. Any suggestion on how to write it some other way?

I want to jack this into the following query, dogs not older than 10 years:

SELECT Ras_.rasnamn as 'Ras', count(distinct person.personid) as 'Antal ägare', count(distinct JBV_Aegare.hundid) as 'Antal djur'
FROM JBV_Aegare
INNER JOIN hund ON JBV_Aegare.hundID=hund.hundID
INNER JOIN ras_ ON hund.ras=ras_.raskod
INNER JOIN person ON JBV_Aegare.personID=person.personid
INNER JOIN PostnummerLan ON person.postnr=PostnummerLan.PN_Postnummer
INNER JOIN land ON PostnummerLan.PN_Lan=land.landkod
where postnr <> 0 and person.landkod=0
and HERE ->>> hund.hfdat >= convert(CHAR(8),DATEADD(YEAR, -1, GETDATE()),112)
and hund.hfdat <= (year(getdate()) + 10)
group by Ras_.rasnamn
order by Ras_.rasnamn

A: hund.hfdat >= replace(CONVERT(date, DATEADD(year, -10, getdate())),'-','')
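For the test query at the top, one way to avoid assembling the date string by hand is to convert the whole yyyymmdd value at once with style 112 — a sketch, assuming hfdat only holds well-formed yyyymmdd numbers:

SELECT *
FROM hund
WHERE ISDATE(CONVERT(char(8), hund.hfdat)) = 1
  AND CONVERT(date, CONVERT(char(8), hund.hfdat), 112)  -- 112 = ISO yyyymmdd
      BETWEEN '20050101' AND '20300101'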
[ "stackoverflow", "0039780289.txt" ]
Q: What does the exclamation mark mean at the end of an A64 instruction?

The documentation for LDP and STP gives an example instruction with an exclamation mark at the end:

LDP X8, X2, [X0, #0x10]!

Also the documentation about porting A32 PUSH/POP instructions to A64 gives the following examples:

PUSH {r0-r1} ---> STP X0, X1, [SP, #-16]!
POP {r0-r1} ---> LDP X0, X1, [SP], #16

Neither of the pages explains what the exclamation mark at the end of the instructions means. What does it do?

A: The ! means "register write-back": the base register is used to calculate the address of the transfer, and is updated. In your example:

LDP X8, X2, [X0, #0x10]!

X0 is modified so that after the operation:

X0 = X0 + 0x10

If you do not put the !, X0 is not modified by the operation.

On the second example concerning PUSH/POP, the difference is when the increment is done:

STP X0, X1, [SP, #-16]! stores at address SP-16, and SP is decremented in the same way.

LDP X0, X1, [SP], #16 loads from address SP, and after the transfer is performed, stores SP+16 to SP.
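To make the pre-/post-index distinction concrete, here is a sketch with made-up register values:

// Assume X0 = 0x1000 before each instruction (hypothetical values)
LDP X8, X2, [X0, #0x10]!   // pre-index:  loads from 0x1010, then X0 = 0x1010
LDP X8, X2, [X0], #0x10    // post-index: loads from 0x1000, then X0 = 0x1010
LDP X8, X2, [X0, #0x10]    // offset:     loads from 0x1010, X0 unchanged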
[ "blender.stackexchange", "0000101535.txt" ]
Q: Cloth collision with an object

I have two chairs, a cloth piece falling onto them, and then a helmet falling onto the cloth on one of the chairs. Whatever settings I try, I cannot get the helmet to fall nicely onto the chair and interact naturally with the cloth. Right now I have the following:

the helmet is falling down
the cloth is falling down
the cloth is interacting with the chairs naturally
in the process of falling, the helmet gets pushed far away by the cloth piece (as if it were a very hard surface)

https://youtu.be/zmyO6NA9R1U

I also read that soft body physics might be a solution, but it didn't work in my case and the rendering times get longer.

The cloth has settings: 5/M, 10/s, 15/b (here I tried almost everything); rigid body is off; Collision physics is off.

Helmet settings: Rigid Body active, mass 10, all other physics are off.

Right now almost all the other settings (except cloth physics) are default. Would be glad to get some help!

Edit: Amended the Rigid Body Collision shape to 'Mesh' and this helped with allowing the helmet to collide with the cloth, but now the cloth is being forced through the chair.

A: For rigid body and cloth/soft body simulations there are a number of things you need to be careful of.

For the rigid body simulation you need to ensure you select the correct 'Shape' for the Rigid Body Collisions. The default option is 'Convex Hull', which is fine for objects rolling over a flat surface, but since it doesn't allow for any concave surfaces (such as the indentation at the seat of the chair) it is not good for smaller objects colliding with the body. Therefore, in this situation ensure you set the Shape to 'Mesh' to allow the actual mesh shape to be used for collision. This is less efficient (since the engine needs to calculate collision for all surfaces, not just an outer 'shell') but produces more accurate results. Note that you should set this for each rigid body.

Ensure you set sufficient 'steps' in the cloth simulation. More steps will take more time to calculate the simulation but will produce much more convincing results — especially where objects hit the cloth at high speed. Using a larger number of steps should avoid problems such as a fast-moving object passing through the cloth or the cloth getting forced through other geometry.

Ensure you carefully set the Collision Settings — it's very tempting to just add the 'Collision' and forget about what the actual settings mean, leaving them at the defaults or inappropriate values. For interacting with cloth the key settings are the Soft Body and Cloth settings, and these consist of Outer and Inner thresholds. The Outer dictates how close to a surface the cloth can get before it is pushed away, while the Inner controls how elements of the cloth that have been pushed into the surface are repelled back out of it. Note that you should generally avoid setting the Inner to more than 50% of the minimum depth of the mesh — i.e., if the thinnest part of the mesh is 1 Blender Unit in width then do not set this to more than 0.5 — otherwise problems can occur where cloth that penetrates below the surface is effectively pushed through to the other side rather than being repelled. The 'Outer' should be kept fairly small to prevent gaps between the surface and the colliding cloth.

Note that your first issue was caused by the Rigid Body 'Shape' — set this to 'Mesh' to allow the rigid bodies to react to the actual shape of the mesh (rather than a simplified convex hull). From the second animated example, it appears that the helmet's collision 'Outer' is such that it is repelling the cloth before the cloth even gets close enough to touch it. The 'Inner' collision of the chair is then presumably repelling the cloth through the surface. You should reduce the 'Outer' of both the helmet and the chair to close to '0' and adjust the 'Inner' of each mesh's collision settings to a value that is not more than 50% of the thinnest part of the associated mesh. This should then produce a more stable simulation, similar to this:

Blend file included
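If you prefer to apply those settings from a script, here is a rough sketch via Blender's Python API (property names as in recent Blender versions, values illustrative — verify both against your build, and note the object must already have Rigid Body and Collision enabled):

import bpy

obj = bpy.context.object  # e.g. the helmet or a chair, selected in the viewport

# Collide against the actual mesh instead of the default convex hull
obj.rigid_body.collision_shape = 'MESH'

# Tighten the cloth/soft-body collision thresholds discussed above
obj.collision.thickness_outer = 0.02  # keep 'Outer' close to 0
obj.collision.thickness_inner = 0.1   # at most ~50% of the mesh's thinnest part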
[ "stackoverflow", "0023256464.txt" ]
Q: MongoDB C# driver: Using Linq to get Guid returns nothing

The following is the code I am using to query the database. The variable "filter" is a linq expression. It seems that I can use this code to get data using ObjectId or any other value that may be in the document. But when I store a Guid and try to retrieve it, there is no return. Is there something I am doing wrong here, or is there a limitation on MongoDB itself when it comes to Guids?

_dbSet = mongoDatabase.GetCollection(collectionName);
var query = _dbSet.AsQueryable<TEntity>();
if (filter != null) {
    query = query.Where(filter);
}
return query.ToList();

EDIT: Just to clarify a bit more. I have tried the solution shown in the last comment here: MongoDB and Guid in Linq Where clause. That does not give me a result either. The data I am trying to retrieve contains just the _id field:

{ "_id" : LUUID("e5bdda3b-ae6a-d942-bd43-c8c7a6803096") }

The entity being used to retrieve this object has only a property called Id which, from what I understand, translates to the _id field in the Mongo document. So I tried retrieving on the Id property as well. Still no result.

A: Extending @jjkim's answer as I could not post an image in a comment:

You need to change the preference which comes under "Edit > Preferences" in Studio 3T for MongoDB.

A: This was an error on my part. It looks like the Guid being returned was not in the correct format in the viewer I was using. Robomongo likes to default to something they call a LegacyUUID; however, this is not the same format as the .NET Guid. I needed to change the options in the viewer to make it display the correct Guid for me to look for.
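If the goal is to standardize how the driver writes Guids in the first place (so LUUID never appears), older versions of the C# driver expose a global default — treat the exact name and availability as an assumption to verify against your driver version:

// Legacy C# driver versions: serialize new Guids as standard UUIDs
// rather than the legacy LUUID format (set once at startup)
MongoDefaults.GuidRepresentation = GuidRepresentation.Standard;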
[ "stackoverflow", "0039460247.txt" ]
Q: How to resize a Splash Screen Image using Tkinter?

I am working on a game for a school project, and as part of it a splash screen is loaded to simulate the 'loading' of the game. I have the code and it does bring up the image, but what I want it to do is bring up the defined .gif file onto the screen. Here is the code:

import tkinter as tk

root = tk.Tk()
root.overrideredirect(True)
width = root.winfo_screenwidth()
height = root.winfo_screenheight()
root.geometry('%dx%d+%d+%d' % (width*1, height*1, width*0, height*0))
image_file = "example.gif"
image = tk.PhotoImage(file=image_file)
canvas = tk.Canvas(root, height=height*1, width=width*1, bg="darkgrey")
canvas.create_image(width*1/2, height*1/2, image=image)
canvas.pack()
root.after(5000, root.destroy)
root.mainloop()

My only issue is that because the image is larger than the screen it does not fit as a whole image. How can I resize the image using this code so that any .gif image fits on the screen?

P.S. Please do not make any real drastic changes to the code as this does what I want; value changes are OK.

A: I recommend you to use PIL. Here is the code:

from PIL import Image, ImageTk
import Tkinter as tk

root = tk.Tk()
root.overrideredirect(True)
width = root.winfo_screenwidth()
height = root.winfo_screenheight()
root.geometry('%dx%d+%d+%d' % (width*1, height*1, width*0, height*0))
image_path = "example.gif"  # path to your image
image = Image.open(image_path)
image = image.resize((width, height), Image.ANTIALIAS)
image = ImageTk.PhotoImage(image)
canvas = tk.Canvas(root, height=height*1, width=width*1, bg="darkgrey")
canvas.create_image(width*1/2, height*1/2, image=image)
canvas.pack()
root.after(5000, root.destroy)
root.mainloop()

You get the width and height from:

width = root.winfo_screenwidth()
height = root.winfo_screenheight()

then call resize().

If you still want to use PhotoImage, you can try zoom(x, y) and subsample(x, y). Here is the doc.

By the way, it doesn't work on my computer... (python2.7 + win7)
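For completeness, the PhotoImage route mentioned at the end only scales by whole-number factors, so it can at best approximate a fit — a sketch, reusing the variables from the question:

# Shrink the PhotoImage by integer factors (no fractional scaling available)
image = tk.PhotoImage(file=image_file)
x_factor = max(1, image.width() // width + 1)   # rough factor so it fits the screen
y_factor = max(1, image.height() // height + 1)
image = image.subsample(x_factor, y_factor)     # zoom(x, y) would enlarge instead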
[ "stackoverflow", "0004896457.txt" ]
Q: How to download files from net programmatically in Android?

In my application I am downloading loads of files from the web; they are around 200MB (zipped). How do I download files programmatically in Android? Actually my concern is about the performance of the code. And how do I handle errors and network problems in between?

A: Here's some code that I recently wrote just for that:

try {
    URL u = new URL("http://your.url/file.zip");
    InputStream is = u.openStream();
    DataInputStream dis = new DataInputStream(is);
    byte[] buffer = new byte[1024];
    int length;
    FileOutputStream fos = new FileOutputStream(new File(Environment.getExternalStorageDirectory() + "/" + "file.zip"));
    while ((length = dis.read(buffer)) > 0) {
        fos.write(buffer, 0, length);
    }
    fos.close(); // release the file once the download finishes
    dis.close(); // and close the network stream
} catch (MalformedURLException mue) {
    Log.e("SYNC getUpdate", "malformed url error", mue);
} catch (IOException ioe) {
    Log.e("SYNC getUpdate", "io error", ioe);
} catch (SecurityException se) {
    Log.e("SYNC getUpdate", "security error", se);
}

This downloads the file and puts it on your sdcard. You could probably modify this to suit your needs. :)
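One caveat worth adding: on Android 3.0+ network I/O on the main thread throws NetworkOnMainThreadException, so a snippet like the above must run on a background thread — a minimal sketch, where downloadFile() is a hypothetical wrapper around the try/catch block above:

new Thread(new Runnable() {
    @Override
    public void run() {
        downloadFile(); // the download code from the answer
    }
}).start();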
[ "stackoverflow", "0011432656.txt" ]
Q: design login details in android

I'm developing an app which needs a login to enter. I'm trying to show the logged-in user's details much like the Facebook login which I displayed below. I've tried a lot but can't get any idea. Can anyone help me figure out which layout we can use to achieve this?

A: I think you can find the answer here: https://github.com/darvds/RibbonMenu

This is a navigation menu for Android (based on the Google+ app); you can use it as a start.
[ "stackoverflow", "0021211839.txt" ]
Q: How to write and hide an <h1> tag in jQuery using one line of code

My HTML:

<h1 id='heading'></h1>

My jQuery:

$('#heading').text('Nice Heading');

There is supposed to be a way to write it and then hide it in one line of code.

A: $('#heading').text('Nice Heading').hide();
[ "math.stackexchange", "0001402573.txt" ]
Q: Decreasing sequence of sets: Power set of natural numbers

Let $P(N)$ be the set of all possible subsets of the natural numbers (the power set of $N$). Suppose that we have a decreasing sequence of sets $S_n \in P(N)$, i.e. $S_{n+1} \subseteq S_n$, such that they are all finite, and a set $M$ such that $\#M \leq \#S_n$ for all $n$. Is it possible to say that
$$\#M \leq \#\bigcap_{n=0}^\infty S_n?$$
My intuition says it is, but I couldn't prove it. It seems that the intersection must be one of the $S_n$. Any hint on what I should do? Thanks!

A: Let $S$ denote the intersection of the $S_n$. Start with a fixed $m$. The set $S_m-S$ is finite, and for every element $s\in S_m-S$ some $n_s$ exists with $s\notin S_{n_s}$. Then $S_n=S$ if
$$n\geq\max\{n_s \mid s\in S_m-S\}\in\mathbb N$$
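A concrete instance of the argument (my own illustration): take
$$S_n = \{0\} \cup \{k \in \mathbb N : n \le k \le 10\}.$$
The sequence is decreasing and every $S_n$ is finite. Each element $s \ge 1$ of $S_0 - S$ leaves at stage $n_s = s+1$, so for every $n \geq \max\{n_s\} = 11$ we get $S_n = \{0\} = \bigcap_{n=0}^\infty S_n$, exactly as the maximum in the answer predicts.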
[ "stackoverflow", "0000771642.txt" ]
Q: How to display difference between two dates as 00Y 00M

Given two DateTimes in C#, how can I display the difference in years and months? I can do basic arithmetic on the timespan that comes from a simple subtraction, but this won't take into account the differing lengths of months, leap years etc. Thanks for any help.

A: Because the underlying representation is measured in 100-nanosecond ticks since 12:00 midnight, January 1, 1 A.D., a subtraction will handle leap years etc. quite correctly:

DateTime date1 = ...
DateTime date2 = ... // date2 must be after date1
TimeSpan difference = date2.Subtract(date1);
DateTime age = new DateTime(difference.Ticks);
int years = age.Year - 1;   // year 1 of the "age" date means 0 full years elapsed
int months = age.Month - 1; // likewise for months
Console.WriteLine("{0}Y, {1}M", years, months);
[ "electronics.stackexchange", "0000151254.txt" ]
Q: Current limiter is not working as it should?

I have created a voltage regulation circuit using an op amp and feedback resistors; now I am trying to add a current limiter to the circuit. I have read how to create this using two transistors and a resistor. So far I know that if the current is to be limited at 10mA, and the transistor base-emitter saturation voltage is 0.65V, then I choose a value for the resistance using 0.65/0.01 = 65Ω.

At first this seems to work; however, when I lower the load resistance, allowing more current to flow, the current is not limiting. I have read into it and followed the steps to create this circuit, so I am now confused as to why this will not work the way it should. If anybody can supply me with a reason as to why this is not doing its job, that would be great!

EDIT: In regards to the base being connected, I have reset the circuit and produced the following results:

Base current 4.34mA
Load current 10.8mA

I am starting to understand that it is a rule of thumb that the output voltage of the op amp must be a volt higher than the output at the load. Is this correct, and why?

I have also been having issues in lowering the gain: although the feedback resistors are set to give the op amp a gain of 1.5, I am still getting an output voltage of 11V at the op amp. Is there something I am missing?

To clarify, I have added a graph of the current flow constantly rising regardless of the current limiting circuit. I am hoping that if I can get the gain to give an output of 7.6 using ((1/2)+1), then this will allow more voltage to the transistors, allowing them to pass current steadily. When I discover why the gain seems to be fixed, I can go on to lower it and hopefully the circuit will work as it should. If anybody knows how to resolve this, please let me know.

A: The image below is your circuit with the base clamp transistor Q2 flipped so it does not look like a "Twister" session :-).

The circuit should work "well enough" IF you have the connections correct and IF Q2's collector actually connects to Q1's base — there is some doubt from the supplied diagram whether this connection exists (see circuit at end).

BUT the CC circuit and CV circuit are fighting each other. As Q1's current is limited by Q2, Vout drops and the op amp drives Q1 harder to provide more voltage, so current rises, so Q2 .... This is probably a race to the death. Maybe not.

If you move the Q1/Q2 block "upstream" and provide a new Q3, driven by the op amp as Q1 was, then this separates the two functions. Ultimately you can only have CC or CV dominant at any one time, so one has to "lose", but in simulation this may give you better investigative control.

You do not say what "when I lower the resistance" means in ohms, what current you then see, or whether this was bricks-and-silicon or simulation (presumably the latter).
[ "stackoverflow", "0016902952.txt" ]
Q: Expected Declaration Specifiers Error?

I'm working in AOP, using AspeCt in an Ubuntu virtual box.

My .acc code:

before (): execution(int main(void)) {
    printf("Before test successful!\n");
}

after (): execution(int main(void)) {
    printf("world!\n");
}

before(): call(foo) {
    printf("Before Foo!\n");
}

My .mc code:

void foo(void) {
    printf("foo\n");
}

int main() {
    printf("Hello everyone ");
    foo();
    return 0;
}

And the error messages:

1:13: error: expected declaration specifiers before ':' token
4:1: error: expected declaration specifiers before 'after'
7:1: error: expected declaration specifiers before 'before'
12:1: error: expected '{' at end of input

Help please? I'm stumped on how to fix this! Thank you.

A: Fixed this: you need to pre-process the files with gcc in a Linux environment, saving them as .acc and .mc files respectively. Then you need to run them through acc together, and the resulting .c files through gcc again. ./a.out and you're done.
[ "stackoverflow", "0060041380.txt" ]
Q: Inspect RoR json API

Is there any way (I mean RoR code) to enumerate existing API endpoints:

input data with data types
output data with data types
something else

Let's suppose we use Ruby on Rails and our API is based on models and their types. (Something like with schema here.)

What is possible and how? What is not possible and why?

A: This is generally not possible. The controllers (and models) in a Rails app define on various layers what data to accept. This is generally not defined in a static format but through a layered validation process (e.g. on the controllers with strong_parameters and on the models with their validations). Since those validations can define arbitrarily complex business rules using Ruby code, usually you can only check whether a given data structure is accepted by passing it to the app and checking that it is accepted without any errors.

With that being said, there are gems which allow you to define "abstract" API schemas which might be consumed by external clients and used to validate data in your app. Examples here are trailblazer, dry-validation, json-schema and others. Note that these approaches usually require you to follow the architectural requirements of these gems, which might heavily affect the way you design your application.
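As a flavor of the schema-gem approach, here is a rough sketch with dry-validation (the contract below is an invented example, not your app's actual API):

# A hypothetical contract describing one endpoint's input
require "dry/validation"

class CreateUserContract < Dry::Validation::Contract
  params do
    required(:name).filled(:string)
    required(:age).filled(:integer)
  end
end

result = CreateUserContract.new.call(name: "Ann", age: 30)
result.success?  # => true; the contract doubles as machine-readable input spec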
[ "stackoverflow", "0049127133.txt" ]
Q: How to insert a word document into MarkLogic?

We are trying to insert a Word document into MarkLogic, but we are not sure how to do it. We have searched many sites but have not got any feasible answers.

A: It doesn't really matter what kind of document is being loaded: all the load APIs work the same. So you could use xdmp:load, e.g.

xdmp:load("/home/whoever/foo.docx", "/dbpath/foo.docx")

Since this is a binary file, it won't be particularly searchable. If you want to make it searchable, there is a CPF pipeline that unzips the docx and cleans it up a bit. See the CPF documentation.
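In recent MarkLogic versions, xdmp:document-load plays the same role — a sketch, with the options element per the server documentation (verify against your version):

xdmp:document-load("/home/whoever/foo.docx",
  <options xmlns="xdmp:document-load">
    <uri>/dbpath/foo.docx</uri>
  </options>)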
[ "cs.stackexchange", "0000051861.txt" ]
Q: Simple Bayesian classification with Laplace smoothing question

I'm having a hard time getting my head around smoothing, so I've got a very simple question about Laplace/add-one smoothing based on a toy problem I've been working with.

The problem is a simple Bayesian classifier for periods ending sentences vs. periods not ending sentences, based on the word immediately before the period. I'm collecting the following counts in training: number of periods, number of sentence-ending periods (for the prior), words and counts for words before sentence-ending periods, and words and counts for words before non-sentence-ending periods.

With add-one smoothing, I understand that
$$P(w|\text{ending}) = \frac{\text{count}(w,\text{ending}) + 1}{\text{count}(w) + N},$$
where $P(w|\text{ending})$ is the conditional probability for word $w$ appearing before a sentence-ending period, $\text{count}(w,\text{ending})$ is the number of times $w$ appeared in the training text before a sentence-ending period, $\text{count}(w)$ is the number of times $w$ appeared in the training text (or should that be the number of times it appeared in the context of any period?), and $N$ is the "vocabulary size".

The question is, what is $N$? Is it the number of different words in the training text? Is it the number of different words that appeared in the context of any period? Or just in the context of a sentence-ending period?

A: The correct formula is
$$P(w|\text{ending}) = \frac{\text{count}(w,\text{ending}) + 1}{\text{count}(w) + N},$$
where $N$ is the number of possible values of $w$. Here $w$ ranges over the set of all words that you'll ever want to estimate $P(w|\text{ending})$ for: this includes all the words in the training text, as well as any other words you might want to compute a probability for. For instance, if you limit yourself to only computing probabilities for English words in a particular dictionary, $N$ might be the number of words in that dictionary.

The intuition/idea behind add-one smoothing is: in addition to every occurrence of a word in the training set, we imagine that we see one artificial "occurrence" of each possible word too — i.e., we augment the training set by adding these artificial occurrences (exactly one per possible word), and then compute probabilities using the ordinary unsmoothed formula on this augmented training set. That's why we get $+1$ in the numerator and $+N$ in the denominator; $N$ is the number of artificial occurrences we've added, i.e., the number of possible words.
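As a sketch, the formula above translates directly into code (the variable names are mine):

def smoothed_prob(count_w_ending, count_w, vocab_size):
    """Add-one smoothed estimate, mirroring the formula in the answer."""
    return (count_w_ending + 1) / (count_w + vocab_size)

# e.g. a word seen 3 times before sentence-ending periods, 10 times total,
# with N = 50,000 possible words:
p = smoothed_prob(3, 10, 50000)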
[ "wordpress.stackexchange", "0000102059.txt" ]
Q: How to show multiple post types on taxonomy archive?

On a taxonomy archive, how does one create multiple loops for different post types that share that taxonomy?

I'm using taxonomy-themes.php so that it applies to all three of the terms in the 'themes' taxonomy (climate change, governance, peace building). On each term's archive page, I want to output the main taxonomy loop (posts), and then I want to create loops for each of 'events', 'resources' and 'staff' (all of which are custom post types).

I'm assuming that I create a new WP_Query for each of the post types, but how do I tell that query which term I want it to get 'events' from (i.e. the current archive's term)?

A: As you point out, if you want separate loops for different post types you'll need a separate WP_Query() per post type. In the template you can get the current term (ID) being viewed via get_queried_object_id() (see source):

$args = array(
    'post_type' => 'staff',
    'tax_query' => array(
        array(
            'taxonomy' => 'themes',
            'terms' => get_queried_object_id(),
            'field' => 'id'
        )
    ),
);
$staff = new WP_Query( $args );
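From there, the standard secondary-loop pattern applies — a minimal sketch:

if ( $staff->have_posts() ) {
    while ( $staff->have_posts() ) {
        $staff->the_post();
        the_title( '<h3>', '</h3>' );  // or whatever per-post markup you need
    }
    wp_reset_postdata();  // restore the main query's globals for later loops
}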
[ "stackoverflow", "0054816411.txt" ]
Q: Can the Ansible raw ssh module be used like "expect"?

In other words, can the Ansible "expect" module be used over a raw SSH connection? I'm trying to automate some simple embedded devices that do not have a full Python or even a shell. The Ansible raw module works fine for simple commands, but I'd like to drive things in an expect-like manner. Can this be done?

A: The expect module is written in Python, so no, that won't work.

Ansible does have a model for interacting with devices like network switches that similarly don't run Python; you can read about that in How Network Automation Is Different. I don't think that will offer any sort of immediate solution to your problem, but it suggests a way of pursuing things if it was really important to integrate with Ansible.

It would probably be simpler to just use the actual expect program instead of Ansible.
[ "stackoverflow", "0023169509.txt" ]
Q: Mongodb session store in Expressjs 4

In Express 3 I used connect-mongo for the session store:

var mongoStore = require('connect-mongo')(express);

But after I switched to Express 4 it doesn't work. I got this error:

Error: Most middleware (like session) is no longer bundled with Express and must be installed separately. Please see https://github.com/senchalabs/connect#middleware.

I see connect has been removed from Express 4. How can I continue to use this, or are there any good libs that I can use for Express 4? Thanks.

A: You need to install the express-session package separately now. It can be found at https://github.com/expressjs/session

Use the following commands to get up and running:

npm install --save express-session cookie-parser

and then in your server.js file:

var express = require('express'),
    cookieParser = require('cookie-parser'),
    expressSession = require('express-session'),
    MongoStore = require('connect-mongo')(expressSession),
    app = express();

app.use(cookieParser());
app.use(expressSession({
    secret: 'secret',
    store: new MongoStore(),
    resave: false,
    saveUninitialized: true
}));

And enjoy
[ "stackoverflow", "0032453003.txt" ]
Q: Angular-meteor user collection only returns 1 object

I am writing an app with Angular as the front end and Meteor as the back end, and I want to retrieve every user in the collection "Users" (I added the modules accounts-ui and accounts-password). But when I execute the following code, it only returns one object (the last added) while there are 3 users.

if (Meteor.isClient) {
    angular.module('timeAppApp')
        .controller('SignInCtrl', function($scope, $meteor, $location) {
            $scope.LogIn = function() {
                console.log($meteor.collection(Meteor.users).subscribe('users_by_email', $scope.username));
                console.log($meteor.collection(Meteor.users).subscribe('all_users'));
            };
        });
}

if (Meteor.isServer) {
    // Returns all users found by email
    Meteor.publish('users_by_email', function(emailE) {
        return Meteor.users.find({'emails.address': emailE});
    });

    Meteor.publish('all_users', function() {
        return Meteor.users.find({}, {fields: {emails: 1}});
    });
}

I'm new to Meteor so I'm still experimenting, but now I'm really stuck.

A: Try this. On the server side:

Meteor.methods({
    get_users_by_email: function (emailE) {
        return Meteor.users.find({
            emails: {
                $elemMatch: {
                    address: emailE
                }
            }
        }).fetch();
    }
});

On the client side:

$meteor.call('get_users_by_email', $scope.username).then(
    function(data) {
        console.log('success get_users_by_email', data);
    },
    function(err) {
        console.log('failed', err);
    }
);
[ "softwareengineering.stackexchange", "0000131499.txt" ]
Q: What are the reasons one would use fully qualified class names in source code?

I recently ran across code where the developers used both fully qualified class names AND imported class names in their source code. Example:

import packageA.Foo;

public class Example {
    public packageB.Bar doSomething() {
        final Foo foo = new Foo();
        ...
    }
}

I was under the impression that the only reason one might want to use a fully qualified class name in source code is when two classes in different packages share a name and the qualified name is needed to distinguish between them. Am I wrong?

A: No — you are quite right. Using fully qualified package names is usually considered poor style, except when it is necessary to avoid collisions. If a package name is especially short and descriptive, using qualified identifiers can make code more expressive. But the JLS prescribes domain-based names for most packages, so package names usually aren't short and descriptive.

A: The other reason (besides collisions) would be if you need a class only once. This is especially true if the class implements a common interface and you only need the real class name for instantiation. Using the fully qualified name makes it easier to recognize that an uncommon class is used here, and where to find it (assuming sane naming conventions).
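A classic instance of the collision case from the JDK itself (a minimal sketch):

import java.util.Date;

public class Example {
    public static void main(String[] args) {
        Date now = new Date();                 // the imported java.util.Date
        // java.sql.Date must stay fully qualified: the simple name is taken
        java.sql.Date sqlNow = new java.sql.Date(now.getTime());
        System.out.println(sqlNow);
    }
}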
[ "stackoverflow", "0048943720.txt" ]
Q: Closing the app and reopening it in the activity I want, without doing it all over again

I cannot figure something out. I am creating an Android app that is connected to a DB. When a user registers and then logs in, if he closes the application and reopens it, he has to do it all over again. I want that when the user reopens the app, if he has not logged out, he skips the login activity. Can you advise me on what to use? SharedPreferences? What do apps like Facebook, Instagram and others use?

PS. There are many users. Thank you very much for helping.

A: You can use SharedPreferences for session management of login/signup:

When the user logs in, hit the API or internal database with validation, get all the user details and store them in SharedPreferences. The next time the user opens the app, it first checks the status in SharedPreferences: if logged in, get the data; otherwise the login screen will come up. When the user logs out, clear all login data.

Initialization:

SharedPreferences pref = getApplicationContext().getSharedPreferences("MyPref", 0); // 0 - for private mode
Editor editor = pref.edit();

Storing data:

editor.putBoolean("login_status", true); // storing boolean - true/false
editor.putString("name", "string value"); // storing string
editor.putInt("user_id", 42); // storing integer
editor.commit(); // commit changes

Retrieving data:

pref.getString("name", null); // getting String
pref.getInt("user_id", 0); // getting Integer

Reference: https://www.androidhive.info/2012/08/android-session-management-using-shared-preferences/
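For the logout step mentioned above, clearing the stored session uses the same pref/editor objects as in the answer:

editor.clear();   // drop login_status, name, user_id, ...
editor.commit();  // or editor.apply() for an asynchronous write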
[ "stackoverflow", "0044119944.txt" ]
Q: Conditional strip values left of delimiter pandas column

I have a column in a pandas DF that looks like:

Column
30482304823
3204820
2304830
Apple - 390483204
Orange - 3939491
grape - 34038414
apple

I want to remove everything to the left of the '-', so basically I want the above to look like:

Column
30482304823
3204820
2304830
390483204
3939491
34038414
apple

I have tried the following pandas snippets:

out['Column'] = out['Column'].str.split('-', 1, expand=True)[1]
out['Column'] = out['Column'].str.replace('Orange -', '', )
out['Column'].str.map(lambda x: x.lstrip('Orange -'))
out['Column'].str.lstrip('Orange -')

A: Simplest I can think of is

df.Column.str.split('\s*-\s*').str[-1]

0    30482304823
1        3204820
2        2304830
3      390483204
4        3939491
5       34038414
6          apple
Name: Column, dtype: object

A: out['Column'] = out['Column'].apply(lambda x : str(x).split('-')[-1])
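An equivalent formulation with a regex replace, in case you prefer keeping the column as-is when no '-' is present (regex=True is needed on recent pandas):

# Drop everything up to and including the last '-' plus trailing spaces
out['Column'] = out['Column'].str.replace(r'^.*-\s*', '', regex=True)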
[ "stackoverflow", "0046092451.txt" ]
Q: Bash variable expansion

I would like to do this kind of thing in bash:

OPT="val1 val2=\"info\""
CMD="mycommand -a $OPT file.cfg"

and I get:

mycommand -a 'val1' 'val2="info" ' file.cfg

instead I would like to have:

mycommand -a 'val1 val2="info" ' file.cfg

How could I do this?

A: You can just single quote $OPT as such:

CMD="mycommand -a '$OPT' file.cfg"

Result:

$ echo $CMD
mycommand -a 'val1 val2="info"' file.cfg
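Note that embedding single quotes only fixes how the string echoes; if CMD is meant to be executed, the usual robust pattern is a bash array — a sketch:

OPT='val1 val2="info"'
CMD=(mycommand -a "$OPT" file.cfg)  # each array element stays one argument
"${CMD[@]}"                         # runs: mycommand -a 'val1 val2="info"' file.cfg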
[ "stackoverflow", "0029543519.txt" ]
Q: Unable to remove text-decoration from label

I have the following example HTML:

<div class="a">
    <div>
        <ul class="b">
            <li class="c">
                <input id="123" type="checkbox" checked></input>
                <label for="123">test</label>
            </li>
            <li class="c">
                <input id="456" type="checkbox"></input>
                <label for="456">test</label>
            </li>
        </ul>
    </div>
</div>

And the following CSS:

.a {
    text-decoration: line-through;
}

.a .b .c input[type="checkbox"]:not(:checked) + label {
    text-decoration: none !important;
}

This can be found at this JS fiddle: https://jsfiddle.net/4q8a7yox/1/

I would like to remove the line-through from all labels for unchecked items. The weird thing is that the Chrome console shows the 'computed' text-decoration as 'none', yet the line is still shown. All browsers (IE10 and FF) fail in a similar way. Any idea what is going on here?

A: According to the answer of this question, that's not possible. You cannot remove text-decoration properties for children nodes. So the best thing to do is not adding text-decoration: line-through; to your .a elements and then trying to remove it for unchecked inputs, but instead adding the line-through only for checked inputs.

EDIT: try this https://jsfiddle.net/b2bL8ksn/
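A minimal sketch of the inverted rule the answer describes (applied to the markup above):

/* No line-through on .a; strike through only labels of checked boxes */
.a .b .c input[type="checkbox"]:checked + label {
    text-decoration: line-through;
}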