[ "gamedev.stackexchange", "0000172414.txt" ]
Q: Unity cross-platform native plugins I need to create a C++ native plugin for my game. If I want to support macOS and Windows, do I need to write my plugin twice - one in Xcode with a bundle project and another in VS for Windows - or is there a way to support both with one codebase? A: Native typically means made for that target environment, i.e. native for iOS means it will only work on iOS, and native for Android or Windows means it will only run in its respective environment of Android or Windows. There is no shortcut here. The only exception is if you can get Unity directly to do what you need, or, technically, if you can find an off-the-shelf solution that saves you from having to program anything unique.
[ "stackoverflow", "0046685359.txt" ]
Q: Matching Columns with Overlapping Intervals (lubridate) I have two data frames of different number of rows and number of columns: each of these data frames have a date interval. df has an additional column which indicates some kind of attribute. My goal is to extract information from df ( with the attributes) to df2 under certain conditions. The procedure should be the following: For each date interval of df2, check if there is any interval in df which overlaps with the interval of df2. If yes, create a column in df2 which indicates the attributes matching with the overlapping interval of df. There can be multiple attributes that are matched to a specific interval of df2. I created the following example of my data: library(lubridate) date1 <- as.Date(c('2017-11-1','2017-11-1','2017-11-4')) date2 <- as.Date(c('2017-11-5','2017-11-3','2017-11-5')) df <- data.frame(matrix(NA,nrow=3, ncol = 4)) names(df) <- c("Begin_A", "End_A", "Interval", "Attribute") df$Begin_A <-date1 df$End_A <-date2 df$Interval <-df$Begin_A %--% df$End_A df$Attribute<- as.character(c("Attr1","Attr2","Attr3")) ### Second df: date1 <- as.Date(c('2017-11-2','2017-11-5','2017-11-7','2017-11-1')) date2 <- as.Date(c('2017-11-3','2017-11-6','2017-11-8','2017-11-1')) df2 <- data.frame(matrix(NA,nrow=4, ncol = 3)) names(df2) <- c("Begin_A", "End_A", "Interval") df2$Begin_A <-date1 df2$End_A <-date2 df2$Interval <-df2$Begin_A %--% df2$End_A This results in these data frames: df: Begin_A End_A Interval Attribute 2017-11-01 2017-11-05 2017-11-01 UTC--2017-11-05 UTC Attr1 2017-11-01 2017-11-03 2017-11-01 UTC--2017-11-03 UTC Attr2 2017-11-04 2017-11-05 2017-11-04 UTC--2017-11-05 UTC Attr3 df2: Begin_A End_A Interval 2017-11-02 2017-11-03 2017-11-02 UTC--2017-11-03 UTC 2017-11-05 2017-11-06 2017-11-05 UTC--2017-11-06 UTC 2017-11-07 2017-11-08 2017-11-07 UTC--2017-11-08 UTC 2017-11-01 2017-11-01 2017-11-01 UTC--2017-11-01 UTC My desired data frames look like this: Begin_A End_A Interval Matched_Attr 2017-11-02 2017-11-03 2017-11-02 UTC--2017-11-03 UTC Attr1;Attr2 2017-11-05 2017-11-06 2017-11-05 UTC--2017-11-06 UTC Attr1;Attr3 2017-11-07 2017-11-08 2017-11-07 UTC--2017-11-08 UTC NA 2017-11-01 2017-11-01 2017-11-01 UTC--2017-11-01 UTC Attr1;Attr2 I already looked into the int_overlaps() function but could not make the "scanning through all intervals of another column"-part work. If yes, is there any solution that makes use of the tidyr environment? A: Using tidyverse´s lubridate package and it´s function int_overlaps(), you can create a simple for loop to go through the individual values of df2$Interval like follows: df2$Matched_Attr <- NA for(i in 1:nrow(df2)){ df2$Matched_Attr[i] <- paste(df$Attribute[int_overlaps(df2$Interval[i], df$Interval)], collapse=", ") } giving the following outcome # Begin_A End_A Interval Matched_Attr #1 2017-11-02 2017-11-03 2017-11-02 UTC--2017-11-03 UTC Attr1, Attr2 #2 2017-11-05 2017-11-06 2017-11-05 UTC--2017-11-06 UTC Attr1, Attr3 #3 2017-11-07 2017-11-08 2017-11-07 UTC--2017-11-08 UTC #4 2017-11-01 2017-11-01 2017-11-01 UTC--2017-11-01 UTC Attr1, Attr2 I left the NA strategy open, but additional line df2$Matched_Attr[df2$Matched_Attr==""]<-NA would return exact desired outcome. 
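For the tidyr/tidyverse part of the question, an illustrative purrr-based sketch (not part of the original answer) that applies the same int_overlaps() logic per row without an explicit for loop could look like:

    library(purrr)
    df2$Matched_Attr <- map_chr(seq_len(nrow(df2)), function(i) {
      hits <- df$Attribute[int_overlaps(df2$Interval[i], df$Interval)]
      if (length(hits) == 0) NA_character_ else paste(hits, collapse = "; ")
    })

This returns NA directly when nothing overlaps, so no separate cleanup line is needed.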
In response to your comment (only perform the above action when a df$ID[i]==df2$ID[i] condition is met), the inplementation follows: library(lubridate) #df df <- data.frame(Attribute=c("Attr1","Attr2","Attr3"), ID = c(3,2,1), Begin_A=as.Date(c('2017-11-1','2017-11-1','2017-11-4')), End_A=as.Date(c('2017-11-5','2017-11-3','2017-11-5'))) df$Interval <- df$Begin_A %--% df$End_A ### Second df: df2 <- data.frame(ID=c(3,4,5), Begin_A=as.Date(c('2017-11-2','2017-11-5','2017-11-7')), End_A=as.Date(c('2017-11-3','2017-11-6','2017-11-8'))) df2$Interval <- df2$Begin_A %--% df2$End_A df2$Matched_Attr <- NA for(i in 1:nrow(df2)){ if(df2$ID[i]==df$ID[i]){ df2$Matched_Attr[i] <- paste(df$Attribute[int_overlaps(df2$Interval[i], df$Interval)], collapse=", ") } } print(df2) # ID Begin_A End_A Interval Matched_Attr #1 3 2017-11-02 2017-11-03 2017-11-02 UTC--2017-11-03 UTC Attr1, Attr2 #2 4 2017-11-05 2017-11-06 2017-11-05 UTC--2017-11-06 UTC <NA> #3 5 2017-11-07 2017-11-08 2017-11-07 UTC--2017-11-08 UTC <NA>
[ "superuser", "0000143374.txt" ]
Q: Sony VAIO PCG-9RBL Drivers I need to find the Windows drivers for a Sony VAIO PCG-9RBL. Similar to this question, the PCG-9RBL doesn't exist. So what else do they call it? When you type in PCG-9RBL at Sony's eSupport it brings up: PCGK23 PCGK23Q PCGK25 PCGK27 PCGK32 Any idea? A: The PCG-9RBL has a subrange, usually PCG-9RBL K23, PCG-9RBL K25, PCG-9RBL K27, etc. The Sony site already says it: you need to be more specific. Check your laptop for K23, K23Q, K25, K27 or K32 somewhere on it (usually on the bottom). Also check http://esupport.sony.com/US/perl/support-info.pl?info_id=264
[ "stackoverflow", "0018677329.txt" ]
Q: Passing values between dialog boxes in MFC When using MFC, if I have a main dialog box and another dialog box is called from the main one, what message is sent to the main dialog box to let it know it has focus again - is it WM_SETFOCUS()? If so, what parameters are needed? The problem I have is: a value is selected in the child dialog and I want it copied to an edit control in the main dialog box once it (the child dialog) closes. Right now, I have it so the second dialog box copies its value to a global variable, and once the second dialog box closes, I want the main dialog box to grab the global variable and display it in the edit control. A: You can also use a member variable in the child dialog box, like CChildDialogBox dlg; if (dlg.DoModal() == IDOK) // child dialog saves the value in a CString member variable m_str { GetDlgItem(IDC_EDIT1)->SetWindowText(dlg.m_str); } This MSDN article describes how you can set up member variables connected to controls in a dialog box.
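For completeness, a minimal sketch (not from the original answer; IDC_EDIT_VALUE and m_str are illustrative names) of how the child dialog could fill that member just before closing:

    // In the child dialog class: copy the edit control's text into the member,
    // then let the default handler close the dialog so DoModal() returns IDOK.
    void CChildDialogBox::OnOK()
    {
        GetDlgItem(IDC_EDIT_VALUE)->GetWindowText(m_str);
        CDialog::OnOK();
    }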
[ "stackoverflow", "0015035339.txt" ]
Q: Android - Getting data from a TextView located inside a TableRow Hey guys, here's my problem. I have a long list of data inside a ListView (organized alphabetically), so to make the user's life easier I want to let them jump directly to the part of the list they're looking for by clicking a letter (in a TextView) contained in a table (above the list). So the user sees: A B C D E F.... Item 1 Item 2 Item 3 ... ... When 'D' is clicked I'll use something like ScrollTo(position) to get to that part of the list. How can I do this without creating 26 onClick listeners, one for each TextView? My idea was to use a table and hopefully get the TextView that was clicked when the TableRow listener is activated. OR What would be the best way to do this? A: I would use a LinearLayout with android:orientation="horizontal" and put a Button in it for each letter. Each one has an onClick in your XML that calls a function. Then in the function do something like int id = v.getId(); Button btn = (Button) findViewById(id); String letter = btn.getText().toString(); Then use that letter however you had planned on searching through the list, using String functions or assigning each letter to a number in the list. With something like this, you only have one onClick listener and you use whichever View was clicked to search in the list.
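As a rough illustration of that idea (a sketch, not part of the original answer; it assumes an Activity with a ListView field named listView whose adapter holds String items, and it casts the clicked View directly instead of calling findViewById):

    // Referenced from each letter Button via android:onClick="onLetterClick".
    public void onLetterClick(View v) {
        String letter = ((Button) v).getText().toString();
        // Scan the adapter for the first item starting with that letter and jump to it.
        for (int i = 0; i < listView.getAdapter().getCount(); i++) {
            String item = listView.getAdapter().getItem(i).toString();
            if (item.toUpperCase().startsWith(letter)) {
                listView.setSelection(i);
                break;
            }
        }
    }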
[ "stackoverflow", "0061751018.txt" ]
Q: Enable azure diagonstic setting using ARM template for azure data factory,azure sql When I am enabling the diagonstic setting fromt the azure portal for ADF & Azuresql, in the ARM template I am not able to find anything in ARM with respect to diagonstic setting.Similar way for keyvault and sql I need the ARM template for enabling the diagonstic setting. I tried from my side for ADF since I new to ARM template I am not able to find the method for enabling the diagonstic setting. { "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "parameters": { "factoryName": { "type": "string", "metadata": { "description": "The name of the Data Factory" } } }, "resources": [ { "type": "Microsoft.DataFactory/factories", "apiVersion": "2018-06-01", "name": "[parameters('factoryName')]", "location": "[resourceGroup().location]", "identity": { "type": "SystemAssigned" }, "properties": { }, "resources": [ { "type": "Microsoft.DataFactory/factories/providers/diagnosticSettings", "apiVersion": "2017-05-01-preview", "name": "[concat(parameters('factoryName'),'/microsoft.insights/', parameters('settingName'))]", "location": "[resourceGroup().location]", "dependsOn": [ "[concat('Microsoft.DataFactory/factories/', parameters('factoryName'))]" ], "properties": { "name": "[parameters('DS03')]", "workspaceId": "[/subscriptions/3xxxxx-xxxxx-x-xxxx--xx/resourceGroups/BDAZxfdfG01]" } } ] } ] } A: The ARM template above is creating the diagnostic settings; however it is not actually configuring the logging of anything. Add the following for all Data Factory metrics after your workspaceID property. "logAnalyticsDestinationType": "Dedicated", "logs": [ { "category": "PipelineRuns", "enabled": true, "retentionPolicy": { "enabled": false, "days": 0 } }, { "category": "TriggerRuns", "enabled": true, "retentionPolicy": { "enabled": false, "days": 0 } }, { "category": "ActivityRuns", "enabled": true, "retentionPolicy": { "enabled": false, "days": 0 } } ], "metrics": [ { "category": "AllMetrics", "timeGrain": "PT1M", "enabled": true, "retentionPolicy": { "enabled": false, "days": 0 } } ] Besides configuring the diagnostic settings what metrics and diagnostics must be select to send to log analytics. These fields align to those on the diagnostic blade: The "logAnalyticsDestinationType": "Dedicated" is to ensure the logs go to their own table as opposed to the default AzureDiagnostic table. There is documented limitation in the original table.
[ "stackoverflow", "0062143582.txt" ]
Q: MFC or WIN32 UI Automation Client Sample? I have been looking for a sample of using UI Automation in MFC (or Win32) as a client to scroll another applications window. But I can't find any samples? Does anyone know of one or can provide one? A: The following is a Win32 C++ example of scrolling vertically on a notepad window: testWindow.txt - Notepad. Main steps: Find the main window handle of target application using FindWindow. FindWindow(L"Notepad", L"testWindow.txt - Notepad"); Get IUIAutomationElement object from above found window handle. pClientUIA->ElementFromHandle(targetWindow, &pRootElement); Find the handle of the window which containing the scrollbar using UIA_ScrollBarControlTypeId and NormalizeElement. Send WM_VSCROLL message to the window containing the scrollbar. PostMessage(foundHwnd, WM_VSCROLL, SB_LINEUP, 0); This is the complete code you can refer to: #include <windows.h> #include <uiautomation.h> IUIAutomation *pClientUIA; IUIAutomationElement *pRootElement; HWND FindScrollbarContainerWindow(const long controlType) { HRESULT hr; BSTR name; IUIAutomationCondition *pCondition; VARIANT varProp; varProp.vt = VT_I4; varProp.uintVal = controlType; hr = pClientUIA->CreatePropertyCondition(UIA_ControlTypePropertyId, varProp, &pCondition); if (S_OK != hr) { printf("CreatePropertyCondition error: %d\n", hr); } IUIAutomationElementArray *pElementFound; hr = pRootElement->FindAll(TreeScope_Subtree, pCondition, &pElementFound); if (S_OK != hr) { printf("CreatePropertyCondition error: %d\n", hr); } int eleCount; pElementFound->get_Length(&eleCount); if (eleCount == 0) return NULL; for (int i = 0; i <= eleCount; i++) { IUIAutomationElement *pElement; hr = pElementFound->GetElement(i, &pElement); if (S_OK != hr) { printf("CreatePropertyCondition error: %d\n", hr); } hr = pElement->get_CurrentName(&name); if (S_OK != hr) { printf("CreatePropertyCondition error: %d\n", hr); } wprintf(L"Control Name: %s\n", name); hr = pElement->get_CurrentClassName(&name); if (S_OK != hr) { printf("CreatePropertyCondition error: %d\n", hr); } wprintf(L"Class Name: %s\n", name); IUIAutomationTreeWalker* pContentWalker = NULL; hr = pClientUIA->get_ContentViewWalker(&pContentWalker); if (pContentWalker == NULL) return NULL; // Get ancestor element nearest to the scrollbar UI Automation element in the tree view IUIAutomationElement *ncestorElement; hr = pContentWalker->NormalizeElement(pElement, &ncestorElement); hr = ncestorElement->get_CurrentName(&name); wprintf(name); // Get window handle of ancestor element UIA_HWND controlContainerHwnd = NULL; hr = ncestorElement->get_CurrentNativeWindowHandle(&controlContainerHwnd); printf(""); if (controlContainerHwnd) { return (HWND)controlContainerHwnd; } } return NULL; } int main() { // Find target window HWND targetWindow = FindWindow(L"Notepad", L"testWindow.txt - Notepad"); if (NULL == targetWindow) { printf("FindWindow fails with error: %d\n", GetLastError()); return FALSE; } HRESULT hr = CoInitializeEx(NULL, COINIT_APARTMENTTHREADED); if (S_OK != hr) { printf("CoInitializeEx error: %d\n", hr); return 1; } hr = CoCreateInstance(CLSID_CUIAutomation, NULL, CLSCTX_INPROC_SERVER, IID_IUIAutomation, reinterpret_cast<void**>(&pClientUIA)); if (S_OK != hr) { printf("CoCreateInstance error: %d\n", hr); return 1; } hr = pClientUIA->ElementFromHandle(targetWindow, &pRootElement); if (S_OK != hr) { printf("ElementFromHandle error: %d\n", hr); return 1; } // Find scroll bar and its containing window HWND foundHwnd = 
FindScrollbarContainerWindow(UIA_ScrollBarControlTypeId); if (NULL == foundHwnd) return 1; // Vertical scroll bar // Line up - Like click top arrow button to scroll up one line PostMessage(foundHwnd, WM_VSCROLL, SB_LINEUP, 0); Sleep(1000); // Line down PostMessage(foundHwnd, WM_VSCROLL, SB_LINEDOWN, 0); Sleep(1000); // Page up PostMessage(foundHwnd, WM_VSCROLL, SB_PAGEUP, 0); Sleep(1000); // Page down PostMessage(foundHwnd, WM_VSCROLL, SB_LINEDOWN, 0); Sleep(1000); } -------------------------------------------------------------- UPDATE: Another method is using IUIAutomationScrollPattern::Scroll(). More direct and simple. Similar thread.
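For the IUIAutomationScrollPattern::Scroll() route mentioned in the update, a rough sketch (assuming pRootElement is set up as in the full example and that the element actually supports the Scroll pattern - for Notepad that is typically the document/edit child rather than the top-level window):

    IUIAutomationScrollPattern *pScroll = NULL;
    HRESULT hrScroll = pRootElement->GetCurrentPatternAs(UIA_ScrollPatternId,
                                                         IID_IUIAutomationScrollPattern,
                                                         (void**)&pScroll);
    if (SUCCEEDED(hrScroll) && pScroll != NULL)
    {
        // Scroll vertically by one small increment; leave the horizontal axis alone.
        pScroll->Scroll(ScrollAmount_NoAmount, ScrollAmount_SmallIncrement);
        pScroll->Release();
    }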
[ "stackoverflow", "0002080277.txt" ]
Q: Is there a language with RAII + ref counting that does not have unsafe pointer arithmetic? RAII = Resource Acquisition Is Initialization Ref counting = "poor man's GC" Together, they are quite powerful (like a ref-counted 3D object holding a VBO, which it frees when its destructor is called). Now, the question is -- does RAII exist in any language besides C++? In particular, a language that does not allow pointer arithmetic / buffer overflows? A: D has RAII but still has pointer arithmetic :( BUT, you don't really have to use it. Please note getting D to work was a pain in the butt for me, so I'M JUST SAYING. A: While not exactly RAII, Python has the with statement and C# has the using statement. A: Perl 5 has ref counting and destructors that are guaranteed to be called when all references fall out of scope, so RAII is available in the language, although most Perl programmers don't use the term. And Perl 5 does not expose raw pointers to Perl code. Perl 6, however, has a real garbage collector, and in fact allows the garbage collector to be switched out; so you can't rely on things being collected in any particular order. I believe Python and Lua use reference counting.
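To make the Python remark concrete, a minimal sketch of the with statement giving deterministic, RAII-style cleanup (names are illustrative):

    class VertexBuffer:
        def __enter__(self):
            print("acquire GPU buffer")   # acquisition happens on entry
            return self
        def __exit__(self, exc_type, exc, tb):
            print("release GPU buffer")   # release runs even if the block raises

    with VertexBuffer() as vbo:
        pass  # use the buffer; cleanup happens deterministically at block exit

Unlike a C++ destructor, __exit__ only runs when the object is used with "with" (or via contextlib), not whenever the last reference goes away.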
[ "stackoverflow", "0045709263.txt" ]
Q: Removing currency symbol, except on Woocommerce cart and checkout pages I want to remove the currency symbol from my webshop, except on the shopping-cart page and the checkout. So I do NOT want a currency symbol on: category pages product pages home page landing pages blogs But I DO want the currency symbol on: shopping cart checkout pages confirmation e-mail I have been given this code: function avia_remove_wc_currency_symbol( $currency_symbol, $currency ) { if ( !is_cart() || !is_checkout()){ $currency_symbol = ''; return $currency_symbol; } } add_filter('woocommerce_currency_symbol', 'avia_remove_wc_currency_symbol', 10, 2); Which removes the currency symbol from all pages. It doesn't make it reappear on the shopping cart or checkout pages. A: Try this: <?php function avia_remove_wc_currency_symbol( $currency_symbol, $currency ) { $currency_symbol = ''; if ( is_cart() || is_checkout()) $currency_symbol = '$'; return $currency_symbol; } add_filter('woocommerce_currency_symbol', 'avia_remove_wc_currency_symbol', 10, 2); ?>
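A possible refinement of the answer above (a sketch, not part of the original): return the symbol WooCommerce passed in instead of hard-coding '$', so whatever currency the shop is configured with stays intact on the cart and checkout pages. Like the original, it does not special-case the confirmation e-mail.

    function avia_remove_wc_currency_symbol( $currency_symbol, $currency ) {
        if ( is_cart() || is_checkout() ) {
            return $currency_symbol; // keep the configured symbol here
        }
        return ''; // hide it everywhere else
    }
    add_filter( 'woocommerce_currency_symbol', 'avia_remove_wc_currency_symbol', 10, 2 );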
[ "math.stackexchange", "0001931365.txt" ]
Q: Is there a quick way to find the remainder when this determinant is divided by $5$? Find the remainder when the determinant $\begin{vmatrix} { 2014 }^{ 2014 } & { 2015 }^{ 2015 } & { 2016 }^{ 2016 } \\ { 2017 }^{ 2017 } & { 2018 }^{ 2018 } & { 2019 }^{ 2019 } \\ { 2020 }^{ 2020 } & { 2021 }^{ 2021 } & { 2022 }^{ 2022 } \end{vmatrix}$ is divided by $5$. I'm aware that this problem has a number-theoretic solution involving congruence relations. But considering that this was asked as a multiple-choice question in a test, what should be the best way to approach problems like this? The options were $(a)\quad1\quad (b)\quad2\quad (c)\quad3\quad (d)\quad 4$ A: Note that, modulo $5$, \begin{align} 2014 &\equiv-1, & 2015 & \equiv 0, & 2016 &\equiv 1,\\ 2017 & \equiv2, & 2018 &\equiv-2, & 2019 &\equiv-1, \\ 2020 &\equiv 0, & 2021 &\equiv 1, & 2022 & \equiv2. \end{align} Furthermore $2$ and $-2$ have order $4$ modulo $5$, so the determinant is congruent to $$\begin{vmatrix} 1 & 0 & 1 \\ 2 & (-2)^2 & -1 \\ 0 & 1 & 2^2 \end{vmatrix}=\begin{vmatrix} 1 & 0 & 1 \\ 2 & -1 & -1 \\ 0 & 1 & -1 \end{vmatrix}=\begin{vmatrix} 1 & 0 & 1 \\ 0 & -1 & 2 \\ 0 & 1 & -1 \end{vmatrix}=(-1)^2-2=-1\equiv 4.$$
[ "stackoverflow", "0031328971.txt" ]
Q: Android Wear: CapabilityApi times out, doesn't return capabilities I'm trying to use a new CapabilityApi introduced in play services 7.3 to learn about the capabilities of my android wear device(Asus Zenwatch). I've checked out this question and can confirm that Wearable.NodeApi.getConnectedNodes(...) approach does work to get a list of connected nodes and I do see the watch in the list. This is the code I'm running in an app on my phone to query capabilities of connected wear devices: GoogleApiClient mGoogleApiClient = new GoogleApiClient.Builder(context) .addApi(Wearable.API) .build(); PendingResult<CapabilityApi.GetAllCapabilitiesResult> result = Wearable.CapabilityApi.getAllCapabilities(mGoogleApiClient, CapabilityApi.FILTER_ALL); Map<String, CapabilityInfo> capabilities = result.await().getAllCapabilities(); context in this case is an activity. This call is happening on a non-ui thread, so calling await() is safe. I inserted a breakpoint on the last line, but when I hit it and step over it, the debugger never returns, as if the method runs forever and never returns. If I replace await() with await(10000, TimeUnit.MILLISECONDS), then it just times out after 10 seconds and capabilities is assigned null. The behavior is the same on a phone paired with android wear, or not paired with it, or on an emulator. Am I missing something? What's the correct way to use CapabilityApi and get a list of available capabilities and not have it time out? EDIT 1: With help of @ianhanniballake I came up with this code that doesn't hang, but returns the result very quickly: ConnectionResult connectionResult = mGoogleApiClient.blockingConnect(CONNECTION_TIME_OUT_MS, TimeUnit.MILLISECONDS); if (connectionResult.isSuccess()) { PendingResult<CapabilityApi.GetAllCapabilitiesResult> result = Wearable.CapabilityApi.getAllCapabilities(mGoogleApiClient, CapabilityApi.FILTER_ALL); mGoogleApiClient.disconnect(); Map<String, CapabilityInfo> capabilities = result.await().getAllCapabilities(); return capabilities != null && !capabilities.isEmpty(); } else { mGoogleApiClient.disconnect(); return false; } However, when I run this code on a phone that is paired with Android smart watch, capabilities ends up being either an empty list, or null. Is this the expected result? I thought I was supposed to get a list of my smart watch's capabilities. Or is this code supposed to be ran on the watch itself? A: You have to connect your GoogleApiClient. Consider using blockingConnect() as you are on a background thread, then checking the resulting ConnectionResult to ensure the connection succeeded.
[ "stackoverflow", "0021221982.txt" ]
Q: combine 2 select queries in mysql I have 2 select statements: timestamp of emp getting awards for specific emp id SELECT * FROM user_table,employeetable,awards where user_table.empid=employeetable.empid AND user_table.empid=awards.empid AND user_table.empid=123 ORDER BY timestamp DESC All employees staying around 25 miles from the current loc:current location: lat =37 lng=-122 SELECT * ( 3959 * acos( cos( radians(37) ) * cos( radians( lat ) ) * cos( radians( lng ) - radians(-122) )+ sin( radians(37) ) * sin( radians( lat ) ) ) ) AS distance FROM user_table,employeetable,awards where user_table.empid=employeetable.empid AND user_table.empid=awards.empid HAVING distance < 25 ORDER BY distance; How do I combine both and ORDER BY timestamp ?btw both have field timestamp. 1.has specific user 2.all users within specific radius I really appreciate any help.Thanks in Advance. A: You can combine the two queries into a single query, just using logic in the where clause (which this has turned into a having clause: select *, ( 3959 * acos( cos( radians(37) ) * cos( radians( lat ) ) * cos( radians( lng ) - radians(-122) )+ sin( radians(37) ) * sin( radians( lat ) ) ) ) as distance from user u join employee e on u.empid = e.empid join awards a on u.empid = a.empid having empid = 123 or distance < 25; This uses having instead of where so the distance column alias can be used instead of the formula.
[ "webmasters.stackexchange", "0000000325.txt" ]
Q: How much attention should I pay to page-load metrics? My site scores a D in YSlow. How important is it that I get it up to, say, a B or an A? I feel the page already loads quickly. I tested it at home on a modest 512k connection and it loaded in an acceptable time and was quick to browse, etc. But according to YSlow there's heaps of room for improvement. Is it worth it? A: I would say it depends on what type of site you are running. If it is a personal site and you don't care about how many people are visiting it, then I would say don't worry about it. If it is a business and you want to give people the best experience you can, then I would say worry about it some. A D in YSlow is pretty low and you probably have some reasonably easy things that can be corrected with a little work. Even though it seems fast over a 512k connection, there are all kinds of things that can make it seem faster for you than it is for a lot of others. Things like compression are easy to turn on and can help a lot. You should post your score and what type of web server you are using so that people can give more specific advice.
[ "math.stackexchange", "0002523871.txt" ]
Q: Why is $\sup_{x \in [a,b]}|\int_a^xk(x,t)(f(t)-g(t))dt| \le ||f-g|| \sup_{x \in [a,b]} \int_a^x |k(x,t)|dt$? Why is it true to say that: $$\sup_{x \in [a,b]}|\int_a^xk(x,t)(f(t)-g(t))dt| \le ||f-g|| \sup_{x \in [a,b]} \int_a^x |k(x,t)|dt$$ A: Because one has that $$\left| \int h(s)\,ds \right| \le \int |h(s)|\,ds.$$ Applying this to your integrand gives $$\left| \int k(x,t) (f(t)-g(t))\,dt\right| \le \int |k(x,t)|\,|f(t)-g(t)|\,dt \le \int |k(x,t)|\,\|f-g\|_\infty\,dt.$$ And since $\|f-g\|_\infty$ is a scalar, you can bring it out of the integral; taking the supremum over $x \in [a,b]$ on both sides then gives the stated bound.
[ "stackoverflow", "0003751797.txt" ]
Q: Can I call memcpy() and memmove() with "number of bytes" set to zero? Do I need to treat cases when I actually have nothing to move/copy with memmove()/memcpy() as edge cases: int numberOfBytes = ... if( numberOfBytes != 0 ) { memmove( dest, source, numberOfBytes ); } or should I just call the function without checking: int numberOfBytes = ... memmove( dest, source, numberOfBytes ); Is the check in the former snippet necessary? A: From the C99 standard (7.21.1/2): Where an argument declared as size_t n specifies the length of the array for a function, n can have the value zero on a call to that function. Unless explicitly stated otherwise in the description of a particular function in this subclause, pointer arguments on such a call shall still have valid values, as described in 7.1.4. On such a call, a function that locates a character finds no occurrence, a function that compares two character sequences returns zero, and a function that copies characters copies zero characters. So the answer is no; the check is not necessary (or yes; you can pass zero). A: As said by @You, the standard specifies that memcpy and memmove should handle this case without problems; since they are usually implemented somewhat like void *memcpy(void *_dst, const void *_src, size_t len) { unsigned char *dst = _dst; const unsigned char *src = _src; while(len-- > 0) *dst++ = *src++; return _dst; } you should not even have any performance penalty other than the function call; if the compiler supports intrinsics/inlining for such functions, the additional check may even make the code a tiny bit slower, since the check is already done by the while loop.
[ "unix.stackexchange", "0000438368.txt" ]
Q: Unix unzip is failing but Mac Archive Utility works I have a bunch of files with a .zip extension that I cannot seem to extract on my HPC: $ unzip RowlandMetaG_part1.zip Archive: RowlandMetaG_part1.zip warning [RowlandMetaG_part1.zip]: 13082642473 extra bytes at beginning or within zipfile (attempting to process anyway) error [RowlandMetaG_part1.zip]: start of central directory not found; zipfile corrupt. (please check that you have transferred or created the zipfile in the appropriate BINARY mode and that you have compiled UnZip properly) The size of the zip file itself is 17377631766 bytes. However, when I download the file to my mac and double-click, the Archive Utility app is able to unpack the file (it contains a directory with about 200 gzipped files inside). The place that generated the file says: The files are simply zipped here on our local lab PC running Windows, then uploaded to Dropbox...most people don’t have any problems with them and many can directly download the links I give them using the Linux wget command directly into their servers, then unzip there (the Linux utility can usually handle PC-zipped files). I'm not sure that the fact that the files are from dropbox is relevant, but I used curl -LO to download (also tried wget - this doesn't change anything), and the files show up with ?dl=1 at the end of the file name. That said, when I download from dropbox to my mac, unzip still fails with the same error. My question - is there anyway to get this to unzip on the server? Some software that will accomplish the same thing that Archive Utility.app does, or some other way of determining what unzipping protocol to use? EDIT: Based on comments: some additional information: $ file RowlandMetaG_part1.zip RowlandMetaG_part3.zip: Zip archive data, at least v2.0 to extract $ zip --version Copyright (c) 1990-2008 Info-ZIP - Type 'zip "-L"' for software license. This is Zip 3.0 (July 5th 2008), by Info-ZIP. Also, I did try tar, but without success. $ tar -xvf RowlandMetaG_part1.zip tar: This does not look like a tar archive tar: Skipping to next header tar: Archive contains `l@\022\t1\fjp\024uP\020' where numeric off_t value expected tar: Archive contains `\024\311\032b\234\254\006\031' where numeric mode_t value expected tar: Archive contains `\312\005hЈ\2138vÃ\032p' where numeric time_t value expected # etc... And I end up with crap in the directory like this: $ ls ???MK??%b???mv?}??????@*??TZ?S?? ??????+??}n>,!???ӟw~?i?(??5?#?ʳ??z0?[?Ed?@?쑱??lT?d???A??T???H?? ,??Y??:???'w,??+?ԌU??Wwxm???e~??ZJ]y??ˤ??4?SX?=y$Ʌ{N\?P}x~~?T?3????y?????' A: There is a chance that, although the file ends with ".zip", it is not a zip file. You can confirm if this is a zip file (and at the same time determine what is the actual file format) using the file utility: file RowlandMetaG_part1.zip Once the file format is determined you can use the proper tool to unarchive it. A: It turns out that, because the file is so large, zip can't handle it (it maxes out at 2Gb). Instead, I can use jar: $ jar xvf RowlandMetaG_part1.zip inflated: RowlandMetaG_part1/296E-7-26-17-O_S23_L001_R1_001.fastq.gz # etc...
[ "gamedev.stackexchange", "0000064253.txt" ]
Q: Input of mouseclick not always registered in XNA Update method I have a problem that not all inputs of my mouse events seem to be registered. The update logic is checking a 2 dimensional array of 10x10 . It's logic for a jewel matching game. So when i switch my jewel I can't click on another jewel for like half a second. I tested it with a click counter variable and it doesn't hit the debugger when i click the second time after the jewel switch. Only if I do the second click after waiting half a second longer. Could it be that the update logic is too heavy that while he is executing update logic my click is happening and he doesn't register it? What am I not seeing here :)? Or doing wrong. It is my first game. UPDATE : I changed to Variable time step, then i saw after the reindex of my jewels (so after the switch) i see the elapsedgametime was 380ms. So I guess that is why he doesn't catch the short "Press" of my mouseclick because update method is still busy with executing the reindexing code. Anyone knows how I can deal with this .. Or do I have to start using threads because my update of reindex takes too long? SOLVED : The problem was that in my reindexing code I got out of bound exceptions which I catched and then continued. That catching of exceptions caused a massive lag each time a reindex happened. Now everything runs smoothly and I do not have to worry about a slow Update. But I'm still asking the question.. what should you do If you have really heave update logic where the time to process the logic takes almost a 0.5 second? I guess you need to execute your game logic in multiple threads to reduce the update time? I'm also thinking i'll never have to worry about this for a jewels game :p. It's more for a problem for a heavy physics game with alot of game objects ? My function of the update methode looks like this. public void UpdateBoard() { currentMouseState = Mouse.GetState(); if (currentMouseState.LeftButton == ButtonState.Pressed && prevMouseState.LeftButton != ButtonState.Pressed) { debugCount++; if (debugCount == 3) { int a = 4; } leftButtonPressed = true; } if (this.IsBoardActive() == false) { UpdatingLogic = true; if (leftButtonPressed == true) { // this.CheckDropJewels(currentMouseState); this.CheckForSwitch(currentMouseState); if (SwitchFound == true) { reIndexSwitchedJewels = true; } this.MarkJewel(currentMouseState); } if (CheckForMatches == true) { if (this.CheckMatches(5) == true) { this.RemoveMatches(); reIndexMissingJewels = true; } else { CheckForMatches = false; } } UpdatingLogic = false; if (currentMouseState.RightButton == ButtonState.Pressed && prevMouseState.RightButton != ButtonState.Pressed) { this.CheckMatches(5); } this.ReIndex(); leftButtonPressed = false; } prevMouseState = currentMouseState; this.UpdateJewels(); } A: I believe the code could be changed to avoid this problem using two boolean variables to store the button clicks, one for each button. For the left button, could be something like this: bool isLeftMouseBtnDown = false; public void UpdateBoard() { // Mouse events processing. currentMouseState = Mouse.GetState(); // If the current state is pressed. if (currentMouseState.LeftButton == ButtonState.Pressed) { // If the left mouse button was already down, then the user is // resting or moving the mouse with the button still down, like a drag. // Else, then it is a button down event. if (isLeftMouseBtnDown) { mouseDrag(); } else { mouseDown(); } } else { // If the button is not pressed, but it was before, it means // a mouse up event. 
// Else, the user is moving the mouse without pressing any button. if (isLeftMouseBtnDown) { mouseUp(); } else { mouseMove(); } } // Here comes the code that is independent of mouse events. if (CheckForMatches == true) { if (this.CheckMatches(5) == true) { this.RemoveMatches(); reIndexMissingJewels = true; } else { CheckForMatches = false; } } this.UpdateJewels(); } public void mouseDown() { // Example code that is dependent on mouse status. debugCount++; if (debugCount == 3) { int a = 4; } if (this.IsBoardActive() == false) { UpdatingLogic = true; // this.CheckDropJewels(currentMouseState); this.CheckForSwitch(currentMouseState); if (SwitchFound == true) { reIndexSwitchedJewels = true; } this.MarkJewel(currentMouseState); } UpdatingLogic = false; this.ReIndex(); // Sets the mouse button state as down. isLeftMouseBtnDown = true; } public void mouseUp() { // Sets the mouse button state as up. isLeftMouseBtnDown = false; }
[ "stackoverflow", "0011070601.txt" ]
Q: Encryption using PKCS#7 I am using Bouncy Castle provided library to encrypt,decrypt,sign and verify sign. I am doing this as 1. Encrypt data 2. Sign data 3. Write signed byte to a file 4. Read signed byte from file 5. Verify signature 6. Decrypt data I have taken reference from Beginning Cryptography with Java My problem is in step 5 when i am verifying data i am getting org.bouncycastle.cms.CMSException: message-digest attribute value does not match calculated value My code is below import java.io.ByteArrayInputStream; import java.io.File; import java.io.FileInputStream; import java.io.FileOutputStream; import java.math.BigInteger; import java.security.KeyPair; import java.security.KeyPairGenerator; import java.security.KeyStore; import java.security.PrivateKey; import java.security.PublicKey; import java.security.SecureRandom; import java.security.cert.CertPathBuilder; import java.security.cert.CertStore; import java.security.cert.Certificate; import java.security.cert.CollectionCertStoreParameters; import java.security.cert.PKIXBuilderParameters; import java.security.cert.PKIXCertPathBuilderResult; import java.security.cert.TrustAnchor; import java.security.cert.X509CertSelector; import java.security.cert.X509Certificate; import java.util.Arrays; import java.util.Collections; import java.util.Date; import java.util.Iterator; import javax.security.auth.x500.X500Principal; import javax.security.auth.x500.X500PrivateCredential; import org.bouncycastle.asn1.x509.BasicConstraints; import org.bouncycastle.asn1.x509.KeyUsage; import org.bouncycastle.asn1.x509.X509Extensions; import org.bouncycastle.cms.CMSEnvelopedData; import org.bouncycastle.cms.CMSEnvelopedDataGenerator; import org.bouncycastle.cms.CMSEnvelopedDataParser; import org.bouncycastle.cms.CMSProcessable; import org.bouncycastle.cms.CMSProcessableByteArray; import org.bouncycastle.cms.CMSSignedData; import org.bouncycastle.cms.CMSSignedDataGenerator; import org.bouncycastle.cms.RecipientId; import org.bouncycastle.cms.RecipientInformation; import org.bouncycastle.cms.RecipientInformationStore; import org.bouncycastle.cms.SignerId; import org.bouncycastle.cms.SignerInformation; import org.bouncycastle.cms.SignerInformationStore; import org.bouncycastle.x509.X509V1CertificateGenerator; import org.bouncycastle.x509.X509V3CertificateGenerator; import org.bouncycastle.x509.extension.AuthorityKeyIdentifierStructure; import org.bouncycastle.x509.extension.SubjectKeyIdentifierStructure; public class Test { private static final char[] KEY_STORE_PASSWORD = "123456".toCharArray(); private static final long VALIDITY_PERIOD = 365 * 24 * 60 * 60 * 1000; private static final char[] KEY_PASSWORD = "keyPassword".toCharArray(); public static String ROOT_ALIAS = "root"; public static String INTERMEDIATE_ALIAS = "intermediate"; public static String END_ENTITY_ALIAS = "end"; public static String PLAIN_TEXT = "Hello World!123"; public static void main(String[] args) { try{ // CREATE KEY STORE KeyStore keyStore = createKeyStore(); // STEP 1. ENCRYPT AND SIGN byte[] step1Data = encryptData(keyStore, PLAIN_TEXT.getBytes()); CMSSignedData cmsSignedData = signData(keyStore,step1Data); new File("D:\\pkcs7\\encrypted-file.p7b"); FileOutputStream fileOuputStream = new FileOutputStream("D:\\pkcs7\\encrypted-file.p7b"); fileOuputStream.write(cmsSignedData.getEncoded()); fileOuputStream.flush(); fileOuputStream.close(); // STEP 2. 
READ ENCRYPTED DATA AND VERIFY SIGN AND DECRYPT IT File file =new File("D:\\pkcs7\\encrypted-file.p7b"); FileInputStream fileInputStream = new FileInputStream(file); byte[] encryptedAndSignedByte = new byte[(int)file.length()]; fileInputStream.read(encryptedAndSignedByte ); fileInputStream.close(); cmsSignedData = new CMSSignedData(encryptedAndSignedByte); if( verifyData(keyStore, cmsSignedData) == true ){ decryptData(keyStore,encryptedAndSignedByte); } }catch (Exception e) { e.printStackTrace(); } } /** * * This method will encrypt data */ private static byte[] encryptData(KeyStore keyStore, byte[] plainData) throws Exception { PrivateKey key = (PrivateKey) keyStore.getKey(END_ENTITY_ALIAS, KEY_PASSWORD); Certificate[] chain = keyStore.getCertificateChain(END_ENTITY_ALIAS); X509Certificate cert = (X509Certificate) chain[0]; // set up the generator CMSEnvelopedDataGenerator gen = new CMSEnvelopedDataGenerator(); gen.addKeyTransRecipient(cert); // create the enveloped-data object CMSProcessable data = new CMSProcessableByteArray(plainData); CMSEnvelopedData enveloped = gen.generate(data, CMSEnvelopedDataGenerator.AES128_CBC, "BC"); return enveloped.getEncoded(); // recreate } private static byte[] decryptData(KeyStore keyStore,byte[] encryptedData) throws Exception{ CMSEnvelopedDataParser envelopedDataParser = new CMSEnvelopedDataParser(new ByteArrayInputStream(encryptedData)); PrivateKey key = (PrivateKey) keyStore.getKey(END_ENTITY_ALIAS,KEY_PASSWORD); Certificate[] chain = keyStore.getCertificateChain(END_ENTITY_ALIAS); X509Certificate cert = (X509Certificate) chain[0]; CMSEnvelopedData enveloped = new CMSEnvelopedData(encryptedData); // look for our recipient identifier RecipientId recId = new RecipientId(); recId.setSerialNumber(cert.getSerialNumber()); recId.setIssuer(cert.getIssuerX500Principal().getEncoded()); RecipientInformationStore recipients = enveloped.getRecipientInfos(); RecipientInformation recipient = recipients.get(recId); if (recipient != null) { // decrypt the data byte[] recData = recipient.getContent(key, "BC"); System.out.println("----------------------- RECOVERED DATA -----------------------"); System.out.println(new String(recData)); System.out.println("--------------------------------------------------------------"); return recData; } else { System.out.println("could not find a matching recipient"); } return null; } private static CMSSignedData signData(KeyStore keyStore,byte[] encryptedData ) throws Exception { // GET THE PRIVATE KEY PrivateKey key = (PrivateKey) keyStore.getKey(END_ENTITY_ALIAS, KEY_PASSWORD); Certificate[] chain = keyStore.getCertificateChain(END_ENTITY_ALIAS); CertStore certsAndCRLs = CertStore.getInstance("Collection", new CollectionCertStoreParameters(Arrays.asList(chain)), "BC"); X509Certificate cert = (X509Certificate) chain[0]; // set up the generator CMSSignedDataGenerator gen = new CMSSignedDataGenerator(); gen.addSigner(key, cert, CMSSignedDataGenerator.DIGEST_SHA224); gen.addCertificatesAndCRLs(certsAndCRLs); // create the signed-data object CMSProcessable data = new CMSProcessableByteArray(encryptedData); CMSSignedData signed = gen.generate(data, "BC"); // recreate signed = new CMSSignedData(data, signed.getEncoded()); // ContentInfo conInf = signed.getContentInfo(); // CMSProcessable sigContent = signed.getSignedContent(); return signed; } private static boolean verifyData(KeyStore keyStore, CMSSignedData signed) throws Exception { // verification step X509Certificate rootCert = (X509Certificate) keyStore.getCertificate(ROOT_ALIAS); 
if (isValidSignature(signed, rootCert)) { System.out.println("verification succeeded"); return true; } else { System.out.println("verification failed"); } return false; } /** * Take a CMS SignedData message and a trust anchor and determine if the * message is signed with a valid signature from a end entity entity * certificate recognized by the trust anchor rootCert. */ private static boolean isValidSignature(CMSSignedData signedData, X509Certificate rootCert) throws Exception { boolean[] bArr = new boolean[2]; bArr[0] = true; CertStore certsAndCRLs = signedData.getCertificatesAndCRLs( "Collection", "BC"); SignerInformationStore signers = signedData.getSignerInfos(); Iterator it = signers.getSigners().iterator(); if (it.hasNext()) { SignerInformation signer = (SignerInformation) it.next(); SignerId signerConstraints = signer.getSID(); signerConstraints.setKeyUsage(bArr); PKIXCertPathBuilderResult result = buildPath(rootCert, signer.getSID(), certsAndCRLs); return signer.verify(result.getPublicKey(), "BC"); } return false; } /** * Build a path using the given root as the trust anchor, and the passed in * end constraints and certificate store. * <p> * Note: the path is built with revocation checking turned off. */ public static PKIXCertPathBuilderResult buildPath(X509Certificate rootCert, X509CertSelector endConstraints, CertStore certsAndCRLs) throws Exception { CertPathBuilder builder = CertPathBuilder.getInstance("PKIX", "BC"); PKIXBuilderParameters buildParams = new PKIXBuilderParameters( Collections.singleton(new TrustAnchor(rootCert, null)), endConstraints); buildParams.addCertStore(certsAndCRLs); buildParams.setRevocationEnabled(false); return (PKIXCertPathBuilderResult) builder.build(buildParams); } /** * Create a KeyStore containing the a private credential with certificate * chain and a trust anchor. 
*/ public static KeyStore createKeyStore() throws Exception { KeyStore keyStore = KeyStore.getInstance("JKS"); keyStore.load(null, null); keyStore.load(null, null); X500PrivateCredential rootCredential = createRootCredential(); X500PrivateCredential interCredential = createIntermediateCredential( rootCredential.getPrivateKey(), rootCredential.getCertificate()); X500PrivateCredential endCredential = createEndEntityCredential( interCredential.getPrivateKey(), interCredential.getCertificate()); keyStore.setCertificateEntry(rootCredential.getAlias(), rootCredential.getCertificate()); keyStore.setKeyEntry( endCredential.getAlias(), endCredential.getPrivateKey(), KEY_PASSWORD, new Certificate[] { endCredential.getCertificate(), interCredential.getCertificate(), rootCredential.getCertificate() }); keyStore.store(new FileOutputStream("d:\\pkcs7\\KeyStore.jks"), KEY_STORE_PASSWORD); return keyStore; } /** * Create a random 1024 bit RSA key pair */ public static KeyPair generateRSAKeyPair() throws Exception { KeyPairGenerator kpGen = KeyPairGenerator.getInstance("RSA", "BC"); kpGen.initialize(1024, new SecureRandom()); return kpGen.generateKeyPair(); } /** * Generate a sample V1 certificate to use as a CA root certificate */ public static X509Certificate generateCertificate(KeyPair pair) throws Exception { X509V1CertificateGenerator certGen = new X509V1CertificateGenerator(); certGen.setSerialNumber(BigInteger.valueOf(1)); certGen.setIssuerDN(new X500Principal("CN=Test CA Certificate")); certGen.setNotBefore(new Date(System.currentTimeMillis() - VALIDITY_PERIOD)); certGen.setNotAfter(new Date(System.currentTimeMillis() + VALIDITY_PERIOD)); certGen.setSubjectDN(new X500Principal("CN=Test CA Certificate")); certGen.setPublicKey(pair.getPublic()); certGen.setSignatureAlgorithm("SHA1WithRSAEncryption"); return certGen.generateX509Certificate(pair.getPrivate(), "BC"); } /** * Generate a sample V1 certificate to use as a CA root certificate */ public static X509Certificate generateRootCert(KeyPair pair) throws Exception { X509V1CertificateGenerator certGen = new X509V1CertificateGenerator(); certGen.setSerialNumber(BigInteger.valueOf(1)); certGen.setIssuerDN(new X500Principal("CN=Test CA Certificate")); certGen.setNotBefore(new Date(System.currentTimeMillis() - VALIDITY_PERIOD)); certGen.setNotAfter(new Date(System.currentTimeMillis() + VALIDITY_PERIOD)); certGen.setSubjectDN(new X500Principal("CN=Test CA Certificate")); certGen.setPublicKey(pair.getPublic()); certGen.setSignatureAlgorithm("SHA1WithRSAEncryption"); return certGen.generateX509Certificate(pair.getPrivate(), "BC"); } /** * Generate a sample V3 certificate to use as an end entity certificate */ public static X509Certificate generateEndEntityCert(PublicKey entityKey, PrivateKey caKey, X509Certificate caCert) throws Exception { X509V3CertificateGenerator certGen = new X509V3CertificateGenerator(); certGen.setSerialNumber(BigInteger.valueOf(1)); certGen.setIssuerDN(caCert.getSubjectX500Principal()); certGen.setNotBefore(new Date(System.currentTimeMillis() - VALIDITY_PERIOD)); certGen.setNotAfter(new Date(System.currentTimeMillis() + VALIDITY_PERIOD)); certGen.setSubjectDN(new X500Principal("CN=Test End Certificate")); certGen.setPublicKey(entityKey); certGen.setSignatureAlgorithm("SHA1WithRSAEncryption"); certGen.addExtension(X509Extensions.AuthorityKeyIdentifier, false, new AuthorityKeyIdentifierStructure(caCert)); certGen.addExtension(X509Extensions.SubjectKeyIdentifier, false, new SubjectKeyIdentifierStructure(entityKey)); 
certGen.addExtension(X509Extensions.BasicConstraints, true, new BasicConstraints(false)); certGen.addExtension(X509Extensions.KeyUsage, true, new KeyUsage( KeyUsage.digitalSignature | KeyUsage.keyEncipherment)); return certGen.generateX509Certificate(caKey, "BC"); } /** * Generate a X500PrivateCredential for the root entity. */ public static X500PrivateCredential createRootCredential() throws Exception { KeyPair rootPair = generateRSAKeyPair(); X509Certificate rootCert = generateRootCert(rootPair); return new X500PrivateCredential(rootCert, rootPair.getPrivate(), ROOT_ALIAS); } /** * Generate a X500PrivateCredential for the intermediate entity. */ public static X500PrivateCredential createIntermediateCredential( PrivateKey caKey, X509Certificate caCert) throws Exception { KeyPair interPair = generateRSAKeyPair(); X509Certificate interCert = generateIntermediateCert( interPair.getPublic(), caKey, caCert); return new X500PrivateCredential(interCert, interPair.getPrivate(), INTERMEDIATE_ALIAS); } /** * Generate a X500PrivateCredential for the end entity. */ public static X500PrivateCredential createEndEntityCredential( PrivateKey caKey, X509Certificate caCert) throws Exception { KeyPair endPair = generateRSAKeyPair(); X509Certificate endCert = generateEndEntityCert(endPair.getPublic(), caKey, caCert); return new X500PrivateCredential(endCert, endPair.getPrivate(), END_ENTITY_ALIAS); } /** * Generate a sample V3 certificate to use as an intermediate CA certificate */ public static X509Certificate generateIntermediateCert(PublicKey intKey, PrivateKey caKey, X509Certificate caCert) throws Exception { X509V3CertificateGenerator certGen = new X509V3CertificateGenerator(); certGen.setSerialNumber(BigInteger.valueOf(1)); certGen.setIssuerDN(caCert.getSubjectX500Principal()); certGen.setNotBefore(new Date(System.currentTimeMillis())); certGen.setNotAfter(new Date(System.currentTimeMillis() + VALIDITY_PERIOD)); certGen.setSubjectDN(new X500Principal( "CN=Test Intermediate Certificate")); certGen.setPublicKey(intKey); certGen.setSignatureAlgorithm("SHA1WithRSAEncryption"); certGen.addExtension(X509Extensions.AuthorityKeyIdentifier, false, new AuthorityKeyIdentifierStructure(caCert)); certGen.addExtension(X509Extensions.SubjectKeyIdentifier, false, new SubjectKeyIdentifierStructure(intKey)); certGen.addExtension(X509Extensions.BasicConstraints, true, new BasicConstraints(0)); certGen.addExtension(X509Extensions.KeyUsage, true, new KeyUsage( KeyUsage.digitalSignature | KeyUsage.keyCertSign | KeyUsage.cRLSign)); return certGen.generateX509Certificate(caKey, "BC"); } } A: In typical usage a .p7b file contains only public key certificates and never a private key. It is often used to store an entire chain of certificates rather than a single certificate. The 'p7b' name comes from the format which is the degenerate form of PKCS#7 SignedData structure. Typically, private keys are stored in a PKCS#12 (often a file that has either a .p12 or a .pfx extension) file but other formats are also common. To read in the certificates from a p7b file you can use the CertificateFactory class. A PKCS#12 file is directly usable as a keystore. You mention PKCS#7 frequently. PKCS#7 is an old standard that is extremely large and open ended. These days the standard that is more commonly implemented is an extended subset of PKCS#7 called CMS. It's an IETF standard documented in RFC 5652. The Bouncycastle PKIX/CMS library has extensive support for the CMS specification.
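To make the .p7b reading concrete, a small sketch using CertificateFactory (the file path is illustrative, and it needs import java.security.cert.CertificateFactory in addition to the imports above):

    CertificateFactory cf = CertificateFactory.getInstance("X.509");
    try (FileInputStream in = new FileInputStream("d:\\pkcs7\\chain.p7b")) {
        // generateCertificates understands PKCS#7/CMS certificate-only files
        for (Certificate c : cf.generateCertificates(in)) {
            System.out.println(((X509Certificate) c).getSubjectX500Principal());
        }
    }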
[ "stackoverflow", "0036156521.txt" ]
Q: Dapper: The ConnectionString property has not been initialized I'm playing around with Dapper for the first time. Looks like a pretty handy little tool. But I'm running into one problem. In the little console app below, the first method runs as expected. However the second method returns this error: An unhandled exception of type 'System.InvalidOperationException' occurred in System.Data.dll Additional information: The ConnectionString property has not been initialized. I can turn the order of the methods around and get the same results. It's always on the second call that I get the error. Not sure what I'm doing wrong. I also tried not using the db.Close(), but I got the same result. The error is on this line in whatever method is called second: db.Open(); Any ideas? Thanks! class Program { static IDbConnection db = new SqlConnection(ConfigurationManager.ConnectionStrings["DapperConnection"].ToString()); static void Main(string[] args) { IEnumerable<Policy> policy1 = PolicySelectAll(); IEnumerable<Policy> policy2 = PolicyFindByLastFour("093D"); } public static IEnumerable<Policy> PolicySelectAll() { var sql = "SELECT * FROM Policy"; IEnumerable<Policy> policy; using (db) { db.Open(); policy = db.Query<Policy>(sql); db.Close(); } return policy; } public static IEnumerable<Policy> PolicyFindByLastFour(string LastFour) { var sql = string.Format("SELECT * FROM Policy WHERE PolicyNumber LIKE '%{0}'", LastFour); IEnumerable<Policy> policy; using (db) { db.Open(); policy = db.Query<Policy>(sql); db.Close(); } return policy; } } EDIT AFTER: Based on the answers, this is how I solved it: class Program { static string connectionString = ConfigurationManager.ConnectionStrings["DapperConnection"].ToString(); static void Main(string[] args) { IEnumerable<Policy> policy1 = PolicySelectAll(); IEnumerable<Policy> policy2 = PolicyFindByLastFour("093D"); } public static IDbConnection GetConnection() { return new SqlConnection(connectionString); } public static IEnumerable<Policy> PolicySelectAll() { IDbConnection db = GetConnection(); var sql = "SELECT * FROM Policy"; IEnumerable<Policy> policy; using (db) { db.Open(); policy = db.Query<Policy>(sql); db.Close(); } return policy; } public static IEnumerable<Policy> PolicyFindByLastFour(string LastFour) { IDbConnection db = GetConnection(); var sql = string.Format("SELECT * FROM Policy WHERE PolicyNumber LIKE '%{0}'", LastFour); IEnumerable<Policy> policy; using (db) { db.Open(); policy = db.Query<Policy>(sql); db.Close(); } return policy; } } A: if you move your definition of db into the method scope it will be fine. I.E. class Program { static void Main(string[] args) { IEnumerable<Policy> policy1 = PolicySelectAll(); IEnumerable<Policy> policy2 = PolicyFindByLastFour("093D"); } public static IEnumerable<Policy> PolicySelectAll() { var sql = "SELECT * FROM Policy"; IEnumerable<Policy> policy; using (IDbConnection db = new SqlConnection(ConfigurationManager.ConnectionStrings["DapperConnection"].ToString())) { db.Open(); policy = db.Query<Policy>(sql); db.Close(); } return policy; } public static IEnumerable<Policy> PolicyFindByLastFour(string LastFour) { var sql = string.Format("SELECT * FROM Policy WHERE PolicyNumber LIKE '%{0}'", LastFour); IEnumerable<Policy> policy; using (IDbConnection db = new SqlConnection(ConfigurationManager.ConnectionStrings["DapperConnection"].ToString())) { db.Open(); policy = db.Query<Policy>(sql); db.Close(); } return policy; } } }
[ "stackoverflow", "0014242608.txt" ]
Q: Make p:dialog scrollable when working with maximizable Does anyone know how to make a p: dialog work with maximizable and also scroll? If the window is maximized with the scroll it gets bugged and scrollbar disappears. I am using p:dialog from primefaces. A: Well, I had to modify your answer slightly for it to work for me and I just want to share my findings in the hope of helping others. I am not trying to steal the credit for your answer as I don't really deserve or want the credit for it and I just want to share my findings ...:) Here it goes: My Dialog uses the onShow attribute to call your function and passes the dialog widget var name to your function: <p:dialog widgetVar="charts" width="860" height="540" header="chart}" maximizable="true" minimizable="true" showEffect="fade" onShow="fixPFDialogToggleMaximize('charts')"> <ui:include src="/pages/charts.xhtml"/> </p:dialog> Your function then uses PF(widgetVar): function fixPFDialogToggleMaximize(dlg) { if (undefined == PF(dlg).doToggleMaximize) { PF(dlg).doToggleMaximize = PF(dlg).toggleMaximize; PF(dlg).toggleMaximize = function () { this.doToggleMaximize(); var marginsDiff = this.content.outerHeight() - this.content.height(); var newHeight = this.jq.innerHeight() - this.titlebar.outerHeight() - marginsDiff; this.content.height(newHeight); }; } } Thank you so much for providing your answer, as it helped me solve the same problem in my use of PF 5.2 Community edition. Best Regards, Joe
[ "stackoverflow", "0008877858.txt" ]
Q: About base64 encoding and decoding in multiple languages Can we apply Base64 encoding in Java and decode it to get the same string using JavaScript? I need to do this to include three separate XML files in a single XML file, the three of them being Base64 encoded. A: Yes. You can use Base64 encoding in multiple languages. See Decode Base64 data in Java
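For illustration (a sketch, not from the linked answer), the Java side can use java.util.Base64 (Java 8+):

    // Encode an XML snippet as Base64 using UTF-8 bytes.
    String xml = "<note>hello</note>";
    String encoded = java.util.Base64.getEncoder()
            .encodeToString(xml.getBytes(java.nio.charset.StandardCharsets.UTF_8));

On the JavaScript side, window.atob(encoded) in a browser (or Buffer.from(encoded, 'base64').toString() in Node) recovers the text for ASCII content; characters outside ASCII need an extra UTF-8 decoding step after atob.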
[ "stackoverflow", "0033145771.txt" ]
Q: How to make a parameterized command map in vim editor I'm trying to make a custom command for block commenting, to avoid writing the whole search-and-replace sequence each time in vim for commenting lines. What I'm trying to do is make a key combination map to which I can pass line numbers as a parameter, and those should be passed to the .vimrc file and processed there. Is it possible? For example, I have this in my .vimrc map :pc :17,21s/^/#<CR> Now whenever I do :pc in vim, it will add a # in front of lines 17-21 (commenting them in python). Now 17,21 is hard-coded in the command here, but can I make this command parameterized so that I can pass line numbers specifically, like :17,21pc, and it will take them in the map command? If it is possible then I would love to make the '#' symbol parameterized too, so that I can pass in a language-specific comment symbol, like // in JS. A: Mappings can't have parameters, but it's typically a command's job (see :h :command). command! -range -nargs=? Comment call CommentThis(<line1>, <line2>, <q-args>) function! CommentThis(l1, l2, lead) let l:lead = a:lead == '' ? '#' : a:lead exe printf('%i,%is+^+%s', a:l1, a:l2, l:lead) endf You can use it like this: select some lines with V and arrows, then: :'<,'>Comment // Of course you can specify the line numbers yourself: don't select anything, then type: :17,21Comment // :12,45Comment " '#' is the default Note: the above code is far from perfect, it's just an example. But there is really something better if your goal is to comment some lines: use NERD Commenter; it automatically chooses the right comment leader depending on the filetype, it allows several kinds of comment styles, and it can comment and uncomment... Here is an example of its use: select some lines with V and arrows, then type <leader>cc, with <leader> being \ by default.
[ "softwareengineering.stackexchange", "0000393691.txt" ]
Q: Reduce duplicates in outbox pattern in event driven systems Trying to implement the outbox pattern for an event driven system. The outbox pattern in a nutshell is a way to ensure system events are sent to the event log/queue/bus at least once (using the term event bus loosely here): a separate process that periodically checks the contents of the Outbox and processes the messages. After processing each message, the message should be marked as processed to avoid resending. My concern is I want to reduce the chance of duplicates because: it is possible that we will not be able to mark the message as processed due to a communication error with the Outbox. In this case, when the connection with the Outbox is recovered, the same message will be sent again. The proposed solution is idempotency, which is fine with me, but does it make sense to at least improve the message relay and reduce the chance of duplicates? Are there patterns that exist to improve the message relay? If none, I'm thinking of the following: Use 2 tables: OutboxTable (stores messages you want to send to the event bus) PublisherTable (stores messages already sent to the event bus) Pseudocode for worker that reads the OutboxTable 24/7: loop through each message... if item is currently processing (by other workers) - skip if item is marked as "published" in the PublisherTable (meaning a previous worker was able to publish the event but wasn't able to "confirm" it to the OutboxTable for whatever reason) - mark item as "published" in OutboxTable - skip if item was never processed or was processed but wasn't published - mark item as "processing" in PublisherTable (so other workers can skip it) - publish/send item to the event bus - mark item as "published" in PublisherTable - mark item as "published" in OutboxTable (Each ongoing process has a defined timeout that other workers use to skip or process the item). What I gather here is that I essentially made my own mini message queue just to achieve this. But since there's no way to make an atomic transaction between a DB and a real message queue, this approach kind of makes sense. So my question is, are there patterns similar to what I'm trying to do? A: I have never seen any pattern or variation that wasn't just a shell game. Fundamentally, you have two writes that you are trying to commit: an update to the event bus (really, an update to the durable store of the event bus), and a history record in your own durable store. Because these writes are fundamentally using two different locks, there's always going to be some risk that your process exits before completing the second unit of work. Welcome to the laws of physics. Now, if the durable store for the event bus is your durable store, then you may be able to avoid some kinds of problems, because the transaction semantics of a single lock are all or nothing. But if the stores are separate, then you are reduced to figuring out how to spend money to make the network more reliable, how to spend money to make your processes more reliable, and so on.
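As an illustration of the "mark as processing with a timeout" step described above, the claim could be a single conditional UPDATE so that only one worker wins (table and column names here are hypothetical, and the syntax is PostgreSQL-flavoured):

-- Atomically claim one message; a stale 'processing' claim older than the
-- timeout can be taken over by another worker.
UPDATE PublisherTable
SET    status = 'processing', claimed_at = NOW(), worker_id = :worker_id
WHERE  message_id = :message_id
  AND (status = 'pending'
       OR (status = 'processing' AND claimed_at < NOW() - INTERVAL '5 minutes'));

-- If 0 rows were updated, another worker owns the message and this worker skips it.
-- Duplicates can still occur (e.g. a worker that published but crashed before the
-- final update), so consumers still need to be idempotent.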
[ "codereview.stackexchange", "0000033685.txt" ]
Q: Is there any way to simplify this code I'd like to simplify this Scala code: private def getToken() = { val token = getTokenFromDb() token match { case Some(x) => token case None => { val remoteToken = getTokenRemote() remoteToken match { case Some(x) => { writeTokenToDb(x) remoteToken } case None => None } } } } A: private def getToken(): Option[Token] = { def remoteToken = getTokenRemote() map { r => writeTokenToDb(r); r } getTokenFromDb orElse remoteToken } Please always write the type of outer functions - it makes the source code far more readable (often it is useful to add the types to inner functions/declarations as well). Besides that, you can also write case t: Some[_] => t or case t @ Some(_) => t to express that you return the matched type. Your way of case Some(x) => token is hard to understand, because one needs to know what token is. Furthermore, you don't need braces around a match block, you can simply write case x => a b And naming a variable in a similar way as a function, like in val token = getToken(), is completely unnecessary when it is only used once; it also doesn't increase readability. In such cases just inline the function call: getToken match { ... }.
[ "stackoverflow", "0049332553.txt" ]
Q: Codeigniter : upload path does not appear to be valid I am using the following function in my model: function uploadsinglepicture($uploadpath){ $config['upload_path'] =$uploadpath; $config['allowed_types'] = 'gif|jpg|png'; $config['max_size'] = ''; $config['max_width'] = ''; $config['max_height'] = ''; $this->load->library('upload', $config); if ( ! $this->upload->do_upload('userfile')) { $error = array('error' => $this->upload->display_errors()); print_r($error); if($this->input->post('id') == ''){ $insertion['image'] = ''; } //$this->load->view('upload_form', $error); } else{ $data = array('upload_data' => $this->upload->data()); $insertion['image'] = $data['upload_data']['file_name']; } $image = $insertion['image']; return $image; } This is how I access the function in the controller: if(!empty($this->input->post())){ $path= base_url().'assets/front/img'; $this->general->uploadsinglepicture($path); redirect(base_url().'admin/home/index/sliderupated'); } but the error I get is: Array ( [error] => The upload path does not appear to be valid. ) If I print $path, this is what I get http://localhost/site/assets/front/img/ and that opens in the browser as a real path. My code in the view is as below: <form method="post" enctype="multipart/form-data" action ="<?=base_url()?>admin/home/index" > <label>Upload Picture </label> <input type='file' name='userfile' /> <input type="hidden" name="updateimage"> <input type="submit" class="btn btn-primary pull-right" /> </form> How can I fix the error? A: Please update this line from $path= base_url().'assets/front/img'; to $path= FCPATH.'assets/front/img'; base_url() returns a web URL, but the upload library expects a server filesystem path, which is what the FCPATH constant (the directory containing the front controller index.php) provides.
[ "monero.stackexchange", "0000008075.txt" ]
Q: How much I "deanonymize" transaction when I put my address into "extra" part of a transaction? Does it break untraceability or unlinkability (or something else) if everyone can see the sender's address (or one of the senders because of mixins)? And I mean doing it for example for every 10th address used in Monero network (only use it with primary addresses, not any sub or ghost addresses). Btw the extra part is public (not encrypted), just for you to know. EDIT for knacc answer: Thanks for the great answer, but when I call curl -X POST http://localhost:19835/json_rpc -d '{"jsonrpc":"2.0","id":"0","method":"transfer","params":{"destinations":[{"amount":1,"address":"cczJn1gS7VT37m1t5oDUjTFmPZRDSoNq2Bry2JurELfrDfrmqA6z7AVZ2nsKrDo2jTMCt2ZeUaPXN24oxj1y84F75Z1HAVWBKR"}],"payment_id":"000000000000000000000000000000000000000000000000000000000005a1da","mixin":1,"get_tx_key":false,"unlock_time":0,"priority":3}}' -H 'Content-Type: application/json' and then ./build/debug/bin/citicashd print_tx bf9e43bfbe73b0f27bdc201748b8f039fec740ea7a82b6e8232ee480313a5281 I get a transaction, where I can clearly see the extra "extra": [ 2, 33, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 161, 218, 1, 216, 55, 28, 200, 30, 163, 246, 76, 59, 40, 154,148, 229, 12, 181, 10, 210, 145, 179, 0, 168, 137, 145, 88, 85, 71, 5, 235, 101, 133, 255, 225, 4, 0 ] => 5, 161, 218 is 5a1da and it's unencrypted It's sumokoin, not monero, but I think it works the same way, so the payment_id is public in both cryptocurrencies. So should the payment_id be encrypted or not? A: If you include your (the sender's) public wallet address unencrypted in the tx_extra field, then for an outside observer, this will not affect untraceability in the sense that it still cannot be known which outputs are really being spent. However the person that sent you the outputs in the first place could obviously see that when you spend those outputs, you've included your wallet address, and so could infer that it's you that's spending those outputs. Of course they won't know who the recipient of your transaction is, since you'll be sending funds to a one-time stealth address that cannot be correlated with the recipient's wallet address. It will not affect unlinkability because stealth addresses mean it cannot be known whether you're sending funds to the same person each time or different people each time. You could encrypt your sender address using the transaction shared secret prior to putting it in the tx_extra field (the transaction shared secret is what is currently used to encrypt payment IDs that appear in tx_extra). The problem with this, though, is that it's rare to see people place their own custom content into the tx_extra field. Your transactions would therefore stand out.
[ "stackoverflow", "0011323145.txt" ]
Q: Difference between ALAssetType and UTType? What's the distinction between these two types? When do you use one versus the other? Is there a way to translate between the two? A: They're completely different. UTType is basically a way to describe a generic file type and give it some meaning to the system and is used by CoreServices. It's totally generic and can be used to classify all kinds of resources to the system. For more information on UTTypes, see the Overview section of the UTType Reference documentation. ALAssetType is just a way to describe the type of asset you want back from the asset library, and is only used by the AssetLibraryFramework on iOS. It's basically a string constant that tells the asset library whether you want to work with still images or video files (since the AssetLibraryFramework is the programatic access to the user's photo/video library on the device). Unlike a UTType, this constant gives no information about the actual format or arrangement of the asset (like, is it an H.264 encoded movie in an m4v container, or is it a tiff image or is it a jpg?), which you'd get from a UTType, instead it just says that you're interested in either movies or images.
[ "stackoverflow", "0019144709.txt" ]
Q: document.body.scrollTop || document.documentElement.scrollTop javascript i have this code and i want to make the scroll on a div instead of the body var scrollPosition = document.body.scrollTop || document.documentElement.scrollTop; i tried this : var divToScroll = $(".divToScroll"), scrollPosition = divToScroll.scrollTop || document.divToScroll.scrollTop; but it is not working <section class=divToScroll> <article> </article> </section> the style: .divToScroll{ position:relative; width:640px; height:320px; overflow-y:auto; overflow-x:hidden; } .divToScroll article{ width:100%; height:2000px; } A: The reason it's not working is because divToScroll is a jQuery object, not a native DOM node like document.body, and as such has no scrollTop property, but jQuery has a scrollTop() method var divToScroll = $(".divToScroll"), scrollPosition = divToScroll.scrollTop(); of course, this only works if the scrollbar is attached to that element. Without jQuery it would be: var divToScroll = document.getElementsByClassName('divToScroll')[0], scrollPosition = divToScroll.scrollTop;
[ "stackoverflow", "0016585443.txt" ]
Q: Enum over multiple files, or automatic unique constants over multiple files In C++ is there any way to automatically generate constants over multiple files at compile time? Just like how an enum has constants automatically generated in a single file, but the constants must be unique over multiple files. Eg: classBase.hpp classBase{ //blah blah }; classA.hpp class childA : public classBase{ private: static const unsigned int mID = NEXT_ID; }; classB.hpp class childB : public classBase{ private: static const unsigned int mID = NEXT_ID; }; classC.hpp class childC : public classBase{ private: static const unsigned int mID = NEXT_ID; }; So in this case, each class inheriting from classBase would automatically be assigned the next ID (0, 1, 2...) I would guess there is a way to do it with #define s, but I don't know of any way to automatically increment a #define each time something is assigned to it, is there a way to do this? A: It's not easy to generate a sequence at compile time on your own, but most compilers support a macro for this purpose: __COUNTER__. It's a counter, increased by the compiler itself each time it's used in the source code, so you can use it across multiple files. For example your code could be: class childB : public classBase { private: static const unsigned int mID = __COUNTER__; }; If your compiler doesn't provide that macro (or you need more control over ID generation) then you have to write much more code, but it can be done with template metaprogramming.
[ "stackoverflow", "0033051671.txt" ]
Q: Android Studio create a folder in R.raw I want to create a folder in R.raw to classify my media sources. But when I try to read the folder in R, I can't find the folder: R.raw.folderName How can I solve the problem? A: You cannot add folders to the raw folder or any of the folders inside the res folder. Android supports only a linear list of files within the predefined folders under res. The assets folder, though, can have an arbitrary hierarchy of folders because the assets folder is not treated as resources.
[ "apple.stackexchange", "0000051758.txt" ]
Q: Why do Apple's 2012 MacBook Pros cap the RAM limit at 8GB? According to apple's latest documentation (at the time of writing), the cap on the 2012 MacBook Pros is 8GB RAM. The Mac Pros, however, can take up to 32GB RAM. So, why do the MacBooks have this cap? Is it a hardware issue, because I don't think it's a software one. The OS X that runs on the Mac Pros is the same that runs on the MacBook Pros. Being 64-bit software on top of a 64-bit Architecture (both the MacBooks and the Mac Pros), it's obvious that the OS can use as much as 32 GB of RAM. Is it simply because manufactures don't make RAM sticks larger than 4GB for laptops? A: The limitation imposed is based on the chipsets used on the computer's motherboard. For example, the noted Apple maximums you mentioned for the MacBook Pro and the Mac Pro, are smaller than the maximums that those machines can address in practice. The MacBook Pro tested maximum is 16 GB according to EveryMac's MacBook Pro max RAM listing. And as for the Mac Pro, its maximum is 128 GB according to EveryMac's Mac Pro max RAM listing. As noted on Intel's Specs, the CPU supports a different maximum than the chipset on the motherboard does. The Xeon X5670 used in the Mac Pro supports up to 288 GB. And as for the i7s e.g. a i7-2860QM used in the MacBook Pros, it supports up to 32 GB. So these limitations come from engineering decisions made by Apple based on what kinds of chipsets are selected to be installed on the motherboard and what those chips support is what enforces the maximum amount of RAM that a particular Mac can address at or below what the CPU can actually address. As for why Apple underrates their maximum numbers for RAM on some Macs, that's for Apple to know and for us to wonder. However, admittedly it is a nice practice to under promise and over deliver for whatever reason Apple has. Although, the Mac Pro supports more RAM than Mac OS X can address, according to OWC's testing on Mac Pros, where they discovered Mac OS X unofficially will not address more than 96GB of RAM, but other 64-bit operating system can get to the full 128GB. A: Anyone who knows isn't saying. But apparently they can use up to 16GB. Other World Computing tests various machines for their actual (not specified) maximum useable memory and sells a 16GB memory kit for the current MacBook Pro.
[ "stackoverflow", "0044585604.txt" ]
Q: Loop through string get key and value using regex key matches Javascript I have a string like > var temp = > "meta(alias:!n,apply:!t,disabled:!f,index:'index_*',key:stockInfo.raw,negate:!f,value:green)," For information, this string is generated automatically by kibana (I recover it through the url). My question is : There is any solutions to extract keys and values from this string and get a result in a array or an object like this : > var result = { > "alias" : "!n", > "apply" : "!t", > "disabled" : "!f", > "key": "stockInfo.raw", > "negate": "!f", > "value": "green", > } Thanks A: I think you're searching something like this: var meta = "meta(alias:!n,apply:!t,disabled:!f,index:'index_*',key:stockInfo.raw,negate:!f,value:green)," var result = {} meta.substr(0, meta.length - 2).substr(5).split(',').forEach(function(item) { var split = item && item.split(':') if (split.length) { result[split[0]] = split[1]; } }) console.log(result) Split the string by , character and then split by : to identify key and value of object
[ "salesforce.stackexchange", "0000025749.txt" ]
Q: Get contacts information with authentication I want to know if there is a way for me to make an application in PHP or another language where somebody puts their username and password and I compare their contacts with my contacts to see which contacts are different in our accounts. For example, his contacts are: John Smith, VP, Coca Cola Peter Smith, Manager, Pepsi Mary Smith, President, Dr Pepper And my contacts are: Steven Smith, President, Microsoft Peter Smith, Manager, Pepsi Raymond Smith, Manager, Coca Cola At the end, that analysis would give something like: Accounts found: 1 Accounts not found: 2 (Because we both have Peter Smith) Is this clear? Thanks! A: You can use the Salesforce REST API to send a request to Salesforce. That request will contain the list of your contacts. Then you can compare those contacts with the contacts in Salesforce and return the HTTP response according to your requirements. Let me know if you didn't understand.
[ "stackoverflow", "0005951637.txt" ]
Q: the name of event for add or remove the children of inkCanvas using wpf I have an InkCanvas in my project (myPaint). What is the name of the event for adding or removing children (UIElement) from InkCanvas? For example, I want to handle this event: myInkCanvas.Children.Remove(myRectangle) or this example: myInkCanvas.Children.Add(myRectangle) A: There isn't an event you can listen to that is fired when elements are added to or removed from the Children collection. There is a virtual protected method that is called, which you could leverage, called OnVisualChildrenChanged. This isn't directly tied to the Children collection, as elements can add/remove visuals separately from that. But for InkCanvas, it would probably be safe. So you'd use something like: public class MyInkCanvas : InkCanvas { protected override void OnVisualChildrenChanged(DependencyObject visualAdded, DependencyObject visualRemoved) { // TODO: Raise event or do something base.OnVisualChildrenChanged(visualAdded, visualRemoved); } }
[ "stackoverflow", "0007492859.txt" ]
Q: Is it possible to fire a single intent multiple times? I have a service that does database calls. The service receives a request with an intent, and when the database call is complete, it broadcasts an "update complete" intent indicating the completion of the call. Sometimes the database is already populated with cached data, in which case I would like to immediately broadcast an "update complete" intent, indicating the activity should display the cached data, and then once the database has been updated fire another "update complete" intent indicating the activity should load the updated data. The problem is that the second broadcast is never received by the activity. Is this because I'm re-using the same intent object that has already been fired? Here's the code: if (scheduleDatabase.populated()) { intent.putExtra("fromCache", true); getApplicationContext().sendBroadcast(intent); } scheduleDatabase.update(); intent.putExtra("fromCache", false); getApplicationContext().sendBroadcast(intent); An update: If I comment out one of the intent broadcasts, the other one always fires and is received. Also, if I create two intent objects with the same action string and fire them separately, only the first one is ever received by the activity. I'm not clear yet on whether the other gets fired but not received, or if it never fires at all. A: It turns out that the problems I had with the intents were a symptom of a larger problem in a different area of my code. After fixing that problem, the intents began firing and receiving as I expected. So, to answer my own original question, yes it is possible to fire a single intent multiple times.
[ "islam.meta.stackexchange", "0000000041.txt" ]
Q: Is this an on-topic question? https://islam.stackexchange.com/questions/44/sufi-populations-in-the-united-states-and-overall Should this question be on-topic? It is asking more about the demographics than the religion. A: I think it is. The question is asking about: What percentage of population do Sufis make up in a special region/country? It's valid question because: It's mainly about the religion or one of its sects. It's about population density which is an important aspect of study about every religion. A: In my opinion. it totally fits in here.
[ "gaming.stackexchange", "0000271137.txt" ]
Q: Which stat does the Medic's "Acid Throw" attack scale off of? In Grand Kingdom, the different kinds of attacks scale off one of several of the different stats. Physical attacks, according to the stat descriptor, scale off of strength (Str). Magical attacks scale off of Magic (Mag). The medic seems to occupy an in-between place, using items like "Magic Flask" and striking at range, but not seemingly doing extra damage to guard like most magic attacks do. Which Stat (Mag, Str, or another) do the Medic's attacks scale off of? A: Some details regarding this as others seem to be confused by how the stat/scaling appears to be working. At the moment it seems to be speculation among most players but people are throwing together theories and hopefully will have a Wikia up covering this.: (Actually on any melee you're better off with AGI+VIT imo, and a STR as only a 3rd stat (unless you're making a tank). More and stronger moves used = more damage. And part of the move gauge gets used for combo as well on top of letting you reach targets easier. My 99 AGI fighter can just waltz all the way in the back (maybe with slide edge to reach a bit further), and kill whatever is there if there was no barricades slowing him, and if didn't use move gauge.. can just do bigger combos. And yeah, Medic's bottle is based on STR. As for other stats on Medic it doesn't matter much, can either make them more tanky or get VIT and a little AGI if they have buffs to do on the same turn (like drop a bag, burst gauge, etc.. then throw a bottle with left-over gauge). As for the HP recovery moves, they're % based so.. pretty sure MAG does absolutely nothing on a medic. Classing up or using Charm scrolls both allow to redo your skill points anyway, so no need to stress over mistakes.)
[ "stackoverflow", "0043765684.txt" ]
Q: sub query within coalesce I'm currently using a coalesce query to return all values when an input parameter isn't present as follows: @unid int SELECT * FROM Bag_Dim LEFT JOIN Bag_Action_Join ON Bag_Dim.Unid = Bag_Action_Join.Bag_Unid WHERE Bag_Dim.Unid = COALESCE(@unid, [bag_dim].[unid]) I wanted to add an extra field to the return parameters that are only present on some of the records, so the code was adapted as follows: @unid int, @location int SELECT *, origin.location FROM Bag_Dim LEFT JOIN Bag_Action_Join ON Bag_Dim.Unid = Bag_Action_Join.Bag_Unid LEFT JOIN Bag_Action_Join AS origin ON Bag_Dim.Unid = origin.Bag_Unid AND origin.action = 1 WHERE Bag_Dim.Unid = COALESCE(@unid, [bag_dim].[unid]) AND origin.location = COALESCE(@location, origin.location) The problem is that not all the records have entries in the origin table for location = 1, so they get omitted when the @location parameter is null. Ideally I would adapt the final line of the query as follows, but the syntax doesn't work: WHERE origin.location = coalesce(@location,(origin.location OR origin.location IS NULL)) Any suggestions on how I can get all records (whether null or not) if the input parameter isn't present? A: It sounds like you need to move your new condition from the WHERE clause into the JOIN criteria. When you refer to the unpreserved table (from an outer join) in the where clause of the query, you logically convert the outer join to an inner join - and that probably isn't what you want.
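Concretely, the last query might be rewritten along these lines (a sketch, untested against the real schema), keeping the optional origin filter inside the join criteria so rows without a matching origin record survive:

SELECT *, origin.location
FROM Bag_Dim
LEFT JOIN Bag_Action_Join
    ON Bag_Dim.Unid = Bag_Action_Join.Bag_Unid
LEFT JOIN Bag_Action_Join AS origin
    ON Bag_Dim.Unid = origin.Bag_Unid
   AND origin.action = 1
   -- optional parameter handled here instead of the WHERE clause,
   -- so unmatched rows keep a NULL origin instead of being dropped
   AND (@location IS NULL OR origin.location = @location)
WHERE Bag_Dim.Unid = COALESCE(@unid, Bag_Dim.Unid)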
[ "stackoverflow", "0014405491.txt" ]
Q: applescript mysql mamp I would like to use an AppleScript to insert values into a local MySQL db. I'm using MAMP. I found this for the connection but it doesn't work: (error number -60007) set mysql_db to "my_database" set mysql_user to "root" set mysql_host to "localhost" set mysql_pw to "root" set mysql_table to "my_table" tell application "Finder" # Start MAMP's Apache server do shell script "/Applications/MAMP/bin/startApache.sh &" password mysql_pw user name mysql_user with administrator privileges # Start MAMP's MySQL server do shell script "/Applications/MAMP/bin/startMysql.sh > /dev/null 2>&1" end tell How can I insert some values into my db? A: You can invoke mysql directly with AppleScript's do shell script statements. do shell script lets you execute shell commands just as you would enter them from the terminal. You then invoke mysql and send it commands (as long as the server is running): do shell script "/Applications/MAMP/Library/bin/mysql -u root --password=the_password your_data_base_name -e \"INSERT INTO your_table (field_1, field_2) VALUES ('foo', 'bar');\"" Another (not free) solution is to use the software Navicat for MySQL. It allows easy and efficient management of MySQL databases from a GUI frontend. It is also AppleScriptable.
[ "stackoverflow", "0027214902.txt" ]
Q: Running PHPDocumentor This command runs fine locally, but not during the build php vendor/bin/phpdoc -d . -t ./build/docs --ignore vendor/,build/,hm-backup/,backdrop/,assets/,bin/,languages/,node_modules/,tests/,readme/ Here's the output $ php vendor/bin/phpdoc -d $TRAVIS_BUILD_DIR -t ./build/docs --ignore vendor/,build/,hm-backup/,backdrop/,assets/,bin/,languages/,node_modules/,tests/,readme/ Collecting files .. OK Initializing parser .. OK Parsing files [Exception] No parsable files were found, did you specify any using the -f or -d parameter? Travis CI build output Does the . not refer to current working dir? or is it not the repository root? thanks A: You can not assume that . is the directory where your repository is checked out. Travis actually provides a special environment variable that points to your repository: TRAVIS_BUILD_DIR So you could write your line as php vendor/bin/phpdoc -d $TRAVIS_BUILD_DIR -t $TRAVIS_BUILD_DIR/build/docs --ignore $TRAVIS_BUILD_DIR/vendor/,$TRAVIS_BUILD_DIR/build/,$TRAVIS_BUILD_DIR/hm-backup/,$TRAVIS_BUILD_DIR/backdrop/,$TRAVIS_BUILD_DIR/assets/,$TRAVIS_BUILD_DIR/bin/,$TRAVIS_BUILD_DIR/languages/,$TRAVIS_BUILD_DIR/node_modules/,$TRAVIS_BUILD_DIR/tests/,$TRAVIS_BUILD_DIR/readme/
[ "stackoverflow", "0056595870.txt" ]
Q: Multi gpu training with estimators In this link https://www.tensorflow.org/beta/tutorials/distribute/multi_worker_with_estimator they say that when using Estimator for multi-worker training, it is necessary to shard the dataset by the number of workers to ensure model convergence. By multi-worker, do they mean multiple GPUs in one system or distributed training? I have 2 GPUs in one system; do I have to shard the dataset? A: No, you don't - "multiple workers" refers to a cluster of machines. For a single machine with multiple GPUs you don't need to shard the dataset. This tutorial explains MirroredStrategy, which is what you want for multiple GPUs: https://www.tensorflow.org/beta/tutorials/distribute/keras For the different distributed strategies for different setups, you can refer here for more information: https://www.tensorflow.org/beta/guide/distribute_strategy#types_of_strategies
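As a rough sketch (not taken from the linked tutorials), wiring MirroredStrategy into an Estimator on a single multi-GPU machine looks roughly like this, where my_model_fn and my_input_fn stand in for your existing functions:

import tensorflow as tf

# Replicates the model on every GPU visible on this machine and
# aggregates gradients across the replicas.
strategy = tf.distribute.MirroredStrategy()

config = tf.estimator.RunConfig(train_distribute=strategy)

estimator = tf.estimator.Estimator(
    model_fn=my_model_fn,   # your existing model_fn, unchanged
    config=config)

estimator.train(input_fn=my_input_fn)

No manual sharding is needed here; sharding by worker only matters once you move to a multi-machine MultiWorkerMirroredStrategy setup.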
[ "stackoverflow", "0002261436.txt" ]
Q: Changing page location after google analytics setup The current website structure is set up such that all the ASPX pages are in the main folder. It's becoming increasingly difficult to maintain, so I would like to create new folders and move the relevant pages. This would change the URL from, say: http://mydomain.com/DoStuff.aspx to http://mydomain.com/DoingFolder/DoStuff.aspx I fear that this will skew the Google Analytics results. Is it recommended that I make this change? If so, is there a way to link the page locations from before and after the change? Also, what would happen when I implement the URL rewrite? Would I run into the same issue again? Anyone? A: So in general I think it is a good idea to add the folder: it lets your users see via the URL which section they are in, it helps the search engines figure out the areas of the site, and who knows, you may even get a (small) SEO benefit out of it. What I would advise is to set up a second profile in Analytics and then add a filter which removes the folder name from the request; this will leave you with the same flat structure in your reports as you have currently. (NB Do this under a new profile with the same tracking code to avoid major mess-ups that you can't undo). Cheers Z
[ "stackoverflow", "0042140275.txt" ]
Q: Write a statement without calling any function? In order to optimize a single line of code, I am trying to write a particular statement in my code without calling any function or method. While I was thinking about this I wondered if this is even possible in my case. I was searching for some information about this, but it seems to be discussed very rarely; in my current work I must keep the code intact except for the section to optimize. Hope you can give me a hand. Any help is welcome. This is my current progress. def count_chars(s): '''(str) -> dict of {str: int} Return a dictionary where the keys are the characters in s and the values are how many times those characters appear in s. >>> count_chars('abracadabra') {'a': 5, 'r': 2, 'b': 2, 'c': 1, 'd': 1} ''' d = {} for c in s: if not (c in d): # This is the line that is supposed to be modified without calling a function or method else: d[c] = d[c] + 1 return d A: How about this? As mentioned in the comments, it does implicitly use functions, but I think it may be the sort of thing you are looking for: s='abcab' chars={} for char in s: if char not in chars: chars[char]=0 chars[char]+=1 Result: {'a': 2, 'b': 2, 'c': 1}
[ "stackoverflow", "0002076792.txt" ]
Q: Is the cookie "metadata" (expires, path,...) transferred to the server? When you set a cookie, you set the raw cookie data, and some metadata. This metadata includes the path for where the cookie is valid, the expiration time of the cookie, and so on. When a browser performs a request, what exactly will the browsers send with it? Will it send the full cookie, with all the "metadata"? Or only the actual data of the cookie, without the metadata? A: No only the value of the cookie is returned in subsequent requests, the other metadata stays on the client. When you define a cookie on the server a Set-Cookie header is created in the response carrying the name, value and other metadata about the cookie. Multiple Cookies will create multiple Set-Cookie headers in the response. When the browser makes subsequent requests it checks its "database" of available cookies to see which cookies are appropriate for the path being requested. It then creates a single Cookie header in the request that carries just a series of name/value pairs of the qualifying cookies. Its important to keep tight control on the number of cookies and the size of the data otherwise you may find that the weight of cookie data being sent for each and every request can be deterimental to performance. This would be much worse if the metadata were returned with the cookies as well.
[ "stackoverflow", "0007159196.txt" ]
Q: Cannot access private member declared in class 'std::basic_ios<_Elem,_Traits>' Having an issue with this particular method and not sure how to resolve it! The error I'm getting is the above: "error C2248: 'std::basic_ios<_Elem,_Traits>::basic_ios' : cannot access private member declared in class 'std::basic_ios<_Elem,_Traits>' C:\Program Files\Microsoft Visual Studio 10.0\VC\include\ostream 604" My method is: ostream operator<<( ostream & stream, ProcessClass const & rhs ) { stream << rhs.name_; return stream; } And in the header: friend std::ostream operator<<( std::ostream & stream, ProcessClass const & rhs ); Any ideas on how to resolve this? I think it is something to do with passing by reference instead of value... but I'm a bit confused! A: The return type should be ostream & which is a reference to ostream. ostream & operator<<( ostream & stream, ProcessClass const & rhs ) { //^^^ note this! stream << rhs.name_; return stream; } When you return by value (instead of reference), then that requires copying of stream object, but copying of any stream object in C++ has been disabled by having made the copy-constructor1 private. 1. and copy-assignment as well. To know why copying of any stream has been disabled, read my detail answer here: Why copying stringstream is not allowed? A: You cannot copy streams, instead return a reference, change to ostream& operator<<( ostream & stream, ProcessClass const & rhs )
[ "stackoverflow", "0017740512.txt" ]
Q: Configuring log4j with different verbosity I have a problem with log4j. I want a setup where the console shows only INFO logs, and specific files use another verbosity level (like DEBUG). This is my configuration: <appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender"> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="%d{HH:mm:ss} %-5p [%c] [%C{1}.%M:%L] %m%n"/> </layout> </appender> <appender name="PP" class="org.jboss.logging.appender.DailyRollingFileAppender"> <errorHandler class="org.jboss.logging.util.OnlyOnceErrorHandler"/> <param name="File" value="log/polling_processor.log"/> <param name="DatePattern" value="'.'yyyy-MM-dd"/> <param name="Append" value="true"/> <param name="Encoding" value="UTF-8"/> <param name="Threshold" value="TRACE"/><!-- OFF FATAL ERROR WARN INFO DEBUG TRACE ALL --> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="%d{HH:mm:ss} %-5p [%c] [%C{1}.%M:%L] %m%n"/><!-- "%d %-5p [%c] %m%n" --> </layout> </appender> <appender name="TR" class="org.jboss.logging.appender.DailyRollingFileAppender"> <errorHandler class="org.jboss.logging.util.OnlyOnceErrorHandler"/> <param name="File" value="log/trasmission.log"/> <param name="DatePattern" value="'.'yyyy-MM-dd"/> <param name="Append" value="true"/> <param name="Encoding" value="UTF-8"/> <param name="Threshold" value="TRACE"/><!-- OFF FATAL ERROR WARN INFO DEBUG TRACE ALL --> <layout class="org.apache.log4j.PatternLayout"> <param name="ConversionPattern" value="%d{HH:mm:ss} %-5p [%c] [%C{1}.%M:%L] %m%n"/><!-- "%d %-5p [%c] %m%n" --> </layout> </appender> <category name="POLLING_PROCESSOR"> <priority value="DEBUG" /> <appender-ref ref="PP"/> </category> <category name="TRASMISSION"> <priority value="DEBUG" /> <appender-ref ref="TR"/> </category> <root> <level value="info"/> <appender-ref ref="CONSOLE"/> </root> But I don't know why the console shows the same verbosity that I configured in the category tag (DEBUG). Where is my error? A: Essentially you would want to have the root logger log at debug level, but limit the appender to info. How about adding <param name="Threshold" value="INFO"/> to the console appender?
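In other words, the console appender would become something like this (only the Threshold param is new; the rest is the appender from the question):

<appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender">
    <!-- appender-level filter: DEBUG events from the categories are dropped here -->
    <param name="Threshold" value="INFO"/>
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="%d{HH:mm:ss} %-5p [%c] [%C{1}.%M:%L] %m%n"/>
    </layout>
</appender>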
[ "english.meta.stackexchange", "0000000238.txt" ]
Q: How should I report a pronunciation using the IPA notation? Should I report the pronunciation as |ˈˌdaɪəˈˌkrɪdəkəl|, or [ˈˌdaɪəˈˌkrɪdəkəl]? A: I think the best way to report pronunciations for the general audience is to use a basic IPA transcription for English and enclose the pronunciations in /slashes/. This matches how dictionaries that use IPA, such as Cambridge and Oxford, mark pronunciations. I think the other symbols for phonetic transcriptions, like [square brackets] and |pipes| are best left for detailed discussions of phonetics. /daɪəˈkrɪtəkəl/
[ "stackoverflow", "0001768817.txt" ]
Q: How to connect 2 databases in php and mysql? I have 2 databases: a users database and a purchases database. Each database has a different user and password. If I want to execute a query that uses both databases, how do I connect to them? $db = mysql_select_db(??????); A: You don't have to care which db you select, since you can give MySQL the database name in the queries, i.e. SELECT * FROM db.table, db2.table So whatever database you have selected, it won't change a thing.
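A minimal sketch of that idea with mysqli (the database, table and column names here are made up). Note this only works if the single account you connect with has privileges on both schemas; with truly separate credentials per database you would open two connections and merge the results in PHP:

<?php
// One connection, one account that can read both schemas
$link = mysqli_connect('localhost', 'shared_user', 'secret');

$sql = "SELECT u.name, p.item
        FROM users_db.users AS u
        JOIN purchases_db.purchases AS p ON p.user_id = u.id";

$result = mysqli_query($link, $sql);
while ($row = mysqli_fetch_assoc($result)) {
    print_r($row);
}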
[ "stackoverflow", "0046490359.txt" ]
Q: Unix domain socket error, sendmsg: No buffer space available I've got a very simple code snippet, trying to learn how unix domain sockets work. I've written the sender function, with no receiver function yet; it's like below: #include <stdio.h> #include <stdlib.h> #include <sys/socket.h> #include <sys/types.h> #include <sys/un.h> #include <unistd.h> #define ERR_EXIT(m) { \ perror(m); \ exit(EXIT_FAILURE); \ } void send_fd(int sock_fd, int number){ iovec vec; vec.iov_base = &number; vec.iov_len = sizeof(number); msghdr msg; msg.msg_name = NULL; msg.msg_namelen = 0; msg.msg_iov = &vec; msg.msg_iovlen = 1; msg.msg_flags = 0; int ret = sendmsg(sock_fd, &msg, 0); if (ret != 1) ERR_EXIT("sendmsg"); } int main(void){ int sockfds[2]; if (socketpair(PF_UNIX, SOCK_STREAM, 0, sockfds) < 0) ERR_EXIT("socketpair"); send_fd(sockfds[1], 20); return 0; } Compiling and running it on Linux, it prints: sendmsg: No buffer space available Well, I didn't print this message myself; I guess it's printed by sendmsg itself. Where does my program go wrong? I've googled for some time and checked this site, but didn't find a good clue. How can I fix it? Thanks. Yes, I should add msghdr msg={0}; to initialize it, problem solved! A: You need to add a memset after the msghdr initialization: msghdr msg; memset(&msg, 0, sizeof(msg)); The problem is that local variables in C (and C++, unless they have constructors) are not initialized. You only initialized some of the fields, but not all of them. The result is that some others (I suspect msg_control) had junk in them, causing the error you witnessed. In addition to that problem, what other people have said in the comments is also true. The code for error is -1 (or, better, <0).
[ "math.stackexchange", "0000465870.txt" ]
Q: how to extend a basis This is a very elementary question but I can't find the answer in my book at the moment. If I have, for example, two vectors $v_1$ and $v_2$ in $\mathbb R^5$ and knowing that they are linear independent , how can I extend this two vectors to a basis of $\mathbb R^5$. I know that I just have to add $v_i$ for $3\leq i \leq 5$ such that they are linear independent but how can I do that? Is there an easy algorithm? A: An easy solution, if you are familiar with this, is the following: Put the two vectors as rows in a $2 \times 5$ matrix $A$. Find a basis for the null space $\operatorname{Null}(A)$. Then, the three vectors in the basis complete your basis. A: I usually do this in an ad hoc way depending on what vectors I already have. However, if you want an algorithm, you can exploit algorithms to reduce a spanning set to a basis (hopefully you know some of these). In your example, expand your set of vectors to $v_1,v_2,e_1,e_2,e_3,e_4,e_5$, where $e_1,\dotsc,e_5$ are the standard basis vectors in $\mathbb{R}^5$. Then you can apply the sifting algorithm to this set to get a basis; because $v_1$ and $v_2$ are linearly independent and occur first in the list, they won't be removed in the sifting process, so you'll end up with some basis containing them. Essentially the same trick works for set of linearly independent vectors in a finite dimensional vector space. A: A good general strategy for expanding a basis is to build a matrix $A$ out of the vectors you have and the standard basis vectors. Then, put $A$ into reduced row echelon form. If you toss out the dependent/free vectors, the remaining vectors are linearly independent and span. Therefore, they form a basis of your vector space.
[ "stackoverflow", "0021589769.txt" ]
Q: ImportError: cannot import name process (Twisted) I'm using Mac OS X 10.9.1, and I remember installing Python some time ago (I can't remember why, given that Mac OS X comes with it). For some reason Twisted wasn't installed, so I installed Zope and Twisted. I'm following this tutorial: http://www.raywenderlich.com/3932/networking-tutorial-for-ios-how-to-create-a-socket-based-iphone-app-and-server The problem is, when I run this code: from twisted.internet.protocol import Factory,Protocol from twisted.internet import reactor class IphoneChat(Protocol): def connectionMade(self): print "a client connected" factory = Factory() factory.protocol = IphoneChat reactor.listenTCP(80, factory) print "Iphone Chat server started" reactor().run I get this error: Traceback (most recent call last): File "/Users/Mattieman/Desktop/server.py", line 4, in <module> from twisted.internet import reactor File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/twisted/internet/reactor.py", line 38, in <module> from twisted.internet import default File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/twisted/internet/default.py", line 56, in <module> install = _getInstallFunction(platform) File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/twisted/internet/default.py", line 52, in _getInstallFunction from twisted.internet.selectreactor import install File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/twisted/internet/selectreactor.py", line 18, in <module> from twisted.internet import posixbase File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/twisted/internet/posixbase.py", line 53, in <module> from twisted.internet import process, _signals ImportError: cannot import name process So I took a look in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/twisted/internet ...and it appears that process.py is missing. What do I do? There seem to be other files missing as well, like _baseprocess.py I used setup3.py, if it helps... A: You're using Python 2.6. You shouldn't use setup3.py. You also shouldn't ever use a distutils setup.py-type script to install Python libraries into OS paths (like /Library/Frameworks/Python.framework/Versions/2.6/). You may have a randomly corrupted Twisted installation on your system now (OS X itself ships with a copy of Twisted). Or you may just have some garbage in /Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/twisted that you need to clean up. It's difficult to say which (which is why you should never use setup.py to install things - it leaves a big mess). After you clean up that mess (I can't give you many tips on how to do this apart from "reinstall your OS X", sorry), you have a few options: Use the version of Twisted distributed with your OS. Don't try to install a different version. Treat your home directory as a simpler "virtual environment" and run setup.py install --user (--user tells it to leave its mess inside ~/.local where at least it won't ruin your OS install). Create a virtualenv and install a new version of Twisted there Of these, you probably want to go with virtualenv. Also note that you shouldn't run setup3.py unless you're trying to install Twisted for Python 3. And if you're trying to do that then I'll warn you that only part of Twisted is ported - and it's a very small part. For example, it's not process support (hence the import error you're getting).
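For the virtualenv route mentioned in the answer, the steps would be roughly the following (paths are illustrative; pip pulls in the zope.interface dependency automatically):

# create an isolated environment that does not touch the OS Python paths
virtualenv ~/twisted-env
source ~/twisted-env/bin/activate

# install Twisted inside the virtualenv only
pip install twisted

# run the chat server script from the question with this environment active
python ~/Desktop/server.py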
[ "gaming.stackexchange", "0000103265.txt" ]
Q: How to get prior notification that your home is under attack with enemies that attack your home in Hearthfire? I just started playing Hearthfire for the first time. I built a huge house with three wings. My problem is my steward Rayya, died easily to a bandit attack. Now UESP did warn me about attacks from creatures, humans and dragons. But what can I do to come on time to prevent my steward dying or if my children are under attack? I think if I was there I could have saved poor Rayya and my wife. A: Enemies won't attack your house unless you are in very close proximity. Avoid loitering around the immediate surroundings of your house, if you need to visit it - fast travel there directly, if you want to visit a nearby location, fast travel to the house and if the coast is clear walk to your destination. When you arrive at the house, use Detect life and Detect Death to identify hostiles and kill them as fast as you can.
[ "stackoverflow", "0007778925.txt" ]
Q: How to generate multiple HTML items based on a properties file? I have the following properties file: title = Welcome to Home Page total = 5 gallery1 = images/gallery/cs.png text1 = <b>Counter Strike</b><br /> gallery2 = images/gallery/css.png text2 = <b>Counter Strike Source Servers Available</b> gallery3 = images/gallery/cs.png text3 = <b>Counter Strike</b> gallery4 = images/gallery/cs.png text4 = <b>Counter Strike</b> gallery5 = images/gallery/cs.png text5 = <b>Counter Strike</b> I am loading it as follows: public static HashMap<String, String> getPropertyMap(String asPropBundle) throws ApplicationException { HashMap<String, String> loMap = new HashMap<String, String>(); ResourceBundle loRB = (ResourceBundle) moHMProp.get(asPropBundle) ; if (loRB == null) { throw new ApplicationException("No property bundle loaded with name: " + asPropBundle); } Enumeration<String> loKeyEnum = loRB.getKeys(); while (loKeyEnum.hasMoreElements()) { String key = (String) loKeyEnum.nextElement(); loMap.put(key, loRB.getString(key)); } return loMap ; } The returned map is set as HTTP request attribute. I am generating the HTML in JSP as follows: <li class="s3sliderImage"> <img src="${map.gallery1}" /> <span>${map.text1}</span> </li> . . . <li class="s3sliderImage"> <img src="${map.gallery2}" /> <span>${map.text2}</span> </li> How can I do this dynamically in a loop? I have the total amount of records in total property of the properties file. A: A Resource Bundle is already sort of a map from keys to values, except it has a fallback mechanism. Why do you copy its content to another map? Just use the <fmt:message> tag: its goal is precisely to get a message from a resource bundle and to output it to the JSP writer. And it can be parameterized, of course : <fmt:setBundle basename="the.base.name.of.your.Bundle"/> <fmt:message key="text2"/> <img src="<fmt:message key="gallery2"/>" /> <fmt:message key="greeting"> <fmt:param value="${user.firstName}"/> </fmt:message> This last snippet displaying "Welcome John!" if the value of the greeting key is "Welcome {0}!". The tag can also store the value in a variable, and take an EL expression as parameter, so this snippet should work to implement your loop: <fmt:message var="total" key="total"/> <c:forEach begin="1" end="${total}" varStatus="loopStatus"> <li class="s3sliderImage"> <img src="<fmt:message key="gallery${loopStatus.index}"/>" /> <span><fmt:message key="text${loopStatus.index}"/></span> </li> </c:forEach>
[ "codegolf.stackexchange", "0000026498.txt" ]
Q: Collatz Attack! This challenge is based on some new findings related to the Collatz conjecture and designed somewhat in the spirit of a collaborative polymath project. Solving the full conjecture is regarded as extremely difficult or maybe impossible by math/number theory experts, but this simpler task is quite doable and there is many examples of sample code. In a best case scenario, new theoretical insights might be obtained into the problem based on contestants entries/ ingenuity/ creativity. The new finding is as follows: Imagine a contiguous series of integers [ n1 ... n2 ] say m total. Assign these integers to a list structure. Now a generalized version of the Collatz conjecture can proceed as follows. Iterate one of the m (or fewer) integers in the list next based on some selection criteria/algorithm. Remove that integer from the list if it reaches 1. Clearly the Collatz conjecture is equivalent to determining whether this process always succeeds for all choices of n1, n2. Here is the twist, an additional constraint. At each step, add the m current iterates in the list together. Then consider the function f(i) where i is the iteration number and f(i) is the sum of current iterates in the list. Look for f(i) with a particular "nice" property. The whole/ overall concept is better/ more thoroughly documented here (with many examples in ruby). The finding is that fairly simple strategies/ heuristics/ algorithms leading to "roughly monotonically decreasing" f(i) exist and many examples are given on that page. Here is one example of the graphical output (plotted via gnuplot): So here is the challenge: Use varations on the existing examples or entirely new ideas to build a selection algorithm resulting in a f(i) "as close to monotonically decreasing as possible". Entrants should include a graph of f(i) in their submission. Voters can vote based on that graph & the algorithmic ideas in the code. The contest will be based on n1 = 200 / n2 = 400 parameters only! (the same on the sample page.) But hopefully the contestants will explore other regions and also attempt to generalize their algorithms. Note, one tactic that might be very useful here are gradient descent type algorithms, or genetic algorithms. Can discuss this all further in chat for interested participants. For some ref, another codegolf Collatz challenge: Collatz Conjecture (by Doorknob) A: I wrote some code in Python 2 to run algorithms for this challenge: import matplotlib.pyplot as plt def iterate(n): return n*3+1 if n%2 else n/2 def g(a): ##CODE GOES HERE return [True]*len(a) n1=input() n2=input() a=range(n1,n2+1) x=[] y=[] i=0 while any(j>1 for j in a): y.append(sum(a)) x.append(i) i+=1 b=g(a) for j in range(len(a)): if b[j]: a[j]=iterate(a[j]) plt.plot(x,y) plt.show() g(x) takes the list of values and returns a list of bools for whether each one should be changed. That includes the first thing I tried, as the line right after the comment, which was iterating all of the values in the list. I got this: It doesn't look close to monotonic, so I tried iterating only values that would decrease the sum, if there are any, iterating the ones that would increase it least otherwise: l=len(a) n=[iterate(i) for i in a] less=[n[i]<a[i] for i in range(l)] if any(less): return less m=[n[i]-a[i] for i in range(l)] return [m[i]==min(m) for i in range(l)] Unfortunately, that doesn't terminate (at least for n1=200, n2=400). 
I tried keeping track of values I'd seen before by initializing c=set(): l=len(a) n=[iterate(i) for i in a] less=[n[i]<a[i] for i in range(l)] if any(less): return less m={i:n[i]-a[i] for i in range(l)} r=[i for i in m if m[i]==min(m.values())] while all([a[i] in c for i in r]) and m != {}: m={i:m[i] for i in m if a[i] not in c} r+=[i for i in m.keys() if m[i]==min(m.values())] for i in r: c.add(a[i]) return [i in r for i in range(l)] That doesn't terminate either, though. I haven't tried anything else yet, but if I have new ideas, I'll post them here.
[ "mathoverflow", "0000317540.txt" ]
Q: Slightly finer topology vs a quasi-component Let $(X,\tau)$ be a topological space, and let $Q$ be a quasi-component of $X$. Let $S$ be a subset of $X\setminus Q$. Then is $Q$ necessarily a quasi-component of $X$ in the topology generated by $\tau\cup\{S\}$? A: Let $X$ be the subspace of the plane given by $X = \{ (\frac{1}{n},y) : n = 1, 2, \cdots,\ 0 \leq y \leq 1 \} \cup \{(0,0),(0,1)\}$, and let $S = \{ \frac{1}{n} : n = 1, 2, \cdots\} \times \{\frac{1}{2}\}$. Then the quasi-component of $(0,0)$ in $X$ is $\{(0,0),(0,1)\}$ but in the topology generated by $X$ and $S$ the quasi-component of $(0,0)$ is $\{(0,0)\}$. In fact, if $X$ is a normal space and $Q$ is a disconnected quasi-component, then there is a subset $S$ of $X \setminus Q$ such that $Q$ is not a quasi-component of the topology generated by adding $S$ to the topology of $X$. For this let $(H,K)$ be a disconnection of $Q$. Then $H$ and $K$ are disjoint closed subsets of $X$. Let $U$ and $V$ be disjoint open subsets of $X$ such that $H \subseteq U$ and $K \subseteq V$. Let $S = X \setminus (U \cup V)$.
[ "stackoverflow", "0030328207.txt" ]
Q: Git command doesn't work in subprocess in Python We've been using python to automate some git work for quite some time in my group, and everything has worked fine. Unfortunately, I've come across something I would like to use, but doesn't work when put into a python subprocess. Here's the command: git describe --tags `git rev-list --tags --max-count=1` When I use it in my git bash (we're using Windows) it works fine, but when I put it in a python subprocess, it complains that git rev-list --tags --max-count=1 is not a valid command. I was wondering if anyone could enlighten me as to why, and preferably, a way of using it. I got the line from this question: How to get the latest tag name in current branch in Git? I'm trying to get the LATEST tag on a branch, that is closest to the current HEAD. I've got a hacky workaround right now that lists all of the tags and then sorts them numerically, but that's only working because we haven't put out any non-numeric tags, which won't necessarily be the case always. Can anyone please help me? A: The Popen constructor by default doesn't use a shell to parse the command you're giving it. This means that shell metacharacters like the backquote and such things will not work. You can either pass shell = True or first run git rev-list --tags --max-count=1 and then create the whole command after that.
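A sketch of both workarounds (untested; assumes the repository is the current working directory and that check_output is available, i.e. Python 2.7+):

import subprocess

# Option 1: let the shell expand the backquotes (shell=True takes a single string).
# This only works where the shell understands backquotes (e.g. Git Bash / a POSIX
# shell), not plain cmd.exe on Windows.
out = subprocess.check_output(
    "git describe --tags `git rev-list --tags --max-count=1`", shell=True)

# Option 2: run the inner command yourself, then build the outer command.
# On Python 3, add .decode() to turn the returned bytes into str.
rev = subprocess.check_output(
    ["git", "rev-list", "--tags", "--max-count=1"]).strip()
tag = subprocess.check_output(["git", "describe", "--tags", rev]).strip()
print(tag)

Option 2 is usually preferable since it avoids shell quoting issues entirely and behaves the same on every platform.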
[ "earthscience.meta.stackexchange", "0000000268.txt" ]
Q: Is Helioseismology on topic in Earth Science or Astronomy? What is says on the tin, is Helioseismology on topic in Earth Science or Astronomy? Myself I'd think no (for ES) as although it takes elements of seismology, I think there is a fundamental difference in what's being looked at, perhaps in the same way that advanced studies of bells and bell ringing would also not be on topic. but anyway, just something I was curious about what other people think. A: I too would say astronomy. Or maybe physics (under the guise of astrophysics). Let those two duke it out.
[ "stackoverflow", "0043235304.txt" ]
Q: RELEASE C++ Macro Definition My company's main application uses OLE documents. Periodically, and unpredictably, the program closes its template documents improperly. So that at seemingly random times when they're opened, the OS throws STG_E_SHAREVIOLATION I thought the problem might be the way we're closing the files when the user either exits the application or chooses File / Close from the menu. After a lot of debugging / tracing, it comes down to ///////////////////////////////////////////////////////////////////////////// // 'Compound File' enabling in COleDocument BOOL COleDocument::OnNewDocument() { // call base class, which destroys all items if (!CDocument::OnNewDocument()) return FALSE; // for file-based compound files, need to create temporary file if (m_bCompoundFile && !m_bEmbedded) { // abort changes to the current docfile RELEASE(m_lpRootStg); // create new temporary docfile LPSTORAGE lpStorage; SCODE sc = ::StgCreateDocfile(NULL, STGM_DELETEONRELEASE| STGM_READWRITE|STGM_TRANSACTED|STGM_SHARE_EXCLUSIVE|STGM_CREATE, 0, &lpStorage); if (sc != S_OK) return FALSE; ASSERT(lpStorage != NULL); m_lpRootStg = lpStorage; } return TRUE; } in OLEDOC1.CPP (part of the MFC libraries). Specifically the RELEASE(m_lpRootStg) macro line. Prior to executing this line, trying to move or delete the document results in the OS saying that the file is in use. After this line, the file is closed and able to be moved. I'd like to subclass this method to experiment with alternative ways of closing the file. But, I cannot find the definition of the RELEASE macro anywhere. The closest I came was some code from IBM. Where is this macro defined? What is the definition? A: It's in oleimpl2.h in the MFC src directory... #ifndef _DEBUG // generate smaller code in release build #define RELEASE(lpUnk) _AfxRelease((LPUNKNOWN*)&lpUnk) #else // generate larger but typesafe code in debug build #define RELEASE(lpUnk) do \ { if ((lpUnk) != NULL) { (lpUnk)->Release(); (lpUnk) = NULL; } } while (0) #endif
[ "stackoverflow", "0031933052.txt" ]
Q: Can a table have multiple records of a single column in a single row? There is a table 'items' with columns : item_name, i_code, items_left & price. It stores all the items that a shop sells. There is another table 'customers'. It stores record of all the customers who visited the shop. I want to keep record of all the items that a particular customer bought. I want to create a column 'items_bought' in 'customers' table, that will store item codes of all the items a particular customer bought. But having more than one item code for a particular customer row is impossible. Please help me have multiple records of items_bought in a single row of customers. A: The answer is, yes you can have multiple values in a single field for each row. A comma separated field could be a solution. If you are using a relational database, your life will be easier if you create a new table, let's say items_bought, that will hold the relation between customer and item bought. For example create table `items_bought` (id int not null primary key, item_id int not null, customer_id int not null) Each field item_id and customer_id will have a foreign key to items.id and customers.id table fields respectively. This way you don't need to manage strings and parse comma separated values. You can simply query your tables like: select * from `items` i inner join `items_bought` ib on i.id = ib.item_id inner join `customers` c on ib.customer_id = c.id The above query will return all customer and item information of customers that have bought at least one item.
[ "stackoverflow", "0026887845.txt" ]
Q: Replace only the path of an image using preg_replace in PHP I am using preg_replace to alter the image path while keeping the image file name, for example starting from: <img src="http://www.ByPasspublishing.com/uploadedImages/TinyUploadedImage/SOC_Aggression_Define_Fig Territorial Aggression.jpg" /> Below is the code I have tried, but it replaces the whole src value instead of only the path. Please help me solve this problem: $html = preg_replace('/<img([^>]+)src="([^"]+)"/i','<img\\1src="newfolder"',$slonodes[0]->SLO_content); Another thing is that $slonodes[0]->SLO_content returns HTML content within which I have to find the image and replace its path, so the path will not always be the same. Thanks in advance. A: Alternatively, you could use an HTML parser for this task, DOMDocument in particular: $html = '<img src="http://www.ByPasspublishing.com/uploadedImages/TinyUploadedImage/SOC_Aggression_Define_Fig Territorial Aggression.jpg" />'; $dom = new DOMDocument; libxml_use_internal_errors(true); $dom->loadHTML($html); libxml_clear_errors(); $img = $dom->getElementsByTagName('img')->item(0); $new_src = 'newfolder/' . basename($img->getAttribute('src')); $img->setAttribute('src', $new_src); echo $dom->saveHTML($img);
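If you would rather stay with a regex, a sketch using preg_replace_callback could look like the following; the "newfolder" prefix is just the placeholder from your own attempt, and $slonodes[0]->SLO_content is assumed to hold the HTML as in the question:

$html = $slonodes[0]->SLO_content; // assumed to hold the HTML, as in the question
$html = preg_replace_callback(
    '/<img([^>]+)src="([^"]+)"/i',
    function ($m) {
        // keep the file name, swap only the directory part of the src
        return '<img' . $m[1] . 'src="newfolder/' . basename($m[2]) . '"';
    },
    $html
);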
[ "stackoverflow", "0051094655.txt" ]
Q: How to enable port forwarding on an Azure VM Hi, I have an Azure VM on which I want to configure port forwarding so that I can redirect traffic to port 1100. I have created a public load balancer and configured the ports in a NAT rule, but it seems that I can't RDP onto the VM using port 1100. Can anyone suggest some documents on how to get this done, or point me in the right direction? A: As I understand it, you want to RDP to your Azure VM through front-end port 1100 of the Load Balancer. You need to add your VM to the backend pool of the Load Balancer, and then create a NAT rule to forward the traffic to your VM through port 1100 exposed to the internet. In the NAT rule settings panel you can select your VM (provided you added it to the Load Balancer backend pool), set the port to 1100 and the target port to 3389, which is what RDP actually uses. Once that is done, you can connect to your Azure VM through port 1100. You can get more details in the document Create inbound NAT rules.
[ "stackoverflow", "0003898496.txt" ]
Q: Customizing Paypal Express's Review Page using ActiveMerchant I am using ActiveMerchant to give my rails app access to Paypal's Express Checkout. I would like to include the Order Details on the Review Page as described here: https://cms.paypal.com/us/cgi-bin/?cmd=_render-content&content_ID=developer/e_howto_api_ECCustomizing Can this be done? Currently, my controller code looks like this: def paypal #currently, options is unused, I'm not sure where to send this info options = { :L_NAME0=>"Tickets", :L_QTY0=>@payment.quantity, :L_DESC0=>"Tickets for #{@payment.event_name}", :L_AMT0=>@payment.unit_price } #the actual code that gets used setup_response = gateway.setup_purchase(@payment.amount, :ip=> request.remote_ip, :return_url=> url_for(:action=>:confirm, :id=>@payment.id, :only_path=>false), :cancel_return_url => url_for(:action=>:show, :id=>@payment.id, :only_path=>false) ) redirect_to gateway.redirect_url_for(setup_response.token) end If what I'm trying to do is possible, what do I need to change? A: Make sure you have activemerchant version not less than 1.12.0. EXPRESS_GATEWAY.setup_purchase(220, :items => [{:name => "Tickets", :quantity => 22,:description => "Tickets for 232323", :amount => 10}], :return_url => 'example.com', :cancel_return_url => 'example.com' ) Hope this helps :) A: @Soleone I try your solution,but don't work for me. xml.tag! 'n2:OrderDescription', options[:description] xml.tag! 'n2:Name', options[:name] xml.tag! 'n2:Description', options[:desc] xml.tag! 'n2:Amount', options[:amount] xml.tag! 'n2:Quantity', options[:quantity] I think the xml structure is not right,the order items is multiple,so should like this xml.tag! 'n2:OrderItems' do xml.tag! 'n2:OrderItem' do xml.tag! 'n2:Name', options[:name] xml.tag! 'n2:Description', options[:desc] xml.tag! 'n2:Amount', options[:amount] xml.tag! 'n2:Quantity', options[:quantity] end end But really I don't know the correct structure,looking for now. ====Update I found the SOAP api doc, https://cms.paypal.com/us/cgi-bin/?cmd=_render-content&content_ID=developer/e_howto_api_soap_r_SetExpressCheckout#id09BHC0QF07Q xml.tag! 'n2:PaymentDetails' do xml.tag! 'n2:PaymentDetailsItem' do xml.tag! 'n2:Name', options[:name] xml.tag! 'n2:Description', options[:desc] xml.tag! 'n2:Amount', options[:amount] xml.tag! 'n2:Quantity', options[:quantity] end end But also doesn't work,who can help? =====UPDATE==== I tried the method of adding PaymentDetails parameter,but seems still not work,I found the schema of SetExpressCheckoutReq xml, http://www.visualschema.com/vs/paypal/SetExpressCheckoutReq/ , there is no definition of PaymentDetails,who did this stuff before,hope for your help. ======FINAL======== I have fixed this issue,new version of ActiveMerchant support the order details review,and mwagg pushed the patch about this,you guys can use this version https://github.com/mwagg/active_merchant A: You can see the available parameters in this table (only the middle column applies as activemerchant is using the SOAP API): https://cms.paypal.com/us/cgi-bin/?cmd=_render-content&content_ID=developer/e_howto_api_ECCustomizing#id086NA300I5Z__id086NAC0J0PN To best understand how activemerchant does it is probably to look directly into the implementation. 
You can see the relevant parameters getting inserted in the SOAP XML request (currently) starting at line 98 where the OrderTotal gets inserted: https://github.com/Shopify/active_merchant/blob/master/lib/active_merchant/billing/gateways/paypal_express.rb#L98 Notice how the parameters are fetched from the options hash so you can see the correct symbol to pass for each one here. In your case as you listed the following parameters, you would do it like this: def paypal options = { :name => "Tickets", :quantity => @payment.quantity, :description => "Tickets for #{@payment.event_name}", :amount => @payment.unit_price :ip => request.remote_ip, :return_url => url_for(:action=>:confirm, :id=>@payment.id, :only_path=>false), :cancel_return_url => url_for(:action=>:show, :id=>@payment.id, :only_path=>false) } # the actual code that gets used setup_response = gateway.setup_purchase(@payment.amount, options) redirect_to gateway.redirect_url_for(setup_response.token) end Note though: The name, quantity and amount fields are currently not support in activemerchant. You would have to fork the repository and insert these yourself and use your copy of the project. It's really very straightforward when you look at the code and see how it is done with the other ones. For example to add the order name, item quantity and item unit price you would put these lines after the OrderDescription gets inserted: xml.tag! 'n2:Name', options[:name] xml.tag! 'n2:Amount', options[:amount] xml.tag! 'n2:Quantity', options[:quantity] Hope that helps! UPDATE: Okay I think according to the XML Schema for the SOAP API it looks like you have to specify it like this in activemerchant: xml.tag! 'n2:PaymentDetails' do items = options[:items] || [] items.each do |item| xml.tag! 'n2:PaymentDetailsItem' do xml.tag! 'n2:Name', item[:name] xml.tag! 'n2:Description', item[:desc] xml.tag! 'n2:Amount', item[:amount] xml.tag! 'n2:Quantity', item[:quantity] end end end And you would pass all your items in your Rails app like this: options = { :items => [ { :name => "Tickets", :quantity => @payment.quantity, :description => "Tickets for #{@payment.event_name}", :amount => @payment.unit_price }, { :name => "Other product", :quantity => @other_payment.quantity, :description => "Something else for #{@other_payment.event_name}", :amount => @other_payment.unit_price } ] :ip => request.remote_ip, :return_url => url_for(:action=>:confirm, :id=>@payment.id, :only_path=>false), :cancel_return_url => url_for(:action=>:show, :id=>@payment.id, :only_path=>false) } Hope that works better, good luck!
[ "math.stackexchange", "0001957199.txt" ]
Q: "Incorrect" derivation for sum of infinite natural numbers I am trying to derive sum of infinite natural numbers for which the established answer is $-1/12$ but I am getting $-1/6$ as my answer and am unable to figure out what exactly am I doing wrong. Probably something very silly or something very fundamental is wrong in my derivation. The way I proceed is:$$S = 1+2+3+4+5+6...$$ $$=> S = 1+3+5+7+9+... + 2+4+6+8+10+...$$ $$=> S = 1+3+5+7+9+... +2(1+2+3+4+5+...)$$ $$=> S = 1+3+5+7+9... + 2S$$ $$=>-S = 1+3+5+7+9...$$ shifting RHS a position to the right and adding to itself: $$-2S = 1+4+8+12+16+...$$ $$=>-2S = 1+4(1+2+3+4+5...)$$ $$-2S = 1+ 4S$$ which results in $S=-1/6$. It would be nice if someone can tell me which step of derivation is wrong and why is it wrong. Thanks for your help. A: Let $$A = S-S = 1+(2-1)+(3-2)+(4-3)+\ldots = 1+1+1+1+\ldots$$ But $$n+A = (\underbrace{1+\ldots+1}_n)+1+1+1+1+\ldots= 1+1+1+1+\ldots= A$$ hence assigning a value to such divergent series is inconsistent and from $A= A+1$ you can obtain the value you'd like : $$S = S+c(A-A) = S+c(A+1)-cA =S+c + c(A-A)= S+c$$ Now if you consider instead some divergent series summation method, for example $$T = \lim_{z \to 1^-} \sum_{k=1}^\infty (-1)^k k z^k$$ it becomes different and you get a consistent value for $T$.
[ "stackoverflow", "0028923152.txt" ]
Q: Azure Active Directory as a simple user/role validation service Our current process is that BizTalk exposes a web service for vendors to call, where the request header contains a pre-assigned user name and password. Upon receiving a service call, BizTalk validates the credential against the database, then extracts some metadata from the db record and attaches it to the inbound message (e.g. city, vendor level, etc.). Question: can we replace this process with Azure Active Directory? I heard it provides a RESTful API, but I get confused every time I read the documentation when it talks about JWT tokens... Does it have a straightforward RESTful endpoint to call to validate and extract user information? Can we customize the metadata within the AAD user? Thanks for the help!! A: 1 - Sure you can. There is no endpoint to perform validation, but it's easy to validate incoming tokens - we offer components that automate it. See https://github.com/AzureADSamples/NativeClient-DotNet for an example. The same location on GitHub has lots of other samples demonstrating different scenarios. 2 - I am not certain I understand what you mean by metadata here. If you are referring to the info you can specify about the user: you can customize the user schema. See https://msdn.microsoft.com/en-us/library/azure/dn720459.aspx
[ "stackoverflow", "0025566444.txt" ]
Q: Import git repo in cloud9 error https does not accept registry part I am learning Ruby on rails and wanted to import my git repo to Cloud9 to continue working over there. https://github.com/christoph88/sample_app I imported it. Did a bundle install, rake db:migrate, rake test:prepare and everything seemed to work fine. Until I try to register or login. Then I get following error. I read somewhere it has to do with the routes but I do not understand. Can somebody help me locate the problem? (and explain it to me) Thanks! SQL (0.4ms) UPDATE "users" SET "remember_token" = ?, "updated_at" = ? WHERE "users"."id" = 1 [["remember_token", "92add7938701a70880243cf9ca88338d37b1a0ae"], ["updated_at", Fri, 29 Aug 2014 10:21:28 UTC +00:00]] (13.2ms) commit transaction Redirected to https://sample_app-c9-christoph88.c9.io/users/1 Completed 302 Found in 88ms (ActiveRecord: 13.9ms) [2014-08-29 10:21:28] ERROR URI::InvalidURIError: the scheme https does not accept registry part: sample_app-c9-christoph88.c9.io (or bad hostname?) /usr/local/rvm/rubies/ruby-2.1.1/lib/ruby/2.1.0/uri/generic.rb:1203:in `rescue in merge' /usr/local/rvm/rubies/ruby-2.1.1/lib/ruby/2.1.0/uri/generic.rb:1200:in `merge' /usr/local/rvm/rubies/ruby-2.1.1/lib/ruby/2.1.0/webrick/httpresponse.rb:276:in `setup_header' /usr/local/rvm/rubies/ruby-2.1.1/lib/ruby/2.1.0/webrick/httpresponse.rb:206:in `send_response' /usr/local/rvm/rubies/ruby-2.1.1/lib/ruby/2.1.0/webrick/httpserver.rb:110:in `run' /usr/local/rvm/rubies/ruby-2.1.1/lib/ruby/2.1.0/webrick/server.rb:295:in `block in start_thread' A: Does Cloud9 allow you to change the name of your app from "sample_app" to "sample-app" (i.e., replace the underscore with a hyphen)? That's probably the least complex solution here. If you can't change the name of the app, @Bharath Mg's link to the other Stack Overflow thread is helpful (here it is again: Is there a workaround to open URLs containing underscores in Ruby?). The accepted answer in that thread catches your InvalidURIError thrown by the open-uri library and makes another attempt using the net/http library. EDIT: The other Stack Overflow post is really only useful in that the questioner in that post notes that the error only occurs when an underscore is present in the name of a sub-domain, and the accepted answer appears to identify the problem: "a bug in URI, and uri-open, HTTParty and many other gems make use of URI.parse." The questioner in this post is using Rails, as opposed to plain Ruby, and thus cannot easily implement the accepted answer.
[ "math.stackexchange", "0002505338.txt" ]
Q: How to find $b_n$ for the limit comparison test in $\sum_{n=1}^{\infty}\frac{(\ln(n))^2}{\sqrt{n}(10n-9\sqrt{n})}$. I'm supposed to determine whether or not the series converges or diverges but I'm stuck trying to find $b_n$ and prove that $a_n \sim b_n$. I would very much appreciate it if someone could help show me how I would go about finding $b_n$ proving that $a_n \sim b_n$. $$\sum_{n=1}^{\infty}\frac{(\ln(n))^2}{\sqrt{n}(10n-9\sqrt{n})}$$ $$a_n=\frac{(\ln(n))^2}{\sqrt{n}(10n-9\sqrt{n})}$$ $$b_n = ?$$ A: Note that $$a_n\leq \frac{(\ln n)^2}{n^{3/2}}$$ for $n\geq 1$. You might find the answers here Prove the convergence of : $\sum \ln(n)/n^{3/2}$ helpful for dealing with $\frac{(\ln n)^2}{n^{3/2}}$.
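To spell out the limit comparison with $b_n=\frac{(\ln n)^2}{n^{3/2}}$ (the same comparison series as above): $$\frac{a_n}{b_n}=\frac{(\ln n)^2}{\sqrt{n}\,(10n-9\sqrt{n})}\cdot\frac{n^{3/2}}{(\ln n)^2}=\frac{n^{3/2}}{10n^{3/2}-9n}=\frac{1}{10-9/\sqrt{n}}\longrightarrow\frac{1}{10},$$ which is finite and positive, so $\sum a_n$ and $\sum b_n$ converge or diverge together. Since $(\ln n)^2\leq n^{1/4}$ for all large $n$, we have $b_n\leq n^{-5/4}$ eventually, so $\sum b_n$ converges and therefore so does the original series.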
[ "stackoverflow", "0024598835.txt" ]
Q: Javascript - Reading a text file line by line. Does it matter what browser is being used? I have just started getting into Javascript and mobile web programming. One thing that I am uncertain of is how I can write proper code that runs on any browser without having the end user have any extra requirements on their end (what browser to use). I am coding this in google chrome and more recently in c9.io. I thought this would work: function readTextFile(file) { var client = new XMLHttpRequest(); client.open('GET', file); client.send(); client.onreadystatechange = function() { alert(client.responseText); } } But i get the error that XMLHTTpRequest is not defined. I have been trying to figure out why this is and I keep coming to different browsers not supporting this. I had figured simple file io would not be that difficult but its causing me more trouble than I had hoped. What is the best way to input a text file? It is 1 text file that is not having anything being written to it. Just read only. The end user isn't choosing this text file it should be the only option. A: The order in your code is wrong, send method should be the last one; otherwise, your code is fine and it should work fine in all modern browsers. The mentioned order issue, or maybe was something else (before) causing that error. The snippet below will also split received text in an array of text lines var xhr, i, text, lines; if(window.XMLHttpRequest){ // IE7+, Firefox, Chrome, Opera, Safari xhr = new XMLHttpRequest(); }else{ // IE5, IE6 - next line supports these dinosaurs xhr = new ActiveXObject("Microsoft.XMLHTTP"); } xhr.onreadystatechange = function(){ if(xhr.readyState == 4 && xhr.status == 200){ text = xhr.responseText; lines = text.split("\n"); for(i = 0; i < lines.length; i++){ console.log(lines[i]); } } } xhr.open('GET', 'http://domain/file.txt', true); xhr.send();
[ "stackoverflow", "0060283501.txt" ]
Q: handle separate transaction in java batch (JSR-352) I'm using the jberet implementation of the JSR 352 Java batch spec. I need a separate transaction for a single update, something like this: class MyItemWriter implements ItemWriter @Inject UserTransaction transaction void resetLastProductsUpdateDate(String uidCli) throws BusinessException { try { if (transaction.getStatus() != Status.STATUS_ACTIVE) { transaction.begin(); } final Customer customer = dao.findById(id); customer.setLastUpdate(null); customer.persist(cliente); transaction.commit(); } catch (RollbackException | HeuristicMixedException | HeuristicRollbackException | SystemException | NotSupportedException e) { logger.error("error while updating user products last update"); throw new BusinessException(); } } I first tried marking the resetLastProductsUpdateDate method as @Transactional(REQUIRES_NEW), however it didn't work. My question is: is there a more elegant way to achieve this single transaction without handling the transaction manually? While UserTransaction works, EntityManager.transaction doesn't, and I don't get why. The class below, which is injected from a Batchlet, works properly; why can't I get the @Transactional annotation to work on the resetLastProductsUpdateDate method instead? public class DynamicQueryDAO { @Inject EntityManager entityManager; @Inject private Logger logger; @Transactional(Transactional.TxType.REQUIRED) public void executeQuery(String query) { logger.info("executing query: {}", query); final int output = entityManager.createNativeQuery(query).executeUpdate(); logger.info("rows updated: {}", output); } } EDIT Actually I guess UserTransaction isn't a good solution either, because it affects the item writer's whole transaction management. Still don't know how to deal with transaction isolation :( A: In general a batch application should avoid handling transactions directly. You can have your batch component throw a business exception under certain conditions, and configure your job.xml to trigger a retry on that business exception. During the retry, each individual item is processed and committed in its own chunk.
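A minimal sketch of that retry configuration in job.xml (the step id, the reader/writer ref names and the exception class are placeholders for your own artifacts) could look like this:

<step id="myStep">
  <chunk item-count="1">
    <reader ref="myItemReader"/>
    <writer ref="myItemWriter"/>
    <retryable-exception-classes>
      <include class="com.example.BusinessException"/>
    </retryable-exception-classes>
  </chunk>
</step>

With item-count="1" each item that triggers the retry ends up committed in its own chunk, which gives you the per-update transaction boundary without any manual transaction code in the writer.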
[ "stackoverflow", "0013663518.txt" ]
Q: Will following ever cause RACE CONDITION when accessing ASP.Net Cache Item? I am using the following code, and to me it seems that a race condition would never happen with code below. Or there is still a chance of race condition? List<Document> listFromCache = Cache[dataCacheName] as List<Document>; if (listFromCache != null) { //do something with listFromCache. **IS IT POSSIBLE** that listFromCache is //NULL here } else { List<Document> list = ABC.DataLayer.GetDocuments(); Cache.Insert(dataCacheName, list, null, DateTime.Now.AddMinutes(5), System.Web.Caching.Cache.NoSlidingExpiration); } UPDATE: Chris helped me solve this problem, but I just thought, I would share some details that would be be very helpful to others. To completely avoid any race condition, I had to add a check within the true part, else I could end up with a List with zero count, if someone else clears it in Cache ( not remove the item, but just call Clear method on the List object in Cache) after my if has evaluated to TRUE. So then, I would not have any data within my true part of if in listFromCache object. To overcome, this subtle RACE condition in my original code, I have to double check listFromCache in the true part as in code below, and then repopulate Cache with latest data. Also, as Chris said, if someone else 'removes' the items from Cache by calling the method Cache.Remove, then listFromCache would not be affected, since the Garbage Collector will not remove the actual List object from HEAP memory because a variable called 'listFromCache' is still having a reference to it ( I have explained this in more detail in a comment under Chris's answer post). List<Document> listFromCache = Cache[dataCacheName] as List<Document>; if (listFromCache != null) { //OVERCOME A SUBTLE RACE CONDITION BY IF BELOW if( listFromCache == null || listFromCache.Count == 0) { List<Document> list = ABC.DataLayer.GetDocuments(); Cache.Insert(dataCacheName, list, null, DateTime.Now.AddMinutes(5), System.Web.Caching.Cache.NoSlidingExpiration); } //NOW I AM SURE MY listFromCache contains true data //do something with listFromCache. **IS IT POSSIBLE** that listFromCache is //NULL here } else { List<Document> list = ABC.DataLayer.GetDocuments(); Cache.Insert(dataCacheName, list, null, DateTime.Now.AddMinutes(5), System.Web.Caching.Cache.NoSlidingExpiration); } A: No, it's not possible in your comment that listFromCache will become null as it's a local reference at that point. If the cache entry is nullified elsewhere, it doesn't affect your local reference. However, you could possibly get a condition where you retrieved a null value, but while in the process of gathering the documents (ABC.DataLayer.GetDocuments()) another process has already done so and inserted the cache entry, at which point you overwrite it. (this may be perfectly acceptable for you, in which case, great!) You could try locking around it with a static object, but honestly, I'm not sure if that'll work in an ASP.NET context. I don't remember if Cache is shared across all ASP.NET processes (which IIRC, have different static contexts) or only shared within each single web worker. If the latter, the static lock will work fine. Just to demonstrate too: List<Document> listFromCache = Cache[dataCacheName] as List<Document>; if (listFromCache != null) { Cache.Remove(dataCacheName); //listFromCache will NOT be null here. if (listFromCache != null) { Console.WriteLine("Not null!"); //this will run because it's not null } }
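If you do go the static-lock route, a minimal sketch (reusing Cache, Document and ABC.DataLayer.GetDocuments from your code; GetDocumentsCached is just a hypothetical helper name) would be:

private static readonly object _cacheLock = new object();

List<Document> GetDocumentsCached(string dataCacheName)
{
    var list = Cache[dataCacheName] as List<Document>;
    if (list == null || list.Count == 0)
    {
        lock (_cacheLock)
        {
            // re-check inside the lock in case another thread filled the cache meanwhile
            list = Cache[dataCacheName] as List<Document>;
            if (list == null || list.Count == 0)
            {
                list = ABC.DataLayer.GetDocuments();
                Cache.Insert(dataCacheName, list, null, DateTime.Now.AddMinutes(5),
                    System.Web.Caching.Cache.NoSlidingExpiration);
            }
        }
    }
    return list;
}

The double check means only the thread that actually repopulates the cache pays for GetDocuments(); everyone else either skips the lock entirely or exits it immediately.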
[ "travel.stackexchange", "0000024914.txt" ]
Q: Manual or Automatic transmission: Do rental companies let you choose which type of car you want to rent? Say I'm a foreigner renting a car in Brazil. What kinds of transmission are common at Brazilian rental car agencies? Do they let you choose if you ask? I know standard (aka manual) transmission is, well, the most "standard" or popular type of car everywhere except the US, but in case I want to split driving with someone else who doesn't know how to drive one... A: According to TripAdvisor it's rare to get an automatic transmission in Brazil, and when companies do have them, they're usually more expensive. Your best bet is to contact the company before you get to Brazil (from comments of yours on other questions I gather you've booked through Modiva) and find out if you can reserve one. I suspect they'll be getting similar questions from others, especially from the US, so they should be able to clarify. A: Well, in Brazil you can get an automatic transmission, but like @Mark said, it's more expensive. However, be careful when renting "popular" models with manual transmission: in general, the cheapest models lack safety features (no ABS, no airbags...). There are some companies you can contact: Avis, Locaralpha, Mistercar, Localiza
[ "stackoverflow", "0004312912.txt" ]
Q: Strategy Pattern the right thing? I hope you can help me with my problem: I have a class doing SOAP calls, but if the SOAP definition changes I'll have to write a new class or inherit from it, etc. So I came up with something like this: switch(version) { case "1.0": soapV1.getData() case "2.0": soapV2.getData() } Well, pretty bad code, I know. Then I read about the Strategy pattern and I thought, wow, that's what I need to get rid of this bad switch-case thing: abstract class SoapVersion { public SoapVersion GetSoapVersion(string version) { //Damn switch-case thing //with return new SoapV1() and return new SoapV2() } public virtual string[] getData() { //Basic Implementation } } class SoapV1:SoapVersion { public override string[] getData() { //Detail Implementation } } class SoapV2:SoapVersion { //the same as SoapV1 } But I can't avoid using "ifs" or switch cases in my code. Is this possible using OO techniques?? A: That's more or less the right way to do this in a beautiful fashion. At some point in your code, you'll have to make a decision whether v1 or v2 has to be used, so you'll have to have a conditional statement (if or switch) anyway. However, when using a strategy and a factory (factory method or factory class), you've centralized that decision. I would make my factory method on the abstract class static though. Also, I would make use of the template-method pattern: that is, a public, non-overridable GetData method which calls a protected virtual (abstract) method that should be overridden in a concrete implementation. public abstract class SoapProcessor { protected SoapProcessor() { /* protected constructor since public is of no use */ } public static SoapProcessor Create( SoapVersion version ) { switch( version ) { case SoapVersion.Version1 : return new SoapV1Processor(); case SoapVersion.Version2 : return new SoapV2Processor(); default: throw new NotSupportedException(); } } public string[] GetData() { return GetDataCore(); } protected abstract string[] GetDataCore(); }
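For completeness, calling code only ever sees the abstract type - something like the following, where SoapVersion is assumed to be an enum with Version1/Version2 members as implied above:

SoapProcessor processor = SoapProcessor.Create(SoapVersion.Version1);
string[] data = processor.GetData();

The single switch inside Create is then the only place that knows about the concrete classes; everything else depends on SoapProcessor, which is usually as close as you can get to removing the conditional entirely.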
[ "stackoverflow", "0026430444.txt" ]
Q: How To Apply Trigger To Image I'm very new to WPF (and quite frankly I don't know why WinForms even exists because in my opinion it's FAR inferior to WPF), so I'm still not quite in the swing of things. I have a TabControl, and inside each TabHeader is an image. Essentially, I just want the selected TabItem to have an Image with a gaussian blur radius of 2 and all the non-selected TabItems to have an Image with a gaussian blur of 8. I've been looking through a lot of material on XAML, WPF, triggers, etc. and I'm just overwhelmed with information. Could someone help me out? A: You can achieve that by changing Effect on the image depending on TabItem.IsSelected. Lets say this is your Image in the Header <Image Source="..."> <Image.Style> <Style TargetType="{x:Type Image}"> <Setter Property="Effect"> <Setter.Value> <BlurEffect Radius="8"/> </Setter.Value> </Setter> <Style.Triggers> <DataTrigger Binding="{Binding RelativeSource={RelativeSource Mode=FindAncestor, AncestorType={x:Type TabItem}}, Path=IsSelected}" Value="True"> <Setter Property="Effect"> <Setter.Value> <BlurEffect Radius="2"/> </Setter.Value> </Setter> </DataTrigger> </Style.Triggers> </Style> </Image.Style> </Image> basically you create DataTrigger which binding goes up the visual tree to TabItem and trigger on IsSelected=true
[ "stackoverflow", "0049619817.txt" ]
Q: Allauth Custom Provider URLs Where/how does one hook up a custom provider URL patterns? I can't find anywhere in the code that automatically installs the providers... e.g. allauth.socialaccount.providers.shopify URLs. My custom provider worked on older versions like 0.2x.x but now I am getting reversal errors in the provider list template because the URLs are not registered A: The allauth installation instructions ask you to include allauth.urls in your URL conf. url(r'^accounts/', include('allauth.urls')), Then, in allauth.urls, the code loops through the providers and registers the provider urls. You don't have to hook up the provider url patterns manually.
[ "stackoverflow", "0011227890.txt" ]
Q: Perl Plucene Index Search Fooling around more with the Perl Plucene module and, having created my index, I am now trying to search it and return results. My code to create the index is here...chances are you can skip this and read on: #usr/bin/perl use Plucene::Document; use Plucene::Document::Field; use Plucene::Index::Writer; use Plucene::Analysis::SimpleAnalyzer; use Plucene::Search::HitCollector; use Plucene::Search::IndexSearcher; use Plucene::QueryParser; use Try::Tiny; my $content = $ARGV[0]; my $doc = Plucene::Document->new; my $i=0; $doc->add(Plucene::Document::Field->Text(content => $content)); my $analyzer = Plucene::Analysis::SimpleAnalyzer->new(); if (!(-d "solutions" )) { $i = 1; } if ($i) { my $writer = Plucene::Index::Writer->new("solutions", $analyzer, 1); #Third param is 1 if creating new index, 0 if adding to existing $writer->add_document($doc); my $doc_count = $writer->doc_count; undef $writer; # close } else { my $writer = Plucene::Index::Writer->new("solutions", $analyzer, 0); $writer->add_document($doc); my $doc_count = $writer->doc_count; undef $writer; # close } It creates a folder called "solutions" and various files to it...I'm assuming indexed files for the doc I created. Now I'd like to search my index...but I'm not coming up with anything. Here is my attempt, guided by the Plucene::Simple examples of CPAN. This is after I ran the above with the param "lol" from the command line. #usr/bin/perl use Plucene::Simple; my $plucy = Plucene::Simple->open("solutions"); my @ids = $plucy->search("content : lol"); foreach(@ids) { print $_; } Nothing is printed, sadly )-=. I feel like querying the index should be simple, but perhaps my own stupidity is limiting my ability to do this. A: Three things I discovered in time: Plucene is a grossly inefficient proof-of-concept and the Java implementation of Lucene is BY FAR the way to go if you are going to use this tool. Here is some proof: http://www.kinosearch.com/kinosearch/benchmarks.html Lucy is a superior choice that does the same thing and has more documentation and community (as per the comment on the question). How to do what I asked in this problem. I will share two scripts - one to import a file into a new Plucene index and one to search through that index and retrieve it. A truly working example of Plucene...can't really find it easily on the Internet. Also, I had tremendous trouble CPAN-ing these modules...so I ended up going to the CPAN site (just Google), getting the tar's and putting them in my Perl lib (I'm on Strawberry Perl, Windows 7) myself, however haphazard. Then I would try to run them and CPAN all the dependencies that it cried for. This is a sloppy way to do things...but it's how I did them and now it works. #usr/bin/perl use strict; use warnings; use Plucene::Simple; my $content_1 = $ARGV[0]; my $content_2 = $ARGV[1]; my %documents; %documents = ( "".$content_2 => { content => $content_1 } ); print $content_1; my $index = Plucene::Simple->open( "solutions" ); for my $id (keys %documents) { $index->add($id => $documents{$id}); } $index->optimize; So what does this do...you call the script with two command line arguments of your choosing - it creates a key-value pair of the form "second argument" => "first argument". Think of this like the XMLs in the tutorial at the apache site (http://lucene.apache.org/solr/api/doc-files/tutorial.html). The second argument is the field name. 
Anywho, this will make a folder in the directory the script was run in - in that folder will be files made by lucene - THIS IS YOUR INDEX!! All we need to do now is search that index using the power of Lucene, something made easy by Plucene. The script is the following: #usr/bin/perl use strict; use warnings; use Plucene::Simple; my $content_1 = $ARGV[0]; my $index = Plucene::Simple->open( "solutions" ); my (@ids, $error); my $query = $content_1; @ids = $index->search($query); foreach(@ids) { print $_."---seperator---"; } You run this script by calling it from the command line with ONE argument - for example's sake let it be the same first argument as you called the previous script. If you do that you will see that it prints your second argument from the example before! So you have retrieved that value! And given that you have other key-value pairs with the same value, this will print those too! With "---seperator---" between them!
[ "stackoverflow", "0055483629.txt" ]
Q: Separate digits from integer (without string functions) I have an integer with only two digits, let's say n = 52, and I want to separate the two digits, like 5 and 2. Left digit: int left = (n / 10); This gives me left = 5 for n = 52. Right digit: int right = (int)(((n / 10f) - (n / 10)) * 10) Error The left digit is always correct, but the right digit is sometimes right and sometimes wrong; here are the test cases: 1. 29, 48, 10, 50 : Correct 2. 52 : Wrong, gives 5, 1 3. 99 : Wrong, gives 9, 8 4. 26 : Wrong, gives 2, 5 A: int n = 52; Solution 1: int left = int.Parse(n.ToString().Substring(0, 1)); int right = int.Parse(n.ToString().Substring(1, 1)); Solution 2: int left = n / 10; int right = n % 10; The float-based version fails because values like n / 10f (e.g. 5.2f) cannot be represented exactly, so the subtraction and multiplication can land just below the expected integer and the cast to int truncates it.
[ "stackoverflow", "0043030871.txt" ]
Q: Anaconda-Navigator - Ubuntu 16.04 This is Ubuntu 16.04. I can open Anaconda-Navigator from the terminal using anaconda-navigator, but when I click on its icon, it doesn't open. What am I missing? A: To run anaconda-navigator: $ source ~/anaconda3/bin/activate root $ anaconda-navigator A: This works: export PATH=/home/yourUserName/anaconda3/bin:$PATH After that, run the anaconda-navigator command. Remember that anaconda can't be run with sudo, so don't use sudo at all. A: I am using Ubuntu 16.04. When I installed anaconda I was facing the same problem. I tried this and it resolved my problem. Step 1: $ conda install -c anaconda anaconda-navigator Step 2: $ anaconda-navigator Hope it helps.
[ "stackoverflow", "0027392228.txt" ]
Q: Select text in ::before or ::after pseudo-element Look this very simple code <!DOCTYPE html> <html> <head> <style> p::before { content: "Before - "; } </style> </head> <body> <p>Hello</p> <p>Bye</p> </body> </html> Css adds ¨Before -" at the start of every <P> and renders like this If you use your mouse to select the text (for copy paste) you can select the original text, but not the Before or Aftter text added by css. I have a very specific requirement where I need to allow users to select with the mosue the Before text. How would you do it? A: This cannot be done within an HTML page, but as a hack, if you need to copy/paste pseudo elements, you can export the page to PDF and copy from it. In Chrome, for example, you can copy page's content from print preview. A: You can't, the before and after pseudo classes are not meant to be used for that purpose. A: If users need to select the generated text, it should be considered content (and in the HTML), not presentation (and hidden in the CSS). Don't use ::before or ::after, use HTML. You could use JavaScript to insert it (if that would help): var text = document.createTextNode('Before - '); Array.prototype.forEach.call(document.getElementsByTagName('p'), function (p) { p.insertBefore(text.cloneNode(), p.firstChild); }); This way the text itself is present in the DOM, unlike generated content from CSS.
[ "math.stackexchange", "0003219252.txt" ]
Q: Why does this square root simplify so much? I was messing about with a dot product, trying to simplify an expression, when I came across this equality by graphing. Why would these expressions be equal? $\cos(2x)+r^2=\sqrt{r^4+\frac{r^2h^2}{2}\cos(2x)+\frac{h^4}{16}}$ I noticed that: $r^4+\frac{r^2h^2}{2}+\frac{h^4}{16}=(r^2+\frac{h^2}{4})^2$ But the $\cos(2x)$ in there really makes it tough. Also, somehow the $h$ factors out completely? Very strange! Hope somebody can figure it out! Edit: Oops! I happened to only be looking at the function for situations where r >> h. When this isn't true, the equations becomes obviously different. The square rooted equation seemed so close to simplifying though :( A: For fixed $r,x$, the left hand side is constant, but the right hand side varies monotonically with $h$. So the equality cannot hold. To illustrate the point further, for very large $h$, the RHS is large, $\sim \frac{h^2}4$, whereas the LHS doesn't change.
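A concrete instance: for $r=1$, $h=4$, $x=0$ the left side is $\cos 0 + 1 = 2$ while the right side is $\sqrt{1+8+16}=5$, so the equality fails outright. When $r$ is much larger than $h$, both sides are dominated by the same $r^2$ term, which is presumably why the two graphs looked identical in that regime.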
[ "stackoverflow", "0027886788.txt" ]
Q: Java program draw a square using for loop? Given: public static void printTriangle(int sideLength) { for (int i = 0; i <= sideLength; i++) { for (int j = 0; j < i; j++){ System.out.print("[]"); } System.out.println(); } } How do you modify the code to print a square with sideLength = 3? [][][] [][][] [][][] A: Like this: public static void printSquare(int sideLength) { for (int i = 0; i < sideLength; i++) { for (int j = 0; j < sideLength; j++) { System.out.print("[]"); } System.out.println(); } }
[ "stackoverflow", "0021787852.txt" ]
Q: Detecting date overlaps without going day by day I am working on a project that involves managing information about what driver had a car on a particular day. Ostensibly the assignments should always have an end_date at least one day prior to the next start_date. Table is like +----+--------+-----------+------------+----------+ | id | car_id | driver_id | start_date | end_date | +----+--------+-----------+------------+----------+ There is a lot of human input from folks who are not really invested in this process, editing of old rows, and overlaps occur. Now I can easily imagine running many queries using GROUP BY car_id for a given date and seeing if you have more than one row for a car on date x. What I would love to sort out is a single query that indicates all row ids which have an overlapping dates for any one car. Can anyone point me in the right direction? A: This query will return pairs of rows that overlap: select r1.id, r2.id from rentals r1 join rentals r2 on r1.car_id = r2.car_id and r1.id != r2.id where (r1.start_date < r2.end_date or r2.end_date is null) and r1.end_date > r2.start_date; This just compares each date range to every other date range for the same car_id. More info on overlapping dates.
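If you want a flat list of the offending row ids rather than the pairs, one option (using the same assumed table and column names as above) is:

select distinct r1.id
from rentals r1
join rentals r2
  on r1.car_id = r2.car_id
 and r1.id != r2.id
where (r1.start_date < r2.end_date or r2.end_date is null)
  and r1.end_date > r2.start_date;

Each id returned overlaps with at least one other assignment for the same car.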
[ "ell.stackexchange", "0000163125.txt" ]
Q: twice the size of a sphere/cube I want to know the most commonly understood meaning of phrases like "twice the size of" or "three times the size of" 3D objects. If I say "this ball is twice the size of a table tennis ball", do native English speakers envision something with twice a table tennis' ball diameter? Or twice its volume? Or something else? What about cubes? What should I picture when someone describes an object using these kinds of descriptions? A: It's ambiguous. I think that if you say "twice the size" of a 3D object, that SHOULD mean twice the volume. But many people say that meaning "twice the diameter" or "twice the length of a side". If you want to be clear, you should use different words. Afterthought I read a book on statistics once that talked about misleading presentations of statistics. One example they gave was a graph that purported to show differences in income between various groups. The graph had a picture of a money bag for each group. And the HEIGHT of the money bag was proportional to the income of that group, e.g. if group X made twice as much money as group Y, then group X's bag was twice as tall. But, the writer pointed out, this gave a very misleading impression, because the bag in the picture was two dimensional, and so if X's bag was twice as tall, it would be 4 times the area. And the bags depicted 3-dimenstional objects, so X's bag would be 8 times the volume. It exaggerated the differences tremendously.
[ "stackoverflow", "0018157520.txt" ]
Q: MySQL Syntax Error Error When calling my function a MySQL Syntax Error appears, any ideas? You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '' at line 1 function getComments1() { $query = mysql_query("SELECT * FROM comments") or die(mysql_error()); while($post = mysql_fetch_assoc($query)) { echo $post['Author']; } } function addComment($cName, $cContent) { $query = mysql_query("INSERT INTO comments VALUES(null,'$cName','$cContent'") or die(mysql_error()); } <?php include('includes/functions.php'); mysql_set_charset ( "utf8" ); if(isset($_POST['submit'])) { if(isset($_POST['CommentName'])) { if(isset($_POST['CommentContent'])) { addComment($_POST['CommentName'],$_POST['CommentContent']); header("Location: derger.php"); } else { "text missing"; } } else { echo "name missing"; include('herger.php'); } } else { header("Location: werger.php"); } ?> A: ("INSERT INTO comments VALUES(null,'$cName','$cContent'") You didn't close the values. should be: ("INSERT INTO comments VALUES(null,'$cName','$cContent')") Also, as everyone here will remind you, mysql_ functions are deprecated. You should be using PDO or MySQLi. A simple google search on either will give you plenty of resources to transition into.
[ "stackoverflow", "0009177319.txt" ]
Q: What would be the best decision for separator markup? I want to insert a separator between blocks and content that are rounded with the CSS3 border-radius property (see the picture below). What would be the best way to mark up the separator? A: You could apply a faux-column approach to the parent element (#sheet?). I'm not sure if that will meet all of your requirements. From a design point of view, if both pieces of content are already contained in a shape, would a separator really be necessary? It just seems like it would add extra clutter.
[ "math.stackexchange", "0000127443.txt" ]
Q: How to find the equation of a line Can anyone help? I have the following equation that is in point slope form: $$y-3 = {\textstyle{3\over11}}(x-4).$$ I now need to get this equation into THIS form: $$3x-11y = -21.$$ The first step is to do the multiplication on the right hand side to give this answer: $\ \ \ \ y-3 = {3\over11}x - ({3\over11} \cdot 4)$. Then to get the $y$ alone on the left, move the $3$ over giving: $\ \ \ \ y = {3\over11}x - ({3\over11}\cdot 4) - 3$. Then do the calculation on the right to give: $\ \ \ \ y = {3\over11}x + 1.90909090$. But how do I get rid of the fraction $3/11$? Plus the values don't seem like they will add up? Seems like a primary school question but I can't figure this out. Any help would be appreciated. In case your wondering, I have taken this from a tutorial found here (in step 3 of the tutorial). PS: sorry if tag is wrong but I did not know what to add it under A: A sensible option is to multiply each term by 11 at the start, to get rid of the fraction. (actually, we're multiplying each side of the equation by 11) $$y-3 = \frac{3}{11}(x-4)$$ leads to: $$11y - 33 = 3(x-4)$$ I think you can continue from there! Let me know if you need more help. If you prefer, you can also do the same thing (multiply by 11) after the last line of your work.
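Carrying it through from there: $$11y - 33 = 3(x-4) = 3x - 12 \;\Longrightarrow\; 11y = 3x + 21 \;\Longrightarrow\; 3x - 11y = -21.$$ Note also that the $1.90909\ldots$ you computed is just $\frac{21}{11}$, so your arithmetic was fine; the $-3$ in that line should have read $+3$ after moving it across, which is where the confusion crept in.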
[ "stackoverflow", "0007510482.txt" ]
Q: Where exactly do we need exception handling for web services on iPhone? I am a new iPhone developer and totally new to web services as well. I used http://www.sudzc.com/ to generate my Objective-C code from my WSDL. I need to know where exactly I need to handle exceptions in this code, or does the code generated by SudzC take care of the exception handling itself? A: Look at the example code generated for your web service. It gives you the generic layout for the error handlers. It should look something like this: // Handle the response from webserviceConnection. - (void) webserviceConnectionHandler: (BOOL) value { // Do something with the BOOL result NSLog(@"webserviceConnection returned the value: %@", [NSNumber numberWithBool:value]); }
[ "stackoverflow", "0028160601.txt" ]
Q: what is the difference between a[0] and &a[0] in string string a = "asdf"; cout<<&a[0]; cout<<a[0]; Why are these two outputs different? Why is &a[0] not the address but the whole string? A: &a[0] has type char *. Stream operator << is deliberately overloaded for const char * arguments to output zero-terminated string (C-style string) that begins at that address. E.g. if you do const char *p = "Hello World!"; cout << p; it is that overloaded version of << that makes sure the "Hello World!" string itself is sent to output, not the pointer value. And this is exactly what makes your code to output the entire string as well. Since C++11 std::string objects are required to store their data as zero-terminated strings and &a[0] is nothing else than a pointer to the beginning of the string stored inside your a object.
[ "stackoverflow", "0022749414.txt" ]
Q: Adding objects to a list I have the following code: public class BaseEmployee { public bool Status {get;set;} public DateTime DateOfJoining {get;set;} } public class Employee : BaseEmployee { public string Name {get;set;} public string City {get;set;} public string State {get;set;} } foreach(var record in records) { var employee = GetDefaultBaseEmployeeProperties(); employee.Name = record.Name; employee.State = record.State; employee.City = record.city; Department.Employees.Add(employee); } When I do this, all the employees end up with the same name, city and state as the last employee added. So to get around the reference problem I did Department.Employees.Add(new Employee { Name = record.Name, City = record.City, State = record.State }); But the problem with this approach is that I lose the BaseEmployee properties in the employee object. I need a way of adding the employee to Department.Employees with the base properties retained. Any ideas, without touching the base class? FYI: moving the base class properties to the Employee class is not an option. A: If the behavior you describe really occurs with the code you posted, there is only one conclusion: GetDefaultBaseEmployeeProperties() returns the same Employee instance every time it is called. This is bad, as you have witnessed. Fix GetDefaultBaseEmployeeProperties() to make it return a new Employee instance every time. EDIT: If you cannot change GetDefaultBaseEmployeeProperties(), you can copy the properties as follows: var template = GetDefaultBaseEmployeeProperties(); foreach(var record in records) { var employee = new Employee(); // create a *new* Employee instance employee.Status = template.Status; // copy default properties employee.DateOfJoining = template.DateOfJoining; employee.Name = record.Name; // fill Employee with new values employee.State = record.State; employee.City = record.city; Department.Employees.Add(employee); }
[ "gamedev.stackexchange", "0000126245.txt" ]
Q: What type of Collider should I add to a LineRenderer? I am currently building a game where I need to generate some wires (as Bézier curves) using LineRenderer. After changing the width of each line lineRenderer.SetWidth (0.15f, 0.15f); I am attaching an EdgeCollider2D to the same game object (specifying the same points used in generating the Bézier curve). This seems to work. However, I would like to be able to know if the Mouse Pointer is at any time over the EdgeCollider2D (something similar to the popular Fruit Ninja game). I'm using Raycast as follows: void CastRay() { Ray ray = Camera.main.ScreenPointToRay (Input.mousePosition); RaycastHit2D hit = Physics2D.Raycast (ray.origin, ray.direction, Mathf.Infinity); if (hit) { Debug.Log (hit.collider.gameObject.name); } } I'm calling the CastRay method inside the Update method. Because EdgeCollider2D does not have width (as expected), the log message is never produced. Do I need to use a different type of Collider? Do I need to call CastRay somewhere else? A: Ok. After some experiments I ended up replacing the EdgeCollider2D with a PolygonCollider2D. I generated the collider points (taking the width of the wire into account) by computing the Euclidean distance between two consecutive points on the curve (you need to iterate through all the points) and the perpendicular on the distance vector (using the cross product). Here's an example: List<Vector2> edgePoints = new List<Vector2> (); PolygonCollider2D collider = new PolygonCollider2D (); for (int j = 0; j < bezierPoints.Count; j++) { Vector2 distanceBetweenPoints = bezierPoints[j-1] - bezierPoints[j]; Vector3 crossProduct = Vector3.Cross (distanceBetweenPoints, Vector3.forward); Vector2 up = (wireWidth / 2) * new Vector2(crossProduct.normalized.x, crossProduct.normalized.y) + bezierPoints [j-1]; Vector2 down = -(wireWidth / 2) * new Vector2(crossProduct.normalized.x, crossProduct.normalized.y) + bezierPoints [j-1]; edgePoints.Insert(0, down); edgePoints.Add(up); if (j == bezierPoints.Count - 1) { // Compute the values for the last point on the Bezier curve up = (wireWidth / 2) * new Vector2(crossProduct.normalized.x, crossProduct.normalized.y) + bezierPoints [j]; down = -(wireWidth / 2) * new Vector2(crossProduct.normalized.x, crossProduct.normalized.y) + bezierPoints [j]; edgePoints.Insert(0, down); edgePoints.Add(up); } } collider.points = edgePoints.ToArray(); This will generate a Polygon collider that will wrap the entire wire (curve) based on the wireWdth. I wanted to post the solution in case someone else is having similar issues.
[ "stackoverflow", "0038488219.txt" ]
Q: JQuery clear divs when click off input but not off page I am trying clear a div when a user clicks or tabs off of an input, but I don't want the div cleared if the user changes to a different window or browser tab. What I have so far clears the div in both cases: $input.on('blur focus mousedown', function () { $input.val(""); $("#cors-test-invalid").html(""); $("#cors-test-views").html(""); $("#cors-test-workbooks").html(""); $("#cors-test-datasources").html(""); $("#cors-test-projects").html(""); $("#cors-test-users").html(""); }); A: You can do it like this $(document.body).on('click',function(){ //here its a click on body which eliminates the other tabs or windows if(!$input.is(":focus")) { //here element doesnt have focus, so we are good to do the logic $input.val(""); $("#cors-test-invalid").html(""); $("#cors-test-views").html(""); $("#cors-test-workbooks").html(""); $("#cors-test-datasources").html(""); $("#cors-test-projects").html(""); $("#cors-test-users").html(""); } });
[ "stackoverflow", "0036252993.txt" ]
Q: Rspec & FactoryGirl 'Cannot call create unless parent is saved' I'm sure that my issue is pretty easy to resolve. However, I cannot find how or where to resolve it. Currently I am trying to test the #follow method on my User model. Here is the test that I have: describe "#follow & #following?" do before(:each) do @other_user = FactoryGirl.create(:user) end it "returns false for user following other_user" do expect(@user.following?(@other_user)).to eq(false) end it "returns true for user following other_user" do @user.follow(@other_user) expect(@user.following?(@other_user)).to eq(true) end end Here is the #follow method: def follow(other_user) active_relationships.create(followed_id: other_user.id) end The error that is being returned is You cannot call create unless the parent is saved. Obviously the parent in question here is @other_user. Now the first test passes as intended because we obviously aren't running a method that calls create like when we run the #follow method. My question is how would I save this @other_user so that I can create an active_relationship. Here is how @user is being presented: before { @user = FactoryGirl.build(:user) } subject { @user } Also, @user is working with all other tests. When running .persisted? on both @user & @other_user I receive true. A: You @user is not saved, because Factory.build(:user) returns unsaved records. Just change your specs to save @user before you run that particular example. I would write the specs like this: subject(:user) { FactoryGirl.build(:user) } describe "#follow & #following?" do let(:other) { FactoryGirl.create(:user) } it "returns false for user following other_user" do expect(user.following?(other)).to be_false end context "when following" do before do user.save user.follow(other) end it "returns true for user following other_user" do expect(user.following?(other)).to be_true end end end
[ "stackoverflow", "0016512652.txt" ]
Q: Access violation with C++ I am a bit rusty with the C languages, and I have been asked to write a quick little application to take a string from STDIN and replace every instance of the letter 'a' to a letter 'c'. I feel like my logic is spot on (largely thanks to reading posts on this site, I might add), but I keep getting access violation errors. Here is my code: #include <stdio.h> #include <string.h> #include <iostream> #include <algorithm> using namespace std; int main() { printf("Enter a string:\n"); string txt; scanf("%s", &txt); txt.replace(txt.begin(), txt.end(), 'a', 'c'); txt.replace(txt.begin(), txt.end(), 'A', 'C'); printf("%s", txt); return 0; } I can really use some insight. Thank you very much! A: scanf doesn't know what std::string is. Your C++ code should look like this: #include <string> #include <iostream> #include <algorithm> using namespace std; int main() { cout << "Enter a string:" << endl; string txt; cin >> txt; txt.replace(txt.begin(), txt.end(), 'a', 'c'); txt.replace(txt.begin(), txt.end(), 'A', 'C'); cout << txt; return 0; }
[ "ell.stackexchange", "0000053602.txt" ]
Q: what does " tweak your pitch" mean here? But (also like dating) a lot can go wrong. If you’re striking out in your email campaigns, you’ve got to tweak your pitch. Here are nine reasons that marketing emails get rejected – any of these sound familiar? tweak:improve pitch:the quality of a sound governed by the rate of vibrations producing it; the degree of highness or lowness of a tone. A: There's another definition of pitch, it's a script that salesmen memorize to sell a product. Good metaphor, huh? To tweak your pitch in this context means to make small adjustments in whatever strategy you're using to present yourself to your preferred gender.
[ "stackoverflow", "0009041542.txt" ]
Q: Recursive Searching Category and Their Children I have the following class: public class Category { public string Name { get; set; } public Category Parent { get; set; } public string Url { get; set; } public List<Category> Children { get; set; } } Now given an instance of a Category and url. I wish to get the Category where the url matches the category or any of it's children (or their children etc). Therefore my function would have the following signature: public Category FindCategory(Category category, string url); I know recursion is the way to go and i have managed to come up with a solution. However i've definitely seen it done better but i can't find where. I'd appreciate it if someone could show me the easiest and cleanest way to achieve this. Thanks A: In terms of recursion the answer is pretty straight forward. I would prefer the Try pattern over a null return though. bool TryFind(string url, Category current, out Category found) { if (category.Url == url) { found = current; return true; } foreach (var child in current.Children) { if (TryFind(url, child, out found)) { return true; } } found = null; return false; } Your question mentioned you'd seen it done "better". Could you elaborate a bit on this? I'm not quite sure what you mean. A: Here is a simple recursive algorithm: public Category FindCategory(Category category, string url) { if(category.Url == url) { return category; } Category solution = null; foreach(Category child in category.Children) { solution = FindCategory(child, url); if(solution != null) { return solution; } } return null; }
[ "stackoverflow", "0057170891.txt" ]
Q: Write a program that uses a for-loop to iterate through this array, displaying the values stored at every index I have written a small program that holds an array of 10 numbers, I wish for the output of the program to look like this: Index Value 0 2 1 4 2 6 3 8 4 10 5 12 6 14 7 16 8 18 9 20 However I do not know how to make my code do that. This is my code #include<stdio.h> int main() { int a[10] = {2, 4, 6, 8, 10, 12, 14, 16, 18, 20}; // 10 elements int i; for(i=0; i<(&a)[1]-a; i++) printf("%d ", a[i]); printf("\n\n"); return 0; } A: Here is a simple approach how you could do it: #include<stdio.h> int main() { int n = 10; int a[] = {2, 4, 6, 8, 10, 12, 14, 16, 18, 20}; // 10 elements printf("Index\tValue\n"); for(int i=0; i<n; i++){ printf("%d\t%d\n", i, a[i]); } printf("\n\n"); printf("Press any key to continue . . .\n"); getchar(); return 0; }
[ "stackoverflow", "0024091288.txt" ]
Q: AndroidSDK Hello World app giving an invalid format error when debugging on my MotoX? I just started learning the android-SDK, I followed the basic steps to launch a hello world app, however I get this error in the LogCat: 06-06 17:53:49.203: E/Adreno-ES20(17462): <gl_external_unsized_fmt_to_sized:2379>: QCOM> format, datatype mismatch 06-06 17:53:49.203: E/Adreno-ES20(17462): <get_texture_formats:3009>: QCOM> Invalid format! I believe it might be a problem related to my Moto X that I am debugging on? I can see the app itself launching successfully on my phone, however I don't really understand this error so I don't know if it is a big problem or not, I was hoping someone could shed some light on it. A: It is an OpenGL bug related to Adreno GPU on your Moto X but it is in no way related to your application. So just ignore it and keep learning.
[ "stackoverflow", "0062663370.txt" ]
Q: Using Pipeline with GridSearchCV Suppose I have this Pipeline object: from sklearn.pipeline import Pipeline pipe = Pipeline([ ('my_transform', my_transform()), ('estimator', SVC()) ]) To pass the hyperparameters to my Support Vector Classifier (SVC) I could do something like this: pipe_parameters = { 'estimator__gamma': (0.1, 1), 'estimator__kernel': (rbf) } Then, I could use GridSearchCV: from sklearn.model_selection import GridSearchCV grid = GridSearchCV(pipe, pipe_parameters) grid.fit(X_train, y_train) We know that a linear kernel does not use gamma as a hyperparameter. So, how could I include the linear kernel in this GridSearch? For example, In a simple GridSearch (without Pipeline) I could do: param_grid = [ {'C': [ 0.1, 1, 10, 100, 1000], 'gamma': [0.0001, 0.001, 0.01, 0.1, 1], 'kernel': ['rbf']}, {'C': [0.1, 1, 10, 100, 1000], 'kernel': ['linear']}, {'C': [0.1, 1, 10, 100, 1000], 'gamma': [0.0001, 0.001, 0.01, 0.1, 1], 'degree': [2, 3], 'kernel': ['poly']} ] grid = GridSearchCV(SVC(), param_grid) Therefore, I need a working version of this sort of code: pipe_parameters = { 'bag_of_words__max_features': (None, 1500), 'estimator__kernel': (rbf), 'estimator__gamma': (0.1, 1), 'estimator__kernel': (linear), 'estimator__C': (0.1, 1), } Meaning that I want to use as hyperparameters the following combinations: kernel = rbf, gamma = 0.1 kernel = rbf, gamma = 1 kernel = linear, C = 0.1 kernel = linear, C = 1 A: You are almost there. Similar to how you created multiple dictionaries for SVC model, create a list of dictionaries for the pipeline. Try this example: from sklearn.datasets import fetch_20newsgroups from sklearn.pipeline import pipeline from sklearn.feature_extraction.text import CountVectorizer from sklearn.svm import SVC categories = [ 'alt.atheism', 'talk.religion.misc', 'comp.graphics', 'sci.space', ] remove = ('headers', 'footers', 'quotes') data_train = fetch_20newsgroups(subset='train', categories=categories, shuffle=True, random_state=42, remove=remove) pipe = Pipeline([ ('bag_of_words', CountVectorizer()), ('estimator', SVC())]) pipe_parameters = [ {'bag_of_words__max_features': (None, 1500), 'estimator__C': [ 0.1, ], 'estimator__gamma': [0.0001, 1], 'estimator__kernel': ['rbf']}, {'bag_of_words__max_features': (None, 1500), 'estimator__C': [0.1, 1], 'estimator__kernel': ['linear']} ] from sklearn.model_selection import GridSearchCV grid = GridSearchCV(pipe, pipe_parameters, cv=2) grid.fit(data_train.data, data_train.target) grid.best_params_ # {'bag_of_words__max_features': None, # 'estimator__C': 0.1, # 'estimator__kernel': 'linear'}
[ "tex.stackexchange", "0000063696.txt" ]
Q: Institute name in beamer presentation I am trying to put the name of the institution in a beamer presentation:
\documentclass{beamer}
\institution{XYZ}
...
\begin{document}
\maketitle

However, I get the error that it is an undefined control sequence. What could the mistake be?

A: The right command is \institute{...} (see the beamer user guide, section 10.1 "Adding a Title Page", for reference). A better way to proceed is:
\documentclass{beamer}
\usepackage{lmodern}

\title{My title}
\author{My name}
\institute{My institute}
\date{\today}

\begin{document}

\begin{frame}
\titlepage
\end{frame}

...

\end{document}
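\institute also accepts an optional short form, which some themes show in the footline; a small illustrative example with placeholder names:
\institute[XYZ]{XYZ University, Department of Mathematics}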
[ "stackoverflow", "0044372287.txt" ]
Q: Value contains many text in Conditional Format Rules Google Sheet I want to set conditional format rules in my Google spreadsheet. For the format cell rule, I select "Text contains", then I type the values "PA, MA, CT, NY", and for the formatting style I choose red. When I click DONE, the columns containing these words don't show the color. I don't want to create separate rules for "PA", "MA", "CT" and "NY" one by one. How can I fix this? Thanks for helping.

A: Select the relevant range (I am assuming it starts at A1) and clear any existing CF rules from it. Then go to Format, Conditional formatting..., Format cells if..., choose Custom formula is, and enter:
=regexmatch(A1,"PA|MA|CT|NY")
with red fill, then Done. This should format any cell that contains any of the four state abbreviations (that is, both as part of the content of a cell and as all of the content of a cell). For example, it would also format PACT, since substrings match, but because the match is case-sensitive it would not format a lowercase value such as many (even though it contains "ma").
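If matching substrings such as PACT is not wanted, word boundaries should restrict the rule to the standalone abbreviations. This is an untested sketch that relies on REGEXMATCH accepting RE2 word-boundary syntax:
=REGEXMATCH(A1,"\b(PA|MA|CT|NY)\b")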
[ "stackoverflow", "0056777620.txt" ]
Q: How to create a for loop to tune lambda and alpha for glmm elastic net? I am using the MMS package in R to conduct an elastic net regression on a GLMM. What I want to do is tune the lambda (or mu, which is what it is called in the package) and alpha values, and select the best combination of alpha and lambda. Currently I have created a for loop that loops through alpha values from 0.1 to 0.9, and I am trying to do the same with lambda (mu), but it doesn't work. What I want is that for each alpha (say 0.1) each lambda value in the sequence is used. For example, for alpha = 0.1 I want lambda = seq(10, by = -1) to be tried, and I want this for each alpha value.
mu <- seq(10, by = -1)

for (i in 1:9) {
  for (j in mu) {
    fit.name <- paste0("alpha ", i/10)
    list.of.fits[[fit.name]] <- lassop(X, Y, Z, grp = g, alpha = i/10, mu = j)
  }
}

The output in list.of.fits is a list of fits using different alpha values, but the mu (lambda) is 1 when I want it to iterate from 1 to 10 for each alpha value.
X = matrix of fixed effects, first column is intercept of 1
Y = vector of response variable
Z = 1 random effect
grp = group variable

A: Here's another solution that might be faster.
library(parallel)  # provides mclapply

mu <- seq(10)
alpha <- 1:9/10

df.of.params <- data.frame(expand.grid(mu, alpha))
names(df.of.params) <- c('mu', 'alpha')
list.of.params <- split(df.of.params, seq(nrow(df.of.params)))

list.of.fits <- mclapply(list.of.params, function(params) {
  lassop(X, Y, Z, grp = g, alpha = params$alpha, mu = params$mu)
})

If you don't want to parallelize, just change mclapply to a regular lapply.
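To keep track of which fit corresponds to which parameter pair, the list entries can be named from the same parameter table. This sketch only labels the results and does not depend on what lassop returns:
names(list.of.fits) <- sapply(list.of.params, function(p) paste0("alpha_", p$alpha, "_mu_", p$mu))

# e.g. list.of.fits[["alpha_0.1_mu_10"]] is the fit for alpha = 0.1, mu = 10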
[ "stackoverflow", "0038762731.txt" ]
Q: Get storage space used in a folder in Firebase Storage I'm creating a Firebase app that you can use to upload files. How can I get the amount of space used by a user in his folder (users/{userId}/{allPaths=**})?

A: Great question. In short, there's no easy way to do this (even for us!), since it effectively requires recursing over an entire set of files and summing them all up. It's a pretty big mapreduce that isn't efficient to run every time a file is uploaded. We do, however, return the size of an individual file in the metadata.size property, so you can perform your own list call on a server (look at the gcloud client libraries), which will give you a list of files and "folders". Take the sizes of the files and add them up, then recurse and do the same for all subfolders. Sum the totals and write them to something like the Firebase Realtime Database, where you can easily grab the folder sizes from clients.
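A rough sketch of that server-side summing with the Node.js @google-cloud/storage client; the bucket name and prefix below are placeholders, and because Cloud Storage object names are flat, listing by prefix already covers the nested "subfolders":
const { Storage } = require('@google-cloud/storage');

async function folderSizeBytes(bucketName, prefix) {
  // lists every object whose name starts with the prefix
  const [files] = await new Storage().bucket(bucketName).getFiles({ prefix });
  // metadata.size is a string containing the object size in bytes
  return files.reduce((total, file) => total + Number(file.metadata.size), 0);
}

folderSizeBytes('my-app.appspot.com', 'users/someUserId/')
  .then(bytes => console.log('bytes used:', bytes));

The result could then be written to the Realtime Database under a per-user key so clients can read it cheaply.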