prompt,ground_truth "Swift 5: How to add Marker on GoogleMap on tap/touch with coordinates? *I was learning GoogleMap-SDK for iOS, in the process i did searched for the above question and found satisfactory answers but not the one that's practically useful.* E.G. : [Swift 3 google map add markers on touch](https://stackoverflow.com/questions/43877300/swift-3-google-map-add-markers-on-touch/43879138) It adds Marker but does not have Place Name or Place Details and without that Marker is not that useful and for that i have to search other answers to get Address from the coordinates. So , here i have combined both answers to save time of fellow developers and make it Marker more practical. ","**For Swift 5.0+** > > First, make sure you have added `GMSMapViewDelegate` delegate to your ViewController Class > > > We do not need `UILongPressGestureRecognizer` or `UITapGestureRecognizer` for this, `GMSMapViewDelegate` provides convenient default method for this. ``` ///This default function fetches the coordinates on long-press on `GoogleMapView` func mapView(_ mapView: GMSMapView, didLongPressAt coordinate: CLLocationCoordinate2D) { //Creating Marker let marker = GMSMarker(position: coordinate) let decoder = CLGeocoder() //This method is used to get location details from coordinates decoder.reverseGeocodeLocation(CLLocation(latitude: coordinate.latitude, longitude: coordinate.longitude)) { placemarks, err in if let placeMark = placemarks?.first { let placeName = placeMark.name ?? placeMark.subThoroughfare ?? placeMark.thoroughfare! ///Title of Marker //Formatting for Marker Snippet/Subtitle var address : String! = """" if let subLocality = placeMark.subLocality ?? placeMark.name { address.append(subLocality) address.append("", "") } if let city = placeMark.locality ?? placeMark.subAdministrativeArea { address.append(city) address.append("", "") } if let state = placeMark.administrativeArea, let country = placeMark.country { address.append(state) address.append("", "") address.append(country) } // Adding Marker Details marker.title = placeName marker.snippet = address marker.appearAnimation = .pop marker.map = mapView } } } ``` ***HOPE IT HELPS !!!*** " "C++ Using the function of the class two class above I have the current setup: ``` class Interface1 { public: virtual ~Interface1() {} virtual void DoSomething() = 0; }; class Interface2 : public virtual Interface1 { public: virtual ~Interface2() {} virtual void DoSomething() override = 0; virtual void DoSomethingElse() = 0; }; class MyClass1 : public Interface1 { public: MyClass1(); void DoSomething() override; }; class MyClass2 : public Interface2 { public: MyClass2(); void DoSomething() override; void DoSomethingElse() override; }; int main() { std::map items; items.insert(make_pair(""item1"", shared_ptr(new MyClass1()))); items.insert(make_pair(""item2"", shared_ptr(new MyClass2()))); auto object = items.at(""item2""); auto item = boost::any_cast>(object); item->DoSomething(); return 0; } ``` When I run this code, nothing happens. `MyClass2` doesn't appear to be calling `DoSomething()`, which is what I would like. How can I make the call to `Interface1::DoSomething()` actually call `Interface2::DoSomething()`? I would think it would be possible because they all inherit from each other, but I can't seem to make it work. The reason I want this is because I have some functions which will only work with classes inherited from `Interface2`, but some functions need to support classes derived from either `Interface1` and `Interface2`. 
Once boost::any takes over I loose which type it originally was, but it shouldn't be a problem if I could use the setup described above, so even if my original class was derived from `Interface2`, it could call the same function in `Interface1` and get the same result. Is there a way of doing what I want with the current setup? EDIT: Sorry, the `void` in front of the constructors where my bad, but that is not the issue. ","Why do you need the `boost::any`? If you need to determine the difference between `Interface1` and `Interface2`, and you have a `std::shared_pointer` stored in your map, then just store a `std::shared_pointer` and use `std::dynamic_pointer_cast` to determine whether you have an `Interface1` or an `Interface2` **Example:** ``` #include #include #include class Interface1 { public: virtual ~Interface1() = default; virtual void DoSomething() = 0; }; class Interface2 : public Interface1 { public: virtual ~Interface2() = default; virtual void DoSomethingElse() = 0; }; class MyClass1 : public Interface1 { public: MyClass1() {} void DoSomething() override { std::cout << ""\t\t"" << __PRETTY_FUNCTION__ << '\n'; } }; class MyClass2 : public Interface2 { public: MyClass2() {} void DoSomething() override { std::cout << ""\t\t"" << __PRETTY_FUNCTION__ << '\n'; } void DoSomethingElse() override { std::cout << ""\t\t"" << __PRETTY_FUNCTION__ << '\n'; } }; int main() { std::map> items; items.emplace(""item1"", std::make_shared()); items.emplace(""item2"", std::make_shared()); auto check = [&items](const std::string& name) { auto object = items.at(name); auto item = std::dynamic_pointer_cast(object); if (item) { std::cout << name << "" is an Interface2\n""; item->DoSomething(); item->DoSomethingElse(); } else { std::cout << name << "" is an Interface1\n""; object->DoSomething(); } }; check(""item1""); check(""item2""); return 0; } ``` **Output:** ``` item1 is an Interface1 virtual void MyClass1::DoSomething() item2 is an Interface2 virtual void MyClass2::DoSomething() virtual void MyClass2::DoSomethingElse() ``` **Some final notes:** - I also question the need for virtual inheritance between `Interface2` and `Interface1` - I don't believe you need to override `DoSomething` in `Interface2` - it's already there by publically inheriting from `Interface1` `virtual void DoSomething() override = 0;` is unnecessary " "Julia: Macro threads and parallel as we know, Julia supports parallelism and this is something rooted in the language which is very good. I recently saw that Julia supports threads but it seems to me to be experimental. I noticed that in the case of using the `Threads.@Threads` macro there is no need for Shared Arrays which is perhaps a computational advantage since no copies of the objects are performed. I also saw that there is the advantage of not declaring all functions with `@everywhere`. Can anyone tell me the advantage of using the `@parallel` macro instead of the `@threads` macro? Below are two simple examples of using non-synchronized macros for parallelism. 
**Using the @threads macro** ``` addprocs(Sys.CPU_CORES) function f1(b) b+1 end function f2(c) f1(c) end result = Vector(10) @time Threads.@threads for i = 1:10 result[i] = f2(i) end ``` *0.015273 seconds (6.42 k allocations: 340.874 KiB)* **Using the @parallel macro** ``` addprocs(Sys.CPU_CORES) @everywhere function f1(b) b+1 end @everywhere function f2(c) f1(c) end result = SharedArray{Float64}(10) @time @parallel for i = 1:10 result[i] = f2(i) end ``` *0.060588 seconds (68.66 k allocations: 3.625 MiB)* It seems to me that for Monte Carlo simulations where loops are mathematically independent and there is a need for a lot of computational performance the use of the `@threads` macro is more convenient. What do you think the advantages and disadvantages of using each of the macros? Best regards. ","Here is my experience: ### Threads Pros: - shared memory - low cost of spawning Julia with many threads Cons: - constrained to a single machine - number of threads must be specified at Julia start - possible problems with false sharing () - often you have to use locking or atomic operations for the program to work correctly; in particular many functions in Julia are not threadsafe so you have to be careful using them - not guaranteed to stay in the current form past Julia 1.0 ### Processess Pros: - better scaling (you can spawn them e.g. on a cluster of multiple machines) - you can add processes while Julia is running Cons: - low efficiency when you have to pass a lot of data between processes - slower to start - you have to explicitly share code and data to/between workers ### Summary Processes are much easier to work with and scale better. In most situations they give you enough performance. If you have large data transfers between parallel jobs threads will be better but are much more delicate to correctly use and tune. " "Numpy concatenate is slow: any alternative approach? I am running the following code: ``` for i in range(1000) My_Array=numpy.concatenate((My_Array,New_Rows[i]), axis=0) ``` The above code is slow. Is there any faster approach? ","This is basically what is happening in all algorithms based on arrays. Each time you change the size of the array, it needs to be resized and every element needs to be copied. This is happening here too. (some implementations reserve some empty slots; e.g. doubling space of internal memory with each growing). - If you got your data at np.array creation-time, just add these all at once (memory will allocated only once then!) - If not, collect them with something like a linked list (allowing O(1) appending-operations). Then read it in your np.array at once (again only one memory allocation). This is not much of a numpy-specific topic, but much more about data-strucures. **Edit:** as this quite vague answer got some upvotes, i feel the need to make clear that my linked-list approach is one possible example. As indicated in the comment, python's lists are more array-like (and definitely not linked-lists). But the core-fact is: list.append() in python is [fast](https://wiki.python.org/moin/TimeComplexity) (amortized: O(1)) while that's not true for numpy-arrays! There is also a small part about the internals in the [docs](https://docs.python.org/2/faq/design.html#how-are-lists-implemented): > > > > > > How are lists implemented? > > > > > > **Python’s lists are really variable-length arrays, not Lisp-style linked lists**. 
The **implementation uses a contiguous array of references to other objects**, and keeps a pointer to this array and the array’s length in a list head structure. > > > > > > This makes indexing a list a[i] an operation whose cost is independent of the size of the list or the value of the index. > > > > > > When items are appended or inserted, the array of references is resized. **Some cleverness is applied to improve the performance of appending items repeatedly**; when the array must be grown, some extra space is allocated so the next few times don’t require an actual resize. > > > > > > > > > (bold annotations by me) " "Collatz Sequence I'm a beginner and I only know limited amounts such as `if`, `while`, `do while`. So I'm here to check if I'm coding with best practise and the most effective methods to my current knowledge. ``` import java.util.Scanner; public class SixtyTwo { public static void main(String[] args) { Scanner keyboard = new Scanner(System.in); System.out.print(""Starting Number: ""); int n = keyboard.nextInt(); int counter = 0; int stepsTaken = 0; int largestNumber = 0; System.out.println(); while ( n != 1 ){ if ( ( n & 1 ) == 0 ) { System.out.print( (n = ( n / 2 )) + "" "" ); stepsTaken++; counter++; } else { System.out.print( (n = ( n * 3 ) + 1) + "" "" ); stepsTaken++; counter++; } if ( n > largestNumber ){ largestNumber = n; } if (counter == 9){ counter = 0; System.out.print(""\n""); } } System.out.println(); System.out.println(""\nTerminated after "" + stepsTaken + "" steps.""); System.out.println(""The largest value was "" + largestNumber + "".""); } } ``` ","That is some overall quite good code you have there, but I have a couple of comments. - `Scanner` should be closed when you are done with the input. Simply call `keyboard.close();` when you have acquired the input that you need. - Your class is called `SixtyTwo` but I have no clue what that has to do with anything. A better name would be `Collatz` - `if ( ( n & 1 ) == 0 )` although it is a nice way of checking if a number is even, it is for Java programmers more readable to use the `%` (modulo) operator. I would use `if (n % 2 == 0)`. - `System.out.print( (n = ( n / 2 )) + "" "" );` ...now this I'm not a big fan of. Sure, it works to both modify a value and outputting it on the same line, but I would separate the assignment to it's own line. Assign on one line, output on another, makes things much clearer to read. - `stepsTaken++;` and `counter++;` don't need to be inside the if-else blocks, put them outside them as they are always done no matter if your condition is true or false. - `if (counter == 9)` can instead be `if (stepsTaken % 9 == 0)`, which would remove the need for the `counter` variable entirely. " "Make a backup of the installed packages on Ubuntu like Linux Mint On the Linux mint you have a tool that list and save all your installed packages in a simple way. There is any way to do the same on Ubuntu/Xubuntu/Lubuntu/...? ","**EDIT:** This is no longer a free application. ### Aptik After seeing the various answers here (and not disagreeing with any of them) it strikes me that you asked for simplicity. In my comment, I linked to an application called Aptik and I'm going to show you why I think this meets your criteria best. Aptik is simple to install and trivially easy to use. It is also a handy dandy GUI (Graphical User Interface) that has easy to use buttons and requires absolutely no real advanced knowledge. If you can click on a button with your mouse then you can install Aptik. 
If you'd like to *try* read more about Aptik then you can click [here](http://www.teejeetech.in/) and visit their home page. I'm not actually sure what good that will do. Their home page doesn't seem to have a whole lot of information. - **It is Simple and it Does Work!** Not only does it do what you asked, it does a bit more. I think a screen shot should be fairly self-explanatory. [![Aptik in Action](https://i.stack.imgur.com/nyPQR.png)](https://i.stack.imgur.com/nyPQR.png) ### How to Install Aptik 1. Open your terminal by pressing `CTRL`+`ALT`+`T`. 2. Copy and Paste this to the terminal: ``` sudo apt-add-repository ppa:teejee2008/ppa ``` 3. Press `ENTER` 4. Enter your password. Nothing will show on your screen, the cursor will not move, no asterisks will appear - this is normal behaviour. 5. Copy and paste this to your terminal: ``` sudo apt-get update ``` 6. Press `ENTER`. 7. Copy and paste this to your terminal: ``` sudo apt-get install aptik ``` 8. Press `ENTER`. 9. Allow the application to install and follow any on-screen prompts. ### Running Aptik If your computer is anything like mine then Aptik should magically appear in the Start menu. In my case, it appears under System Tools. Your case is probably not like mine. Open the launcher and search for `aptik`. If, for some peculiar reason, Aptik is not available then it can be launched from the terminal (use above commands) by running `sudo aptik-launcher`. If, for some even stranger reason, you want to go ahead and run the application from the terminal, entirely, you can do that too but you're on your own. For the sake of completeness, this is a list of commands. ``` Aptik v1.6.4 by Tony George (teejee2008@gmail.com) Syntax: aptik [options] Options: --list-available List available packages --list-installed List installed packages --list-top List top-level installed packages --list-{manual|extra} List top-level packages installed by user --list-default List default packages for linux distribution --list-ppa List PPAs --list-themes List themes in /usr/share/themes --list-icons List icon themes in /usr/share/icons --backup-ppa Backup list of PPAs --backup-packages Backup list of manual and installed packages --backup-cache Backup downloaded packages from APT cache --backup-themes Backup themes from /usr/share/themes --backup-icons Backup icons from /usr/share/icons --restore-ppa Restore PPAs from file 'ppa.list' --restore-packages Restore packages from file 'packages.list' --restore-cache Restore downloaded packages to APT cache --restore-themes Restore themes to /usr/share/themes --restore-icons Restore icons to /usr/share/icons --take-ownership Take ownership of files in your home directory --backup-dir Backup directory (defaults to current directory) --[show-]desc Show package description if available --yes Assume Yes for all prompts --h[elp] Show all options ``` ### Closure Regardless of which method you choose, this has been included to ensure that there's a nice, simple, GUI way to do this. There are many ways to accomplish things in Linux and it is always nice to have options. " "How to pause MySQL before taking an LVM/ZFS snapshot? How can I instruct MySQL to complete all ""in-progress"" transactions, but to delay starting new ones (without kicking clients off) until I have taken a ZFS or LVM snapshot (which takes less than a second). e.g. 1. pause MySQL, waiting for ""in-progress"" transactions to complete 2. sync to disk 3. take ZFS/LVM snapshot 4. 
resume MySQL The point of this is to get a consistent snapshot for backup purposes. Step 2 takes a fraction of a second. Step one should not cause client errors, just a very short pause until step 4 is reached. Are there MySQL commands which can do 1 and 4? What are they? ","A hacky way would be, to wait for the transactions to finish: `mysql> FLUSH LOCAL TABLES; Query OK, 0 rows affected (11.31 sec)` and then getting a read lock: `mysql> FLUSH TABLES WITH READ LOCK; Query OK, 0 rows affected (22.55 sec)` Now all queries are blocked (ie. they wait for the lock to get released) until your session ends. Mind you - you still have to wait until all transactions are finished. Depending on your workload this can take a while (UPDATE-ing a few million rows...). You can code this in your favourite script language. But seriously - why not use [Xtrabackup](https://www.percona.com/doc/percona-xtrabackup/2.3/index.html ""Xtrabackup"") ? It does take care of a consistent snapshot of mysql for you and you can dump it on the filesystem, and zfs/lvm snapshot it. " "Randomly remove 30% of values in numpy array I have a 2D numpy array which contains my values (some of them can be NaN). I want to remove the 30% of the non-NaN values and replace them with the mean of the array. How can I do so? What I tried so far: ``` def spar_removal(array, mean_value, sparseness): array1 = deepcopy(array) array2 = array1 spar_size = int(round(array2.shape[0]*array2.shape[1]*sparseness)) for i in range (0, spar_size): index = np.random.choice(np.where(array2 != mean_value)[1]) array2[0, index] = mean_value return array2 ``` But this is just picking the same row of my array. How can I remove from all over the array? It seems that choice works only for one dimension. I guess what I want is to calculate the `(x, y)` pairs that I will replace its value with `mean_value`. ","There's likely a better way, but consider: ``` import numpy as np x = np.array([[1,2,3,4], [1,2,3,4], [np.NaN, np.NaN, np.NaN, np.NaN], [1,2,3,4]]) # Get a vector of 1-d indexed indexes of non NaN elements indices = np.where(np.isfinite(x).ravel())[0] # Shuffle the indices, select the first 30% (rounded down with int()) to_replace = np.random.permutation(indices)[:int(indices.size * 0.3)] # Replace those indices with the mean (ignoring NaNs) x[np.unravel_index(to_replace, x.shape)] = np.nanmean(x) print(x) ``` **Example Output** ``` [[ 2.5 2. 2.5 4. ] [ 1. 2. 3. 4. ] [ nan nan nan nan] [ 2.5 2. 3. 4. ]] ``` NaNs will never change and floor(0.3 \* number of non-NaN elements) will be set to the mean (the mean ignoring NaNs). " "How to programmatically replace an HyperLinkField in a ASP.NET GridView I have an ASP.NET Web Forms application. In my application I have a **GridView** that works smoothly. I have several text fields and the last one is a ``. Now I would like to programmatically change the field by placing a **simple link** instead of the `hyperlinkfield` if a specific condition is fulfilled. Therefore I catch the `onRowDataBound` event: ``` Sub myGridView_RowDataBound(ByVal sender As Object, ByVal e As GridViewRowEventArgs) Handles myGridView.RowDataBound If (condition) Then Dim link = New HyperLink() link.Text = ""login"" link.NavigateUrl = ""login.aspx"" e.Row.Cells(3).Controls.Add(link) End If End If End Sub ``` where **n** is the cell where the `hyperlinkfield` is placed. With this code it just adds to the `hyperlinkfield` the new `link`. How can I replace it? 
PS: The code is in VB6 but I am a C# programmer, answers with both languages are accepted ","Remove the control you want to replace from the collection before adding the new one: ``` protected void TestGridView_RowDataBound(object sender, GridViewRowEventArgs e) { if (e.Row.RowType == DataControlRowType.DataRow) { HyperLink newHyperLink = new HyperLink(); newHyperLink.Text = ""New""; e.Row.Cells[3].Controls.RemoveAt(0); e.Row.Cells[3].Controls.Add(newHyperLink); } } ``` But I agree with the others, just change the existing link's properties: ``` protected void TestGridView_RowDataBound(object sender, GridViewRowEventArgs e) { if (e.Row.RowType == DataControlRowType.DataRow) { HyperLink link = e.Row.Cells[0].Controls[0] as HyperLink; if (link != null) { link.Text = ""New""; link.NavigateUrl = ""New""; } } } ``` " "How can I install MinGW-w64 and MSYS2? I am trying to build some open source library. I need a package management system to easily download the dependencies. At first I am using [MinGW](https://en.wikipedia.org/wiki/MinGW) and [MSYS](https://en.wikipedia.org/wiki/MinGW#History). But the included packages are limited. Someone told me to use [Mingw-w64](https://en.wikipedia.org/wiki/Mingw-w64) and [MSYS2](https://en.wikipedia.org/wiki/Mingw-w64#MSYS2). I downloaded the `mingw-w64-install` from [here](http://mingw-w64.yaxm.org/doku.php/download). When running, it reports the following error. How can I fix it? ![Enter image description here](https://i.stack.imgur.com/F0IoK.png) And by the way, from the Mingw-w64 download page, I see a lot of download links. Even [Cygwin](https://en.wikipedia.org/wiki/Cygwin) is listed. How are Cygwin and Mingw-w64 related? ![Enter image description here](https://i.stack.imgur.com/uzlMg.png) My current understanding is, in the time of MinGW and MSYS, MSYS is just a nice addon to MinGW, while in Mingw-w64 + MSYS2, MSYS2 is stand-alone and Mingw-w64 is just a set of libraries it can work with. Just like Cygwin can download many different packages. ","Unfortunately, the MinGW-w64 installer you used sometimes has this issue. I myself am not sure about why this happens (I think it has something to do with Sourceforge URL redirection or whatever that the installer currently can't handle properly enough). Anyways, if you're already planning on using MSYS2, there's no need for that installer. 1. Download MSYS2 from [this page](https://msys2.github.io/). 2. After the install completes, click on the `MSYS2 UCRT64` in the Start menu (or `C:\msys64\ucrt64.exe`). If done correctly, the terminal prompt will say `UCRT64` in magenta letters, not `MSYS`. 3. Update MSYS2 using `pacman -Syuu`. If it closes itself during the update, restart it and repeat the same command to finish the update. You should routinely update your installation. 4. Install the toolchain: (i.e. the compiler and some extra tools) ``` pacman -S mingw-w64-ucrt-x86_64-toolchain ``` 5. Install any libraries/tools you may need. You can search the repositories by doing ``` pacman -Ss name_of_something_i_want_to_install ``` e.g. ``` pacman -Ss gsl ``` and install using ``` pacman -S package_name_of_something_i_want_to_install ``` e.g. ``` pacman -S mingw-w64-ucrt-x86_64-gsl ``` and from then on the GSL library will be automatically found by your compiler! Make sure any compilers and libraries you install have this package prefix: `mingw-w64-ucrt-x86_64-`. Only use unprefixed packages for misc command-line utilities (such as `grep`, `sed`, `make`, etc), unless you know what you're doing. 6. 
Verify that the compiler is working by doing ``` gcc --version ``` If you want to use the toolchains (with installed libraries) outside of the MSYS2 environment, all you need to do is add `C:/msys64/ucrt64/bin` to your `PATH`. MSYS2 provides [several compiler flavors](https://stackoverflow.com/q/76552264/2752075), UCRT64 being one of them. It should be a reasonable default. " "Stochastic volatility model - why not identified? For the following model $y\_t = \beta e^{h\_t/2} \epsilon\_t$ $h\_{t+1} = \mu + \phi(h\_t - \mu) + \sigma\_n \eta\_t$ [this Kim et al (1998)](http://apps.olin.wustl.edu/faculty/chib/papers/KimShephardChib98.pdf) paper writes that > > For identifiability reasons either $\beta$ must be set to one or $\mu$ to zero > > > Why is that? (to me this is not immediately obvious) ","Intuitively, it's because both $\beta$ and $\mu$ are responsible for the average *scale* of the volatility. Say you define a new parameter $\mu' = \mu + \Delta$. Then, it follows that: $$h\_{t+1} + \Delta = \mu' + \phi((h\_t + \Delta) - \mu') + \sigma\_\eta \eta\_t$$ Substituting in the other equation: $$y\_t = \beta e^{-\Delta/2} e^{\frac{h\_t + \Delta}{2}} \varepsilon\_t$$ If you then define $\beta' = \beta e^{-\Delta/2}$ and $h\_t'=h\_t + \Delta$, it follows that: $$y\_t = \beta' e^{h'\_t/2} \varepsilon\_t$$ $$h'\_{t+1} = \mu' + \phi(h'\_t - \mu') + \sigma\_\eta \eta\_t$$ This is the same model form that you started with, so you will get the same likelihood no matter the value of $\Delta$, if you adjust both parameters in this way by the same value of $\Delta$; this is the identifiability issue that was alluded to. In theory it doesn't really matter which parameter you get rid of, but in the paper you linked, the specific application (MCMC) typically works better when parameters are less correlated in the posterior. In this case, it means you should get rid of $\beta$. This is demonstrated empirically in the paper. " "Django: How to call management custom command execution from admin interface? Referring to, [executing management commands from code](https://stackoverflow.com/questions/907506/how-can-i-call-a-custom-django-manage-py-command-directly-from-a-test-driver), Is their a way to call this command execution code from the django admin interface? I've a custom command to do some periodic update of database which has been [scheduled as cron](https://stackoverflow.com/questions/573618/django-set-up-a-scheduled-job). The cron is working fine. I need to update the database manually from the admin interface whenever needed. ","**update:** you can run any management command simply by calling the function `call_command('compilemessages')` from anywhere within your python code. Note: this obviously is a *blocking* process from the caller's perspective. With the ajax example below you can have a form of non-blocking/asynchronous user experience. Depending on the backend implementation you can take this isolation level further. Example: ``` from django.core.management import call_command call_command('compilemessages') ``` If the task is bound to the object currently viewed in the admin, a nice way might be to implement an extra view called by an ajax script when clicking a button. The extra view could optionally be wrapped as a celery task, e.g. **models.py** ``` class Foo(models.Model): # fields... 
def my_task_init(self): return mark_safe("""") % self.id my_task_init.allow_tags = True my_task_init.short_description = _(u""Execute Task"") ``` **admin.py** ``` class FooAdmin(admin.ModelAdmin): list_display = ['other_field', 'my_task_init'] class Media: js = ( 'https://ajax.googleapis.com/ajax/libs/jquery/1.7.2/jquery.js', '/static/js/admin_tasks.js', ) def get_urls(self): urls = super(FooAdmin, self).get_urls() extra_urls = patterns('', (r'^my-task/$', self.admin_site.admin_view(self.parse_view)) ) return extra_urls + urls # optionally decorated by celery def task_view(self, request): if not request.is_ajax(): raise Http404 task_id = request.GET.get('task_id') # your logic return HttpResponse('Success') ``` **admin\_tasks.js** ``` $(document).ready(function (){ $('.task').click(function(){ var image = $(this).find('img'), loading = $(this).parent().find('.loading'), task_id = $(this).data('identifier').replace('task_', ''); $.ajax({ type: ""GET"", data: ({'task_id': task_id}), url: ""/admin/app/model/my-task/"", beforeSend: function() { image.hide(); loading.show(); }, statusCode: { 200: function() { loading.hide(); image.attr('src', '/static/img/success.png'); image.show(); }, 404: function() { loading.hide(); image.attr('src', '/static/img/error.png'); image.show(); }, 500: function() { loading.hide(); image.attr('src', '/static/img/error.png'); image.show(); } } }); }); }); ``` If you're trying to initiate an unbound task you could just override a template element or add some html. " "Nginx: Permission denied to Gunicorn socket on CentOS 7 I'm working in a Django project deployment. I'm working in a CentOS 7 server provided ma EC2 (AWS). I have tried to fix this bug by many ways but I cant understand what am I missing. I'm using ningx and gunicorn to deploy my project. I have created my `/etc/systemd/system/myproject.service`file with the following content: ``` [Unit] Description=gunicorn daemon After=network.target [Service] User=centos Group=nginx WorkingDirectory=/home/centos/myproject_app ExecStart=/home/centos/myproject_app/django_env/bin/gunicorn --workers 3 --bind unix:/home/centos/myproject_app/django.sock app.wsgi:application [Install] WantedBy=multi-user.target ``` When I run `sudo systemctl restart myproject.service`and `sudo systemctl enable myproject.service`, the `django.sock` file is correctly generated into `/home/centos/myproject_app/`. I have created my `nginx` conf flie in the folder /etc/nginx/sites-available/ with the following content: ``` server { listen 80; server_name my_ip; charset utf-8; client_max_body_size 10m; client_body_buffer_size 128k; # serve static files location /static/ { alias /home/centos/myproject_app/app/static/; } location / { include proxy_params; proxy_pass http://unix:/home/centos/myproject_app/django.sock; } } ``` After, I restart `nginx` with the following command: ``` sudo systemctl restart nginx ``` If I run the command `sudo nginx -t`, the reponse is: ``` nginx: configuration file /etc/nginx/nginx.conf test is successful ``` When I visit my\_ip in a web browser, I'm getting a 502 bad gateway response. If I check the nginx error log, I see the following message: ``` 1 connect() to unix:/home/centos/myproject_app/django.sock failed (13: Permission denied) while connecting to upstream ``` I really have tried a lot of solutions changing the sock file permissions. But I cant understand how to fix it. How can I fix this permissions bug?... 
Thank you so much ","If all the permissions under the `myproject_app` folder are correct, and `centos` user or `nginx` group have access to the files, I would say it looks like a Security Enhanced Linux (SELinux) issue. I had a similar problem, but with RHEL 7. I managed to solve it by executing the following command: ``` sudo semanage permissive -a httpd_t ``` It's related to the security policies of SELinux, you have to add the `httpd_t` to the list of permissive domains. This post from the NGINX blog may be helpful: [NGINX: SELinux Changes when Upgrading to RHEL 6.6 / CentOS 6.6](https://www.nginx.com/blog/nginx-se-linux-changes-upgrading-rhel-6-6/) Motivated by a similar issue, I wrote a tutorial a while ago on [How to Deploy a Django Application on RHEL 7](https://simpleisbetterthancomplex.com/tutorial/2017/05/23/how-to-deploy-a-django-application-on-rhel.html). It should be very similar for CentOS 7. " "Change colour navbar header Ionic 2 I have this problem... My colour is white right now, my code is like this: ``` HELLO ``` Change color with this opcion is easy (primary, secondary, danger, light, dark) ``` HELLO ``` but my problem is when I want to use custom colors. Somebody know how can I resolve it? Thanks inadvance. Best regards. ","There're two ways of doing this, based on if you want to change the color only in a single page, or if you want to change it in all the pages from your app: ### 1) Change it in a single page/view Just like you can see [here](http://ionicframework.com/docs/v2/theming/theming-your-app/) > > To change the theme, just tweak the `$colors` map in your > `src/theme/variables.scss` file: > > > ``` $colors: ( // ... newcolor: #55acee ) ``` And then use it in the view ``` HELLO ``` ### 2) Change it in all the pages/views In this case, you'd need to add the following in your `variables.scss` file to override Ionic's defaults: ``` $toolbar-ios-background: #55acee; $toolbar-md-background: #55acee; $toolbar-wp-background: #55acee; ``` --- ### Edit > > Hi, how can I add gradient in app/theme/app.variables.scss? > > > You could add the colors that you're going to use in the `src/theme/variables.scss`: ``` $header-first-color: #AAAAAA; $header-last-color: #000000; ``` And then set a rule to use it (in your `app.scss` file if you want to apply it to every page, or in the `page-name.scss` file if you want to apply it to a single page): ``` ion-header { .toolbar-background { background: linear-gradient(135deg, $header-first-color 0%, $header-last-color 100%); } } ``` " "Can I use a library that uses guice to bind contact and implementations in applications without issues? I have a play application and want to take a common operation out from the application and make it as a library in order to use in other play applications. This proposing library has a contract(interface) and several implementations of top of that. I have 2 questions; 1. What if I use guice to bind contact and implementations with named injection(using annotations) and call them with the appropriate annotation in the play application without instantiating library classes in the apply application? 2. Can we use such libraries in other java applications without any issue? Ex: Use a library which used Guice in a Spring application ","You are confusing dependency injection (DI) with the more specific idea of using a DI framework to supply those injected dependencies. remember that it's perfectly possible to use pure DI to inject dependencies without a framework. 
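For instance, here is a minimal hand-wired sketch (every class name in it is invented purely for illustration, nothing is taken from your code) of what such framework-free injection looks like:

```
// Pure DI: the library exposes a contract; whoever consumes it passes in the implementation.
interface Clock {
    long now();
}

class SystemClock implements Clock {
    @Override
    public long now() {
        return System.currentTimeMillis();
    }
}

class ExpiryChecker {
    private final Clock clock; // injected dependency, no container involved

    ExpiryChecker(Clock clock) { // constructor injection
        this.clock = clock;
    }

    boolean isExpired(long deadlineMillis) {
        return clock.now() > deadlineMillis;
    }
}

public class PureDiDemo {
    public static void main(String[] args) {
        // The consuming application chooses the wiring; a Guice or Spring user
        // could make the same binding inside their container instead.
        ExpiryChecker checker = new ExpiryChecker(new SystemClock());
        System.out.println(checker.isExpired(0L)); // prints: true
    }
}
```

The library stays framework-agnostic either way, which is exactly why leaving the wiring to the caller is usually the safer default.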
So the answer to the question, ""*Is it good to use dependency injection in a java library?*"", is ""yes"". As a default state, you should use DI everywhere, only not using it when there's a compelling reason not to. But the body of your question then asks about tying your library to a specific DI framework. Unless you are sure that all of your users will be using that framework, then that is likely a bad idea. You couple people to a specific framework that they may not wish to use. Further, they may already be using another framework and they tend not to be compatible with each other. As a result, you risk driving folk away from your library as it's too difficult to use. So the answer to the question, ""*should I couple my library to guice (or another framework)*"" is likely to be ""no"" for most scenarios. " "Codeigniter Cart - saving data in database - how to approach? I need help with handling orders and cart in my web application. I decided to use Cart library built in Codeigniter 2. I've seen some tutorials about that Cart library and i know how to use it, but i dont know: - when shall I create/save that order in database? when user is adding items to cart, or shall i keep all order data in session and ""move"" it do database when user accepts order? - shall I store session id or something other in database? I tried to see how cart/order functionality is implemented in PrestaShop, but it looks too complicated for me, im amateur PHP programmer. ","The ideal and good way to use cart is keep it in session, codeigniter's cart class is doing the same thing, and when user gives order use this data put this order in database and do other stuff like payment gateway, shipping. If you want to use user to keep his order in next session, like if user add some product in cart and he quits before giving order and he is a registered user, then you can save his cart every time in database, so that if he gone away without putting order you can show him his orders next time when he logged in. You can store users cart data in database using `$this->cart->contents();` method of cart class. use like this ``` $cartContentString = serialize($this->cart->contents()); ``` you will get a string of cart content, you can save this string in database and later use it like ``` $cartArray = unserialize($cartContentString); ``` " "why in an 'if' statement 'then' has to be in the next line in bash? `if` is followed by `then` in bash but I don't understand why `then` cannot be used in the same line like `if [...] then` it has to be used in the next line. Does that remove some ambiguity from the code? or bash is designed like that? what is the underlying reason for it? I tried to write `if` and `then` in the same line but it gave the error below: ``` ./test: line 6: syntax error near unexpected token \`fi' ./test: line 6: \`fi' ``` the code is: ``` #!/bin/bash if [ $1 -gt 0 ] then echo ""$1 is positive"" fi ``` ","It has to be preceded by a separator of some description, not *necessarily* on the next line(a). In other words, to achieve what you want, you can simply use: ``` if [[ $1 -gt 0 ]] ; then echo ""$1 is positive"" fi ``` As an aside, for one-liners like that, I tend to prefer: ``` [[ $1 -gt 0 ]] && echo ""$1 is positive"" ``` But that's simply because I prefer to see as much code on screen as possible. It's really just a style thing which you can freely ignore. 
--- (a) The reason for this can be found in the `Bash` manpage (my emphasis): > > RESERVED WORDS: Reserved words are words that have a special meaning to the shell. The following words are recognized as reserved when unquoted and either ***the first word of a simple command*** (see SHELL GRAMMAR below) or the third word of a `case` or `for` command: > > > `! case coproc do done elif else esac fi for function if in select then until while { } time [[ ]]` > > > Note that, though that section states it's the ""first word of a simple command"", the manpage seems to contradict itself in the referenced `SHELL GRAMMAR` section: > > A simple command is a sequence of optional variable assignments followed by blank-separated words and redirections, and terminated by a control operator. ***The first word specifies the command to be executed,*** and is passed as argument zero. > > > So, whether you consider it part of the next command or a separator of some sort is arguable. What is *not* arguable is that it needs a separator of some sort (newline or semicolon, for example) before the `then` keyword. The manpage doesn't go into *why* it was designed that way but it's *probably* to make the parsing of commands a little simpler. " "How to create a CLI in Python that can be installed with PIP? As the title suggests, I'm trying to make a python script accessible from the command line. I've found libraries like [click](https://pypi.org/project/click/) and [argv](https://pypi.org/project/argv/) that make it easy to access arguments passed from the command line, but the user still has to run the script through Python. Instead of ``` python /location/to/myscript.py ``` I want to be able to just do ``` myscript ``` from any directory From what I understand, I can achieve this on my computer by editing my PATH variables. However, I would like to be able to simply do: ``` pip install myscript ``` and then access the script by typing `myscript` from anywhere. Is there some special code I would put in the `setup.py`? ","You can do this with `setuptools` an example of a nice `setup.py` (say your package requires pandas and numpy): ``` import setuptools setuptools.setup( name='myscript', version='1.0', scripts=['./scripts/myscript'], author='Me', description='This runs my script which is great.', packages=['lib.myscript'], install_requires=[ 'setuptools', 'pandas >= 0.22.0', 'numpy >= 1.16.0' ], python_requires='>=3.5' ) ``` Your directory should be setup as follows: ``` [dkennetz package]$ ls lib scripts setup.py ``` inside lib would be: ``` [dkennetz package]$ ls lib myscript ``` inside of `myscript` would be: ``` [dkennetz package]$ ls lib/myscript __main__.py __init__.py helper_module1.py helper_module2.py ``` main would be used to call your function and do whatever you want to do. inside scripts would be: ``` [dkennetz package]$ ls scripts myscript ``` and the contents of `myscript` would be: ``` #!/usr/bin/env bash if [[ ! $@ ]]; then python3 -m myscript -h else python3 -m myscript $@ fi ``` then to run you do: `python setup.py install` which will install your program and all of the dependencies you included in `install_requires=[]` in your setup.py and install `myscript` as a command-line module: ``` [dkennetz ~]$ myscript ``` " "Redis - Default blocking VM > > The blocking VM performance is better overall, as there is no time lost in > synchronization, spawning of threads, and resuming blocked > clients waiting for values. 
So if you are willing to accept an higher > latency from time to time, blocking VM can be a good pick. Especially > if swapping happens rarely and most of your often accessed data > happens to fit in your memory. > > > This is default mode of Redis (and the only mode going forward I believe now VM is deprecated in 2.6), leaving the OS to handle paging (if/when required). I am correct in my understanding that it will take some time to get ""hot"" when booted/started. When working on a 1gb RAM node with a 16gb dataset, does Redis attempt to load it all into virtual memory at boot and thus 90%+ is immediately paged out, and only after some good amount of usages does the above statement hold true? ","Redis VM was already deprecated in Redis 2.4, and has been removed in Redis 2.6. It is a dead end: don't use it. I think you are confusing the blocking VM with OS paging. They are two different things. OS paging is the default mode of Redis when Redis VM is not configured at all (whatever the blocking mode). The OS will swap Redis memory if it does not fit in physical memory. The event loop can be frozen at any time. When it happens, performance is abysmal because none of the Redis internal data structures is designed for this (no locality, no paging system). Redis VM can be configured in non blocking mode (using I/O threads). When I/Os are done, the event loop is not blocked, and Redis is still responsive. However, when too many I/Os pile up, the I/O threads will be completely busy, and you end up with a responsive Redis, but unable to process any queries requiring I/Os. Redis VM can also be configured in blocking mode. In this mode all I/Os are synchronously performed in the main event loop thread. So the event loop is frozen in case of I/O (for instance in case of a key miss). All clients are impacted. However, general performance (CPU consumption and latency) is better than with the non blocking mode because some threading scheduling/synchronization is saved. In practice, the difference between OS paging and the Redis blocking VM is the granularity level. With Redis VM, the granularity is the key. With OS paging, well it is the page (a 4 KB block which can span on several unrelated keys). In all 3 cases, the initial load of the dump file will be extremely slow and generate a peak of random I/Os on your system. As you pointed out, most objects will be loaded and then swapped out. The warm-up time will be significant. Except if you have extreme locality in your data, or if you do not care at all about the latencies, using 1 GB RAM for a 16 GB dataset with the Redis VM is science-fiction IMO. There is a reason why the Redis VM was phased out. By design, it will never perform as well as a disk-based datastore (which can exploit file mapping or direct I/Os to avoid the double buffering, and use adapted data structures like B-trees). Redis as an in-memory store is excellent. But if you need to store something which is bigger than RAM, don't use it. Other (disk-based) stores will all perform much better. " "What are namespaces for ? what about usages? 1. what is the purpose of namespaces ? 2. and, more important, should they be used as objects in java (things that have data and functions and that try to achieve encapsulation) ? is this idea to far fetched ? :) 3. or should they be used as packages in java ? 4. or should they be used more generally as a module system or something ? 
","Given that you use the Clojure tag, I suppose that you'll be interested in a Clojure-specific answer: > > what is the purpose of namespaces ? > > > Clojure namespaces, Java packages, Haskell / Python / whatever modules... At a very high level, they're all different names for the same basic mechanism whose primary purpose is to prevent name clashes in non-trivial codebases. Of course, each solution has its own little twists and quirks which make sense in the context of a given language and would not make sense outside of it. The rest of this answer will deal with the twists and quirks specific to Clojure. **A Clojure namespace groups Vars**, which are containers holding functions (most often), macro functions (functions used by the compiler to generate macroexpansions of appropriate forms, normally defined with `defmacro`; actually they are just regular Clojure functions, although there is some magic to the way in which they are registered with the compiler) and occasionally various ""global parameters"" (say, `clojure.core/*in*` for standard input), Atoms / Refs etc. The protocol facility introduced in Clojure 1.2 has the nice property that protocols are backed by Vars, as are the individual protocol functions; this is key to the way in which protocols present a solution to the expression problem (which is however probably out of the scope of this answer!). It stands to reason that namespaces should group Vars which are somehow related. In general, creating a namespace is a quick & cheap operation, so it is perfectly fine (and indeed usual) to use a single namespace in early stages of development, then as independent chunks of functionality emerge, factor those out into their own namespaces, rinse & repeat... Only the things which are part of the public API need to be distributed between namespaces up front (or rather: prior to a stable release), since the fact that function such-and-such resides in namespace so-and-so is of course a part of the API. > > and, more important, should they be used as objects in java (things that have data and functions and that try to achieve encapsulation) ? is this idea to far fetched ? :) > > > Normally, the answer is no. You might get a picture not too far from the truth if you approach them as classes with lots of static methods, no instance methods, no public constructors and often no state (though occasionally there may be some ""class data members"" in the form of Vars holding Atoms / Refs); but arguably it may be more useful not to try to apply Java-ish metaphors to Clojure idioms and to approach a namespace as a group of functions etc. and not ""a class holding a group of functions"" or some such thing. There is an important exception to this general rule: namespaces which include `:gen-class` in their `ns` form. These are meant precisely to implement a Java class which may later be instantiated, which might have instance methods and per-instance state etc. Note that `:gen-class` is an interop feature -- pure Clojure code should generally avoid it. > > or should they be used as packages in java ? 
> > > They serve some of the same purposes packages were designed to serve (as already mentioned above); the analogy, although it's certainly there, is not that useful, however, just because the things which packages group together (Java classes) are not at all like the things which Clojure namespaces group together (Clojure Vars), the various ""access levels"" (`private` / `package` / `public` in Java, `{:private true}` or not in Clojure) work very differently etc. That being said, one has to remember that there is a certain correspondence between namespaces and packages / classes residing in particular packages. A namespace called `foo.bar`, when compiled, produces a class called `bar` in the package `foo`; this means, in particular, that namespace names should contain at least one dot, as so-called single-segment names apparently lead to classes being put in the ""default package"", leading to all sorts of weirdness. (E.g. I find it impossible to have VisualVM's profiler notice any functions defined in single-segment namespaces.) Also, **`deftype` / `defrecord`-created types do not reside in namespaces**. A `(defrecord Foo [...] ...)` form in the file where namespace `foo.bar` is defined creates a class called `Foo` in the package `foo.bar`. To use the type `Foo` from another namespace, one would have to `:import` the class `Foo` from the `foo.bar` package -- `:use` / `:require` would not work, since they pull in Vars from namespaces, which records / types are not. So, in this particular case, there is a certain correspondence between namespaces and packages which Clojure programmers who wish to take advantage of some of the newer language features need to be aware of. Some find that this gives an ""interop flavour"" to features which are not otherwise considered to belong in the realm of interop (`defrecord` / `deftype` / `defprotocol` are a good abstraction mechanism even if we forget about their role in achieving platform speed on the JVM) and it is certainly possible that in some future version of Clojure this flavour might be done away with, so that the namespace name / package name correspondence for `deftype` & Co. can be treated as an implementation detail. > > or should they be used more generally as a module system or something ? > > > They *are* a module system and this is indeed how they should be used. " "BackChannelLogoutUri with multi-tenant scenario I am currently working with Identity server 4, where i am trying to enable [BackChannelLogoutUri](https://github.com/IdentityServer/IdentityServer4/tree/master/samples/Clients/src/MvcHybridBackChannel). Each client has been given a BackChannelLogoutUri in the config of the client ``` BackChannelLogoutUri = ""http://localhost:44322/home/LogoutBackChannel"", ``` Each client application has registered the cookieEventHandler and LogoutSessionManager. ``` services.AddTransient(); services.AddSingleton(); services.AddAuthentication(options => { options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme; options.DefaultChallengeScheme = ""oidc""; }) .AddCookie(options => { options.ExpireTimeSpan = TimeSpan.FromMinutes(60); options.Cookie.Name = ""mvchybridbc""; options.EventsType = typeof(CookieEventHandler); }) ``` My logout view on the identity server contains the Iframe ``` @if (Model.PostLogoutRedirectUri != null) {
<a href=""@Model.PostLogoutRedirectUri"">Click here</a> to return to the @Model.ClientName application.
} @if (Model.SignOutIframeUrl != null) { } ``` This is all well and good. But my problem is that the BackChannelLogoutUri is a single url. When hosted it will need to be passed some how from each tennent - """" - """" - """" - """" We cant really have a client for each tenant and app. That would be a lot of clients. That and clients that are only users of tenant one would not need to be logged out of tenant two. I am not sure how to address this issue. ","I've implemented the backchannel logout without having to rely on iframes. What it basically does is, collect the necessary urls and then send the notifications. I don't have tenants, so this will work for me. But you can adapt the code and add the logic for tenants, as commented in the code: ``` // Injected services: //private readonly IUserSession _userSession; //private readonly IClientStore _clientStore; //private readonly IBackChannelLogoutService _backChannelClient; private async Task LogoutUserAsync(string logoutId) { if (User?.Identity.IsAuthenticated == true) { // delete local authentication cookie await HttpContext.SignOutAsync(); // Get all clients from the user's session var clientIds = await _userSession.GetClientListAsync(); if (clientIds.Any()) { var backChannelClients = new List(); var sessionId = await _userSession.GetSessionIdAsync(); var sub = User.Identity.GetSubjectId(); foreach (var clientId in clientIds) { var client = await _clientStore.FindEnabledClientByIdAsync(clientId); // This should be valid in any case: if (client == null && !string.IsNullOrEmpty(client.BackChannelLogoutUri)) continue; // Insert here the logic to retrieve the tenant url for this client // and replace the uri: var tenantLogoutUri = client.BackChannelLogoutUri; backChannelClients.Add(new BackChannelLogoutModel { ClientId = client.ClientId, LogoutUri = tenantLogoutUri, SubjectId = sub, SessionId = sessionId, SessionIdRequired = true }); } try { await _backChannelClient.SendLogoutNotificationsAsync(backChannelClients); } catch (Exception ex) { // Log message } } // raise the logout event await _events.RaiseAsync(new UserLogoutSuccessEvent(User.GetSubjectId(), User.GetDisplayName())); } } ``` " "How does copy and paste for large files work I am curious to know how computers execute ""copy"" and ""paste"" of large folders. I've read that copy and paste of text between different processes or same process is achieved by saving the content into RAM and then copying it from there to destined location. So, how does computer instructions flow while copying a folder of say 10 GB on a machine which has 2 GB RAM and 4 GB max virtual memory. Is file copy is different from text copy. I think it's basic question but any links or insights appreciated. ","Clipboard doesn't have to hold entire file. When you copy a file (or files), only its path is put into the clipboard. It's also marked as a file - clipboard keeps track of its content's type, like plain text, formatted text, file, image, Word text etc. This is why you can't for example open up an image in Paint, press `Ctrl`+`C` and then paste it into a directory - because you have copied a picture, and directories hold files, not pictures. When you paste a compatible content (i.e. file(s) and/or folder(s)) into a directory, some application will handle the copying/moving operation. 
By default it will be the `explorer` process (the same one that's responsible for displaying Start menu and all file explorer windows), but some apps may replace it.[1] What happens now depends on what you're doing: - If you're **moving a file to another directory on the same partition**, it won't be physically moved on the disk, only its path will be updated[2]. - If you're **moving a file to another partition**, it will be split into chunks of the same size[3] and those will be copied one by one, then the original file will be deleted. Too small chunks will slow down the process, too big chunks will consume more memory. - Exactly the same will happen when you're **copying a file** (no matter if it's the same partition or not), except that the original file won't be deleted. - **Writing to an external storage** (like USB drives) doesn't work exactly like that[4] and I'm not sure what exactly happens then. My guess is that it's not a continuous chunk-by-chunk process, but something else happens every few chunks (buffer-related?). If anybody knows something about this, then feel free to edit. --- **Annotations:** [1]. For example [TeraCopy](http://codesector.com/teracopy), which is a nice advanced copy window replacement. [2]. Physical file structure on the hard disk doesn't resemble the directory structure - it's flat and all hierarchy information is stored in a separate part of the partition. How it's exactly done depends on filesystem (for example see [MFT](http://codesector.com/teracopy)). That information block holds all the information about file locations etc. So when you move a file inside one partition, there's no need to move it physically - only the path information has to be updated. [3]. Not literally, nothing will be split on the hard disk. The program that handles the copying process will work like there are multiple separate chunks, but the original file will be untouched. It's purely virtual. [4]. You can see it when using TeraCopy: the ""predicted progress"" doesn't work like on fixed drives, instead the ""real progress"" catches up with it, then ""predicted progress"" is expanded and so on. Explorer's default copy window has USB hiccups too. " "Creating unitary tests in Spring 3 I'm starting on testing applications in general and I want to create several tests to learn Mockito in Spring. I've been reading several information but I have some general doubts I'd like to ask. 1. I have seen come Mockito tests and they annotate the test of the class with: **@RunWith(MockitoJUnitRunner.class)** while in the Spring documentation it is used **@RunWith(SpringJUnit4ClassRunner.class)**. I don't know what's the difference between them and which one should I use for a Spring application where tests use Mockito. 2. As I haven't seen any real application that has test I'd like to know typical test that a developer would do. For example in a typical CRUD application for users (users can be created, updated...) can anyone a usual test that it would be done. Thanks. "," ``` @RunWith(MockitoJUnitRunner.class) ``` With this declaration you are suppose to write a **unit test**. Unit tests are exercising a single class mocking all dependencies. Typically you will inject mocked dependencies declared like this in your test case: ``` @Mock private YourDependency yourDependencyMock; ``` --- ``` @RunWith(SpringJUnit4ClassRunner.class) ``` Spring runner is meant for **integration test** (*component test*?) 
In this type of test you are exercising a whole bunch of classes; in other words, you are testing a single class with real dependencies (testing a controller with real services, DAOs, an in-memory database, etc.). You should probably have both categories in your application. Although it is advised to have more unit tests and only a few smoke integration tests, I often find myself more confident writing almost only integration tests. As for your second question, you should have: - **unit tests** for each class (controller, services, DAOs) separately, with all other classes mocked - **integration tests** for a whole single CRUD operation. For instance, creating a user that exercises the controller, service, DAO and in-memory database. " "Create or update a section in XML file with Augeas I would like to update a section in an XML config file or add a new one if it does not exist already, using Augeas. The XML file looks like this: ``` ... deployment Deployment User somepasshere active changeme1@yourcompany.com ``` I would like to update the last name/first name/email if the ID exists already or add a new user section if it's a new ID. In AugTool I use: ``` augtool> set /augeas/load/Xml/lens Xml.lns augtool> set /augeas/load/Xml/incl /security.xml augtool> load ``` I'm still learning Augeas, so this was my first try to get the node: ``` augtool> print /files/security.xml/security/users/user/*[ #text= 'deployment'] ``` What would be the command to update or create a new user section in users? Thank you! ","First, with recent Augeas versions (>= 1.0.0), you can use `--transform` to set up the transformation. Let's say the file is `./users.xml`: ``` $ augtool -r . --noautoload --transform ""Xml.lns incl /users.xml"" augtool> defnode user /files/users.xml/security/users/user[id/#text=""deployment""] # Create a new user entry if it doesn't exist yet, assign node to the ""user"" variable augtool> set $user/id/#text ""deployment"" # Set id if node was just created augtool> set $user/firstName/#text ""Deployment"" # Set firstName augtool> set $user/lastName/#text ""User"" # Set lastName augtool> set $user/email/#text ""changeme1@yourcompany.com"" # set email ... augtool> save Saved 1 file(s) ``` You can even turn this into a script, say `user.augtool`: ``` #!/usr/bin/augtool -sf defnode user /files/users.xml/security/users/user[id/#text=""deployment""] set $user/id/#text ""deployment"" set $user/firstName/#text ""Deployment"" set $user/lastName/#text ""User"" set $user/email/#text ""changeme1@yourcompany.com"" ``` which you can then launch: ``` $ chmod +x user.augtool $ ./user.augtool --transform ""Xml.lns incl /users.xml"" -r . Saved 1 file(s) ``` " "Regular expression quoting in Python How should I declare a regular expression? ``` mergedData = re.sub(r'\$(.*?)\$', readFile, allData) ``` I'm kind of wondering why this worked. I thought that I need to use the `r''` to pass a regular expression. ``` mergedData = re.sub(""\$(.*?)\$"", readFile, allData) ``` What does `""\$""` result in in this case? Why? I would have thought `""$""`. "," > > I thought that I need to use the r'' to pass a regular expression. > > > `r` before a string literal indicates a raw string, which means the usual escape sequences such as `\n` or `\r` are no longer treated as a newline character or carriage return, but simply as `\` followed by `n` or `r`. To specify a `\`, you only need `\` in a raw string literal, while you need to double it up as `\\` in a normal string literal.
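A quick check in the interpreter makes this concrete (a minimal illustrative snippet; it only assumes the standard `re` module):

```
import re

# escape sequences are interpreted in a normal literal, but not in a raw literal
print(len('\n'), len(r'\n'))            # 1 2

# a backslash before $ can therefore be spelled either way
print(r'\$(.*?)\$' == '\\$(.*?)\\$')    # True

# and both spellings work as the same regex
print(re.findall(r'\$(.*?)\$', 'price is $42$ today'))  # ['42']
```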
This is why a raw string is usually used when specifying a regular expression[1]. It reduces confusion when reading the code. You would have to do escaping twice if you use a normal string literal: once for the normal string literal escape and a second time for the escaping in the regex. > > What does `""\$""` result in in this case? Why? I would have thought `""$""` > > > In a normal Python string literal, if `\` is not followed by an escape sequence, the `\` is preserved. Therefore `""\$""` results in `\` followed by `$`. This behavior is slightly different from the way C/C++ or JavaScript handle a similar situation: the `\` is considered an escape for the next character, and only the next character remains. So `""\$""` in those languages will be interpreted as `$`. **Footnote** [1]: There is a small defect in the design of raw strings in Python, though: [Why can't Python's raw string literals end with a single backslash?](https://stackoverflow.com/questions/647769/why-cant-pythons-raw-string-literals-end-with-a-single-backslash) " "Copy data from numpy to ctypes quickly I have a ctypes object shared between a C++ and a Python process. The Python process takes the input values from that object, runs them through Tensorflow, and I'm left with a numpy array as output. As these arrays are pretty big, I'm wondering if there is a better approach to copying the data from the output of Tensorflow back to the shared ctypes object so the C++ process can act on them. (speed is the issue, yes.) Right now I'm copying each value one by one: ``` output = np.array([12, 13, 11, 10]) # in reality this is fairly large (the Tensorflow result) for i, value in enumerate(output): data.pressure[i] = ctypes.c_double(value) ``` where data is the ctypes object shared in memory. (structured after [this example](https://github.com/teeks99/py_boost_shmem)) On the other hand, copying data from the ctypes object into numpy is easy; I'm wondering if there is something that does the opposite (from numpy to the ctypes array). Here is that easy code: ``` # Creating a numpy array from the ctypes array input = np.reshape(data.velocity, (1, 29791)) # Tensorflow returns a numpy array output = sess.run(final, feed_dict={Input: input}) # Now how do I get the data from output into data.pressure? ``` Edit: For reference, this is how the ctypes looks (Python side) ``` class TransferData(ctypes.Structure): _fields_ = [ ('statusReady', ctypes.c_bool), # ...
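# 31 * 31 * 31 = 29791 doubles per field, which matches the (1, 29791) reshape above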
('velocity', ctypes.c_double * (31 * 31 * 31)), ('pressure', ctypes.c_double * (31 * 31 * 31)) ] ``` ","This shows how to copy a whole data block from numpy array to ctypes array: ``` import numpy as np import ctypes # Preparing example class TransferData(ctypes.Structure): _fields_ = [ ('statusReady', ctypes.c_bool), ('velocity', ctypes.c_double * 4), ('pressure', ctypes.c_double * 4) ] data = TransferData() output = np.array([12., 13., 11., 10.]) # Actual code # Both values should be equal but there could be problems with alignment settings assert ctypes.sizeof(data.pressure) == output.nbytes ctypes.memmove(ctypes.byref(data.pressure), output.ctypes.data, output.nbytes) print(list(data.pressure)) ``` " "I've been trying to reconfigure my SSH key I keep getting the following error: ``` git commit -sam ""blah blah blah"" fatal: either user.signingkey or gpg.ssh.defaultKeyCommand needs to be configured ``` I just updated Git completely trying to figure this out, so it's completely up to date, and then I successfully added all of my usual configurations to the new version now running in Git Bash. I've repeatedly gone to Github's SSH key generator and followed the directions one by one. AND AFTER ALL THAT, I'M STILL GETTING THIS ERROR. I am unable to make commits ANYWHERE on my local machine (Git Bash, Terminal, GitKraken; I even broke down and tried Github Desktop), much less signed, annotated commits, as is my habit. I removed the expired keys from my Github account. I DON'T KNOW WHAT ELSE TO DO. [![I'm about to lose my mind.](https://i.stack.imgur.com/10n5v.png)](https://i.stack.imgur.com/10n5v.png) ","You're mixing together the concepts of ssh keys (which are somewhat generalized and apply across all of ssh, and which you can use to authenticate yourself to GitHub) and Git's *signed commits* (and signed annotated tags). These are different, although they use related mechanisms. In particular, to use an ssh key to *sign a Git commit*, you must: - configure your ssh locally so that it can sign commits (this may or may not already be supported depending on your OpenSSH version); - tell Git *how* to use your ssh to sign commits (this depends on your ssh version); and - tell Git *to* use ssh to sign commits. None of these three steps use or require anything on *GitHub*. But this is what is failing here: you have not set up `user.signingkey` or `gpg.ssh.defaultKeyCommand` in Git, which is where that second bullet point comes in. (You're already doing the third one, but Git doesn't know *how* to run your ssh yet!) You'll need to figure out how to get Git to invoke the right commands on your system (which will depend somewhat on your OS and OpenSSH version). Once you have such signed commits, however, these digital signatures are useful only to you, not to anyone else, *unless* you have spread the key(s) involved in these digital signatures. This is where you get GitHub involved. See [How do I sign git commits using my existing ssh key](https://stackoverflow.com/q/72844616/1256452) and particularly VonC's answer [here](https://stackoverflow.com/a/45120525/1256452) to [Why does git sign with GPG keys rather than using SSH keys?](https://stackoverflow.com/q/45119932/1256452), to see how to do the first part. See both VonC's and other answers, particularly [Jakuje's here](https://stackoverflow.com/a/45126508/1256452), for some cautions involving using ssh keys here. I don't know any of the GitHub side details here, but VonC's answers have more. 
In general, it's a lot easier to use GPG for signing commits and/or annotated tags. " "How to know/change current directory in Python shell? I am using Python 3.2 on Windows 7. When I open the Python shell, how can I know what the current directory is? How can I change it to another directory (where my modules are)? ","You can use the `os` module. ``` >>> import os >>> os.getcwd() '/home/user' >>> os.chdir(""/tmp/"") >>> os.getcwd() '/tmp' ``` But if it's about finding other modules: You can set an environment variable called `PYTHONPATH`; under Linux it would be like ``` export PYTHONPATH=/path/to/my/library:$PYTHONPATH ``` Then the interpreter also searches this place for `import`ed modules. I guess the name would be the same under Windows, but I don't know how to change it. **edit** Under Windows: ``` set PYTHONPATH=%PYTHONPATH%;C:\My_python_lib ``` (taken from ) **edit 2** ... and even better: use `virtualenv` and `virtualenv_wrapper`; this will allow you to create a development environment where you can add module paths as you like (`add2virtualenv`) without polluting your installation or ""normal"" working environment. " "How to copy DLL files into the same folder as the executable using CMake? We use CMake for generating the Visual Studio files of our sources in our SVN. Now my tool requires some DLL files to be in the same folder as the executable. The DLL files are in a folder alongside the source. How can I change my `CMakeLists.txt` such that the generated Visual Studio project will either already have the particular DLL files in the release/debug folders or will copy them upon compilation? ","I'd use `add_custom_command` to achieve this along with `cmake -E copy_if_different...`. For full info run ``` cmake --help-command add_custom_command cmake -E ``` So in your case, if you have the following directory structure: ``` /CMakeLists.txt /src /libs/test.dll ``` and your CMake target to which the command applies is `MyTest`, then you could add the following to your CMakeLists.txt: ``` add_custom_command(TARGET MyTest POST_BUILD # Adds a post-build event to MyTest COMMAND ${CMAKE_COMMAND} -E copy_if_different # which executes ""cmake -E copy_if_different..."" ""${PROJECT_SOURCE_DIR}/libs/test.dll"" # <--this is in-file $<TARGET_FILE_DIR:MyTest>) # <--this is out-file path ``` If you just want the entire contents of the `/libs/` directory copied, use `cmake -E copy_directory`: ``` add_custom_command(TARGET MyTest POST_BUILD COMMAND ${CMAKE_COMMAND} -E copy_directory ""${PROJECT_SOURCE_DIR}/libs"" $<TARGET_FILE_DIR:MyTest>) ``` If you need to copy different dlls depending upon the configuration (e.g. Release, Debug) then you could have these in subdirectories named with the corresponding configuration: `/libs/Release`, and `/libs/Debug`. You then need to inject the configuration type into the path to the dll in the `add_custom_command` call, like this: ``` add_custom_command(TARGET MyTest POST_BUILD COMMAND ${CMAKE_COMMAND} -E copy_directory ""${PROJECT_SOURCE_DIR}/libs/$<CONFIGURATION>"" $<TARGET_FILE_DIR:MyTest>) ``` " "is there any way to get samples under each leaf of a decision tree? I have trained a decision tree using a dataset. Now I want to see which samples fall under which leaf of the tree. From here I want the red circled samples. [![enter image description here](https://i.stack.imgur.com/DYhwf.png)](https://i.stack.imgur.com/DYhwf.png) I am using Python's sklearn implementation of decision trees. 
","If you want only the leaf for each sample you can just use ``` clf.apply(iris.data) ``` > > array([ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, > 1, > 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, > 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 5, > 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, > 5, 5, 14, 5, 5, 5, 5, 5, 5, 10, 5, 5, 5, 5, 5, 10, 5, > 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 16, 16, > 16, 16, 16, 16, 6, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, > 8, 16, 16, 16, 16, 16, 16, 15, 16, 16, 11, 16, 16, 16, 8, 8, 16, > 16, 16, 15, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16]) > > > If you want to get all samples for each node you could calculate all the decision paths with ``` dec_paths = clf.decision_path(iris.data) ``` Then loop over the decision paths, convert them to arrays with `toarray()` and check whether they belong to a node or not. Everything is stored in a `defaultdict` where the key is the node number and the values are the sample number. ``` for d, dec in enumerate(dec_paths): for i in range(clf.tree_.node_count): if dec.toarray()[0][i] == 1: samples[i].append(d) ``` **Complete code** ``` import sklearn.datasets import sklearn.tree import collections clf = sklearn.tree.DecisionTreeClassifier(random_state=42) iris = sklearn.datasets.load_iris() clf = clf.fit(iris.data, iris.target) samples = collections.defaultdict(list) dec_paths = clf.decision_path(iris.data) for d, dec in enumerate(dec_paths): for i in range(clf.tree_.node_count): if dec.toarray()[0][i] == 1: samples[i].append(d) ``` **Output** ``` print(samples[13]) ``` > > [70, 126, 138] > > > " "incomprehensible performance improvement with openmp even when num\_threads(1) The following lines of code ``` int nrows = 4096; int ncols = 4096; size_t numel = nrows * ncols; unsigned char *buff = (unsigned char *) malloc( numel ); unsigned char *pbuff = buff; #pragma omp parallel for schedule(static), firstprivate(pbuff, nrows, ncols), num_threads(1) for (int i=0; i #include #include #include int nrows = 4096; int ncols = 4096; size_t numel = nrows * ncols; unsigned char * buff; void func() { unsigned char *pbuff = buff; #pragma omp parallel for schedule(static), firstprivate(pbuff, nrows, ncols), num_threads(1) for (int i=0; i(end-begin).count(); std::cout << ""func average running time: "" << usec/100 << "" usecs"" << std::endl; return 0; } ``` ","The answer, as it turns out, is that `firstprivate(pbuff, nrows, ncols)` effectively declares `pbuff`, `nrows` and `ncols` as local variables within the scope of the for loop. That in turn means the compiler can see `nrows` and `ncols` as constants - it cannot make the same assumption about global variables! Consequently, with `-fopenmp`, you end up with the huge speedup because *you aren't accessing a global variable each iteration*. (Plus, with a constant `ncols` value, the compiler gets to do a bit of loop unrolling). By changing ``` int nrows = 4096; int ncols = 4096; ``` to ``` const int nrows = 4096; const int ncols = 4096; ``` **or** by changing ``` for (int i=0; i > FastCGI provides a way to improve the > performance of the thousands of Perl > applications that have been written > for the Web. -[Source](http://www.fastcgi.com/drupal/) > > > and how does it do that? ","Mark R. 
Brown's [whitepaper on the subject](http://www.fastcgi.com/drupal/node/6?q=node/16#S3) claims that one of the primary benefits of FastCGI is that different requests can share a single cache, making caching practical: > > Today's most widely deployed Web server APIs are based on a pool-of-processes server model. The Web server consists of a parent process and a pool of child processes. Processes do not share memory. An incoming request is assigned to an idle child at random. The child runs the request to completion before accepting a new request. A typical server has 32 child processes, a large server has 100 or 200. > > > In-memory caching works very poorly in this server model because processes do not share memory and incoming requests are assigned to processes at random. For instance, to keep a frequently-used file available in memory the server must keep a file copy per child, which wastes memory. When the file is modified all the children need to be notified, which is complex (the APIs don't provide a way to do it). > > > FastCGI is designed to allow effective in-memory caching. Requests are routed from any child process to a FastCGI application server. The FastCGI application process maintains an in-memory cache. > > > " "Set href in attribute directive in Angular I'm trying to set up a Bootstrap tab strip with Angular 2. I have the tabs rendering in an `ngFor` but I'm getting template errors when I try to put the `#` in front of the `href` expression. So this template compiles but isn't what I want: ``` ``` What I want to do is `[attr.href]=""#aType.Name""` but that blows up. What is the correct syntax to prepend the `#` in front of the expression in the attribute directive? ","There is no need to prefix with `#`. In this code ``` 