While installing an npm package from GitHub, I get an error with the message "is not valid repository name"
|github|npm|
Firebase Authentication Fails Only on release Version of Android App
I've been working with Python for some years, and when it comes to unit testing and mocking I used to mock the collaborators of the unit under test where they are declared. For example, if I want to mock a function called **my_function** declared in the **my_module.py** module and used in my unit under test, I would write ``` @mock.patch('my_module.my_function') def test_toast(mock_my_function): ... ``` and it works roughly all the time, but sometimes it does not work until I mock it where it is used. For example, supposing my unit under test is declared in **foo_module.py** and it imports **my_function** from **my_module.py**, then in order to mock **my_function** I would write ``` @mock.patch('foo_module.my_function') def test_toast(mock_my_function): ... ``` My first question is: what are the differences between the two methods of mocking in Python, and when should we use each one? As far as I have observed, they both work, but sometimes mocking where it's declared does not work. Can you please help me understand this behavior in detail?
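The situation described can be reproduced in a few lines. This is a minimal sketch using the module names from the question; the two modules are built in memory here purely for illustration, so `unit_under_test` is a hypothetical stand-in for the real unit under test:

```python
import sys
import types
from unittest import mock

# Build my_module with my_function (names taken from the question).
my_module = types.ModuleType("my_module")
my_module.my_function = lambda: "real"
sys.modules["my_module"] = my_module

# foo_module does `from my_module import my_function`, which copies the
# function reference into foo_module's own namespace at import time.
foo_module = types.ModuleType("foo_module")
exec(
    "from my_module import my_function\n"
    "def unit_under_test():\n"
    "    return my_function()\n",
    foo_module.__dict__,
)
sys.modules["foo_module"] = foo_module

# Patching where it is *declared* does not affect the copied reference:
with mock.patch("my_module.my_function", return_value="mocked"):
    print(foo_module.unit_under_test())  # prints "real"

# Patching where it is *used* replaces the name the code actually looks up:
with mock.patch("foo_module.my_function", return_value="mocked"):
    print(foo_module.unit_under_test())  # prints "mocked"
```

The key point is that `from my_module import my_function` binds a second, independent name in `foo_module`, and `mock.patch` replaces an attribute on one specific module object; patching is only effective when it targets the namespace the unit under test actually reads from.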
Where to mock in Python
|python|unit-testing|mocking|
I upgraded build-info-extractor-gradle to the [latest 4.x release](https://mvnrepository.com/artifact/org.jfrog.buildinfo/build-info-extractor-gradle) and that fixed the issue. ``` buildscript { dependencies { classpath "org.jfrog.buildinfo:build-info-extractor-gradle:4.33.13" } } ```
I hope you are well. I have a question related to PostgreSQL. I created an application with a login. The password is hashed with bcrypt, and so far everything is fine. But today I asked myself: what if the database is hacked and the hash in the password column is changed? That's why I would like to know whether PostgreSQL has some type of mechanism that completely blocks the administrator from editing passwords, allowing password changes only for users and only through the website itself.
How to block editing per column in PostgreSQL?
|sql|database|postgresql|
It looks like you're using Akka Classic's implementation of the ask pattern, `akka.pattern.Patterns.ask`. This implementation is logically equivalent (there are some slight optimizations under the hood) to spawning an actor which schedules a timeout message to itself and then sending a message to the askee purporting to be from the spawned actor. The spawned actor completes the future successfully if it gets a response before the timeout message or fails it with a timeout otherwise. In ```java Patterns.ask(Adapter.toClassic(typedActor), new TypedActor.Message(Adapter.toTyped(self())), timeout)); ``` you're setting the classic actor which initiated the ask as the `replyTo` on the message, so the response from the typed actor goes to that classic actor, not to the actor which `ask` spawned, so that spawned actor will report that the ask timed out. The typed behavior doesn't see the "return address" on the classic envelope. The easiest fix is probably to use the Akka Typed ask pattern, `akka.actor.typed.javadsl.AskPattern.ask`, which would look something like: ```java AskPattern.ask( typedActor, replyTo -> new TypedActor.Message(replyTo), timeout, Adapter.toTyped(getContext().getSystem()).scheduler() ) ``` The `replyTo -> new TypedActor.Message(replyTo)` injects the "reply-to" address into the request. Note that while `Patterns.ask` returns a Scala future, this returns a Java `CompletionStage`: you'll most likely want to pipe the returned `CompletionStage` to the classic actor with ``` Patterns.pipe(csFromAsk, getContext().dispatcher()).to(self()) ``` Note that blocking a thread while waiting for the `Future` from `Patterns.ask` or `CompletionStage` from `AskPattern.ask` is not a good idea and can create deadlock.
WordPress won't allow you to have duplicate term slugs in the same taxonomy, they need to be unique in the database. However, we can extend WordPress core to make the url structure work while maintaining the unique term slugs. Using your example: `cars/ford/parts` and `cars/toyota/parts` urls where `ford_parts` and `toyota_parts` are the unique child terms in the `cars` taxonomy and their parent term slugs are `ford` and `toyota`, respectively. First, the rewrite rule: ```php /** * Add custom rewrite rules for parent/child term urls. * * Adds a rewrite rule that will match terms * that have their slug prefixed with the parent term * slug, separated by an underscore, but their url * is just the child term slug. */ function prefix_custom_hierarchical_taxonomy_rewrite_rules() { $tax_object = get_taxonomy( 'cars' ); if ( $tax_object ) { $tax_slug = $tax_object->rewrite['slug']; add_rewrite_rule( $tax_object->rewrite['slug'] . '/([^/]+)/([^/]+)/?', 'index.php?' . $tax_object->query_var . '=$matches[1]_$matches[2]', 'top' ); } } add_action( 'init', 'prefix_custom_hierarchical_taxonomy_rewrite_rules', 100 ); ``` Then filter the `term_link` such that you remove the "parent_" part of the slug in the child term's link: ```php /** * Filter the term link to handle hierarchical taxonomy permalinks. * * This will filter the term link to remove the parent term slug * from the child term url, so that the url is just the child term slug, * where the child term name is "parent_child" and the parent term name is "parent". * * @param string $link The term link * @param WP_Term $term The term object * @param string $taxonomy The taxonomy name * @return string */ function prefix_custom_term_link( $link, $term, $taxonomy ) { if ( $taxonomy !== 'cars' ) { return $link; } // Get the parent term $parent_term = get_term( $term->parent, $taxonomy ); if ( $parent_term && ! 
is_wp_error( $parent_term ) ) { // Remove the "parent_" prefix from the child term slug // to generate a url that looks like /parent/child/ $link = str_replace( '/' . $term->slug, '/' . str_replace ( $parent_term->slug . '_', '', $term->slug ), $link ); } return $link; } add_filter( 'term_link', 'prefix_custom_term_link', 10, 3 ); ``` You may also want to filter the `parse_query` to check for any child terms that don't follow this structure. This avoids having to set _all_ your child category slugs to the `parent_child` pattern, you can just apply it to those that have a "duplicated" url slug. **Note:** this next example is looking for the `category_name` query variable, which is used by the `category` taxonomy. For your custom taxonomy that will likely be different - see [register_taxonomy][1] ```php /** * Parse the query to handle hierarchical taxonomy permalinks. * * This filter the query to check for terms that are child terms * yet do not have the parent term in the slug, thus the parent/child * structure should literally look for a "child" slug, not "parent_child" * * @param WP_Query $query The WP_Query instance (passed by reference) * @return void */ function prefix_custom_parse_query($query) { if ( isset( $query->query_vars['category_name'] ) ) { $slug = $query->query_vars['category_name']; if ( strpos( $slug, '_' ) !== false ) { list( $parent, $child ) = explode( '_', $slug, 2 ); if ( ! term_exists( $slug, 'category' ) && term_exists( $child, 'category' ) ) { $query->query_vars['category_name'] = $child; } } } } add_action( 'parse_query', 'prefix_custom_parse_query' ); ``` I've put this into a gist [here][2] but using the `category` taxonomy instead of a custom `cars` taxonomy. Notes: - This only works with hierarchical taxonomies - Update the `prefix_` naming to something unique to your theme or plugin. - This example uses a custom term slug for your child terms that follows a pattern like `{parent-slug}_{child-slug}`. 
You could change that to another separating character, but be careful not to use a dash `-`, as that is the default separator used by core when slugs are generated from names (spaces turn to dashes). - This requires [the "pretty" permalinks][3] where `/%postname%/` is set in Settings > Permalinks. [1]: https://developer.wordpress.org/reference/functions/register_taxonomy/ [2]: https://gist.github.com/joshuadavidnelson/282614233c25071669f2502d77a7a8f9 [3]: https://wordpress.org/documentation/article/customize-permalinks/#pretty-permalinks
>1. Put it through C# Function App that by the way takes it and is supposed to output the SAME FILE (so no Excel manipulation)

If you are sending binary that was converted from base64 data, then you need to provide something like this: { "$content-type":@{body('Get_file_content_using_path')['$content-type']}, "$content":@{outputs('Compose')['$content']} } ![enter image description here](https://i.imgur.com/87z9KJG.png) then: ![enter image description here](https://i.imgur.com/7CS9BId.png) Output: ![enter image description here](https://i.imgur.com/bE0S66S.png) If the binary is correctly formatted, you will get output like I did. I also agree with @Skin that Logic Apps is sufficient for many data operations.
I have a **for loop** in ASP.NET Core like this: `<button type="submit" Form="FormActive" name="id" value="@item.I"></button>` <!--It will create something like this:--> <button type="submit" Form="FormActive" name="id" value="1"></button> <button type="submit" Form="FormActive" name="id" value="2"></button> <button type="submit" Form="FormActive" name="id" value="3"></button> ... ... ... ... 4 ... ... ... ... 99 <!--And this is my single form:--> <form id="FormActive" action="/action_page"> </form> What's the problem? Why does this form send an **HTTP POST** request **without the "id"** name and value? (In fact, it sends a POST request with an **empty body**.) What should I do to **send the id**? Edit: I can't **remove the form**, and I can't use **99+ forms** either.
I am creating an infinite marquee slider from right to left. When the user hovers over it, it will stop. Upon hovering out, it will start again. It's working but not smoothly. it's a little bit of jumping. Would you help me out with this? Please find the below code <!-- begin snippet: js hide: false console: true babel: false --> <!-- language: lang-js --> //marque slide var animationInterval; $(document).ready(function() { $('.marqueslide').mouseenter(function() { stopSliderAnimation(); console.log("mouse entered"); }).mouseleave(function() { startSliderAnimation(); console.log("mouse out"); }); startSliderAnimation(); // Start animation initially }); function startSliderAnimation() { animationInterval = setInterval(function() { $('.marqueslide .row, .marqueslide .marquestarthere').animate({ marginLeft: '-=371px' // Adjust value according to the width of the cards and margin }, 1000, 'linear', function() { $(this).css('marginLeft', '0').find('.marqueme:last').after($(this).find('.marqueme:first')); }); }, 1000); // Adjust the interval as needed } function stopSliderAnimation() { clearInterval(animationInterval); } <!-- language: lang-css --> .testimonial-cards { list-style: none; display: flex; gap: 56px 31px; margin: 0 auto; width: max-content; flex-wrap: nowrap; } .testimonial-cards li { width: 100%; max-width: 500px; min-height: 400px; } .marqueslide .marqueme { flex: 0 0 auto; margin-right: 20px; } .testimonial-cards .card-item { display: flex; flex-direction: column; justify-content: space-between; } .testimonial-cards .card-item { height: 100%; background: #FFFFFF; border-radius: 12px; padding: 24px; border: 1px solid #000; } <!-- language: lang-html --> <div class="marqueslide"> <ul class="testimonial-cards marquestarthere"> <li class="marqueme"> <div class="card-item"> <p class="comment">Lorem ipsum dolor sit amet, consectetur adipiscing elit. Cras metus turpis, imperdiet ac dui at, elementum viverra metus. 
Vestibulum at interdum neque, sodales dapibus massa. Phasellus non faucibus nibh. Aliquam in varius mauris. Proin sodales nulla quam, eget pellentesque lorem imperdiet quis. Sed placerat nisi lectus, ut tempus ante euismod eget. Phasellus rhoncus, lacus et varius dapibus, purus enim cursus tortor, et mattis velit lectus nec erat. Duis ultrices posuere eros. Praesent ut risus eros. Proin at lacus feugiat mauris ultrices feugiat. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aliquam ac sapien ac mi lacinia elementum. Nunc posuere odio condimentum mi fermentum, ac dapibus diam pharetra. Phasellus sed justo rutrum, bibendum ante porttitor, tempor orci. Morbi porttitor mi sed mi ullamcorper, sit amet ultrices magna volutpat. </p> </div> </li> <li class="marqueme"> <div class="card-item"> <p class="comment">Lorem ipsum dolor sit amet, consectetur adipiscing elit. Cras metus turpis, imperdiet ac dui at, elementum viverra metus. Vestibulum at interdum neque, sodales dapibus massa. Phasellus non faucibus nibh. Aliquam in varius mauris. Proin sodales nulla quam, eget pellentesque lorem imperdiet quis. Sed placerat nisi lectus, ut tempus ante euismod eget. Phasellus rhoncus, lacus et varius dapibus, purus enim cursus tortor, et mattis velit lectus nec erat. Duis ultrices posuere eros. Praesent ut risus eros. Proin at lacus feugiat mauris ultrices feugiat. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aliquam ac sapien ac mi lacinia elementum. Nunc posuere odio condimentum mi fermentum, ac dapibus diam pharetra. Phasellus sed justo rutrum, bibendum ante porttitor, tempor orci. Morbi porttitor mi sed mi ullamcorper, sit amet ultrices magna volutpat. </p> </div> </li> <li class="marqueme"> <div class="card-item"> <p class="comment">Lorem ipsum dolor sit amet, consectetur adipiscing elit. Cras metus turpis, imperdiet ac dui at, elementum viverra metus. Vestibulum at interdum neque, sodales dapibus massa. Phasellus non faucibus nibh. Aliquam in varius mauris. 
Proin sodales nulla quam, eget pellentesque lorem imperdiet quis. Sed placerat nisi lectus, ut tempus ante euismod eget. Phasellus rhoncus, lacus et varius dapibus, purus enim cursus tortor, et mattis velit lectus nec erat. Duis ultrices posuere eros. Praesent ut risus eros. Proin at lacus feugiat mauris ultrices feugiat. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aliquam ac sapien ac mi lacinia elementum. Nunc posuere odio condimentum mi fermentum, ac dapibus diam pharetra. Phasellus sed justo rutrum, bibendum ante porttitor, tempor orci. Morbi porttitor mi sed mi ullamcorper, sit amet ultrices magna volutpat. </p> </div> </li> <li class="marqueme"> <div class="card-item"> <p class="comment">Lorem ipsum dolor sit amet, consectetur adipiscing elit. Cras metus turpis, imperdiet ac dui at, elementum viverra metus. Vestibulum at interdum neque, sodales dapibus massa. Phasellus non faucibus nibh. Aliquam in varius mauris. Proin sodales nulla quam, eget pellentesque lorem imperdiet quis. Sed placerat nisi lectus, ut tempus ante euismod eget. Phasellus rhoncus, lacus et varius dapibus, purus enim cursus tortor, et mattis velit lectus nec erat. Duis ultrices posuere eros. Praesent ut risus eros. Proin at lacus feugiat mauris ultrices feugiat. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aliquam ac sapien ac mi lacinia elementum. Nunc posuere odio condimentum mi fermentum, ac dapibus diam pharetra. Phasellus sed justo rutrum, bibendum ante porttitor, tempor orci. Morbi porttitor mi sed mi ullamcorper, sit amet ultrices magna volutpat. </p> </div> </li> <li class="marqueme"> <div class="card-item"> <p class="comment">Lorem ipsum dolor sit amet, consectetur adipiscing elit. Cras metus turpis, imperdiet ac dui at, elementum viverra metus. Vestibulum at interdum neque, sodales dapibus massa. Phasellus non faucibus nibh. Aliquam in varius mauris. Proin sodales nulla quam, eget pellentesque lorem imperdiet quis. 
Sed placerat nisi lectus, ut tempus ante euismod eget. Phasellus rhoncus, lacus et varius dapibus, purus enim cursus tortor, et mattis velit lectus nec erat. Duis ultrices posuere eros. Praesent ut risus eros. Proin at lacus feugiat mauris ultrices feugiat. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aliquam ac sapien ac mi lacinia elementum. Nunc posuere odio condimentum mi fermentum, ac dapibus diam pharetra. Phasellus sed justo rutrum, bibendum ante porttitor, tempor orci. Morbi porttitor mi sed mi ullamcorper, sit amet ultrices magna volutpat. </p> </div> </li> <li class="marqueme"> <div class="card-item"> <p class="comment">Lorem ipsum dolor sit amet, consectetur adipiscing elit. Cras metus turpis, imperdiet ac dui at, elementum viverra metus. Vestibulum at interdum neque, sodales dapibus massa. Phasellus non faucibus nibh. Aliquam in varius mauris. Proin sodales nulla quam, eget pellentesque lorem imperdiet quis. Sed placerat nisi lectus, ut tempus ante euismod eget. Phasellus rhoncus, lacus et varius dapibus, purus enim cursus tortor, et mattis velit lectus nec erat. Duis ultrices posuere eros. Praesent ut risus eros. Proin at lacus feugiat mauris ultrices feugiat. Lorem ipsum dolor sit amet, consectetur adipiscing elit. Aliquam ac sapien ac mi lacinia elementum. Nunc posuere odio condimentum mi fermentum, ac dapibus diam pharetra. Phasellus sed justo rutrum, bibendum ante porttitor, tempor orci. Morbi porttitor mi sed mi ullamcorper, sit amet ultrices magna volutpat. </p> </div> </li> </ul> </div> <script src="https://code.jquery.com/jquery-3.7.1.min.js" integrity="sha256-/JqT3SQfawRcv/BIHPThkBvs0OEvtFFmqPF/lYI/Cxo=" crossorigin="anonymous"></script> <!-- end snippet --> [1]: https://jsfiddle.net/b3o62f09/
Infinite marquee slider jumps a little
|html|jquery|css|slider|css-animations|
I'm trying to run C code on Windows, but I have a low-end PC and I don't think I *should* install Visual Studio. How do I install only the C/C++ compiler that comes with it, without installing Visual Studio itself?
|c|visual-studio|visual-c++|
You can use parameters ``` @pytest.mark.parametrize("x, y", (("aa", "aab"), ("xx", "aaabc"), ("aab", "ababa"))) def test_canConstruct(x, y): sol = Solution() assert sol.canConstruct(x, y) is True ``` and get the results individually ``` x.py::test_canConstruct[aa-aab] PASSED [ 33%] x.py::test_canConstruct[xx-aaabc] PASSED [ 66%] x.py::test_canConstruct[aab-ababa] PASSED [100%] ```
The most trivial way to download a binary file from an FTP server using VB.NET is using [`WebClient.DownloadFile`](https://learn.microsoft.com/en-us/dotnet/api/system.net.webclient.downloadfile): Dim client As WebClient = New WebClient() client.Credentials = New NetworkCredential("username", "password") client.DownloadFile( "ftp://ftp.example.com/remote/path/file.zip", "C:\local\path\file.zip") ---- If you need a greater control, that `WebClient` does not offer (like [TLS/SSL encryption](https://stackoverflow.com/q/4331665/850848#56888555), ascii/text transfer mode, [resuming transfers](https://stackoverflow.com/q/6331133/850848#66154869), etc), use [`FtpWebRequest`](https://learn.microsoft.com/en-us/dotnet/api/system.net.ftpwebrequest). Easy way is to just copy an FTP response stream to `FileStream` using [`Stream.CopyTo`](https://learn.microsoft.com/en-us/dotnet/api/system.io.stream.copyto): Dim request As FtpWebRequest = WebRequest.Create("ftp://ftp.example.com/remote/path/file.zip") request.Credentials = New NetworkCredential("username", "password") request.Method = WebRequestMethods.Ftp.DownloadFile Using ftpStream As Stream = request.GetResponse().GetResponseStream(), fileStream As Stream = File.Create("C:\local\path\file.zip") ftpStream.CopyTo(fileStream) End Using --- If you need to monitor a download progress, you have to copy the contents by chunks yourself: Dim request As FtpWebRequest = WebRequest.Create("ftp://ftp.example.com/remote/path/file.zip") request.Credentials = New NetworkCredential("username", "password") request.Method = WebRequestMethods.Ftp.DownloadFile Using ftpStream As Stream = request.GetResponse().GetResponseStream(), fileStream As Stream = File.Create("C:\local\path\file.zip") Dim buffer As Byte() = New Byte(10240 - 1) {} Dim read As Integer Do read = ftpStream.Read(buffer, 0, buffer.Length) If read > 0 Then fileStream.Write(buffer, 0, read) Console.WriteLine("Downloaded {0} bytes", fileStream.Position) End If Loop While read > 0 End 
Using For GUI progress (WinForms `ProgressBar`), see (C#): https://stackoverflow.com/q/45269263/850848 --- If you want to download all files from a remote folder, see https://stackoverflow.com/q/22626766/850848
I have a webpage with two <picture> elements that each have <a> in front of them. The images are inline, next to each other. Whitespace is between these two elements (not sure why, but I'm fine with the small gap between them) and the whitespace itself has a little underscore or bottom-border type of characteristic to it. I don't want this little underscore to be visible. Page viewable here: https://nohappynonsense.net/writtte If I remove the <a> from each picture the little line goes away. But I want these images to have a link, so I'm not sure what else to try. Also - sorry if my code looks like some horrible mess, I'm not a programmer or coder or anything.
I have written code to find prime numbers using the Sieve of Eratosthenes algorithm, but it only works for some numbers. For some inputs it seems to run infinitely after entering the upper value, and for other numbers it works perfectly. As I only recently started studying C in depth, I couldn't find what is going wrong, and I'd be grateful for any help. Here's the code:
```
#include <stdio.h>

int main()
{
    int n;
    printf("enter number: ");
    scanf("%d",&n);
    int arr[n],pr=2,i=0;
    for(i=0;pr<=n;i++)
    {
        arr[i]=pr;
        pr++;
    }
    int j,k;
    while(arr[k]<=n)
    {
        for(j=2;j<n;j++)
        {
            for(k=0;k<n;k++)
            {
                if(arr[k]%j==0 && arr[k]>j)
                    arr[k]=0;
            }
        }
    }
    for(i=0;arr[i]<=n;i++)
    {
        if(arr[i]!=0)
            printf(" %d",arr[i]);
    }
    printf("\n");
    return 0;
}
```
I found this comment... https://github.com/lovell/sharp/issues/3295#issuecomment-1186927843 and tried this... nitro: { hooks: { 'dev:reload': () => require('onnxruntime-node') } } Now I get a different error but I'm not sure if it's related or just another problem that has now been exposed... [unhandledRejection] connect ECONNREFUSED 127.0.0.1:53191 at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1602:16)
Maybe you can try this. I just tried it and it is working now! ``` conda update conda conda update conda-build conda install -n base conda-libmamba-solver conda config --set solver libmamba ``` This issue helped me a lot: https://github.com/conda/conda/issues/11919
Use a dict to store the key and its expiration time. This should work with your code.

```
from datetime import datetime, timedelta

key_expiration_map = {}

def generate_key():
    key = str("4255")
    expiration_time = datetime.now() + timedelta(days=0, minutes=1)  # Key expires in 1 minute
    return key, expiration_time

def is_expired(expiration_time):
    return datetime.now() > expiration_time

def validate_key(user_input):
    if user_input in key_expiration_map:
        expiration_time = key_expiration_map[user_input]
        if is_expired(expiration_time):
            print("Key Expired")
            return False
        else:
            print("Successfully Loaded")
            print("Expiration:", expiration_time)
            return True
    else:
        print("Invalid Key")
        return False

correct_key, expiration_time = generate_key()
key_expiration_map[correct_key] = expiration_time
print("# Generated key:", correct_key)

while True:
    user_input = input("Enter your license Key:")
    if validate_key(user_input):
        break

input("Press any Key to close")
```
I would like to point out that I have been dealing with this issue for a month. I'm open to any suggestions. I have form pages that receive input data from the user. These pages contain a component called base-crud-page. This base-crud-page component uses template-driven forms, and the code inside looks like this (the reason I cannot share the entire code is that it belongs to the company I have just started working for): <form #editForm="ngForm" validate> <ng-container *ngIf="customized"> <ng-content select="[customForm]"></ng-content> </ng-container> <ng-container *ngIf="!customized"> . . Here is the code of the base form elements . . </ng-container> </form> The "customized" property on this page is false by default, and form validations work perfectly on my pages that use base-crud-page directly. For example, for glr-vezne-pos-page there is a usage like this: selector: 'glr-vezne-pos-page', template: ` <ng-container> <base-crud-page #baseCrudPage /> </ng-container>` However, when I write a page with the customized property set to true, where I specify the form elements myself, the form controls never work. An example of its usage: <ng-container> <base-crud-page [customized]="true"> <div customForm> . . Here is the code of the form elements of the "custom" form . . </div> </base-crud-page> </ng-container> *Note: the div labeled customForm in this component must be placed in the `<ng-content select="[customForm]"></ng-content>` field in the base-crud-page.* What I suspect is this: on a page created as in the first example, I can print the form to the console in ngAfterViewInit. [![][1]][1] On a page created as a custom form, as in the second example, the form is not printed to the console. As you can see, 'Kaydet' (that is, the Save button) appears to be active because the controls are not working. [![][2]][2] I am open to all kinds of ideas and need help. [1]: https://i.stack.imgur.com/xSxRi.png [2]: https://i.stack.imgur.com/wTPTC.png
I am terribly sorry if this isn't the right place to ask but what I am hoping is that someone can explain what is going on in plain English. We're CompanyA and we were recently bought over by CompanyB. Our application was called after our Company, so for simplicity's sake let's say that we wrote an App and it was built as CompanyA.exe Our App creates a folder %localappdata%\CompanyA where we put "our stuff" Our app is a WPF Application written for .NET Framework 4.8 using primarily VB.NET. Our application is signed with a .SNK file When we were bought over we changed the Company Name in the Assembly Information to "CompanyA, a CompanyB solution", but we still build the application as CompanyA.exe Now here's the weird thing We started noticing that a folder was created at %localappdata%\CompanyA,_a_CompanyB_solution within that folder would be subfolders named after some of our executables, followed by _Strongname_ followed by a bunch of random looking characters, followed by a version foldername in each of those subfolders would be a user.config file So, for example, we might find %localappdata%\CompanyA,_a_CompanyB_solution\CompanyA.exe_StrongName_*gibberishhere*\1.2.3.4\user.config after we build version 1.2.3.4 of application CompanyA.exe the user.config file would contain very little, and removing any of these files, even the entire %localappdata%\CompanyA,_a_CompanyB_solution folder, makes no difference to how usable the application is. It works fine with or without that folder. My questions: 1. What causes this to be created? We are not aware of having deliberately changed anything in our application to tell it to create these. 2. What purpose do these files serve? 3. If they are as unimportant as they seem to be (at least to our application), is there anything we can change in our application's configuration to stop it doing this?
I've written the following code to compare the theoretical alpha = 0.05 with the empirical one from the built-in t.test in RStudio:
```
set.seed(1)
N=1000
n = 20
k = 500

poblacion <- rnorm(N, 10, 10) #Sample
mu.pob <- mean(poblacion)
sd.pob <- sd(poblacion)

p <- vector(length=k)
for (i in 1:k){
  muestra <- poblacion[sample(1:N, n)]
  p[i] <- t.test(muestra, mu = mu.pob)$p.value
}

a_teo = 0.05
a_emp = length(p[p<a_teo])/k
sprintf("alpha_teo = %.3f <-> alpha_emp = %.3f", a_teo, a_emp)
```
And it works, printing both theoretical and empirical values. Now I want to make it more general, for different values of 'n', so I wrote this:
```
set.seed(1)
N=1000
n = c(2, 10, 15, 20)
k = 500

for (i in n){
  poblacion <- rnorm(N, 10, 10)
  mu.pob <- mean(poblacion)
  sd.pob <- sd(poblacion)
  p <- vector(length=k)
  for (j in 1:k){
    muestra <- poblacion[sample(1:N, length(n))]
    p[j] <- t.test(muestra, mu = mu.pob)$p.value
  }
  a_teo = 0.05
  a_emp = length(p[p<a_teo])/k
  sprintf("alpha_teo = %.3f <-> alpha_emp = %.3f", a_teo, a_emp)
}
```
But I don't get the 'print'. Any ideas about what is wrong?
Check your backend server's port, for example `http://localhost:5000/`, and use the same port number in the API you use to fetch the data in the frontend, like this: `http://localhost:5000/books/${id}`
I have been working on a project for some time. A while ago, I committed a large file by mistake and then pushed it. Now, every time I want to push changes, that large file (about 200 MB) must also be pushed, even though I have deleted the file from the project. Note that I cannot go back to previous commits. Is there a way to stop pushing the large file? Or, without resetting the repository (I need the repository history), can I send only the files that are currently in the project to Git, just like when creating a repository for the first time?
Is there a way to delete a large file that was pushed to Git by mistake?
|git|github|
You should try to use `LauncherActivity` instead of `MainActivity`. The package path of the `LauncherActivity` is also a little bit different. To sum up, the following command will start the Instagram app from the terminal:

    adb -s 077743323A102985 shell am start -n com.instagram.android/com.instagram.mainactivity.LauncherActivity
I have the following command:

    adb -s 077743323A102985 shell am start -n com.instagram.android/.MainActivity

I am trying to start the Instagram app via the command line, but why does it give me this error?

    Starting: Intent { cmp=com.instagram.android/.MainActivity }
    Error type 3
    Error: Activity class {com.instagram.android/com.instagram.android.MainActivity} does not exist.

The app is installed and the package name is com.instagram.android.
I've written the following code to compare the theoretical alpha = 0.05 with the empirical one from the built-in t.test in RStudio:
```
set.seed(1)
N <- 1000
n <- 20
k <- 500

poblacion <- rnorm(N, 10, 10) #Sample
mu.pob <- mean(poblacion)
sd.pob <- sd(poblacion)

p <- vector(length=k)
for (i in 1:k) {
  muestra <- poblacion[sample(1:N, n)]
  p[i] <- t.test(muestra, mu=mu.pob)$p.value
}

a_teo <- 0.05
a_emp <- length(p[p < a_teo])/k
sprintf("alpha_teo = %.3f <-> alpha_emp = %.3f", a_teo, a_emp)
```
And it works, printing both theoretical and empirical values. Now I want to make it more general, for different values of 'n', so I wrote this:
```
set.seed(1)
N <- 1000
n <- c(2, 10, 15, 20)
k <- 500

for (i in n) {
  poblacion <- rnorm(N, 10, 10)
  mu.pob <- mean(poblacion)
  sd.pob <- sd(poblacion)
  p <- vector(length=k)
  for (j in 1:k) {
    muestra <- poblacion[sample(1:N, length(n))]
    p[j] <- t.test(muestra, mu=mu.pob)$p.value
  }
  a_teo <- 0.05
  a_emp <- length(p[p<a_teo])/k
  sprintf("alpha_teo = %.3f <-> alpha_emp = %.3f", a_teo, a_emp)
}
```
But I don't get the 'print'. Any ideas about what is wrong?
Looking for information regarding user.config files and what causes these to be created by a .NET Framework application
|c#|.net|vb.net|user-config|
I'm using ASP.NET C# and I have a table for products with a function that gets called whenever a dropdown list selection changes. My function currently works perfectly, but only if there is one row of data. With two or more rows, it only works for the top row and not individually for each row. Based on the dropdown list selection, the function gets the selected value and sets it on an empty hidden input text field. On the first row, the function works great when making a selection. If I make a selection on the second row and beyond, it still only affects the first row. I need this to work individually for each row's dropdown list, without affecting the values already set on previous rows. Here is my current code:

    <script>
        function selectedText(ddlitem) {
            selvalue = ddlitem.value;
            $('#Units').val(selvalue);
        }
    </script>

    <table>
        <tr>
            <th>Product Code</th>
            <th>Description</th>
            <th>Quantity</th>
            <th>Units</th>
        </tr>
        @foreach (var item in Products)
        {
            <tr>
                <td class="code"> @item.Product_Code </td>
                <td class="descrip"> @item.Description </td>
                <td>
                    <input type="text" class="form-control qty-ctl" id="ncQty" name="ncQty" />
                </td>
                <td>
                    @Html.DropDownListFor(x => x.Unit_Of_Measure, Extensions.GetUOMList(), "--Select Units--", new { id = "txt1", @class = "form-control uom-ctl", @onChange = "selectedText(this)" })
                    <input type="hidden" id="Units" name="Units" />
                </td>
            </tr>
        }
    </table>

Any help figuring this out is greatly appreciated.
How to make a jQuery function work for same element in all rows of a foreach loop instead of just the first row?
|javascript|c#|jquery|asp.net|asp.net-mvc|
Please make sure your Dagger version is updated to the latest release.

[![enter image description here][1]][1]

https://github.com/google/dagger/releases

[1]: https://i.stack.imgur.com/z1nz8.png
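For a Gradle project, updating typically means bumping the Dagger coordinates in your module's `build.gradle`. The version below is illustrative; check the releases page linked above for the current one:

```groovy
dependencies {
    // Illustrative version; replace with the latest from the releases page
    implementation 'com.google.dagger:dagger:2.51'
    annotationProcessor 'com.google.dagger:dagger-compiler:2.51'
}
```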
It is possible to use attributes on the source files, like file size and file modified time:

```
SELECT "$path", "$file_size", "$file_modified_time"
FROM my_table
LIMIT 10
```
I am trying to set up OpenTelemetry tracing with trace exports to Jaeger and Datadog. Here is an extracted example of what I used. All containers are running and there are no errors in the console. When I point directly at the Jaeger URL it works and I see my traces, but when I change the URL to `otel-collector` it does not send traces; I don't see them in the otel-collector container logs or in Jaeger.

My docker-compose.yml

```
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:0.95.0
    ports:
      - 1888:1888   # pprof extension
      - 8888:8888   # Prometheus metrics exposed by the Collector
      - 8889:8889   # Prometheus exporter metrics
      - 13133:13133 # health_check extension
      - 4317:4317   # OTLP gRPC receiver
      - 4318:4318   # OTLP http receiver
      - 55679:55679 # zpages extension
    command: ["--config=/etc/otel-collector-config.yaml"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    environment:
      DD_SITE: <url>
      DD_API_KEY: <key>

  jaeger:
    container_name: jaeger
    ports:
      - 16686:16686
      - 43017:4317
    image: jaegertracing/opentelemetry-all-in-one
    environment:
      COLLECTOR_ZIPKIN_HOST_PORT: 9411
      COLLECTOR_OTLP_ENABLED: true

  my-api:
    image: myapi
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8080:80
      - 8081:443
    depends_on:
      - jaeger
    environment:
      OtelUrl: http://jaeger:4317
      ASPNETCORE_ENVIRONMENT: Production
      DD_LOGS_INJECTION: true
      DD_SERVICE: MyAPI
```

My otel-collector-config.yaml

```
receivers:
  otlp:
    protocols:
      http:
        endpoint: "localhost:4318"
      grpc:
        endpoint: "localhost:4317"

processors:

exporters:
  otlp/jaeger: # Jaeger supports OTLP directly. The default port for OTLP/gRPC is 4317
    endpoint: http://jaeger:4317
    tls:
      insecure: true
  datadog:
    api:
      site: ${env:DD_SITE}
      key: ${env:DD_API_KEY}
    traces:
      trace_buffer: 500

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: []
      exporters: [datadog]
    traces:
      receivers: [otlp]
      processors: []
      exporters: [datadog, otlp/jaeger]
    logs:
      receivers: [otlp]
      processors: []
      exporters: [datadog]
```

Program.cs setup

```
services.AddOpenTelemetry()
    .ConfigureResource(resource => resource.AddService(AppTracing.TelemetryServiceName))
    .WithTracing(tracing => tracing
        .SetResourceBuilder(ResourceBuilder.CreateDefault().AddService(AppTracing.TelemetryServiceName))
        .AddSource(AppTracing.SourceName)
        .AddAspNetCoreInstrumentation()
        .AddHttpClientInstrumentation()
        .SetSampler(new AlwaysOnSampler())
        .AddOtlpExporter(otlp =>
        {
            otlp.Endpoint = otlpEndpoint;
        })
        //.AddConsoleExporter()
    );
```

My controller action that I use for the test

```
[HttpPost("test-datadog")]
public void SendBatchTraces()
{
    var session = Random.Shared.Next(0, 100);
    foreach (var index in Enumerable.Range(0, 1000))
    {
        using var activity = AppTracing.Source.StartActivity($"Sending trace number: {index}");
        activity?.SetTag("Session", session);
    }
}
```

I want to send traces to the collector and then distribute them to Jaeger and Datadog. Maybe I can somehow see a detailed log for the collector or for my .NET app to check for connection issues? For now there are no errors in the console of either app. Thank you very much for any suggestion!
The following self-explanatory and complete playbook meets your requirement:

```yaml
---
- hosts: localhost
  gather_facts: false

  vars:
    inc_numbers: [ A, B, C, D, E, F, G ]
    users:
      - username: u1
        status: active
      - username: u2
        status: active
      - username: u3
        status: active
      - username: u4
        status: active

    user_rounds: "{{ (inc_numbers | length / users | length) | round(0, 'ceil') }}"
    repeated_users_names_only: "{{ users | map(attribute='username') * user_rounds | int }}"
    zipped_tickets_to_user: "{{ inc_numbers | zip(repeated_users_names_only) }}"
    ticket_to_user_dict: "{{ dict(zipped_tickets_to_user) }}"

  tasks:
    - name: "Count how many times you need to repeat your full user list to cover the number of tickets"
      ansible.builtin.debug:
        var: user_rounds

    - name: "Extract the user names and repeat them as needed"
      ansible.builtin.debug:
        var: repeated_users_names_only

    - name: "Zip together the elements from tickets and user names"
      ansible.builtin.debug:
        var: zipped_tickets_to_user

    - name: "Use the zipped list as key/value pairs to create a dict"
      ansible.builtin.debug:
        var: ticket_to_user_dict
```

Running the above playbook gives:

```none
PLAY [localhost] *****************************************************************************

TASK [Count how many times you need to repeat your full user list to cover the number of tickets] ***
ok: [localhost] => {
    "user_rounds": "2.0"
}

TASK [Extract the user names and repeat them as needed] **************************************
ok: [localhost] => {
    "repeated_users_names_only": [
        "u1",
        "u2",
        "u3",
        "u4",
        "u1",
        "u2",
        "u3",
        "u4"
    ]
}

TASK [Zip together the elements from tickets and user names] *********************************
ok: [localhost] => {
    "zipped_tickets_to_user": [
        [
            "A",
            "u1"
        ],
        [
            "B",
            "u2"
        ],
        [
            "C",
            "u3"
        ],
        [
            "D",
            "u4"
        ],
        [
            "E",
            "u1"
        ],
        [
            "F",
            "u2"
        ],
        [
            "G",
            "u3"
        ]
    ]
}

TASK [Use the zipped list as key/value pairs to create a dict] *******************************
ok: [localhost] => {
    "ticket_to_user_dict": {
        "A": "u1",
        "B": "u2",
        "C": "u3",
        "D": "u4",
        "E": "u1",
        "F": "u2",
        "G": "u3"
    }
}

PLAY RECAP ***********************************************************************************
localhost                  : ok=4    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
```
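For reference, the ticket-to-user assignment the playbook computes can be reproduced in a few lines of plain Python; `itertools.cycle` plays the role of repeating the user list (variable names mirror the playbook's):

```python
from itertools import cycle

inc_numbers = ["A", "B", "C", "D", "E", "F", "G"]
usernames = ["u1", "u2", "u3", "u4"]  # users | map(attribute='username')

# zip stops at the shorter iterable, so cycling the user names
# round-robins them over however many tickets there are
ticket_to_user_dict = dict(zip(inc_numbers, cycle(usernames)))
print(ticket_to_user_dict)
# {'A': 'u1', 'B': 'u2', 'C': 'u3', 'D': 'u4', 'E': 'u1', 'F': 'u2', 'G': 'u3'}
```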
I was trying to add a p5.js sketch to my [Svelte](https://svelte.dev/) project, when I noticed that the sketch seemed to be running much slower than it was in the p5.js [web editor](https://editor.p5js.org/). To test my suspicions, I timed the execution of the draw call with `performance.now()`. Results in the p5.js web editor yielded an average execution time of `~10`ms. In my Svelte project, execution took about `80ms`. To confirm whether this was just a one-off problem, I threw together [a codepen](https://codepen.io/kemmel-dev/pen/LYvzXqp?editors=1111), and saw the execution times remained poor compared to the [web editor](https://editor.p5js.org/codingtrain/sketches/OPYPc4ueq). To clear any doubts that this was Svelte or Codepen related, I created a new directory, threw in a boilerplate `html` file, imported p5.min.js from the CDN, and finally linked the `sketch.js`:

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Document</title>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.9.2/p5.min.js"></script>
</head>
<body>
    <script src="./sketch.js"></script>
</body>
</html>
```

Then, I ran my project using (npm) `serve`, and observed that the execution times were still the same. It's not `frameRate` related (that should only dictate how often the draw call is called, but I tried experimenting with different framerates anyway, and the time it took for the draw call to execute stayed consistent throughout). [This](https://editor.p5js.org/codingtrain/sketches/OPYPc4ueq) is the sketch I was experimenting with. My suspicion (although it is kind of a stab in the dark) is that it has to do with the `WebGL` mode used in that sketch (`createCanvas(600, 600, WEBGL);`), and that the web editor version is hardware accelerated while my local project version is not. That's kind of a weird idea though, as to my understanding hardware acceleration should be enabled browser-wide? If anyone could elaborate as to why there is such a massive discrepancy in performance, and how I can make the sketch on my website run as smoothly as the web editor one, it'd be hugely appreciated.
I was consuming a remote Kafka producer event and facing a ClassNotFoundException. Finally I removed the configuration from the .properties file and added the config class below to the consumer.

Here is my application.properties:

    spring.application.name=payment-service
    server.port=8082
    spring.kafka.payment.bootstrap-servers=localhost:9092
    spring.kafka.order.consumer.group-id.notification=group-id
    spring.kafka.consumer.auto-offset-reset=latest
    spring.kafka.order.topic.create-order=new_order1

And the configuration class:

    @EnableKafka
    @Configuration("NotificationConfiguration")
    public class CreateOrderConsumerConfig {

        @Value("${spring.kafka.payment.bootstrap-servers}")
        private String bootstrapServers;

        @Value("${spring.kafka.order.consumer.group-id.notification}")
        private String groupId;

        @Bean("NotificationConsumerFactory")
        public ConsumerFactory<String, OrderEvent> createOrderConsumerFactory() {
            Map<String, Object> props = new HashMap<>();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
            props.put(ErrorHandlingDeserializer.KEY_DESERIALIZER_CLASS, ErrorHandlingDeserializer.class);
            props.put(ErrorHandlingDeserializer.VALUE_DESERIALIZER_CLASS, ErrorHandlingDeserializer.class);
            props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
            props.put(ConsumerConfig.CLIENT_ID_CONFIG, UUID.randomUUID().toString());
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
            props.put(JsonDeserializer.VALUE_DEFAULT_TYPE, "com.swiggy.payment.event.OrderEvent"); // this is my consumer event class
            props.put(JsonDeserializer.USE_TYPE_INFO_HEADERS, false);
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
            props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");
            return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new JsonDeserializer<>(OrderEvent.class));
        }

        @Bean("NotificationContainerFactory")
        public ConcurrentKafkaListenerContainerFactory<String, OrderEvent> createOrderKafkaListenerContainerFactory() {
            ConcurrentKafkaListenerContainerFactory<String, OrderEvent> factory = new ConcurrentKafkaListenerContainerFactory<>();
            factory.setConsumerFactory(createOrderConsumerFactory());
            factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
            return factory;
        }
    }
I want to perform some analysis of source files in a Gitlab-managed project. The analysis is about computing frequencies, counting unique strings etc. I came to the conclusion that a plain bash `find`|`grep`|`sed`|`sort`|`uniq`-based solution would hit its limits very soon. Hence I decided to use bash commands only to preprocess the data into an SQL table (created and filled in a generated `create_table.sql` script; the details of the table are not substantial) and do the subsequent analysis solely in SQL (`run_statistics.sql`) in its own one-shot docker container. I am trying to add this analysis as a new job in the Gitlab pipeline. It is sufficient for the result of the job to be an artifact with plain text output of SQL query dumps (no [reporting](https://docs.gitlab.com/ee/ci/yaml/#artifactsreports) needed at this moment). My intention is to run this job in [its own postgres docker image](https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#what-is-an-image) which [sees the project tree](https://stackoverflow.com/a/54188996/653539).

```
statistics:
  image:
    name: postgres:16
  stage: test
  allow_failure: true
  script:
    - chmod +x ./create_statistics_sql.sh
    - ./create_statistics_sql.sh > create_table.sql
    - psql -U postgres < create_table.sql
    - echo "<html><body><pre>" > statistics.html
    - psql -U postgres < run_statistics.sql >> statistics.html
    - echo "</pre></body></html>" >> statistics.html
  artifacts:
    when: always
    expire_in: 7 days
    paths:
      - statistics.html
```

The problem is, the Postgres container requires setting the mandatory environment variable `POSTGRES_PASSWORD` (or `POSTGRES_HOST_AUTH_METHOD: trust`, since there is no security risk here; however, an environment variable is needed either way). Without it, the container for the job won't start and the error `psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory` appears in the job log.
In other words, I am looking for an equivalent of the `docker run -e` option for the container spawned from the job's `image:`. I tried to set it in the job's `variables:` but without success. The `image:` section of a job has very limited syntax; I would expect something like `image:docker:variable`, though I am aware that is [not currently possible](https://docs.gitlab.com/ee/ci/yaml/#imagedocker). Is there some workaround? (Actually I found one workaround in my self-answer, but it is specific to my case and I am open to learn there is a better solution.) Notice: I don't use (or intend to use) Postgres in the project itself, nor do I have it as a [service](https://stackoverflow.com/q/53837282/653539). (These two cases make my problem a little bit hard to google.) I want to use it just as a single-use tool.
I tried creating a jar file from a project that's using JavaFX in Visual Studio Code. When I run the app inside VSCode it works fine, but when I export it to a jar file it throws an error:

> Error: LinkageError occurred while loading main class HelloFX
> java.lang.UnsupportedClassVersionError: Preview features are not enabled for HelloFX (class file version 65.65535). Try running with '--enable-preview'

From this I gathered the error is caused by compiling the jar file with a newer version than my JRE, but when I run "javac -version" and "java -version" I get the same result (21.0.2) (this has been set up in my JAVA_HOME and Path system variables). I also found I needed to add '--enable-preview' to the vmArgs parameters of the launch.json file (the full text is now: "vmArgs": "--enable-preview --module-path \"C:/Program Files/Java/javafx-sdk-21.0.2/lib\" --add-modules javafx.controls,javafx.fxml" because I need to add the JavaFX SDK as well).

I created a new project that would display my Java version and JavaFX version:

```
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.scene.layout.StackPane;
import javafx.stage.Stage;

public class HelloFX extends Application {

    @Override
    public void start(Stage stage) {
        String javaVersion = System.getProperty("java.version");
        String javafxVersion = System.getProperty("javafx.version");
        Label l = new Label("Hello, JavaFX " + javafxVersion + ", running on Java " + javaVersion + ".");
        Scene scene = new Scene(new StackPane(l), 640, 480);
        stage.setScene(scene);
        stage.show();
    }

    public static void main(String[] args) {
        launch();
    }
}
```

The text inside the label gives me "Hello, JavaFX 21.0.2, running on Java 21.0.2" when I run it in VSCode, but I get the same error when it is run as a jar file. I configured my Java runtime inside VSCode (Java: Configure Java Runtime) and the version is set to 21. I don't use Maven or Gradle (no build tools) since I don't really know how they work. I use "java -jar MyApp.jar" when running from the command line. Am I missing something?
1ST CODE VARIANT:

```
let getLastDayOfMonth = function(year, month) {
  let date = new Date();
  date.setFullYear(year, month, 28);
  while (date.getMonth() === month){
    date.setDate(date.getDate() + 1);
  }
  date.setDate(date.getDate() - 1);
  return date.getDate();
};

alert(getLastDayOfMonth(2012, 1)); //29
```

2ND CODE VARIANT:

```
let getLastDayOfMonth = function(year, month) {
  let date = new Date();
  date.setFullYear(year);
  date.setMonth(month);
  date.setDate(28);
  while (date.getMonth() === month){
    date.setDate(date.getDate() + 1);
  }
  date.setDate(date.getDate() - 1);
  return date.getDate();
};

alert(getLastDayOfMonth(2012, 1)); //nothing
```

THE QUESTION: Why does the 1st code work correctly while the 2nd doesn't alert anything? (If the month number = 1, the month becomes March, not Feb, and I don't know why.) Thanks!
What is the difference between these two ways of setting a Date?
|javascript|
I'm trying to animate a sine wave made out of a Shape struct. To do this, I need to animate the phase of the wave as a continuous looping animation, and then animate the frequency and strength of the wave based on audio input. As far as I understand it, the `animatableData` property on the Shape seems unable to handle multiple animation transactions. I've tried using `AnimatablePair` to be able to set/get more values than one, but it seems they need to come from the same animation transaction (and for my case I need two different instances). The code goes something like this:

<!-- language: swiftui -->

```
struct MyView: View {
    @Binding var externalInput: Double

    @State private var phase: Double = 0.0
    @State private var strength: Double = 0.0

    var body: some View {
        MyShape(phase: self.phase, strength: self.strength)
            .onAppear {
                /// The first animation (that needs to be continuous)
                withAnimation(.linear(duration: 1).repeatForever(autoreverses: false)) {
                    self.phase = .pi * 2
                }
            }
            .onChange(of: self.externalInput) {
                /// The second animation (that is reactive)
                withAnimation(.default) {
                    self.strength = self.externalInput
                }
            }
    }
}
```

<!-- language: swiftui -->

```
struct MyShape: Shape {
    var phase: Double
    var strength: Double

    var animatableData: AnimatablePair<Double, Double> {
        get { .init(phase, strength) }
        set {
            phase = newValue.first
            strength = newValue.second
        }
    }

    /// Drawing the bezier curve that integrates the animated values
    func path(in rect: CGRect) -> Path { ... }
}
```

This approach however seems to only animate the value from the last animation transaction that's been initialised. I am guessing both animated values go in, but when the second animation transaction fires, it sets the first value to the target value (without any animation), as those animated values are part of the first transaction (that gets overridden by the second one).
My solution right now is to use an internal `Timer`, in a wrapper View for the Shape struct, to take care of updating the value of the `phase` value - but this is far from optimal, and quite ugly. When setting the values in `animatableData`, is there a way to access animated values from other transactions - or is there another way to solve this? I've also tried with implicit animations, but that seems to only render the last set animation as well - and with a lot of other weird things happening (like the whole view zooming across the screen on a repeated loop...). **EDIT 1:** Apparently (according to apples [docs][1]) you should be able to set a custom transaction key on the transaction (animation) - but I can't for the life of me figure out how to read that key in the `animatableData` setter. I was thinking maybe you could look at what kind of animation key is currently supplying data, and update either phase or strength depending on which transaction the data is coming from. Then again, I would still have the issue of my animations cancelling each other out (so the data from the first animation would no longer be supplying data...). **EDIT 2:** I switched out the `Timer` in favour of a `CADisplayLink`, to better time the phase updates with the framerate - which fixes the performance issues. However, I would still like to know how to set this up as a proper animation. It must be possible, as all other native structs (that implements the Animatable protocol) seem to be able to handle multiple animations. Or maybe it's a limitation of the `Shape` structs for some reason? [1]: https://developer.apple.com/documentation/SwiftUI/TransactionKey
Unable to run jar files (LinkageError, UnsupportedClassVersionError)
|java|javafx|jar|
I am writing an app using WebView to load my html, css and js, and I get some problems when loading the html. Here is the MainActivity:

```
public class MainActivity extends AppCompatActivity {

    HTMLDataBase myDB1;
    ContentTableDataBase myDB2;
    CloudTableDataBase myDB3;
    InterviewDataBase myDB4;
    Context context;
    boolean check;
    private WebView view;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        myDB1 = new HTMLDataBase(this);
        myDB2 = new ContentTableDataBase(this);
        myDB3 = new CloudTableDataBase(this);
        myDB4 = new InterviewDataBase(this);
        display_html();
        Javascript_Android_Communication();
    }

    public void display_html() {
        WebView view = (WebView) findViewById(R.id.WebView);
        view.getSettings().setJavaScriptEnabled(true);
        Cursor res = myDB1.getAllData();
        if (res.getCount() > 0) {
            StringBuffer buffer = new StringBuffer();
            while (res.moveToNext()) {
                buffer.append(res.getString(0) + ",");
                buffer.append(res.getString(1) + ",");
                buffer.append(res.getString(2) + ",");
            }
            String[] result = buffer.toString().substring(0, buffer.toString().trim().length() - 1).split(",", -2);
            if (result[2].equals("block")) {
                view.loadUrl("file:///android_asset/index.html");
            }
            if (!result[2].equals("block")) {
                view.loadUrl("file:///android_asset/index2.html"); // Problem occurs when loading this html
            }
        }
    }

    public void Javascript_Android_Communication() {
        WebView view = (WebView) findViewById(R.id.WebView);
        view.setWebChromeClient(new WebChromeClient() {});
        WebSettings settings = view.getSettings();
        settings.setJavaScriptEnabled(true);
        settings.setDomStorageEnabled(true);
        view.addJavascriptInterface(new JavaScriptInterface(this), "Android");
        view.addJavascriptInterface(new WebViewJavaScriptInterface(this), "app");
    }
}
```

JavaScript requests information from the database and imports it into:

```
document.getElementById("").style.display = "";
```

The problem is that the data can be extracted, but the html is never completely loaded, and thus errors occur when I change the html element. When I open Logcat, it says "Uncaught TypeError: Cannot read property 'style' of null". Whenever I use the alert method to call

```
document.getElementById("").style.display
```

it simply returns me empty. I tried to use

```
document.addEventListener("load", function(){ alert(document.getElementById().style.display); });
```

but the page is never completely loaded, so it just wastes my time. What shall I do so that I can change the html with the data I have extracted?
You can make the enum constants have powers of 2 as their values, and use bitwise or to combine them into a single enum value. Example:

```
// You can optionally add the [Flags] attribute to this
enum ColliderType
{
    A = 1,
    B = 2,
    C = 4,
    D = 8
    // and so on
}

...

ColliderType combinedType = collider1.Type | collider2.Type;
switch (combinedType)
{
    case ColliderType.A | ColliderType.B: // this is the same value as ColliderType.B | ColliderType.A
        ...
        break;
    case ColliderType.A | ColliderType.A: // or just ColliderType.A, depending on what you find more readable
        ...
        break;
    // and so on
}
```

Alternatively, if you cannot modify the enum, you can make a tuple to represent the pair of collider types:

```
var combinedType = (collider1.Type, collider2.Type);
switch (combinedType)
{
    case (ColliderType.A, ColliderType.B):
    case (ColliderType.B, ColliderType.A):
        ...
        break;
    // and so on
}
```
To query a list of products with Google Play Billing Library version 5 (or later), you typically use `QueryProductDetailsParams`. Here's a general overview of how you can do this:

    val productList = ArrayList<QueryProductDetailsParams.Product>()
    productList.add(
        QueryProductDetailsParams.Product.newBuilder()
            .setProductId(Constants.BASE_PLAN)
            .setProductType(SUBS)
            .build()
    )
    productList.add(
        QueryProductDetailsParams.Product.newBuilder()
            .setProductId(Constants.STANDER_PLAN)
            .setProductType(SUBS)
            .build()
    )
I'm writing a function to reduce the key by a factor of 2. The function accepts a key in the following form:

```
Buffer.from('0xe581d529cc69816e768432a8aa09178470c9b1e703951f4a85e0dab7d8008e2a9e9e179', 'hex')
```

So that I can reduce the key, I need to convert it to a string. The `toString()` and `toString('hex')` methods do not work. I also tried using sha256 hashing on the key, but when I called the function with different keys I still got the same value after hashing. Checking the value of the key or its length using `console.log()` showed that the length was 0, and printing the key did not output anything.
I have written code for finding prime numbers using the sieve of Eratosthenes algorithm, but the problem is that my code only works for some numbers. For some inputs it seems to keep running at the "enter number" step infinitely, and for other numbers it works perfectly. As I only recently started studying C in depth, I couldn't find what is going wrong. I request the coders to help me in this case, and also help me realize what the mistake is, why it happens and how to prevent it. Here's the code:

```
#include <stdio.h>

int main()
{
    int n;
    printf("enter number: ");
    scanf("%d",&n);
    int arr[n],pr=2,i=0;
    for(i=0;pr<=n;i++)
    {
        arr[i]=pr;
        pr++;
    }
    int j,k;
    while(arr[k]<=n)
    {
        for(j=2;j<n;j++)
        {
            for(k=0;k<n;k++)
            {
                if(arr[k]%j==0 && arr[k]>j)
                    arr[k]=0;
            }
        }
    }
    for(i=0;arr[i]<=n;i++)
    {
        if(arr[i]!=0)
            printf(" %d",arr[i]);
    }
    printf("\n");
    return 0;
}
```
use [**`customtkinter.CTkImage`**](https://customtkinter.tomschimansky.com/documentation/utility-classes/image/), not [`ImageTk.PhotoImage`](https://pillow.readthedocs.io/en/stable/reference/ImageTk.html#PIL.ImageTk.PhotoImage). Try this:

```
import tkinter
import customtkinter
from PIL import Image  # ,ImageTk  ## No need to import this

customtkinter.set_appearance_mode("System")    # Modes: system (default), light, dark
customtkinter.set_default_color_theme("blue")  # Themes: blue (default), dark-blue, green

app = customtkinter.CTk()  # create CTk window like you do with the Tk window
wdth = app.winfo_screenwidth()
hgt = app.winfo_screenheight()
app.geometry("%dx%d" % (wdth, hgt))

def button_function():
    print("button pressed")

img1 = customtkinter.CTkImage(Image.open(r"C:\Users\Vedant\Desktop\py project\pizzalogo-removebg-preview.png"))

# Use CTkButton instead of tkinter Button
button = customtkinter.CTkButton(master=app, image=img1, text="", width=500, height=200,
                                 command=button_function, compound='left')
button.place(relx=0.5, rely=0.5, anchor=tkinter.CENTER)

app.mainloop()
```
If you encounter this error in Expo, it is likely due to incompatible versions of some dependencies in your project. Run `npx expo-doctor` and then `npx expo install --fix`. This fixed the issue for me.
There are two ways, as far as I found: either go to the directory where the kernels reside and delete the kernel from there, or use the commands below. List all kernels and grab the name of the kernel you want to remove:

```lang-none
jupyter kernelspec list
```

This prints the paths of all your kernels. Then simply uninstall the unwanted kernel:

```lang-none
jupyter kernelspec remove kernel_name
```
Good morning, I am creating a site on Wix. I created a CMS database with dynamic pages. It works very well. I would however like to add a map for each trip (each page is one trip). So I created a Velo JavaScript table with the list of points to display on the map. Can you help me display these points dynamically (i.e. depending on the page we are on)? I can't connect the blank generic HTML map on the page with the table in my CMS.

```
import wixData from 'wix-data';

$w.onReady(function () {
    // Retrieve the data from your JavaScript array
    const pointsOfInterest = [...]; // Replace the points of interest with your JavaScript array

    // Retrieve the map element
    const mapElement = $w('#votre_id_de_carte').value; // Replace 'votre_id_de_carte' with the ID of your map element

    // Initialize the Mapbox map
    mapboxgl.accessToken = 'YOUR_MAPBOX_ACCESS_TOKEN';
    const map = new mapboxgl.Map({
        container: mapElement,
        style: 'mapbox://styles/mapbox/streets-v11',
        center: [0, 0], // Starting coordinates (these don't matter much, since we will adjust the view)
        zoom: 10 // Initial zoom level
    });

    // Add a marker for each point of interest
    pointsOfInterest.forEach(point => {
        new mapboxgl.Marker()
            .setLngLat(point.coordinates)
            .setPopup(new mapboxgl.Popup().setHTML(`<h3>${point.name}</h3><p>${point.description}</p>`))
            .addTo(map);
    });

    // Adjust the view to include all the markers
    const bounds = new mapboxgl.LngLatBounds();
    pointsOfInterest.forEach(point => {
        bounds.extend(point.coordinates);
    });
    map.fitBounds(bounds, { padding: 50 });
});
```

To summarize, each trip contains a list of points to display on the map. What I can't do is connect this table to my map to display them.
Wix and mapbox integration (CMS)
|wix|mapbox|velo|
After further research I found another conversation on CodeProject saying to use [`HasButton()`][2]. If I add this override to my class:

    BOOL HasButton() const { return TRUE; }

then it will show `...` on the right, which is at least a step forward! Then, I used the code search feature in Visual Studio 2022 and noticed an **undocumented** method (`OnClickButton`)! If I override it:

    virtual void OnClickButton(CPoint point)
    {
        AfxMessageBox(L"Button clicked!");
    }

it appears to work! I consider this the correct way to add an embedded button with an event handler to a property control. But I still don't know how to adjust the button text from "...".
A preload script has to be attached to a window to be accessible: ```js new BrowserWindow({ ... webPreferences: { preload: path.join(__dirname, 'preload.js') } }); ```
I came across this problem recently for a Vercel deployment, and I created a `server_install.sh` file:

```
#!/bin/bash
sed -i '' -e "s/git@bitbucket.org\/org_name\/repo_name.git#v1.0.1/git+https:\/\/$BITBUCKET_USERNAME:$BITBUCKET_PASSWORD@bitbucket.org\/org_name\/repo_name.git/g" package.json
yarn install
echo "Install successful!"
```

This script updates the private repository path in `package.json` with authentication and then runs `yarn install`.

**Note** *I prefer to use an ssh key in such cases. I created a script because I did not find any other solution for Vercel. Happy to learn if there is a better approach available.*
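To preview what that substitution does before wiring it into a deployment, you can run the same `sed` expression without `-i` against a sample `package.json`. (The repository path and the credentials below are placeholders; also note that `sed -i ''` is BSD/macOS syntax, while GNU sed expects plain `-i`.)

```shell
# Sample package.json containing the SSH-style git dependency (placeholder names)
cat > package.json <<'EOF'
{
  "dependencies": {
    "my-private-lib": "git@bitbucket.org/org_name/repo_name.git#v1.0.1"
  }
}
EOF

export BITBUCKET_USERNAME=alice
export BITBUCKET_PASSWORD=app-password

# Same substitution as server_install.sh, but printed to stdout instead of edited in place
sed -e "s/git@bitbucket.org\/org_name\/repo_name.git#v1.0.1/git+https:\/\/$BITBUCKET_USERNAME:$BITBUCKET_PASSWORD@bitbucket.org\/org_name\/repo_name.git/g" package.json
```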
This is considered an anti-pattern in the modern React ecosystem. Following the single-responsibility principle, keep your business logic simple with custom hooks (use prototype inheritance in them if required), use TanStack Query to store API calls, and use Jotai atoms to store global data. These libraries are very easy to learn and maintain. You don't need to write Redux (actions, reducers and store), Redux Toolkit or other boilerplate code today; even if you learn these concepts, they are not as useful in other stacks. Even today many React interviewers ask Redux questions; I hope they will update their projects with the mentioned best practices soon. A sample snippet is given below.

```
const Counter = () => {
  const [counter, setCounter] = useAtom(counterAtom);
  const { increment, decrement } = useCounter(setCounter);

  return (
    <>
      {counter}
      <Button onPress={increment}>Increment</Button>
      <Button onPress={decrement}>Decrement</Button>
    </>
  );
};
```

Bonus: you can easily write a unit test for that custom hook, `useCounter`.
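The snippet above does not show `useCounter` itself. One possible shape for it (hypothetical; the names follow the snippet) only needs the state setter, so the logic is plain JavaScript and can be unit tested without rendering anything:

```javascript
// Hypothetical implementation of the useCounter hook referenced above.
// It only closes over the state setter, so nothing here is React-specific.
function useCounter(setCounter) {
  const increment = () => setCounter((prev) => prev + 1);
  const decrement = () => setCounter((prev) => prev - 1);
  return { increment, decrement };
}

// Minimal stand-in for useState's setter, for demonstration purposes
let state = 0;
const setCounter = (updater) => { state = updater(state); };

const { increment, decrement } = useCounter(setCounter);
increment();
increment();
decrement();
console.log(state); // 1
```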
When we commit something, Git starts tracking the folder/project. For unnecessary files, we can also delete those from the project. To delete the `.idea/` folder from the project, follow the steps shown below:

1. Make sure that you have added `.idea/` to your `.gitignore` file.
2. Run the following commands:

```
git rm -r --cached .
git add .
git commit -m "untracked fixed"
```
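As a sanity check, the two steps can be exercised in a throwaway repository (the file names and commit messages below are just for the demonstration):

```shell
# Create a throwaway repo with a tracked .idea/ directory
mkdir demo-repo && cd demo-repo
git init -q
git config user.email demo@example.com && git config user.name demo
mkdir .idea && echo 'workspace' > .idea/workspace.xml
echo 'print("hi")' > main.py
git add . && git commit -qm "initial"

# Step 1: ignore .idea/
echo '.idea/' > .gitignore

# Step 2: untrack everything, then re-add; ignored paths stay out of the index
git rm -r -q --cached .
git add .
git commit -qm "untracked fixed"

# .idea/ is no longer tracked, but still exists on disk
git ls-files
ls .idea
```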
File docker-compose.yml:

```
version: '3.9'

services:
  wagtail:
    build: ./wagtail/ra_pegas
    ports:
      - "8000:8000"
    volumes:
      - ./wagtail/ra_pegas:/app
    links:
      - postgres

  postgres:
    image: postgres:16.2
    ports:
      - "8001:5432"
    command:
      - "postgres"
      - "-c"
      - "psql"
      - "-c"
      - "CREATE USER postgres WITH PASSWORD 'postgres' CREATEDB;"
      - "-c"
      - "CREATE DATABASE postgres WITH OWNER postgres;"
      - "-c"
      - "GRANT ALL PRIVILEGES ON DATABASE postgres TO postgres;"
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - db:/var/lib/postgresql/data

volumes:
  db:
    driver: local
```

Database settings in Wagtail:

```
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": "postgres",
        "USER": "postgres",
        "PASSWORD": "postgres",
        "HOST": "postgres",
        "POST": "8001",
    }
}
```

When I run the project with host = postgres (the PostgreSQL container name), I get an error:

```
wagtail-1   | django.db.utils.OperationalError: could not translate host name "postgres" to address: Name or service not known
wagtail-1   |
wagtail-1 exited with code 1
```

If I change the host to 127.0.0.1 or localhost:

```
wagtail-1   | django.db.utils.OperationalError: connection to server at "localhost" (::1), port 5432 failed: Connection refused
wagtail-1   |   Is the server running on that host and accepting TCP/IP connections?
wagtail-1   | connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused
wagtail-1   |   Is the server running on that host and accepting TCP/IP connections?
```

Also, if I change the username, password, or database name in the settings, absolutely nothing changes. The command block in docker-compose.yml is necessary because without it nothing works.

When I tried to run the project without Docker Compose, or with Docker Compose but with SQLite instead of PostgreSQL, everything worked great. I also tried using networks, but that did not give any result either.
|python|math|matrix|pytorch|matrix-multiplication|
PHP Startup: Unable to load dynamic library 'fileinfo' (tried: C:\xampp\php\ext\fileinfo (The specified module could not be found), C:\xampp\php\ext\php_fileinfo.dll (The specified module could not be found))
Store Values From a Multi-Area Range in an Array - **2D Array** <!-- language: lang-vb --> Sub Test2D() Dim ws As Worksheet: Set ws = ThisWorkbook.Worksheets(1) Dim rg As Range: Set rg = ws.Range("A3:C3,A5:C5,A7:C7") Dim MyArray() As Variant: ReDim MyArray(1 To 3, 1 To 3) ' 2D Dim arg As Range, HelpArray() As Variant, r As Long, c As Long For Each arg In rg.Areas r = r + 1 ' Since all areas ('arg') are a single 3-cell row, 'HelpArray' ' will automatically be sized as '(1 To 1, 1 To 3)'. HelpArray = arg.Value For c = 1 To 3 MyArray(r, c) = HelpArray(1, c) Next c Next arg For r = 1 To 3 For c = 1 To 3 Debug.Print r, c, MyArray(r, c) Next c Next r End Sub **(1D) Jagged Array aka Array of Arrays** <!-- language: lang-vb --> Sub TestJagged() Dim ws As Worksheet: Set ws = ThisWorkbook.Worksheets(1) Dim rg As Range: Set rg = ws.Range("A3:C3,A5:C5,A7:C7") Dim MyArray() As Variant: ReDim MyArray(1 To 3) ' 1D Dim arg As Range, r As Long, c As Long For Each arg In rg.Areas r = r + 1 ' Since all areas ('arg') are a single 3-cell row, each element ('r') ' of 'MyArray' will hold a 2D one-based single-row array ('arg.Value') ' sized as '(1 To 1, 1 To 3)'. MyArray(r) = arg.Value Next arg For r = 1 To 3 For c = 1 To 3 Debug.Print r, c, MyArray(r)(1, c) ' !!! Next c Next r End Sub [![enter image description here][1]][1] Both Results ``` 1 1 5 1 2 3 1 3 1 2 1 4 2 2 8 2 3 6 3 1 7 3 2 9 3 3 2 ``` **Notes** - Note that the expression MyArray = rg.Value only works if `rg` is a **single-area range** and has **at least two cells**. 
- Also, note that in the same case, `rg.Value` (on the right side of the expression) is already a **2D one-based array** (containing the values of the range) with the same number of rows and columns as the rows and columns of the range which you can prove with: <!-- language: lang-vb --> Sub Test() Dim ws As Worksheet: Set ws = ThisWorkbook.Worksheets(1) Dim rg As Range: Set rg = ws.Range("A3:C3") Debug.Print "Rows: " & UBound(rg.Value, 1) & vbLf _ & "Columns: " & UBound(rg.Value, 2) Dim Item As Variant, c As Long For Each Item In rg.Value c = c + 1 Debug.Print c, Item Next Item End Sub Result ``` Rows: 1 Columns: 3 1 5 2 3 3 1 ``` [1]: https://i.stack.imgur.com/s43oH.jpg
Problem running Wagtail (a CMS for Django) and PostgreSQL in Docker Compose
|postgresql|docker-compose|wagtail|
You cannot directly use `localStorage` to initialize the state value because, as you said, `localStorage` is not available on the server side. You can initialize the state with a default value and use an effect to update it as soon as the app is loaded on the client side.

```
const [color, setColor] = useState<...>('blue');

useEffect(() => {
  setColor(localStorage?.getItem('preferred_color') ?? 'blue');
}, []);

const contextValue = useMemo(() => ({
  color,
  setColor: newColor => {
    setColor(newColor);
    localStorage?.setItem('preferred_color', newColor);
  }
}), [color, setColor]);

return (
  <ColorsContext.Provider value={contextValue}>
    {children}
  </ColorsContext.Provider>
);
```
You can use Docker [Volumes](https://docs.docker.com/storage/volumes/)

> Volumes can be more safely shared among multiple containers.

You can define volumes in a `Dockerfile`, a `docker-compose` file, or on the CLI. To share volumes between containers, however, you should define named volumes, which you can do via the CLI or a `docker-compose` file.

CLI:

```bash
docker volume create --name shared-volume
docker run -d --name container1 -v shared-volume:/path/in/container1 my-image1
docker run -d --name container2 -v shared-volume:/path/in/container2 my-image2
```

docker-compose file:

```yaml
services:
  service1:
    image: my-image1
    volumes:
      - shared-volume:/path/in/container1
  service2:
    image: my-image2
    volumes:
      - shared-volume:/path/in/container2

volumes:
  shared-volume:
```

A named volume is declared for the path /path/in/container(1/2). Docker will create a volume named shared-volume (a folder on your machine) and mount it at /path/in/container(1/2) inside each container. This means that any data written to /path/in/container(1/2) inside a container will be stored in the volume and persist even after the container is stopped or removed.

![docker volume](https://docs.docker.com/storage/images/types-of-mounts-volume.webp?w=450&h=300)

_image from docker docs [link](https://docs.docker.com/storage/volumes/)_

Beware: containers might overwrite each other's data stored on the volume.
Finding and fixing the errors in my C code for finding prime numbers
There are two ndarrays, `a` and `b`. I would like to iterate over `a` in parallel and update `b` at the same time.

```rust
a.par_iter().enumerate().for_each(|(i, data)| {
    b[i] = data + 1;
});
```

However, I'm getting the message "cannot borrow `b` as mutable, as it is a captured variable in a `Fn` closure".

I know that I could use `par_iter_mut()` instead, but that only enables mutability on `a`, not `b`.
Rayon cannot borrow as mutable
|numpy|rust|numpy-ndarray|rust-ndarray|
I am creating an operating system and I created a child process using C with this code: ```c #include <stdio.h> #include <stdlib.h> #include <sys/types.h> #include <unistd.h> #include <windows.h> int main() { FILE *fptr; // Open a file in read mode fptr = fopen("filename.txt", "r"); // Store the content of the file char myString[100]; fgets(myString, 100, fptr); fclose(fptr); PROCESS_INFORMATION ni; STARTUPINFO li; ZeroMemory(&li, sizeof(li)); li.cb = sizeof(li); if (CreateProcess(NULL, "child_process.exe", NULL, NULL, FALSE, 0, NULL, NULL, &li, &ni)) { // Parent process WaitForSingleObject(ni.hProcess, INFINITE); CloseHandle(ni.hProcess); CloseHandle(ni.hThread); } else { // Child process } pid_t pid = getpid(); printf("(%d) WARNING: These processes are vital for the OS:\n", pid); printf("(%d) %d\n", pid, pid); printf("(%d) %s\n\n\n", pid, myString); return 0; } ``` And I could not end the child process. ***I do not want to use signals as they are too complex and I am a beginner.*** I tried using `return 0;` and it did not work, the process was still running.
I have a DataFrame with the following structure and want to convert it to another structure.

The initial DataFrame:

```
firstName  names  variableName  variableValue
abc123     v_001  varX          1.0
abc123     v_002  varX          2.0
abc123     v_001  varY          3.0
abc123     v_002  varY          4.0
efg456     v_001  varX          1.0
efg456     v_002  varX          2.0
efg456     v_001  varY          3.0
efg456     v_002  varY          4.0
```

The expected output:

```
variableName  varX     varY
variableType  TypeOne  TypeTwo
abc123_v_001  1.0      3.0
abc123_v_002  2.0      4.0
efg456_v_001  1.0      3.0
efg456_v_002  2.0      4.0
```

I tried the `pivot` option but did not succeed; I could see NaN values.
**Create a material.module.ts file importing all Material packages:**

    import { MatSnackBarModule } …
    import { CUSTOM_ELEMENTS_SCHEMA, ModuleWithProviders, NgModule } from '@angular/core';

    @NgModule({
      imports: [MatSnackBarModule],
      exports: [MatSnackBarModule],
      schemas: [CUSTOM_ELEMENTS_SCHEMA],
    })
    export class MaterialModule {
      static forRoot(): ModuleWithProviders<MaterialModule> {
        return {
          ngModule: MaterialModule,
        };
      }
    }

**In app.module.ts:**

    imports: [
      MaterialModule.forRoot(),
    ],
    schemas: [CUSTOM_ELEMENTS_SCHEMA],

*Please refer to this article for reference:*

https://angular.io/guide/deprecations#modulewithproviders-type-without-a-generic
Running the following command in the **Developer PowerShell** pane within VS 2022 fixed the problem for me:

    msbuild /target:Restore <YourSolution.sln>

This command restored the NuGet packages needed by the MAUI project. After it completed (and 15-20 seconds passed), a dialog appeared asking for an Android SDK installation. Afterwards, rebuilding the project gave me the sweet `Rebuild succeeded` message.

I'm on VS 2022 (17.9.0) and targeting .NET 8.
I've worked with many off-the-shelf systems, including ones that were heavily modified to meet specific business needs. Don't do it. Find a way to aggregate the data you are interested in into your own system and tie it to Keycloak. Don't modify Keycloak. Every modification you make to Keycloak will drastically increase future maintenance and upgrade effort by an exponential amount. There will even come a point where upgrades are not possible without trashing your existing instance and starting over.

If there is no way around it, here are a few tips to improve maintainability:

- Organize modifications in a similar way throughout the project, preferably all together in their own section.
- Include inline notes and indicators showing where modifications begin and end, what the changes were, and why you made them.
- Keep documentation on the modifications outside of the project, so they are all tracked in one place.
For me, adding the `ts-jest/presets/default-esm` preset as shown below solved the issue.

jest.config.js:

```
export default {
  testEnvironment: 'node',
  moduleNameMapper: {
    "(.+)\\.js": "$1",
  },
  preset: 'ts-jest/presets/default-esm', // or another ESM preset
}
```

Ref: https://kulshekhar.github.io/ts-jest/docs/guides/esm-support/#use-esm-presets
Docker OpenTelemetry Collector contrib instrumentation issue with .NET