Technically C doesn't really have "multi-dimensional" arrays. All it has are plain arrays, where each element can in turn be another array. That's how "multi-dimensional" normal arrays work:

```
int arr[X][Y];
```

Here `arr` is an array of `X` elements, and each element is an array of `Y` elements.

The simplest and most naive way to handle this nesting for "dynamic" arrays is basically the same: for a simple "one-dimensional" array you have a pointer. So to nest that inside another dynamically created array, you have an array of pointers, or a pointer to a pointer:

```
int **arr;
```

To create memory for it, we first create the outer array:

```
arr = malloc(X * sizeof(*arr));  // sizeof(*arr) is the same as sizeof(int *)
```

Then we create each nested array:

```
for (size_t x = 0; x < X; ++x) {
    arr[x] = malloc(Y * sizeof(*arr[x]));  // sizeof(*arr[x]) is the same as sizeof(int)
}
```

And finally we initialize all the individual array elements:

```
for (size_t x = 0; x < X; ++x) {
    for (size_t y = 0; y < Y; ++y) {
        arr[x][y] = 0;
    }
}
```

There are other ways to create dynamic multi-dimensional arrays, but this is likely what your teacher expects you to create.
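When you are done with the array, freeing it mirrors the allocation in reverse: release each nested array before the outer array of pointers. A minimal sketch, assuming `arr` and `X` are as above:

```
for (size_t x = 0; x < X; ++x) {
    free(arr[x]);   // free each nested array first
}
free(arr);          // then free the outer array of pointers
```

In real code you would also check each `malloc` return value for `NULL` before using it.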
In my case I had forgotten the camera permission, so I added the permission in **AndroidManifest.xml**:

    <?xml version="1.0" encoding="utf-8"?>
    <manifest xmlns:android="http://schemas.android.com/apk/res/android"
        xmlns:tools="http://schemas.android.com/tools">

        <uses-permission android:name="android.permission.CAMERA" />

        <application
            android:allowBackup="true"
            android:dataExtractionRules="@xml/data_extraction_rules"
            android:fullBackupContent="@xml/backup_rules"
            android:icon="@mipmap/ic_launcher"
            android:label="@string/app_name"
            android:supportsRtl="true"
            android:theme="@style/Theme.MyAppName"
            tools:targetApi="31">
            <activity
                android:name=".MainActivity"
                android:exported="true">
                <intent-filter>
                    <action android:name="android.intent.action.MAIN" />
                    <category android:name="android.intent.category.LAUNCHER" />
                </intent-filter>
            </activity>
            <provider
                android:name="androidx.core.content.FileProvider"
                android:authorities="com.example.myappname.fileProvider"
                android:exported="false"
                android:grantUriPermissions="true">
                <meta-data
                    android:name="android.support.FILE_PROVIDER_PATHS"
                    android:resource="@xml/file_path" />
            </provider>
        </application>

    </manifest>
I am estimating survival with weights in a case-control study, so all cases have weight equal to one. When drawing the estimated curves and comparing them to the unweighted estimation, I noticed that the KM curves for cases don't overlap. Here's the code for data from the "survival" package.

```
library(dplyr)
library(tidyverse)
library(survival)
library(broom)
library(WeightIt)
library(survey)

a <- survival::ovarian

# Calculation of weights:
weights <- WeightIt::weightit(rx ~ age + ecog.ps + resid.ds,
                              int = T,
                              estimand = "ATT",
                              data = a,
                              method = "glm",
                              stabilize = F,
                              missing = "saem")

a$weights <- weights$weights
a$ps <- weights$ps

design <- svydesign(ids = ~ 1, data = a, weights = ~weights)

KM_PFS <- survfit(Surv(futime, fustat > 0)~rx, a) # KM naive
KM_PFS_w_TT <- survfit(Surv(futime, fustat > 0)~rx, a, weights = weights, robust = T)
KM_PFS_w <- svykm(Surv(futime, fustat > 0)~rx, design = design,se=T)

par(mfrow=c(1,1))
plot(KM_PFS_w[[2]], lwd=2, col=c("red"),xlab="Time (months)",ylab="PFS",#svykm treated
     xaxt="n", ci=F)
#lines(KM_PFS_w[[1]],col=c("blue"),lwd=2)
lines(KM_PFS,col=c("black","black"),lwd=2,lty=c(0,2)) #km naive treated
lines(KM_PFS_w_TT,col=c("orange","violet"),lwd=2,lty=c(0,1))#km TT treated

cas_km_w_TT <- tidy(KM_PFS_w_TT)%>%filter(strata == "rx=1")
cas_km <- tidy(KM_PFS)%>%filter(strata == "rx=1")
cas_km_w <- do.call("rbind", lapply(names(KM_PFS_w), \(x) {
  data.frame(strata = x, do.call("cbind", KM_PFS_w[[x]]))
})) %>% filter(strata ==1)
```
I believe it depends. My field of work relies primarily on cloud infrastructure scalability, so I work with AWS rather than GCP, which excels in ML. I can confidently say that both perform well and are reliable. AWS has a wider range of scalable services than GCP. On pricing, GCP is very transparent with its pricing model, while AWS's pricing model is a little complex. You can look this up online for more information; check out the following link for a start: https://www.veritis.com/blog/aws-vs-azure-vs-gcp-the-cloud-platform-of-your-choice/
I'm new to this, but I understand that a file is missing. I'm working in VS Code, and the error comes out of VS Code. I've been trying to install pandas using pip, but it always fails and gives me a giant wall of error text, with this at the bottom
Can't install pandas using pip
|python|python-3.x|pandas|
I recently migrated our app to .NET 8 and I'm trying to get our previously working antiforgery tokens to work. Most of it is working fine. However, whenever users open multiple browser tabs we start getting validation errors saying the antiforgery token is not valid. I found this blurb in the Microsoft documentation stating that the synchronizer pattern is going to invalidate the antiforgery token whenever you open a new tab. It then suggests considering alternative CSRF protection patterns if this poses an issue. However, it does not demonstrate other patterns in the documentation! It seems to me that users using multiple browser tabs is commonplace now, so what other patterns are there? In every case they show, you are essentially creating a token, sending it down to the JavaScript and sending it back up in requests... what other pattern is even possible with an antiforgery token? I don't get it. https://learn.microsoft.com/en-us/aspnet/core/security/anti-request-forgery?view=aspnetcore-8.0#antiforgery-in-aspnet-core [![enter image description here][1]][1] [1]: https://i.stack.imgur.com/tVdo6.png
ModuleNotFoundError: No module named 'flask_sqlalchemy', importing in app.py but not subprocess
|python|flask|sqlalchemy|flask-sqlalchemy|
The docs are accurate. URL parameters are not currently officially supported. Some parameters may work for some jobs, but again, URL parameters are not officially supported at this time. One workflow option is to have an AWS Lambda function use all the necessary parameters to retrieve the source URL via curl or wget, and write the file to a secure AWS S3 bucket. Then the same Lambda can trigger a MediaConvert job for the file. One benefit of this workflow is that the Lambda could choose different job templates based on the desired outputs. This workflow would be subject to the Lambda function's maximum runtime of 15 minutes; file transfers taking longer than 15 minutes would need to be performed via EC2 or CloudShell. Another Lambda could then delete the copied file once the conversion job is complete. You can see more about Lambda default limits at: https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html
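A minimal sketch of such a Lambda (hedged: the bucket, key, role ARN, template name and the `MEDIACONVERT_ENDPOINT` environment variable are placeholders you'd fill in; MediaConvert needs its account-specific endpoint, discoverable once via `describe_endpoints`):

```
import os
import urllib.request
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    source_url = event["source_url"]  # e.g. assembled from your request parameters
    bucket, key = "my-secure-bucket", "incoming/source.mp4"

    # 1. Stream the source file into S3 (subject to the 15-minute Lambda limit)
    with urllib.request.urlopen(source_url) as resp:
        s3.upload_fileobj(resp, bucket, key)

    # 2. Kick off a MediaConvert job from a chosen job template
    mc = boto3.client("mediaconvert",
                      endpoint_url=os.environ["MEDIACONVERT_ENDPOINT"])
    mc.create_job(
        Role="arn:aws:iam::123456789012:role/MediaConvertRole",
        JobTemplate="my-template",
        Settings={"Inputs": [{"FileInput": f"s3://{bucket}/{key}"}]},
    )
```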
I'm trying to read a URL using the **curl** command, and I expect the command to exit with return code 0 or 1 based on the response of curl, after parsing the response JSON. I tried to parse the JSON response, but I am not able to make it work in an if/else condition.

*URL - localhost:8080/health*

Response of the URL:

```
{
  "db": {
    "status": "healthy"
  },
  "scheduler": {
    "fetch": "2024-03-12T04:32:53.060917+00:00",
    "status": "healthy"
  }
}
```

Expected output - a one-liner cmd that exits with code 0 if scheduler.status is healthy, else 1.

Note - I'm not looking for the curl response to be 0 or 1, but for the command to exit with 0 or 1.

Purpose - if my scheduler status is unhealthy, my process should terminate and exit from the app.

I'm able to parse the response message, but not able to apply a condition properly on the response. Here is what I tried so far.

**cmd 1:**

```
if ((status=$(curl -s 'https://localhost:8080/health' | python -c "import sys, json; print (json.load(sys.stdin) ['scheduler'] ['status'])"))='healthy'); then exit 0; else 1;
```

It throws an error: `zsh: parse error near `='healthy''`. From the above cmd, the `curl -s 'https://localhost:8080/health' | python -c "import sys, json; print (json.load(sys.stdin) ['scheduler'] ['status'])"` part is working fine and returns (healthy/unhealthy); it is adding the condition that fails.

**cmd 2:**

```
/bin/sh -c "status=$(curl -kf https://localhost:8080/health --no-progress-meter | grep -Eo "scheduler [^}]" | grep -Eo '[^{]$' | grep -Eo "status [^}]" | grep -Eo "[^:]$" | tr -d \"' | tr -d '\r' | tr -d '\ '); if [ $status=='unhealthy']; then exit 1; fi;0"
```

This is also not working, but this part `curl -kf https://localhost:8080/health --no-progress-meter | grep -Eo "scheduler [^}]" | grep -Eo '[^{]$' | grep -Eo "status [^}]" | grep -Eo "[^:]$" | tr -d \"' | tr -d '\r' | tr -d '\ '` is working fine to return healthy/unhealthy.

I tried all of these sorts of things, but no luck; I am not sure if there is any workaround with a single cmd.
Shell one-liner custom curl command with if/else handling
|linux|shell|curl|cmd|
The important thing to note in the quote from the official documentation is in bold below: > All soft references to **softly-reachable objects** are guaranteed to have been cleared before the virtual machine throws an OutOfMemoryError. Softly-reachable objects are those that can be reached from the "root" context only through soft references or weaker ([reference](https://docs.oracle.com/en/java/javase/21/docs/api/java.base/java/lang/ref/package-summary.html#reachability)). In your first example you are putting your `byte[]` in a `List`; this creates a path of **strong** references to the object, which is why it is not garbage collected in the end. In your second example, the array referenced by the `tmp` variable is added to the list and lives at least as long as the list lives. The array referenced by the `bytes` variable, however, is created and strongly referenced only in the scope of `func()`. The only reference that escapes is the soft reference. So, before the `OutOfMemoryError`, it is guaranteed that this softly-reachable array, the last one you created in the loop, will be garbage collected. The ones you created in the earlier loop iterations are already *unreachable*, because even the soft reference to them that survived the loop is overwritten in the next iteration. You could try changing the list to be `List<SoftReference<byte[]>>`; you will find that all of the soft references will have been cleared before the OOME.
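A minimal sketch of that last experiment (run it with a small heap, e.g. `-Xmx64m`, so it terminates quickly; the allocation size is arbitrary):

```
import java.lang.ref.SoftReference;
import java.util.ArrayList;
import java.util.List;

public class SoftRefDemo {
    public static void main(String[] args) {
        List<SoftReference<byte[]>> refs = new ArrayList<>();
        try {
            while (true) {
                // The arrays are only softly reachable: the list holds
                // SoftReference objects, not the arrays themselves.
                refs.add(new SoftReference<>(new byte[1_000_000]));
            }
        } catch (OutOfMemoryError oome) {
            // Per the quoted guarantee, every softly-reachable array has
            // been cleared by the time the OOME is thrown (the OOME itself
            // is eventually driven by the strongly held SoftReference
            // objects piling up in the list).
            long cleared = refs.stream().filter(r -> r.get() == null).count();
            System.out.println("Cleared " + cleared + " of " + refs.size());
        }
    }
}
```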
I started to learn WireMock. My first experience is not very positive. Here's a failing MRE:

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import org.junit.jupiter.api.Test;

public class GenericTest {
    @Test
    void test() {
        new WireMockServer(8090);
    }
}
```

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.wiremock</groupId>
    <artifactId>wiremock</artifactId>
    <version>3.5.1</version>
    <scope>test</scope>
</dependency>
```

```lang-none
java.lang.NoClassDefFoundError: org/eclipse/jetty/util/thread/ThreadPool
```

I debugged it a little:

```java
public WireMockServer(int port) {
    this(/* -> this */ wireMockConfig() /* <- throws */.port(port));
}
```

```java
// WireMockConfiguration
// ↓ throwing inline
private ThreadPoolFactory threadPoolFactory = new QueuedThreadPoolFactory();

public static WireMockConfiguration wireMockConfig() {
    return /* implicit no-args constructor */ new WireMockConfiguration();
}
```

```java
package com.github.tomakehurst.wiremock.jetty;

import com.github.tomakehurst.wiremock.core.Options;
import com.github.tomakehurst.wiremock.http.ThreadPoolFactory;
// ↓ package org.eclipse does not exist, these lines are in red
import org.eclipse.jetty.util.thread.QueuedThreadPool;
import org.eclipse.jetty.util.thread.ThreadPool;

public class QueuedThreadPoolFactory implements ThreadPoolFactory {
    @Override
    public ThreadPool buildThreadPool(Options options) {
        return new QueuedThreadPool(options.containerThreads());
    }
}
```

My conclusions:

1. WireMock has a dependency on `org.eclipse`
2. WireMock doesn't include this dependency in its artifact
3. I have to provide it manually

```xml
<!-- like so -->
<dependency>
    <groupId>org.eclipse.jetty</groupId>
    <artifactId>jetty-util</artifactId>
    <version>12.0.7</version>
</dependency>
```

I even visited their [GitHub][1] to see for myself if the dependency is marked as `provided`, but they use Gradle, and I don't know Gradle. But that's not all! You'll also have to include (at least) `com.github.jknack.handlebars` and `com.google.common.cache` (see `com.github.tomakehurst.wiremock.extension.responsetemplating.TemplateEngine`).

Luckily, I found this "stand-alone" artifact that doesn't require any manual props:

```xml
<dependency>
    <groupId>org.wiremock</groupId>
    <artifactId>wiremock-standalone</artifactId>
    <version>3.5.1</version>
    <scope>test</scope>
</dependency>
```

My question:

1. Why aren't all artifacts "stand-alone"? Why do artifacts that don't work unless propped up by manually declared dependencies even exist? What are their advantages?

[1]: https://github.com/wiremock/wiremock/tree/master
ASP.NET Core antiforgery token that works over multiple tabs?
|angular|asp.net-mvc|asp.net-core|.net-8.0|antiforgerytoken|
Problem solved! After days of research, the reason for that behaviour turned out to be completely different. The Android Maps SDK and the API key work fine and as expected. The reason was the following: because I use a self-signed SSL certificate for server validation, I had added a custom X509TrustManager implementation to my network class some weeks before. And the problem was: I added it to the class HttpsURLConnection (huge mistake!) and not to an instance of that class. The Google Maps SDK makes network requests for tiles, but they ended up in my custom X509TrustManager. Inside this custom implementation I check only for my self-signed certificate and not for Google certificates, so all Google Maps SDK requests were blocked.
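To illustrate the difference, here is a hedged sketch (exception handling omitted; `myTrustManager` stands for whatever custom X509TrustManager you built, and the URL is a placeholder):

```
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;

SSLContext ctx = SSLContext.getInstance("TLS");
ctx.init(null, new TrustManager[] { myTrustManager }, null);

// Process-wide (the mistake): EVERY HttpsURLConnection in the app now
// uses the custom trust manager, including the Maps SDK tile requests.
HttpsURLConnection.setDefaultSSLSocketFactory(ctx.getSocketFactory());

// Per-connection (the fix): only this one connection uses it.
HttpsURLConnection conn =
        (HttpsURLConnection) new URL("https://my-server.example/").openConnection();
conn.setSSLSocketFactory(ctx.getSocketFactory());
```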
It looks like the problem is caused by state 1, where the step length distribution is estimated to be gamma with mean `3.426185e-15` and standard deviation `8.356259e+09`. This happens because you have very many steps of length exactly zero, and so state 1 is basically just used for zero inflation (as indicated by the zero mass parameter equal to 1). There are a few options to circumvent this problem: 1. As Dan suggested, you could add a little bit of random jitter to the data to remove the steps of length zero. Given that only very small perturbations are needed, I don't think that the approach matters much, and you could possibly use `rnorm`, i.e.: ``` # this assumes that x and y are in km, and that it's # reasonable to add noise with standard deviation 1 meter data$x <- data$x + rnorm(nrow(data), 0, 0.001) data$y <- data$y + rnorm(nrow(data), 0, 0.001) move_data <- prepData(data, type = "UTM") ``` 2. The long periods during which the animal is completely still could be removed from the data altogether, to avoid the problem you have, and to save computation time. The idea would be to split a track where there is a long period of no movement, which would require changing the `ID` column to indicate that some data have been removed. This would be a little more difficult to implement in a general way. As a side note if anyone else has this problem: this error usually indicates that some parameters take extreme values, and so that there is a problem when trying to compute the state-dependent distributions. In many cases, it is resolved by trying different sets of starting values, or by trying a simpler model formulation; but I don't think this is applicable to your situation. [1]: https://cran.r-project.org/web/packages/moveHMM/vignettes/moveHMM-starting-values.pdf
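For completeness, here is one possible sketch of option 2 (hedged: it assumes a single animal whose rows are ordered in time, and the 10-step threshold is arbitrary):

```
# flag steps of length zero (same x and y as the previous location)
zero_step <- c(FALSE, diff(data$x) == 0 & diff(data$y) == 0)

# find runs of at least 10 consecutive zero steps and drop those rows
r <- rle(zero_step)
long_still <- rep(r$values & r$lengths >= 10, r$lengths)
keep <- which(!long_still)

# re-label IDs so that each retained segment becomes its own track
segment <- cumsum(c(1, diff(keep) > 1))
data_split <- data[keep, ]
data_split$ID <- paste0(data_split$ID, "_", segment)

move_data <- prepData(data_split, type = "UTM")
```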
Here is what I landed on. Short version: I used `Patterns.askWithReplyTo()` as I show below: ``` Patterns.askWithReplyTo(Adapter.toClassic(typedActorRef), ref -> new TypedActor.Message("cat",Adapter.toTyped(ref)), timeout) ``` This gives me access to the temporary actor created by the `ask` pattern, which I transform to a typed actor to allow the Typed actor to reply to this actor instead of directly to the asking actor. This allows the Future to complete. My complete classic actor example is below. A Typed actor that handles `TypedActor.Message` and replies with a `TypedActor.Reply` to the replyTo argument is not shown. ``` package com.example; import akka.actor.AbstractActor; import akka.actor.Props; import akka.actor.typed.javadsl.Adapter; import akka.pattern.Patterns; import akka.util.Timeout; import scala.compat.java8.FutureConverters; import java.time.Duration; public class ClassicAskerActor extends AbstractActor { private final akka.actor.typed.ActorRef<TypedActor.Command> typedActorRef; static Props props() { return Props.create(ClassicAskerActor.class); } private ClassicAskerActor() { this.typedActorRef = Adapter.spawn(getContext().system(), TypedActor.create(), "typedActor"); } @Override public Receive createReceive() { return receiveBuilder() .match(String.class, this::onStringMessage) .build(); } private void onStringMessage(String message) { Timeout timeout = Timeout.create(Duration.ofSeconds(5)); FutureConverters.toJava(Patterns.askWithReplyTo(Adapter.toClassic(typedActorRef), ref -> new TypedActor.Message("cat",Adapter.toTyped(ref)), timeout)) .toCompletableFuture() .thenAccept(response -> { System.out.println("Response received: " + ((TypedActor.Reply) response).value()); }) .exceptionally(ex -> { System.out.println("Failed to get response: " + ex.getMessage()); return null; }); } } ``` I would be glad for feedback on what I am suggesting here.
Why do "non-stand-alone" artifacts exist?
I strongly suggest that you write this using Swift Concurrency, instead of using completion handlers. It would look something like this: func downloadWithEscaping() async throws -> UIImage { let (data, response) = try await URLSession.shared.data(from: url) guard let image = handleResponse(data: data, response: response) else { throw Errors.invalidImage } return image } enum Errors: Error { case invalidImage } <hr> As for the second error, it is saying that your completion handler is not `@Sendable`. You can fix the error by just marking it as `@Sendable`, func downloadWithEscaping(completion: @escaping @Sendable (UIImage?, Error?) -> Void) but of course this puts certain restrictions on what you can do in the completion handler (the same restrictions as what you can do in the completion handler of `URLSession.shared.dataTask(with: url)`), so you might get more errors depending on how you use this method. For the first error, if your class cannot be `final`, that means subclasses can add whatever properties they like in a non-sendable way, so capturing `self` is inherently unsafe. That said, `handleResponse` doesn't use any instance properties. It might as well be a `static func` (or `class func` if you allow subclasses to override it). Then you can call it with `Self.handleResponse(...)` instead of `self.handleResponse(...)`.
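For example, here is a sketch of that `static` variant (the body is only a guess at what a typical `handleResponse` does; keep your own logic):

    // requires `import UIKit`
    static func handleResponse(data: Data?, response: URLResponse?) -> UIImage? {
        // accept only 2xx responses whose body decodes as an image
        guard
            let data,
            let http = response as? HTTPURLResponse,
            (200..<300).contains(http.statusCode)
        else { return nil }
        return UIImage(data: data)
    }

and then call it as `Self.handleResponse(data: data, response: response)` inside the completion handler, so no capture of `self` is needed.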
This is considered an anti-pattern in the modern React ecosystem. Following the single-responsibility principle, keep your business logic simple with custom hooks (use prototype inheritance in them if required), use TanStack Query to store API calls, and use Jotai (atoms) to store global data. These libraries are very easy to learn and maintain. You don't need to write Redux (actions, reducers and store), Redux Toolkit, or other boilerplate code today. Even if you learn those concepts, they are not so useful in other stacks. Even today many React interviewers ask Redux questions; I hope they will update their projects with the best practices mentioned soon. A sample snippet is given below. ``` const Counter = () => { const [value, setValue] = useAtom(counterAtom); const { increment, decrement } = useCounter(setValue); return ( <> <Button onPress={increment}>Increment</Button> <Button onPress={decrement}>Decrement</Button> </> ); }; ``` Bonus: you can easily write a unit test for that custom hook `useCounter`.
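For reference, one possible shape for the `useCounter` hook used above (hypothetical; it assumes the Jotai setter returned by `useAtom` is passed in):

```
import { useCallback } from 'react';

export const useCounter = (setValue) => {
  // functional updates keep the hook independent of the current value
  const increment = useCallback(() => setValue((v) => v + 1), [setValue]);
  const decrement = useCallback(() => setValue((v) => v - 1), [setValue]);
  return { increment, decrement };
};
```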
|java|wiremock|artifact|
When I use the same Keras pipeline to get the results, I'm getting different results. ``` results = pipeline.recognize([ultralytics_crop_objects[5]]) print(results) --> ji856931 results = pipeline.recognize(ultralytics_crop_objects) print(results[5]) --> ji8569317076 ``` [ultralytics_crop_objects[5]](https://i.stack.imgur.com/x6QvJ.png) [Different Results][1] Does anybody have an explanation for that? I checked a couple of times whether I accidentally used another pipeline or whether the input pictures are different, but they aren't. And I googled a lot. [1]: https://i.stack.imgur.com/IhpHi.png
You can pass configuration options like: ```shell pecl install -D 'with-php-config="/usr/local/bin/php-config" with-libmemcached-dir="/usr" with-zlib-dir="no" enable-memcached-msgpack="no" enable-memcached-json="no" with-system-fastlz="no" enable-memcached-igbinary="yes" enable-memcached-protocol="yes" enable-memcached-sasl="yes" enable-memcached-session="yes"' memcached ```
I have two dataframes. df1 has a few columns (float64 type, representing some categories) like var1, var2, var3, with values between 1 and 5 for each and some missing values for the categories:

    df1
    col1  var1  var2  var3
    X11   ...   ...   NA
    X12   ...   NA    ...
    X13   NA    ...   ...

I want to fill in the missing values (in each of the var1, var2, and var3 columns) using another dataframe, df2, which has a column with the value for the category:

    df2
    col1  col2  val  col4
    X11   var1  3    X11-X21
    X12   var3  2    X21-X22
    X13   var2  1    X13-X32

How could I do this? Since we need to look up on several columns, and also because of the structure of df1, I found it complicated to use pivot or melt, or even one-hot encoding (it produces 5 columns each, suffixed _1 to _5). I also thought about creating a set, but then the pairs must be unique, which is not the case. The same goes for using a dictionary, as I cannot think of unique keys. How could I solve this issue? Thanks.
lookup missing values from another dataframe in pandas
|python|pandas|lookup|
I have a query that I have cached forever.

controller.php:

```
$duplex = Cache::rememberForever('duplex', function () {
    return Property::where('buildingtype_id', '4')->orderBy('created_at', 'desc')
        ->get();
});
```

In view.php I have added a filter with which users can filter the cached data down to what they want.

In model.php:

```
public function scopeFilter($query, array $filters)
{
    // search filter address
    $query->when($filters['search'] ?? false, fn($query, $search) =>
        $query->where(fn($query) =>
            $query->where('address', 'like', '%' . $search . '%')
        )
    );
}
```

So I want that, when a user searches for certain strings, it goes through the address of the cached data `$duplex` and filters it. My idea is:

```
$duplex->filter(request(['search']))->withQueryString();
```

But this gives an error.
Illuminate\Support\Collection::filter(): Argument #1 ($callback) must be of type ?callable, array given, called in
|php|laravel|
|reactjs|jestjs|ts-jest|babel-jest|
Update for anyone searching for the same thing: ever since Wagtail 2.12, the original answer does not work, as `stream_data` as a property of `StreamValue` was deprecated. You can use the following to achieve the same thing:

```
def find_block(block_name):
    for page in YourPageModel.objects.all():
        for block in page.body:
            if block.block_type == block_name:
                print('@@@ Found one: ' + str(block) + ' at page: ' + str(page))
```
It appears that you just want to find records of employees having salaries greater than the average. If so, then the `GROUP BY` clause is superfluous. Remove it, and use this version: <!-- language: sql --> SELECT * FROM db_hr.employee WHERE salary > (SELECT AVG(salary) FROM db_hr.employee); By the way, regarding the use of `GROUP BY`: if a query does not select any aggregates of columns, and there is no goal of removing duplicates, then the `GROUP BY` may be unnecessary.
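For contrast, `GROUP BY` is the right tool when you do want one aggregate row per group, for example the average salary per department (the `dept` column here is hypothetical):

<!-- language: sql -->

    SELECT dept, AVG(salary) AS avg_salary
    FROM db_hr.employee
    GROUP BY dept;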
```
CMake Error at CMakeLists.txt:21 (find_package):
  Could not find a configuration file for package "Qt6" that is compatible
  with requested version "6.6.3".

  The following configuration files were considered but not accepted:

    C:/Qt/6.6.3/mingw_64/lib/cmake/Qt6/Qt6Config.cmake, version: 6.6.3 (64bit)
```

Can anyone help me out with this error? I have tried building the plugins through qmake, but I don't have a mysql.pro file available at that location. I have also downloaded MySQL Connector/C and MySQL Server 8.0.2.
I have installed Qt version 6.0.3, and the error "QMYSQL driver not loaded" keeps displaying again and again
|mysql|qt|cmake|qmake|qsqldatabase|
|r|machine-learning|keras|deep-learning|mnist|
Hello, I am building an Android application using Clang Qt 6.4.1 armeabi-v7a. However, on other devices such as the Pixel 4 (Android 13) I get a "blank space" appearing on the right and at the bottom of the application when I change the orientation of the device from Portrait to Landscape and vice versa. Here are some photos. The app: https://drive.google.com/file/d/1mJqAhuD057NpcaHBrsdEIi0jYGLdTe9Z/view?usp=drivesdk When flipped horizontally: https://drive.google.com/file/d/1m6o4rF59u9ABez2wUEItS0t6ax3hH2p-/view?usp=drivesdk The issue is in the second image. Is this a bug? What do you recommend? Should I try using Felgo for Qt 5 instead? When the device's orientation changes from Portrait to Landscape, I expected the Felgo app to adapt accordingly, as it does in the Felgo Mobile App. But instead, when I run a debug build, install it on the device and change orientation, some extra space appears on the right. See picture: https://drive.google.com/file/d/1m6o4rF59u9ABez2wUEItS0t6ax3hH2p-/view?usp=drivesdk
Felgo 4 Android Orientation bug
|android|orientation|felgo|
Use a `CROSS JOIN` and a subquery to get your desired result:

```
select a.AgeGroup, a.SUMNumberOfFans * 1.0 / b.SUMNumberOfFans2
from (select AgeGroup, SUM(NumberOfFans) as SUMNumberOfFans
      from FansPerGenderAge
      where date = (select max(date) from FansPerGenderAge)
      group by AgeGroup) a
join (select SUM(NumberOfFans) as SUMNumberOfFans2
      from FansPerGenderAge
      where date = (select max(date) from FansPerGenderAge)) b
  on 1 = 1
```
|amazon-web-services|terraform|devops|amazon-eks|aws-devops|
I am creating an operating system, and I created a child process in C with this code:

```
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
#include <windows.h>

int main() {
    FILE *fptr;

    // Open a file in read mode
    fptr = fopen("filename.txt", "r");

    // Store the content of the file
    char myString[100];
    fgets(myString, 100, fptr);
    fclose(fptr);

    PROCESS_INFORMATION ni;
    STARTUPINFO li;
    ZeroMemory(&li, sizeof(li));
    li.cb = sizeof(li);

    if (CreateProcess(NULL, "child_process.exe", NULL, NULL, FALSE, 0, NULL, NULL, &li, &ni)) {
        // Parent process
        WaitForSingleObject(ni.hProcess, INFINITE);
        CloseHandle(ni.hProcess);
        CloseHandle(ni.hThread);
    } else {
        // Child process
    }

    pid_t pid = getpid();
    printf("(%d) WARNING: These processes are vital for the OS:\n", pid);
    printf("(%d) %d\n", pid, pid);
    printf("(%d) %s\n\n\n", pid, myString);
    return 0;
}
```

And I could not end the child process. ***I do not want to use signals, as they are too complex and I am a beginner.*** I tried using `return 0;` and it did not work; the process was still running.
I need to know if there is any easy migration path for an app from layouts to view binding in Android Studio.
Clarification: is there any simple way to update the layouts to view binding in onCreate?
|android|kotlin|
We have a platform that allows users to view and sign a document. We did this using the Firebase Storage `getSignedUrl` method to create an array `[document with read access, document with write access]`. The issue we have is that we keep getting a CORS error. We have set the CORS configuration on the Google console a couple of times, but we still get the same error.[![CORS error][1]][1] Is there a way to solve this issue? An example of this can be replicated [here][2] if you try to sign the document.

[1]: https://i.stack.imgur.com/C6DPF.png
[2]: http://localhost:5173/shared/-NrBwOCgu226c7DtrgPh/-Nr_-6878_2wIU-ZclFm

Below is my CORS configuration:

    [
      {
        "origin": ["*"],
        "method": ["GET", "PUT", "HEAD"],
        "maxAgeSeconds": 3600
      }
    ]
You can use [first-class callable syntax](https://www.php.net/manual/en/functions.first_class_callable_syntax.php) since PHP 8.1:

    Routes::setup($router, $injector, $this->toRouterCallable(...));

Before that, you can use the [Closure](https://www.php.net/manual/en/class.closure.php) class:

    $cl = Closure::fromCallable([$this, 'toRouterCallable']);
    Routes::setup($router, $injector, $cl);
All services were stopped. Instead of changing the ports, I did these steps and it worked for me: right-click on the WAMPSERVER icon -> go to **Tools** -> and then click on **Reinstall all services**. Thank you.
I'm using the extension "Custom right-click menu" to add a custom site search. When I use the context menu, it adds spaces to the search term; I need it to use a `+` instead.

```
var search = crmAPI.getSelection() || prompt('Please enter a search query');
var url = 'https://www.subetalodge.us/list_all/search/%s/mode/name';
var toOpen = url.replace(/%s/g, search);
window.open(toOpen, '_blank');
```

I'm a noob and I have literally no idea what I'm doing.
How do I add a custom delimiter to this context menu search?
|javascript|
>**Am I missing some setting/switch/or option that controls population of the Client/dist folder?**

- You were not building the `dist` folder using the `npm run build` command.
- You need to upload after building.
- In your `.yml` file you were only selecting the Node version; no operations were being done after node selection.

This worked for me.

**My directory:**

```
Webapp/
│
├── .github/
│   └── workflows/
│       └── main_dotnetwithangualr.yml
│
├── WebApplication2/
│   ├── angular-app/
│   │   ├── src/
│   │   │   └── ...
│   │   ├── angular.json
│   │   ├── package.json
│   │   └── ...
│   ├── Controllers/
│   │   └── ...
│   ├── Models/
│   │   └── ...
│   ├── Properties/
│   │   └── ...
│   ├── Views/
│   │   └── ...
│   ├── wwwroot/
│   │   └── ...
│   ├── appsettings.json
│   ├── Program.cs
│   └── WebApplication2.csproj
│
└── WebApplication2.sln
```

[![][1]][1]

**`main_dotnetwithangualr.yml`:**

```yml
# Docs for the Azure Web Apps Deploy action: https://github.com/Azure/webapps-deploy
# More GitHub Actions for Azure: https://github.com/Azure/actions

name: Build and deploy ASP.Net Core app to Azure Web App - dotnetwithangualr

on:
  push:
    branches:
      - main
  workflow_dispatch:

jobs:
  build:
    runs-on: windows-latest

    steps:
      - uses: actions/checkout@v4

      - name: Set up .NET Core
        uses: actions/setup-dotnet@v1
        with:
          dotnet-version: '8.x'
          include-prerelease: true

      - name: Set up Node.js version
        uses: actions/setup-node@v3
        with:
          node-version: '18.x'

      - name: npm install, build, and test
        run: |
          npm install
          npm run build
        working-directory: WebApplication2/angular-app/ # builds `dist`; since `package.json` lives in "WebApplication2/angular-app/", I added a working directory

      - name: Upload artifact for deployment job
        uses: actions/upload-artifact@v3
        with:
          name: angular
          path: ./WebApplication2/angular-app/dist/ # uploading files and folders created inside `dist`

      - name: Build with dotnet
        run: dotnet build --configuration Release

      - name: dotnet publish
        run: dotnet publish -c Release -o ${{env.DOTNET_ROOT}}/myapp

      - name: Upload artifact for deployment job
        uses: actions/upload-artifact@v3
        with:
          name: .net-app
          path: ${{env.DOTNET_ROOT}}/myapp

  deploy:
    runs-on: windows-latest
    needs: build
    environment:
      name: 'Production'
      url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}

    steps:
      - name: Download artifact from build job
        uses: actions/download-artifact@v3
        with:
          name: .net-app

      - name: Download artifact from angular build job
        uses: actions/download-artifact@v3
        with:
          name: angular

      - name: Deploy to Azure Web App
        id: deploy-to-webapp
        uses: azure/webapps-deploy@v2
        with:
          app-name: 'dotnetwithangualr'
          slot-name: 'Production'
          package: .
          publish-profile: ${{ secrets.AZUREAPPSERVICE_PUBLISHPROFILE_E48F4EC67FB14F22A7CD2D3AA9C7BDF5 }}
```

# `OUTPUT`:

**`dist` in local:**

[![][2]][2]

**Uploaded files on Azure:**

[![][5]][5]

[![][3]][3]

[![][4]][4]

[1]: https://i.imgur.com/1vBTOL7.png
[2]: https://i.imgur.com/xmlKMlv.png
[3]: https://i.imgur.com/mqDhLGP.png
[4]: https://i.imgur.com/PghUF0M.png
[5]: https://i.imgur.com/sP8wq4d.png
|iterator|iteration|nim-lang|
I have a registry of class constructors. The registry is also used to create instances of these classes. To allow creating a registry of anything that extends `Base`, I use a generic class `Registry` that infers its repositories from the descriptors passed in the constructor. It works well. But each instance generated with a registry should also reference it, and I'm stuck at getting TS to keep a precise inference.

```typescript
type RepositoryDescriptor = {
  ctor: () => (new(options: RepositoryOptions<any, any>) => Base<any, any>)
  // would like to have something like a generic class constructor to allow inference
  // ctor: () => (new(options: RepositoryOptions<R, any>) => Base<R, any>)
}

type RepositoryCtor<T extends RepositoryDescriptor> = ReturnType<T['ctor']>
type Repository<T extends RepositoryDescriptor> = InstanceType<RepositoryCtor<T>>
// Think of this logic either that doesn't work
// type Repository<T extends RepositoryDescriptor, R> = (InstanceType<RepositoryCtor<T>>)<R>

type RepositoryOptions<R, V> = {
  n: string,
  v: V
  r: R
}

class Registry<R extends {[key: string]: RepositoryDescriptor}> {
  repositories: R

  constructor(registry: R) {
    this.repositories = registry
  }

  get<K extends string & keyof R, T extends Repository<R[K]>>(key: K, v: T['v']) : T {
    let ctor = this.repositories[key].ctor();
    return new ctor({r: this, v, n: key}) as T
    // Alternatively of working on constructor, something like this may be great
    // return new ctor({r: this, v, n: key}) as T<R>
  }
}

class Base<R, V> {
  registry: R
  n: string
  v: V

  constructor({n, v, r}: RepositoryOptions<R, V>) {
    this.registry = r
    this.n = n
    this.v = v
  }
}

class A<R> extends Base<R, boolean> {}
class B<R> extends Base<R, number> {}

const registry = new Registry({
  A: { ctor: () => A },
  B: { ctor: () => B }
})

let a = registry.get('A', true) // It's typed as A<any>, registry type information is lost
let b = registry.get('B', 50) as B<typeof registry> // Cast can solve my issue but I prefer to avoid it if possible

let c = a.registry.get('C', true) // Should be an error
let d = b.registry.get('C', true) // Error is raised as b is casted
```

[TS playground](https://www.typescriptlang.org/play?#code/C4TwDgpgBAShYHsDOBLYCBOIAiEkGMMUx0MoBeKAbwCgp6p9SAuKACgEoKA+dgOwgB3NghIoEfJKziJUpEAHkxEpAB4AhnxAAaKJpDcu5XgCF1SCBq279hugwD0DqIIQBXADYATKB5QBraHQoAAt1ADdoJAQAWwhgEJQ+AHNfAOh1KGSIASJ8Rg9zJEYVYAw3JkwoYPUPDwRBKCSAMwgMHPwIe3onRhZ2I142AWFRYHFJaXhkNExFZUlVGBstQx4oMwsllYMOGgBfGhpQSFhpuTmAYVJVABUoCAAPYByvYpkZ+VwCIhJMXkocGAbgwfFu4EstwA2gBySoYGEAXW4xwhZ1ksywdwez1e73OmJweEIxFIAKgAElJMBNJ1wZAlgT5NdMHduCjerdEnx-FAEM1qolivVkih8hA0CE2oL1MAoF4EHg+DC5a4MP4aL0TtAPhcsfcni8+G90Z85t8SX8MLoYOS2FSkDS+HSIYyMcybrd2RwlijUaddYSlOMVNsoAA1cm0BhQPisR1EFLabpQcKscMpjCsWAHI74QpIfGihMgJY4o0mqhQwIgeNlJLJRFTd3m4m-Uj7XjRhjtFtEPDSI4x-ClcrwtjtYtlWuwLjdmP0BIoJAAOl7Zv7xUok+X05ThxT2WAqgA0uW8VAEw2oAAyKA1-mwXQG3HG-F90swKEn5HcNg11gT10NMoGhGFwiRLhWHuecFw8eI+iqSgl1Xdc9RQPBqwgEBERXccOAAbiHBd6HaYFQVjIREIwNgqCzQVl2A3Q43vbD9i4cxQJTGNegAQQ8F5QVlFBIg8EA+QFNV-GvCQSmpMdSF0aI4iXFI0kCBjihidRxIAI2gZJ2llbjHGcMiQT4SjGnHOjWBQpjY1YGt2L0Yorb3XMaHzIoNnMSxlgjLtMwgKcsEHGMWKvFIUxAjMUxHeSKlIWi+AcjB9mbDd5hDRZAsjOcTMXIU11C3csAoKAMCKzSV0syg+BqlCV3CSrwk8g8fMLKBeN9c83z8rZAt0hAEHgzQu0OLrihMPrDQvTYAuYtwYn0jBJrzUoqrKktKpGM4wpAWiU141hYOHfpOHWXj92TGMTDOmr4VYK7jA2Dq9hoeC5UybcdunFcjzYGFeJhXQyjcCAuF6ClgBhYptR8Tjets
XQd127Umj4ZpMG03KmmFZBgC+hDdMq9GAaBmETDBqAAFYAAYOJm1RtUfCmsF4XpLnMOV8E0S8xsiHwYnE5ckEhqBdLcOUKSgMB2laMganCBAUB8NAmgFWRUF0+Cjm+xhKvUUrDsB+JgcuWmIahqBegAZRCdxvCljJLLaDBMBJuUfEoXTTfKkBzeAS3rfKW3egUE8gA) As I see it, there are two possible ways:

1. Working on the `RepositoryDescriptor` type to make it work like a generic class constructor that allows inference of the registry type
2. Working on the `Repository` type to be able to use it with a generic and cast as `Repository<R>` in `Registry::get`

I've failed with both approaches. I'm not opposed to another pattern that achieves the same goal, if it is more convenient with TypeScript.
How to write a non-generic type for the constructor of a generic class in TypeScript, or use generics with utility types?
I finally got the solution. The problem was that I was referencing the stage of the storage integration in the wrong manner:

```
... table(infer_schema(location=>'@adf_copyparquetfile_stage_dev/Account', file_format=>'DEV_PARQUET_TYPE')));
```

But I realized that the route for the stage was different in Snowflake, so I changed it. (The best way to find the route or name: in Snowflake, go to Worksheets, then to Databases, find the stage, press the three dots and choose "Place Name in Editor".)

```
... table(infer_schema(location=>'@DEV_LANDING_CRM_DB.POC_CCV.ADF_COPYPARQUETFILE_STAGE_DEV', file_format=>'DEV_PARQUET_TYPE')));
```
I was able to do this quickly by editing the user under Security and, under Securables, setting the permissions there.
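If you prefer scripting it rather than clicking through SSMS, the same securable permissions can be granted in T-SQL (the object and user names below are placeholders):

```
GRANT SELECT, INSERT, UPDATE ON OBJECT::dbo.SomeTable TO some_user;
```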
Use `Invoke-WebRequest` instead for your first call:

```
$reportResponse = Invoke-WebRequest -Uri $generateReportUri -Method Post -Body ($reportRequest | ConvertTo-Json) -Headers @{ "Authorization" = "Bearer $((Get-AzAccessToken).Token)" }
```
Look at the following code:

```python3
import asyncio

async def count():
    print("One")
    await asyncio.sleep(1)
    print("Two")

async def main():
    await asyncio.gather(count(), count(), count())  # THIS LINE

if __name__ == "__main__":
    import time
    s = time.perf_counter()
    asyncio.run(main())
    elapsed = time.perf_counter() - s
    print(f"{__file__} executed in {elapsed:0.2f} seconds.")
```

Look at `# THIS LINE`: asyncio.gather can do its work before its parameter count() returns a value. But as I understand Python, the interpreter considers the outer function a black box and focuses on evaluating its arguments first. When all the argument values are done, the interpreter then passes them to the function to execute. According to my understanding above, there would be no difference between:

```python3
await asyncio.gather(count(), count(), count())
```

and

```python3
await (count(), count(), count())
```

The latter is an unassigned tuple. But how does `asyncio.gather` implement its job in such a form? Or is there something special about the async function definition itself?
I have two endpoints, /login and /index. The /login endpoint is for authentication and getting an access token. The /index endpoint is protected, and the user should log in first. After authentication, /login sends the access token to the client. The client should be redirected to the /index page after receiving an access token. Here is the code:

    oauth2_scheme = OAuth2PasswordBearer(tokenUrl="login")

    @app.get("/index")
    async def root(
        request: Request, current_user: Annotated[User, Depends(oauth2_scheme)]
    ):
        return templates.TemplateResponse("index.html", {"request": request})

    @app.post("/login")
    async def login(
        request: Request,
        data: Annotated[OAuth2PasswordRequestForm, Depends()],
    ):
        user = authenticate_user(data.username, data.password)
        if not user:
            raise HTTPException(...)
        access_token = create_access_token(...)
        response.set_cookie(key="access_token", value=access_token, httponly=True)
        return Token(access_token=access_token, token_type="bearer")

Here is the client form:

    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
        <title>Login</title>
    </head>
    <body>
        <h1>Login</h1>
        <form id="loginForm" >
            <table>
                <tr>
                    <td>
                        <label for="username">Email:</label>
                    </td>
                    <td>
                        <input type="text" id="username" name="username" required>
                    </td>
                </tr>
                <tr>
                    <td>
                        <label for="password">Pass Code:</label>
                    </td>
                    <td>
                        <input type="text" id="password" name="password" required>
                    </td>
                </tr>
                <tr>
                    <td></td>
                    <td>
                        <button type="submit" style="margin-top: 15px">Submit</button>
                    </td>
                </tr>
            </table>
        </form>
        <script>
            document.getElementById("loginForm").addEventListener("submit", function (event) {
                event.preventDefault();
                fetch("/login", {
                    method: "POST",
                    body: new FormData(event.target)
                })
                .then(response => {
                    if (response.ok) {
                        return response.json();
                    } else {
                        throw new Error('Failed to authenticate');
                    }
                })
                .then(data => {
                    window.location.href = '/index';
                })
                .catch(error => console.error("Error:", error));
            });
        </script>
    </body>
    </html>

This code does not work because the Authorization header is missing. I get this error:

    {"detail":"Not authenticated"}

The test in /docs works because it sends the access token in the Authorization header. This command works:

    curl -X 'GET' \
      'http://127.0.0.1:8000/index' \
      -H 'accept: application/json' \
      -H 'Authorization: Bearer A_VALID_TOKEN '

I do not know how I should handle the client side. I am not sure if I should send another fetch for /index, get the HTML content and assign it to the body section. I do not know what the best practice is. Maybe I can use RedirectResponse from fastapi.responses; I am not sure whether that is good practice or not. /login should send back an access token and not HTML code, I think.

**Edit**

The access token is stored in a cookie.

**Edit 2**

I added a hidden form and used a POST method to reach the index endpoint:

    <form action="/index" id="token_form" method="post" style="display: none;">
        <label>
            <input type="text" id='token' name="token" value="YOUR TOKEN" hidden>
        </label>
    </form>

In the back end I did not use the Depends feature for now, like:

    @app.post("/index")
    async def root(
        request: Request,
    ):
        frm = await request.form()
        return templates.TemplateResponse("index.html", {"request": request})

But ideally I am looking for a way to protect the endpoint through dependency injection:

    @app.post("/index")
    async def root(
        current_user: Annotated[User, Depends(oauth2_scheme)]
    ):
To address the issue you're facing, simply execute the command below. This will clear out all installed Vagrant plugins and reinstall them, which often resolves problems: vagrant plugin expunge --reinstall This command is particularly useful for troubleshooting issues related to corrupt or incompatible plugins in Vagrant. By performing this action, you'll ensure that you have clean, up-to-date versions of your plugins, potentially fixing the problem at hand.
From the documentation ([PHP_Codesniffer: ignoring parts of a file][1]): You can try wrapping the offending code in special comments. Their example: $xmlPackage = new XMLPackage; // phpcs:disable $xmlPackage['error_code'] = get_default_error_code_value(); $xmlPackage->send(); // phpcs:enable (Imperfect answer but at least it can let you get to a clean `phpcs` run.) [1]: https://github.com/squizlabs/PHP_CodeSniffer/wiki/Advanced-Usage#ignoring-parts-of-a-file
Since you have explicitly mentioned a decreases clause, Dafny will use that decreases clause. Your assumption that it will compare `x + y` with the tuple `x`, `y` is wrong. It would have chosen the tuple `x`, `y` only if you hadn't provided a decreases clause. In either case it compares the decreases clause of the invoking function/lemma with the decreases clause of the called function/lemma. Hence it compares `x+y` with `x+y` (the values at the recursive call) in this case.

Edit: Dafny doesn't compare the caller's decreases clause with the callee's parameter tuple. It compares the caller's decreases clause with the callee's decreases clause. Since here the caller and callee are the same, it is the same expression. If either of these doesn't have a decreases clause, the decreases clause is defaulted to the parameter tuple, but Dafny still compares the decreases clauses of caller and callee. See this example:

```
lemma Test(x: nat, y: nat)
  decreases y
{
  if y > 0 {
    Test(x+1, 0);
  }
}
```

Here, according to your argument, y might be less than the tuple `x+1`, `0` (depending on the value of x), but it still verifies.

Now take the case when it is called with `x = 3` and `y = 2`. Here `x + y` is 5, and when you call recursively in the last else-if branch it will be `x = 2` and `y = 3`, but `x + y` is still 5. It is not decreasing, hence Dafny is complaining.
There are no "Rules" on where to install python, it's all up to personal preference, but having multiple installations with different libraries added to them can cause issues. I suggest deleting the installation with the lesser amount of libraries and adding all missing libraries to the larger installation.
I have a DataFrame with one structure and want to convert it to another structure. The initial DataFrame:

    firstName  names  variableName  variableValue
    abc123     v_001  varX          1.0
    abc123     v_002  varX          2.0
    abc123     v_001  varY          3.0
    abc123     v_002  varY          4.0
    efg456     v_001  varX          1.0
    efg456     v_002  varX          2.0
    efg456     v_001  varY          3.0
    efg456     v_002  varY          4.0

The expected output:

    variableName  varX     varY
    variableType  TypeOne  TypeTwo
    abc123_v_001  1.0      3.0
    abc123_v_002  2.0      4.0
    efg456_v_001  1.0      3.0
    efg456_v_002  2.0      4.0

I tried the pivot option but did not succeed; I could see NaN values.
Deleting duplicates in a table where a unique identifier doesn't quite exist
|mysql-workbench|mysql-innodb-cluster|
My project requires the latest version of TensorFlow (2.15.0). In order to install it, I created a new environment with Python 3.11.8 using Anaconda Navigator. Among the 'Not Installed' packages, TensorFlow 2.10.0 is shown. I also tried to install it outside the Navigator using the command:

    conda install conda-forge::tensorflow

Still, the version getting installed is 2.10.0. How can I install TensorFlow 2.15.0 in the Anaconda environment?
How to install the latest version of TensorFlow in an Anaconda environment
|python|tensorflow|anaconda3|
The configurations of the client, proxy server, and backend server are as follows: CIP (client IP address): 192.168.189.149; VIP (proxy server IP address): 172.19.222.16; RIP (backend intranet server IP address): 192.100.13.203.

The iptables proxy configuration on the proxy server:

1. iptables -A PREROUTING -p tcp -d 172.19.222.16 -j DNAT --to 192.100.13.203
2. iptables -A POSTROUTING -j MASQUERADE

The request message's IP addresses are changed by the reverse proxy; the request and reply travel through the network as below:

Req msg: Client (CIP 192.168.189.149 -> VIP 172.19.222.16) ==> proxy server iptables (VIP 172.19.222.16 -> RIP 192.100.13.203) ==> RealServer

Rsp msg: RealServer (RIP 192.100.13.203 -> VIP 172.19.222.16) ==> proxy server iptables (VIP 172.19.222.16 -> CIP 192.168.189.149) **question here** ==> Client

Question: the destination address of the reply message sent by the real server is not the IP address of the client, and the reverse proxy server is not told the final address (the IP address of the client) of the message. How does the reverse proxy server know that this message should be forwarded to the client? Is it because there is a flow table stored in the iptables module, or some other way?

The detailed proxying of the request and reply messages is as follows:

1. The client sends a message to the proxy server. The proxy server forwards the message to the IP address of the real server based on the DNAT rule, changes the source IP address to the IP address of the proxy server, and routes the message.
2. The real server receives the message from the reverse proxy server. The source IP address of the message is the IP address of the proxy server, and the destination IP address is the IP address of the real server.
3. The real server sends a reply message to the reverse proxy server. The source IP address is the IP address of the real server, and the destination IP address is the IP address of the reverse proxy server.
4. After the proxy server receives the reply message, it forwards the message to the client. The destination address of the message sent by the real server is not the IP address of the client, and the reverse proxy server doesn't know the reply message's final address (the IP address of the client). How does the reverse proxy server know that this message should be forwarded to the client? Is it because there is a flow table stored in the iptables module, or some other way?
When iptables is used for a reverse proxy, how does the proxy server know the client IP address when the real server replies?
|reverse-proxy|iptables|nat|
I think you have to finish your `eval` statement to get correct output. I also shifted `return None` one indent to the left, because you want to apply all the transform checks and see if they fit before returning None (i.e., finish the `for` loop).

```
transform = pd.DataFrame({'Condition': ['Country', 'Country'],
                          'Operator': ['==', '=='],
                          'Comparison': ['US', 'UK'],
                          'Output': ['1', '2']
                          })

df = pd.DataFrame({
    'Country': ['US', 'UK', 'other']
})

def transform_data(row):
    for i in range(len(transform)):
        condition = transform.iloc[i,0]
        operator = transform.iloc[i,1]
        comparison = transform.iloc[i,2]
        output = transform.iloc[i,3]
        if eval(f"row['{condition}']{operator}'{comparison}'"):
            return output
    return None

print(df.apply(transform_data, axis=1))
```

Output:

```
0       1
1       2
2    None
dtype: object
```
In my Spring Boot (version 3.2.3) application I have the following two JPA entities:

    @Entity
    @Table(name = "table_1")
    @Getter
    @Setter
    public class Table1 {

        @EmbeddedId
        private Table1ID id;

        @Column(name = "tt_xx")
        private String xx;

        @Column(name = "tt_yy")
        private String yy;

        @OneToMany(fetch = FetchType.EAGER)
        @JoinColumn(name = "ts_id", referencedColumnName = "tt_id")
        private List<Table2> t2List;

        @Data
        @Embeddable
        public static class Table1ID {

            @Convert(converter = PaddingConverter.class)
            @Column(name = "tt_id")
            private String ttID;
        }
    }

    @Entity
    @Table(name = "table_2")
    @Getter
    @Setter
    public class Table2 {

        @EmbeddedId
        private Table2ID id;

        @Column(name = "ts_xx")
        private String xx;

        @Column(name = "ts_yy")
        private String yy;

        @Data
        @Embeddable
        public static class Table2ID {

            @Column(name = "ts_id")
            private String tsID;

            @Column(name = "ts_key")
            private String tsKey;
        }
    }

So, in my DB I have two tables, *table_1* and *table_2*:

- table_1: has as primary key the column *tt_id*, which is of type DECIMAL(8) on the database.
- table_2: has as primary key the columns *ts_id* (VARCHAR(8)) and *ts_key*.

As you can see in the entities, and for business-logic reasons, I applied a converter to tt_id of table_1 so that in the application we have a value like '00012345' for the DB value 12345. In table_2, however, this is already the case, since the value in the DB is already a string including the leading zeros.

The problem is that the OneToMany relationship I am mapping seems not to be working. It seems that Hibernate does not apply the converter. I also tried defining a relation like:

    @OneToMany(fetch = FetchType.EAGER)
    @JoinFormula(referencedColumnName = "tt_id", value = "CAST(ts_id AS DECIMAL)")
    private List<Table2> t2List;

but I get the error "Values involves formula".

Hibernate version: 6.4.1.Final

Any ideas?
I need to get an IPv6 address through Wi-Fi, but the code below also works without turning on Wi-Fi. Let me know how to get the IPv6 address over Wi-Fi; I am a bit confused, can you please guide me?

    public String getLocalIpV6() {
        try {
            for (Enumeration<NetworkInterface> en = NetworkInterface
                    .getNetworkInterfaces(); en.hasMoreElements(); ) {
                NetworkInterface intf = en.nextElement();
                for (Enumeration<InetAddress> enumIpAddr = intf
                        .getInetAddresses(); enumIpAddr.hasMoreElements(); ) {
                    InetAddress inetAddress = enumIpAddr.nextElement();
                    System.out.println("ip1--:" + inetAddress);
                    System.out.println("ip2--:" + inetAddress.getHostAddress());
                    if (!inetAddress.isLoopbackAddress()
                            && inetAddress instanceof Inet6Address) {
                        String ipaddress = inetAddress.getHostAddress().toString();
                        return ipaddress;
                    }
                }
            }
        } catch (Exception ex) {
            Log.e("IP Address", ex.toString());
        }
        return null;
    }
DataFrame structure conversion
|python|pandas|dataframe|
For me this is what worked:

    .mat-expansion-indicator::after {
        border-color: red;
    }
{"Voters":[{"Id":23474427,"DisplayName":"snj wrk"}],"DeleteType":1}
`lru_cache` is [implemented in C](https://github.com/python/cpython/blob/52517118685dd3cc35068af6bba80b650775a89b/Modules/_functoolsmodule.c#L737), its calls are interleaved with your function's calls, and I think your **C code** recursion is too deep and crashes. In Python 3.11 I get a similar bad crash: ``` [Execution complete with exit code -11] ``` In Python 3.12 I just get an error: ``` Traceback (most recent call last): File "/ATO/code", line 34, in <module> dfs(0) File "/ATO/code", line 31, in dfs s += dfs(u) + 1 ^^^^^^ File "/ATO/code", line 31, in dfs s += dfs(u) + 1 ^^^^^^ File "/ATO/code", line 31, in dfs s += dfs(u) + 1 ^^^^^^ [Previous line repeated 496 more times] RecursionError: maximum recursion depth exceeded ``` That's despite your `sys.setrecursionlimit(2 * 10 ** 9)`. [What’s New In Python 3.12](https://docs.python.org/3/whatsnew/3.12.html#sys) says: > sys.setrecursionlimit() and sys.getrecursionlimit(). The recursion limit now applies only to Python code. Builtin functions do not use the recursion limit, but are protected by a different mechanism that prevents recursion from causing a virtual machine crash So let's avoid that C recursion. If I use a (simplified) Python version of `lru_cache`, the program works fine in both 3.11 and 3.12 without any other change: ``` def lru_cache(_): def deco(f): memo = {} def wrap(x): if x not in memo: memo[x] = f(x) return memo[x] return wrap return deco ```
This is a bit of a hack, but this works as expected: ## Using e_title() + e_text_g() ```{r} df |> e_charts(x) |> e_scatter(y) |> echarts4r::e_title( text = "right", textStyle = list( fontSize = 30, color = "red", fontWeight = 'normal' ), right = "10%", top = "7%" ) |> echarts4r::e_text_g( style = list( text = "left", fontSize = 30, fill = "red" ), left = "10%", top = "7%" ) ``` [![enter image description here][1]][1] ## Using 2x e_title() ```{r} df |> e_charts(x) |> e_scatter(y) |> echarts4r::e_title( text = "right", textStyle = list( fontSize = 30, color = "red", fontWeight = 'normal' ), right = "10%", top = "7%" ) |> echarts4r::e_title( text = "left", textStyle = list( fontSize = 30, color = "red", fontWeight = 'normal' ), left = "10%", top = "7%" ) ``` [![enter image description here][2]][2] [1]: https://i.stack.imgur.com/1snYq.png [2]: https://i.stack.imgur.com/leOfA.png
So I tried stretching the image, but it didn't work. I'm not sure if that's what I'm meant to do, and I can't find anything online that would help.

    Images ImageOne = new();

    public MainWindow()
    {
        InitializeComponent();
        MyCanvas.Children.Add(ImageOne.Background);
    }

    class Images
    {
        public Image Background = new()
        {
            Source = new BitmapImage(new Uri("chessbackground2.jpg", UriKind.Relative)),
            Stretch = Stretch.Fill,
            StretchDirection = StretchDirection.Both
        };
    }

After that I just displayed the image in the window through C# code.
How to connect IPv6 through DHCP via Wi-Fi on Android?
|android|android-source|ipv6|android-networking|inetaddress|
I'm trying to migrate from `elasticsearch:5.6.15` to `elasticsearch:7.17.18`. I have the following issues:

    TransportClient client;
    DeleteByQueryRequestBuilder deleteRequestBuilder = DeleteByQueryAction.INSTANCE.newRequestBuilder(client);
    deleteRequestBuilder.source().setIndices(index);

Error: Cannot resolve method 'newRequestBuilder' in 'DeleteByQueryAction'

The second issue is:

    Retry retryRequest = Retry.on(Exception.class)

Error: Cannot resolve method 'on' in 'Retry'

The third issue:

    Map<String, Object> indexMapping;
    CreateIndexRequest request = new CreateIndexRequest(index);
    request.source(indexMapping);

Error: Required type: XContentBuilder. Provided: Map<java.lang.String,java.lang.Object>

Do you know how I can migrate the code accordingly?

P.S. I created this test project to show the migration issues: https://github.com/rcbandit111/elasticsearch_migration_poc/blob/main/src/main/java/com/test/portal/account/platform/service/WebhookPlatformClient.java

Please clone it and run it.
You can also explore the functions [`REGEXP_EXTRACT`][1] to pull out the list content enclosed in a square bracket [], combined with [`SPLIT`][2] to separate the extracted string into an array of individual values using space ' ' as the delimiter. ``` WITH sample_data AS ( SELECT 1 AS col1, '[1.2 4.2 6.3 3.1]' AS col2 UNION ALL SELECT 2, '[5.2 5.4 0.3 6.1]' UNION ALL SELECT 3, '[3.2 0.1 5.3 6.7]' ) SELECT col1, col2 FROM ( SELECT col1, SPLIT(REGEXP_EXTRACT(col2, r'\[(.*?)\]'), ' ') AS col2 FROM sample_data ) ``` [1]: https://cloud.google.com/bigquery/docs/reference/standard-sql/string_functions#regexp_extract [2]: https://cloud.google.com/bigquery/docs/reference/standard-sql/string_functions#split