Like any kind of `Stream` (e.g. `Promise`s), when you see nesting in `Observable`s you might want to take a step back to see if it's really warranted.
Let's examine your solution bit by bit.
Our starting point is:
```
this.router.paramMap.pipe(takeUntil(this.ngUnsubscribe))
```
Then you subscribe, but within that subscribe you perform observable operations on the emitted data. This strongly suggests you should be `pipe`ing an operator instead, and then subscribe to the final result.
In this case you want to `map` your `params` to some `Observable`. You also might benefit from the "interrupt early" behavior that [`switchMap`](https://rxjs.dev/api/index/function/switchMap) offers. Otherwise there's also [`mergeMap`](https://rxjs.dev/api/index/function/mergeMap) as a potential option if you don't want to "interrupt early" (it used to be more appropriately named [`flatMap`](https://rxjs.dev/api/index/const/flatMap)).
We'll add a [`filter`](https://rxjs.dev/api/index/function/filter) and [`map`](https://rxjs.dev/api/index/function/map) for good measure, to ensure we have the `team` param, and to pluck it out (since we don't need the rest).
```
this.router.paramMap.pipe(
  takeUntil(this.ngUnsubscribe),
  filter(params => params.has("team")),
  map(params => params.get("team")),
  switchMap(team => {
    this.teamToFilterOn = team as string;
    // We'll dissect the rest
  })
) // [...]
```
Then comes the part where you do something with that team.
You have multiple "tasks" that rely on the same input, and you want them both at the same time, so reaching for [`forkJoin`](https://rxjs.dev/api/index/function/forkJoin) is a good call. But there's also [`combineLatest`](https://rxjs.dev/api/index/function/combineLatest), which does something similar but combines the results "step by step" instead.
You use the word "latest" for both your tasks, so we'll indeed reach for `combineLatest` instead:
```
const releaseDef$ = // [...]
const pipelineDef$ = // [...]
return combineLatest([releaseDef$, pipelineDef$]);
```
Now let's dissect these two operations.
From what I gather, you're only interested in releases that have a `lastRelease`. You also don't want to "switch" when a new one comes in; you want them all. Let's encode that:
```
const releaseDef$ = this.apiService.getReleaseDefinitions(this.teamToFilterOn as string).pipe(
  filter(releaseDef => !!releaseDef.lastRelease),
  mergeMap(releaseDef => this.apiService.getRelease(releaseDef.lastRelease.id)),
  filter(info => !!info),
  mergeMap(info => this.apiService.getTestRunsByRelease(info.releaseId).pipe(map(testruns => ({ testruns, info })))),
  tap(({ testruns, info }) => {
    this.isLoading = false;
    this.results = [...this.results, { info, testruns, totals: this.calculateEnvironmentTotals(testruns.testRunResults) }];
    this.dataSource.data = this.results;
  })
)
```
Here I'll use [`tap`](https://rxjs.dev/api/index/function/tap) assuming you operate independently on the results of `releaseDef$` and `pipelineDef$`. If you don't, skip the `tap`s and pass a callback to the final `subscribe` instead.
You'll notice I also pipe into the result of `getTestRunsByRelease`. That is because, unlike with `Promise`s, we don't have an alternative syntax like `async/await` that helps with keeping previous state in an easy way. Instead we have to rely on the functor operation `map` from within our monad operation `flatMap`/`mergeMap`. For `Promise`s, both `map` and `flatMap` are `.then`.
To wrap this up, let's put it all together:
```
ngOnInit(): void {
  this.router.paramMap.pipe(
    takeUntil(this.ngUnsubscribe),
    filter(params => params.has("team")),
    map(params => params.get("team")),
    switchMap(team => {
      this.teamToFilterOn = team as string;
      const releaseDef$ = this.apiService.getReleaseDefinitions(this.teamToFilterOn as string).pipe(
        filter(releaseDef => !!releaseDef.lastRelease),
        mergeMap(releaseDef => this.apiService.getRelease(releaseDef.lastRelease.id)),
        filter(info => !!info),
        mergeMap(info => this.apiService.getTestRunsByRelease(info.releaseId).pipe(map(testruns => ({ testruns, info })))),
        tap(({ testruns, info }) => {
          this.isLoading = false;
          this.results = [...this.results, { info, testruns, totals: this.calculateEnvironmentTotals(testruns.testRunResults) }];
          this.dataSource.data = this.results;
        })
      );
      const pipelineDef$ = // You didn't present code for this, but you get the idea
      return combineLatest([releaseDef$, pipelineDef$]);
    })
  ).subscribe(([release, pipeline]) => {
    /* add your logic here if you don't use `tap`s */
  });
}
```
You'll notice I only used one `takeUntil(this.ngUnsubscribe)`, as the "main" observable chain will stop with it, which means the inner operations will stop as well.
If you're unsure or encounter issues, you can still sprinkle it as the very first argument of each `.pipe`. |
The problem seems to be in the way you are printing your statements with your `__repr__` function: the `__repr__` method of the `ListNode` class calls itself recursively with no guard against cycles, causing infinite recursion and hence the hang.
Try the code below:
```python
def __repr__(self, visited=None):
    if visited is None:
        visited = set()
    if self in visited:
        return "ListNode(...)"
    visited.add(self)
    next_node = self.next
    val = self.val
    out = f"ListNode({val}, "
    if next_node is not None:
        out += next_node.__repr__(visited)
    out += ")"
    return out
```
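For instance, here is a quick sanity check of the cycle guard, using a hypothetical minimal `ListNode` class (the `val`/`next` fields are assumed from the question) and a deliberate two-node cycle that would hang the original `__repr__`:

```python
class ListNode:
    """Minimal singly linked list node, assumed to match the question's class."""
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

    def __repr__(self, visited=None):
        # Track already-printed nodes so a cycle prints as "ListNode(...)"
        if visited is None:
            visited = set()
        if self in visited:
            return "ListNode(...)"
        visited.add(self)
        out = f"ListNode({self.val}, "
        if self.next is not None:
            out += self.next.__repr__(visited)
        out += ")"
        return out

a = ListNode(1)
b = ListNode(2)
a.next = b
b.next = a  # deliberate cycle: the original __repr__ would recurse forever here
print(repr(a))  # ListNode(1, ListNode(2, ListNode(...)))
```

The cycle is cut off as soon as a node is revisited, so printing terminates no matter how the list is linked.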
|
So I'm getting started with Rust and as a practice project I decided to create a small program that would just find a number in a given range using what I think is binary search. The code works well, as far as I can tell. The issue appeared when I wanted to make a mathematical function that would predict the average amount of guesses needed to guess the given number, and rather intuitively, the function was:
`(log_2 x) + 1`
However, when testing the code experimentally, the result on large numbers like `1e12` resembled more the formula:
`(log_2 x) - 1`
Also, it could be that the formula is not correct, as I have my doubts about it. Anyway, here is the code, in case you find any issues with the calculations:
```rust
use rand::Rng;
use std::io;
use std::cmp::Ordering;
use std::time::Instant;
fn read_input() -> i64 {
    println!("Insert the range of the number [1 - <input>]: ");

    let mut max_range_str = String::new();
    io::stdin()
        .read_line(&mut max_range_str)
        .expect("Couldn't read the line");

    let max_range = max_range_str
        .trim()
        .parse()
        .expect("Couldn't convert to int");
    println!("");
    max_range
}

fn create_secret_number(max_range: i64) -> i64 {
    rand::thread_rng().gen_range(1..=max_range)
}

fn main() {
    let max_range = read_input();
    let mut loop_count = 0;
    let mut sum_of_guesses = 0;
    loop {
        let start_time = Instant::now();
        let secret_number = create_secret_number(max_range);
        let mut iterations = 0;
        let mut ceiling = max_range;
        let mut floor = 0;
        let mut guess = max_range / 2;
        loop {
            iterations += 1;
            match guess.cmp(&secret_number) {
                Ordering::Greater => ceiling = guess,
                Ordering::Less => floor = guess,
                Ordering::Equal => {
                    let elapsed_time = start_time.elapsed();
                    println!("The correct number was {}", secret_number);
                    println!("You win after {} iterations in {:?} seconds \n", iterations, elapsed_time);
                    sum_of_guesses += iterations;
                    break;
                }
            }
            guess = (ceiling + floor) / 2;
        }
        loop_count += 1;
        let average_guesses = sum_of_guesses / loop_count;
        println!("Average guess is now: {}", average_guesses);
    }
}
``` |
Application Insights capturing ALL traces while appsettings.json level is set as Information |
|c#|api|asp.net-web-api|azure-application-insights| |
You are installing the pm2 application in the builder stage.
You have to give the command:
```
RUN npm install -g pm2
```
in the second stage (after the `FROM nginx:alpine` line).
For my use case I used this Dockerfile:
```dockerfile
FROM node:12.22.10 as build
WORKDIR '/app'
COPY ./src ./src
COPY ./*.* .
RUN npm install
RUN npm run build:ssr

FROM node:12.22.10
EXPOSE 4001
RUN npm i -g pm2@4.5.0
RUN mkdir -p /dist/browser
RUN mkdir -p /dist/server
COPY --from=build /app/dist/browser/. /dist/browser/.
COPY --from=build /app/dist/server/. /dist/server/.
COPY --from=build /app/dist/*.js /dist/.
RUN pm2 start /dist/server.js --name stem-ecommerce-pubblico-prod -o /dev/null -e /dev/null -i 3 --max-memory-restart 600M
RUN pm2 save
ENTRYPOINT [ "pm2-runtime", "start", "/dist/server.js"]
```
 |
Our Spring Java application should use Azure App Configuration service to manage feature flags.
The problem is that there is [no way to run the App Configuration service locally][1] through Docker like with other feature-flag manager products. There is an option in the [Java Azure SDK to use local configuration][2], but it is not clear how I can mix it with server configuration. What I would like to have is something like this:
*1. Spring local/test profile to use file configuration. Developer to control everything.<br/>
2. Any other profiles to be use Azure App Configuration service.*
When I add the `spring-cloud-azure-appconfiguration-config-web` dependency, the Java application starts trying to connect to the server and the file configuration is ignored.
```xml
<!-- Global: take feature flags from the server -->
<dependency>
<groupId>com.azure.spring</groupId>
<artifactId>spring-cloud-azure-appconfiguration-config-web</artifactId>
</dependency>
<!-- Local -->
<dependency>
<groupId>com.azure.spring</groupId>
<artifactId>spring-cloud-azure-feature-management-web</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-thymeleaf</artifactId>
</dependency>
```
Obviously these are not meant to work together without any additional configuration.
Could you please tell me whether such a configuration is possible? Are there some magic properties that I can use?
Maybe there is another way to organise local and qas/prod environments. Please advise.
[1]: https://learn.microsoft.com/en-us/answers/questions/1027362/best-practice-for-handling-local-settings-while-de
[2]: https://learn.microsoft.com/en-us/azure/azure-app-configuration/use-feature-flags-spring-boot?tabs=spring-boot-3 |
Switch between static and dynamic Azure App Configuration service |
|spring-cloud|azure-app-configuration|feature-flags| |
TL;DR: You just have to change the message handlers' registration order
from
```
.AddHttpMessageHandler<LogHandler>()
.AddPolicyHandler(retryPolicy);
```
to
```
.AddPolicyHandler(retryPolicy)
.AddHttpMessageHandler<LogHandler>();
```
___
**UPDATE #1**
In order to better understand why the `DelegatingHandler`s' registration order matters, I will extend my original post.
## The `AddPolicyHandler`
This extension method basically registers a [`PolicyHttpMessageHandler`][1] instance into the handlers pipeline. This class implements the `DelegatingHandler` abstract class in a way that it applies a policy to the outgoing request. *That's why the policy's type should be `IAsyncPolicy<HttpResponseMessage>`.*
The important stuff here is that from the `HttpClient` perspective your `LogHandler` and this `PolicyHttpMessageHandler` are treated in the same way.
## `AddHttpMessageHandler`, `AddPolicyHandler`
If you register the handlers into the `HttpClient`'s pipeline like you did, then the output will be:
```none
Loghandler
Retry attempt 1
Retry attempt 2
```
Let's see what happens under the hood:
[![wrong order][2]][2]
I've used [mermaid.js][3] to draw this diagram and I've used autonumbering to be able to correlate action and emitted output.
```
2: Loghandler
7: Retry attempt 1
10: Retry attempt 2
```
## `AddPolicyHandler`,`AddHttpMessageHandler`
This time let's switch the registration order. The output will be:
```
Loghandler
Retry attempt 1
Loghandler
Retry attempt 2
Loghandler
```
The actions sequence:
[![good order][4]][4]
and finally the correlated log:
```
4: Loghandler
8: Retry attempt 1
10: Loghandler
14: Retry attempt 2
16: Loghandler
```
___
For more details please check out [this SO topic][5].
[Here you can find a dotnet fiddle][6] to be able to play with it.
[1]: https://learn.microsoft.com/en-us/dotnet/api/microsoft.extensions.http.policyhttpmessagehandler?view=dotnet-plat-ext-3.1
[2]: https://i.stack.imgur.com/TOgTh.png
[3]: https://mermaid.js.org/
[4]: https://i.stack.imgur.com/TiY9s.png
[5]: https://stackoverflow.com/questions/63460681/polly-policy-not-working-using-addpolicyhandler/
[6]: https://dotnetfiddle.net/7QcyAo |
Why isn't the code reflecting the actual mathematical behaviour when doing a binary search guess? |
|math|rust|binary-search| |
I've written the following code to compare the theoretical alpha = 0.05 with the empirical one from the built-in `t.test` in RStudio:
```
set.seed(1)
N <- 1000
n <- 20
k <- 500
poblacion <- rnorm(N, 10, 10) #Sample
mu.pob <- mean(poblacion)
sd.pob <- sd(poblacion)
p <- vector(length=k)
for (i in 1:k) {
muestra <- poblacion[sample(1:N, n)]
p[i] <- t.test(muestra, mu=mu.pob)$p.value
}
a_teo <- 0.05
a_emp <- length(p[p < a_teo])/k
sprintf("alpha_teo = %.3f <-> alpha_emp = %.3f", a_teo, a_emp)
```
And it works, printing both theoretical and empirical values. Now I want to make it more general, for different values of `n`, so I wrote this:
```
set.seed(1)
N <- 1000
n <- 20
k <- 500
z <-c()
for (i in n){
poblacion <- rnorm(N, 10, 10)
mu.pob <- mean(poblacion)
sd.pob <- sd(poblacion)
p <- vector(length=k)
for (j in 1:k){
muestra <- poblacion[sample(1:N, length(n))]
p[j] <- t.test(muestra, mu = mu.pob)$p.value
}
a_teo = 0.05
a_emp = length(p[p<a_teo])/k
append(z, a_emp)
print(sprintf("alpha_teo = %.3f <-> alpha_emp = %.3f", a_teo, a_emp))
}
plot(n, z)
``` |
Compare theoretical and empirical alpha in R [SOLVED] |
|python|django|deep-learning|webcam|mediapipe| |
In case it is relevant, I am observing this behavior in Tcl 8.6.13.
Normally, errors in Tcl include line numbers and filenames when applicable.
However, I'm finding that when errors occur in scripts executed with ```interp eval```, I do not get this information.
The two examples below are exactly the same, except that Example 1 evaluates the code in the main/parent interpreter, while Example 2 evaluates it in the child interpreter.
Example 1
```
#!/bin/sh
# This line continues for Tcl, but is a single line for 'sh' \
exec tclsh "$0" ${1+"$@"}
::safe::interpCreate i
eval expr {"a" + 1}
```
Output
```
./example.tcl
invalid bareword "a"
in expression "a + 1";
should be "$a" or "{a}" or "a(...)" or ...
(parsing expression "a + 1")
invoked from within
"expr "a" + 1"
("eval" body line 1)
invoked from within
"eval expr {"a" + 1}"
(file "./example.tcl" line 6)
```
Example 2
```
#!/bin/sh
# This line continues for Tcl, but is a single line for 'sh' \
exec tclsh "$0" ${1+"$@"}
::safe::interpCreate i
i eval expr {"a" + 1}
```
Output
```
./example.tcl
invalid bareword "a"
in expression "a + 1";
should be "$a" or "{a}" or "a(...)" or ...
(parsing expression "a + 1")
invoked from within
"expr "a" + 1"
invoked from within
"i eval expr {"a" + 1}"
(file "./example.tcl" line 6)
```
The error messages are nearly the same, except one line is missing in Example 2's output:
``` ("eval" body line 1)```
In this example, missing that part of the error message is not a problem, since there is only one line of code being evaluated; if it were a large script, or if the error occurred when ```source```'ing a file, that might be a different story.
This behavior seems weird; partially because it is inconsistent, but also because the child interpreter must know which code it is executing, so it should be able to report the line numbers of errors in that code. Also, when `source`ing a file, it should know the file it is reading the code from, since the `source` command was invoked from the child.
So is there any way to get line and file information when using ```interp eval```?
Alternatively, is there a way to write this code differently that could provide better error messages in scripts run in child interpreters?
Some additional examples (their output still misses the same line):
Example 3 (passing code to child interpreter as a single argument).
```
#!/bin/sh
# This line continues for Tcl, but is a single line for 'sh' \
exec tclsh "$0" ${1+"$@"}
::safe::interpCreate i
i eval {expr {"a" + 1}}
```
Output
```
./example.tcl
can't use non-numeric string as operand of "+"
invoked from within
"expr {"a" + 1}"
invoked from within
"i eval {expr {"a" + 1}}"
(file "./example.tcl" line 6)
```
Example 4 (passing code to child interpreter as a single argument in a list).
```
#!/bin/sh
# This line continues for Tcl, but is a single line for 'sh' \
exec tclsh "$0" ${1+"$@"}
::safe::interpCreate i
i eval [list expr {"a" + 1}]
```
Output
```
./example.tcl
can't use non-numeric string as operand of "+"
while executing
"expr {"a" + 1}"
invoked from within
"i eval [list expr {"a" + 1}]"
(file "./example.tcl" line 6)
```
Example 5 (passing code to child interpreter as a single argument built from lists).
```
#!/bin/sh
# This line continues for Tcl, but is a single line for 'sh' \
exec tclsh "$0" ${1+"$@"}
::safe::interpCreate i
i eval [list expr [list "a" + 1]]
```
Output
```
./example.tcl
invalid bareword "a"
in expression "a + 1";
should be "$a" or "{a}" or "a(...)" or ...
(parsing expression "a + 1")
invoked from within
"expr {a + 1}"
invoked from within
"i eval [list expr [list "a" + 1]]"
(file "./example.tcl" line 6)
``` |
How to integrate real-time webcam MediaPipe code to Django |
Recently I also faced the same problem on macOS while opening RStudio. It produces a pop-up error saying
> File Listing Error, Error navigating to ~/Desktop: Operation not permitted.
So what you have to do is:
- Open RStudio.
- You will find "RStudio" in the menu bar, next to the Apple icon; click on it.
- Then click on the Services setting.
- Then click on Accessibility.
- And then click on Restore Defaults.
Now when you open RStudio again it will ask for permission to access your folders, etc.
- Click Allow.
|
|php|symfony|symfony4|user-roles|symfony-security| |
Next.js, unlike other frameworks, doesn't provide a filter chain or request pre-processing / post-processing.
This could be a common need for any Node project, or for library developers who want their logic to execute before and after the actual call is delegated to an API endpoint / web page. This feature is even available in the Express and NestJS frameworks.
```
Request
---------> | pre-interceptor | ------>
                                       API / Pages
<-----------| post-interceptor| <-----
Response
```
Middleware provides a way, with a limited feature set (Edge Runtime), for request pre-processing, but it can't be used for request post-processing.
This can be handled with a custom Express server, which provides the handle() & next() functions to control the flow.
If I go with a custom server on my own infra, will I lose any optimization or performance features?
Any idea how to deal with this in the Next server? |
I have an Expo app, and I need to run `eas build`.
Before that I needed to log in to my Expo account, but I signed up with Google, so I don't have a password.
I have created a new auth token from Expo.
But I don't see how I can use this auth token to log in from the console.
|
eas login with access token |
|react-native|expo|eas| |
I have a Combobox and a Listbox like this:
```python
cities = df[df.columns[0]].tolist()
city_frame = tk.Frame(root)
city_frame.grid(row=1, column=1, sticky=tk.N)
cities_var = tk.Variable()
cities_var.set(cities)
tk.Label(city_frame, text="Thành phố").grid(row=0, column=0)
selected_cities_lb = tk.Listbox(
city_frame, height=20, listvariable=cities_var, selectmode=tk.MULTIPLE
)
selected_cities_lb.grid(row=1, column=0)
plot_types = ['parallel coordinate', 'stacked bar chart', 'grouped bar chart']
plot_type_frame = tk.Frame(root)
plot_type_frame.grid(row=1, column=2, sticky=tk.N)
plot_type_var = tk.StringVar()
plot_type_var.set(plot_types[0])
tk.Label(plot_type_frame, text="Loại biểu đồ").grid(row=0, column=0)
selected_plot_type_lb = ttk.Combobox(
plot_type_frame, textvariable=plot_type_var, values=plot_types, state='readonly')
selected_plot_type_lb.grid(row=1, column=0)
```
There is one problem: when I select any value from the Combobox, it unsets all selected values in the Listbox, which is very tedious. Any ideas what is going on here? Because to me they are completely unrelated to each other. |
tkinter - selecting values from Combobox resets the value of an another Listbox |
|python|tkinter| |
You need to use **Alignment.CenterHorizontally**, and make sure you have the matching import:
```kotlin
import androidx.compose.ui.Alignment

Column(
    modifier = Modifier.fillMaxWidth(),
    verticalArrangement = Arrangement.Center,
    horizontalAlignment = Alignment.CenterHorizontally
) {
    // content here
}
```
|
As mentioned in the comments: yes, this documentation is imprecise at best. I think it is referring to the behavior between scalars of the same type:
```python
import numpy as np

a = np.uint32(4294967295)
print(a.dtype)     # uint32
a += np.uint32(1)  # WILL wrap to 0, with a warning
print(a)           # 0
print(a.dtype)     # uint32
```
The behavior of your example, however, will change due to [NEP 50][1]. So, as frustrating as the old behavior is, there's not much to be done but wait, unless you want to file an issue about backporting a documentation change. As documented in the [Migration Guide][2]:
> The largest backwards compatibility change of this is that it means that the precision of scalars is now preserved consistently...
> `np.float32(3) + 3.` now returns a `float32` when it previously returned a `float64`.
The second NumPy 2.0 release candidate is out, in case you'd like to try it:
https://mail.python.org/archives/list/numpy-discussion@python.org/thread/EGXPH26NYW3YSOFHKPIW2WUH5IK2DC6J/
[1]: https://numpy.org/neps/nep-0050-scalar-promotion.html#nep50
[2]: https://numpy.org/devdocs/numpy_2_0_migration_guide.html |
How can I make my text visible when my custom curser hovers over it |
|javascript|css|cursor| |
null |
I want to create 2 lists from one column based on a condition in another column. Currently, I am able to get the 2 lists by scanning the dataframe twice.
1. Is it possible to get the 2 lists in a single scan?
2. How can I get the lists by individual groups?
```
import pandas as pd

data = {
"co2": [95, 90, 99, 104, 105, 94, 99, 104],
"model": [
"Citigo",
"Fabia",
"Fiesta",
"Rapid",
"Focus",
"Mondeo",
"Octavia",
"B-Max",
],
"car": ["Skoda", "Skoda", "Ford", "Skoda", "Ford", "Ford", "BMW", "Ford"],
}
df = pd.DataFrame(data)
# For 2 lists
list_skoda = df.loc[df["car"] == "Skoda", "model"].tolist()
print(f"{list_skoda=}")
list_others = df.loc[df["car"] != "Skoda", "model"].tolist()
print(f"{list_others=}")
# For individual groups
df.groupby(["car"]).apply(print)
l = df.groupby(["car"])["model"].groups
print(f"{l=}") # This gives indices not names
```
Please suggest.
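For reference, here is a sketch of both ideas against the `df` above (one possible approach, not the only one): compute the boolean mask once and reuse it for both lists, and use `groupby(...).apply(list)` to get the model names per group instead of indices.

```python
import pandas as pd

data = {
    "co2": [95, 90, 99, 104, 105, 94, 99, 104],
    "model": ["Citigo", "Fabia", "Fiesta", "Rapid",
              "Focus", "Mondeo", "Octavia", "B-Max"],
    "car": ["Skoda", "Skoda", "Ford", "Skoda", "Ford", "Ford", "BMW", "Ford"],
}
df = pd.DataFrame(data)

# 1. Compute the condition once and reuse it for both lists
mask = df["car"].eq("Skoda")
list_skoda = df.loc[mask, "model"].tolist()
list_others = df.loc[~mask, "model"].tolist()

# 2. Model names (not indices) per individual group
models_by_car = df.groupby("car")["model"].apply(list).to_dict()
```

`models_by_car` maps each car maker to its list of model names, which avoids the indices-only result of `.groups`.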
|
creating lists in single scan |
|python|pandas| |
In VS Code, in C++, when I declare a pointer and then print the size of the pointer, it gives me 4 bytes instead of 8 bytes, even though my system and compiler are both 64-bit. Please help, I am a beginner.
I tried asking ChatGPT but got no answer. |
pointer size not 8 bytes |
|visual-studio-code|pointers| |
You can think of using the Azure DevOps REST API for this purpose. You can create a PowerShell or Python script that uses the Azure DevOps REST API to get the deployment status and then link it to the work item. |
- Mitmproxy: 10.2.4
- Python: 3.10.12
```python
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    headers = [
        (b'set-cookie', b'name1=value1'),
        (b'set-cookie', b'name2=value2'),
        (b'content-type', b'text/html'),
    ]
    flow.response = http.Response.make(200, b'<html>my content</html>', headers)
```
|
ModuleNotFoundError: This app has encountered an error. The original error message is redacted to prevent data leaks. Full error details have been recorded in the logs (if you're on Streamlit Cloud, click on 'Manage app' in the lower right of your app).
Traceback:
```none
File "/home/adminuser/venv/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 542, in _run_script
    exec(code, module.__dict__)
File "/mount/src/finalyearproject/streamlit.py", line 2, in <module>
    import tensorflow as tf
```
I reviewed the Streamlit logs for additional error messages or clues about the issue.
How do I fix it? |
|c++|visual-studio-code|pointers| |
The most frequent issue that arises while using `kubectl kustomize` for JSON6902 patching is the failure to recognize that "add" might occasionally behave like "replace". The same problem and its solution are explained in this **[Medium blog](https://pauldally.medium.com/the-most-common-problem-i-see-when-using-json6902-patching-with-kustomize-1d19a0f4a038)** by Paul Dally.
By themselves, Kubernetes patches are made to work with resources that already exist; they do not automatically create missing objects along the path. However, if the "/root" object doesn't already exist, you can create it and then use a strategic merge patch to modify "/root/subdir" in order to get the desired behavior.
**Strategic merge patch:**
```yaml
patches:
  - target:
      group: mygroup.com
      version: v1beta1
      kind: MyKind
      name: config
    patch: |-
      spec:
        array:
          - spec: {}          # empty object to create the nested structure
            newField: test    # value to be added or replaced
```
Refer to this official Kubernetes [doc][1] for more information about Strategic Merge patch.
This patch attempts to merge the provided `spec` section, which includes the nested `newField`. If `spec/array/0/spec` does not exist, it will be created with the specified `newField` value. By following this approach you can effectively create or update the nested structure within your Kubernetes resources using patches.
[1]: https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/#customizing |
NextJs handle logic for request pre-processing & post-processing |
|next.js| |
You can use the CLI to generate a standard dedicated [Web Worker][1] and then update the code to use Shared Worker instead.
```
new SharedWorker(new URL('./shared.worker', import.meta.url))
```
Angular CLI will handle compilation and bundling the worker file automatically.
[1]: https://angular.io/guide/web-worker |
Your fairly "verbose" code can be written a lot shorter if you leave out all the repetitions:
<!-- begin snippet:js console:true -->
<!-- language:lang-html -->
A<button id="na">1</button>
B<button id="nb">1</button>
X<button id="nx">1</button>
Y<button id="ny">1</button>
Z<button id="nz">1</button>
<div id="highest"></div>
<!-- language:lang-js -->
const btns = [...document.querySelectorAll("button")],
      highest = document.getElementById("highest");
btns.forEach(b => b.onclick = () => {
  b.textContent = +b.textContent + 1;
  // sessionStorage.setItem(b.id, b.textContent);
  highest.textContent = "The most popular button ID is " + btns.reduce((a, c) => (+c.textContent > +a.textContent ? c : a)).id;
});
setInterval(() => btns[Math.floor(btns.length * Math.random())].click(), 1000);
<!-- end snippet -->
The above snippet does not use storage, as this is neither supported on Stack Overflow nor useful for the action you require here. If you still want this feature, then you should uncomment the storage command in the `onclick` function definition.
Maybe the expression
```
btns.reduce((a,c)=>(+c.textContent>+a.textContent?c:a)).id
```
deserves a short explanation: it uses the `Array.reduce()` method, which loops over all elements of the `btns` array and compares each element's `.textContent` attribute value with the one in the accumulator variable `a`. The `a` variable is initially set to the first element of the `btns` array and is then assigned the result of the ternary operator expression `(+c.textContent>+a.textContent?c:a)`. So, if the current element's (`c`) value is higher than the one stored in `a`, it will be assigned to `a`. At the end of the `.reduce()` loop the element stored in `a` is returned, and its `.id` attribute is then used for the outer expression stating the most popular id. |
I need to generate a YAML file. There is the following dictionary `{'fruits': ['banana', 'apple'], 'car': [], 'clothes': ['jeans', 't-shirt', 'shirt'], 'headdress': [], 'shoes': []}`.
But I can't seem to generate the following file itself:
```yaml
config:
  product:
    fruits:
      name: Fruits
      object_types:
        fruits:
          goods:
            banana:
              name: banana
            apple:
              name: apple
          name: Fruits
    car:
      name: Car
      object_types:
        fruits:
          goods: {}
          name: Car
    clothes:
      name: Clothes
      object_types:
        clothes:
          goods:
            jeans:
              name: Jeans
            t-shirt:
              name: T-shirt
            shirt:
              name: Shirt
          name: Clothes
    headdress:
      name: Headdress
      object_types:
        headdress:
          goods: {}
          name: Headdress
    shoes:
      name: Shoes
      object_types:
        shoes:
          goods: {}
          name: Shoes
``` |
Python generate file YAML |
I am trying to introduce an API, where you pass object and the object's methods will have `this` typed as the object itself without specifying the type of the object explicitly.
I tried this:
```typescript
interface BaseProcessor<T> {
process(this: T): void;
[key: string | symbol]: unknown;
}
function makeProcessor<T extends BaseProcessor<T>>(definition: T) {
// code...
}
export const processor = makeProcessor({
hello: "world!",
// other possible properties of different types...
process() {
console.log(this.hello); // <--- this.hello is of type "unknown"
}
});
```
([Typescript Playground link](https://www.typescriptlang.org/play?#code/JYOwLgpgTgZghgYwgAgEJwM4QApQPZIYZ5QA8AKgHzIDeAUMo8gA76EYAUYAFsBgFzJyASkEA3PMAAmAbgZMA2gGsIAT0EYwUUAHNkAH2QZVAWwBGeADYBdQQFcQSkHgDuIOQF86dGA4RhgPBBkEzgVXAIIIhIKZAgAD0gQKQw0TBw2KOIyKkoOKQgYUGAAoMERWnlGAHpq5AQ8AoA6FrovOgaQTRZM6KhkAF4QsIzIvo56JmRuCEtLPEEAIhcSSykAQkWAGirkWuQ8HmgWPCJgM0sUVjxmaACog5hkKWAYGGgIcGQwVVuMFqau1213YHGElSmU06xEuTXmOi4vH+MzmeGEMj2dVIAFpcd8kU0UfNkHxHt9fihFg4nK4QItdl4POi6EA))
What I would like to achieve is to have `this.hello` to be of type `string`. Similarly other properties that would be added to the object would be typed on `this` as what their value is.
Is it even possible to do that? Any help or suggestions appreciated. |
I have a `FloatingActionButton` that changes its icon to a pause icon when clicked. How can I toggle the icon back to the play icon when the button is clicked again?
**Code:**
```dart
Row(
  mainAxisAlignment: MainAxisAlignment.center,
  crossAxisAlignment: CrossAxisAlignment.center,
  children: [
    AnimatedBuilder(
        animation: controller,
        builder: (context, child) {
          return FloatingActionButton.extended(
            onPressed: () {
              if (controller.isAnimating)
                controller.stop();
              else {
                controller.reverse(
                    from: controller.value == 0.0 ? 1.0 : controller.value);
              }
            },
            icon: Icon(
              controller.isAnimating ? Icons.pause : Icons.play_arrow,
              color: Color(0xffF2F2F2),
            ),
            label: Text(controller.isAnimating ? "Pause" : "Play"),
          );
        }),
    SizedBox(width: 20),
    AnimatedBuilder(
        animation: controller,
        builder: (context, child) {
          return FloatingActionButton.extended(
            onPressed: () {
              if (controller.isAnimating) controller.reset();
            },
            icon: Icon(
              Icons.refresh,
              color: Color(0xffF2F2F2),
            ),
            label: Text("Refresh"),
          );
        }),
  ],
)
```
**Explanation:**
I want to be able to switch seamlessly between a play icon (`Icons.play_arrow`) and a pause icon (`Icons.pause`) on my button with each click. |
Do you have `allowsEditing: true` in your `ImagePicker.launchImageLibraryAsync`?
I had the same error, and I fixed it by setting `allowsEditing: false` and, instead of using axios or fetch to send the request, using `FileSystem.uploadAsync` from `expo-file-system`.
P.S. Sorry if my English is bad :)) |
The command sometimes successfully launches the batch file; however, sometimes it doesn't work and immediately closes.
I was wondering what I could do to fix this issue.
# A couple of the commands I have tried
start-process -wait -filepath "C:\Windows\System32\cmd.exe" -argumentlist "/c 'C:\path\stream3.bat' .\refresh.bat"
start-process -wait -filepath "C:\Windows\System32\cmd.exe" -argumentlist "/c 'C:\path\stream3.bat'"
|
For simplicity, let's only look at a translation.
For that we assume a scenario where the sun sits at the origin, the planet earth one unit to the right on the x-axis and the moon one unit further right of the earth.
If we want to render the sun, planet and the moon we typically would render the sun first, translate one unit to the right, render the earth and lastly translate again one unit to the right and render the moon.
If the planet had two or more moons instead of one, we would have to translate back to the origin of the planet and then translate to the position of each other moon. The same goes for many planets orbiting the sun.
Remember that any transformation operation requires matrix multiplication. For many moons orbiting the planet and many planets orbiting the sun, that will become costly very fast.
Therefore, instead of transforming back, we store the current matrix (`glPushMatrix`) and operate on the copy. To revert back we simply discard the copy and go back to the stored matrix (`glPopMatrix`).
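The bookkeeping can be sketched without OpenGL at all. Below is a toy Python model (the "matrix" is reduced to a single x-offset, purely for illustration) of how the stack lets us return to the earth's frame without an inverse translation:

```python
# Toy "matrix" stack: each entry is just an x-translation instead of a 4x4 matrix.
stack = [0.0]

def push():           # like glPushMatrix: duplicate the current matrix
    stack.append(stack[-1])

def pop():            # like glPopMatrix: discard the copy, restoring the old matrix
    stack.pop()

def translate(dx):    # operate on the current (topmost) matrix
    stack[-1] += dx

positions = {}
positions["sun"] = stack[-1]       # sun at the origin

translate(1.0)                     # move one unit right
positions["earth"] = stack[-1]

push()                             # remember the earth's frame
translate(1.0)
positions["moon_a"] = stack[-1]
pop()                              # back at the earth, no inverse translation needed

push()
translate(-1.0)
positions["moon_b"] = stack[-1]
pop()

print(positions)  # {'sun': 0.0, 'earth': 1.0, 'moon_a': 2.0, 'moon_b': 0.0}
```

Each moon costs one `push`/`pop` pair instead of a multiplication by an inverse transform.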
From [glPushMatrix/glPopMatrix][1]
> `glPushMatrix` pushes the current matrix stack down by one, duplicating the current matrix. That is, after a `glPushMatrix` call, the matrix on top of the stack is identical to the one below it.
>
> `glPopMatrix` pops the current matrix stack, replacing the current matrix with the one below it on the stack.
[1]: https://registry.khronos.org/OpenGL-Refpages/gl2.1/xhtml/glPushMatrix.xml |
I have a Python script which uses Hugging Face Transformers to run an NLP task on a PDF document. When I run this code in a Jupyter Notebook, it takes more than 1.5 hours to complete. I then set up the same code to run via a locally hosted Streamlit web app. To my surprise, it ran in under 5 minutes!
I believe I am comparing apples to apples because:
- I am analyzing the same PDF document in each case
- Since the Streamlit app is locally hosted, all computation is running on my laptop CPU. I am not using any Hugging Face virtual resources. The HF models are being downloaded to my computer.
- The Jupyter Notebook is also running locally on my computer
- The `.py` file is generated from the Jupyter Notebook using 'streamlit-jupyter', which just takes the Python code in the notebook and adds a few Streamlit statements
So, essentially same code running on same data using same hardware.
The only differences I can think of which may explain this are:
- Streamlit is running a `.py` python file from the command line instead of a `.ipynb` notebook
- Streamlit is running inside a virtual environment instead of my main Python installation
Has anyone ever experienced something like this? Can running the same python code from the command line result in 20x greater speed? |
I have a very simple ASP.NET API project where I am trying to add some trace logging.
I need some help with the Application Insights configuration for that logging.
Right now it's capturing ALL traces, while in *appsettings.json* I have set the **Default** level to **Information**.
I have this code in my *Program.cs*
`services.AddApplicationInsightsTelemetry();`
Followed by:
Trace.Listeners.Add(new Microsoft.ApplicationInsights.TraceListener.ApplicationInsightsTraceListener());
And the following in my *appsettings.json*:
"Logging": {
    "LogLevel": {
        "Default": "Information",
        "Microsoft": "Warning"
    },
    "ApplicationInsights": {
        "LogLevel": {
            "Default": "Warning",
            "Microsoft": "Warning"
        }
    }
}
I am expecting that only traces with level *Information* or above should get logged to Application Insights; however, everything is getting logged to Application Insights.
What do I need to change to set the trace logging level globally?
Levels have been explained in the official documentation below.
https://docs.amplify.aws/javascript/build-a-backend/storage/upload/
Example usage from there:
```
uploadData({
key: file.name,
data: file,
options: {
accessLevel: 'private'
}
});
``` |
I'm sorry for the short answer. This is a problem widely discussed in the Timescale community.
The problem is related to the immutable vs. non-immutable `time_bucket` function signatures for the different data types. In summary, the `timestamp` overload is not immutable, while the `timestamptz` overload is immutable.
Let's dig into it:
```
SELECT p.proname
, PG_GET_FUNCTION_ARGUMENTS(p.oid)
, CASE p.provolatile
WHEN 'i'
THEN 'IMMUTABLE'
WHEN 's'
THEN 'STABLE'
WHEN 'v'
THEN 'VOLATILE'
END AS volatility
FROM pg_proc p
JOIN pg_namespace n ON p.pronamespace = n.oid
WHERE n.nspname = 'pg_catalog'
AND p.proname = 'time_bucket';
```
will return
```sql
proname | pg_get_function_arguments | volatility
-----------+--------------------------------------+------------
timestamp | date | IMMUTABLE
timestamp | timestamp with time zone | STABLE
timestamp | date, time without time zone | IMMUTABLE
timestamp | timestamp without time zone, integer | IMMUTABLE
```
As you can see, `timestamp` is not immutable, but `timestamptz` is immutable:
```
timestamptz | timestamp with time zone, integer | IMMUTABLE
```
Maybe this snippet can also shed some light.
```sql
SELECT pg_typeof(timezone('UTC', '2024-01-05')::timestamptz)
```
**Resolution**: change your column type to `timestamptz`.
I have a problem with selecting certain checkboxes in Vue 3 (Composition API) using PrimeVue components.
For example, with static data it works correctly; I can select whichever element I want:
```
<DataTable v-model:selection="selectedItem" :value="itemsFromApi" dataKey="id" tableStyle="min-width: 50rem">
<Column selectionMode="multiple" headerStyle="width: 3rem"></Column>
<Column field="Id" header="id"></Column>
<Column field="Name" header="Nazwa"></Column>
</DataTable>
<script setup>
import { ref } from 'vue';
const selectedItem = ref();
const items = ref([
{ id: 1, name: 'does it work' },
{ id: 2, name: 'test' },
]);
</script>
```
But if I fetch all the data from an API (a simple Laravel `Model::all()` returned via `response()->json()`) with axios, then no matter which element I choose it always selects every element, and I have no idea why it happens.
```
<template>
<DataTable v-model:selection="selectedItem" :value="itemsFromApi" dataKey="id" tableStyle="min-width: 50rem">
<Column selectionMode="multiple" headerStyle="width: 3rem"></Column>
<Column field="Id" header="id"></Column>
<Column field="Name" header="Nazwa"></Column>
</DataTable>
</template>
<script setup>
import { ref, onMounted } from 'vue';
import axios from 'axios';
const selectedItem = ref();
const itemsFromApi = ref([]);
const fetchData = async () => {
try {
const response = await axios.get('API_URL');
itemsFromApi.value = response.data;
} catch (error) {
console.error('Error: ', error);
}
}
onMounted(fetchData);
</script>
```
I have tried fetching from other endpoints, always with the same result, and I have no idea why.
I had the same issue before; in my case the URL was not properly set because I was using an env var that was not propagating correctly.
I was doing something like
use: {
baseURL: process.env.URL <- I had a problem with my env variable
},
Just in case, I would suggest hardcoding the value directly in the `playwright.config.ts`, only to debug and confirm that the value is there.
Also, one of the best ways to finally fix any problem with the `baseURL` is to create a second Node project, but this time using the [official Playwright template][1]. You can install the Playwright template with this command:
npm init playwright@latest
And then you can compare your wrong `playwright.config.ts` against the template.
---
Also remember that there are two places in the config file where you can set baseURL
import { defineConfig, devices } from '@playwright/test';
export default defineConfig({
use: {
baseURL: 'https://example.com', <-- Global for all test files
},
projects: [
{
name: 'chromium',
use: {
...devices['Desktop Chrome'],
baseURL: 'https://anotherpage.com', <-- Overrides global baseURL
}
}
]
});
Hope it helps!
[1]: https://playwright.dev/docs/intro
|
Not sure which version you are on, but currently the pager is using a flexbox layout. So you can prepend an element with the `k-spacer` class that is used to separate items in toolbars. For the latest version it does the trick:
`$(".k-grid-pager").prepend('<div class="k-spacer"></div>')`
Here is an [example](https://dojo.telerik.com/akicULas). |
Let's pretend I want to add a hyperbolic curve to this plot
data(cars)
xyplot(dist ~ speed, cars)
Even though such a function won't fit the data, the curve should be like the one you see in the picture. Could you please suggest the proper code?
![][1]
[1]: https://i.stack.imgur.com/AuXqm.png |
How to add a hyperbolic curve in lattice
|r|curve|lattice|hyperbolic-function| |
Every time I log in to the bastion server, an initial shell script runs and records the session.
```
if [[ -z $SSH_ORIGINAL_COMMAND ]]; then
LOG_FILE="`date --date="today" "+%Y-%m-%d_%H-%M-%S"`_`whoami`"
LOG_DIR="/var/log/ssh-bastion/"
echo ""
echo "NOTE: This SSH session will be recorded"
echo "AUDIT KEY: $LOG_FILE"
echo ""
# suffix the log file name with a random string.
SUFFIX=`mktemp -u _XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX`
script -qf --timing=$LOG_DIR$LOG_FILE$SUFFIX.time $LOG_DIR$LOG_FILE$SUFFIX.data --command=/bin/bash
else
echo "This bastion supports interactive sessions only. Do not supply a command"
exit 1
fi
```
Now when I try to log in, it gives the below error:
```
script: cannot open /var/log/ssh-bastion/2024-03-29_06-52-11_username_m8hI1GxwGYd5847vhanBcn9Www1Koq8X.data: Permission denied
```
It was working well earlier, and I have been facing this issue for the last two days. All my directory permissions are the same, and there is no change in any file/directory permissions.
I am not able to log in to bastion server - permission denied error
|amazon-ec2|bastion-host|bastion| |
Vue 3 Problem with PrimeVue DataTables Checkboxes component |
|laravel|datatable|vuejs3|vue-composition-api|primevue| |
null |
I am working on a simple React app in which I have a basket that stores the number of items a user wants to purchase.
I'm storing this number in `localStorage` and that all works fine.
I'd now like to use that same number to populate the trolley icon at the top of my page. This lives in the main React App component.
I've set up an event listener in the main App component like so:
```
useEffect(() => {
window.addEventListener("su", handleCartUpdate(), false)
return () => {
document.removeEventListener("su", handleCartUpdate(), false);
};
}, []);
```
This will only fire on initiation but, *if I understand event listeners correctly*, the listener will persist.
After writing the number to `localStorage`, I fire an event from the product page component, which I assumed would be picked up by the event listener I set up previously:
```
const e = new Event("su");
window.dispatchEvent(e);
```
My problem is that the event doesn't seem to be picked up by the listener. Does anyone know why this is, please? |
Why isn't my JS eventlistener triggering a function? |
|javascript|reactjs|listener|event-listener| |
Nothing is stopping you from sending an image or any other arbitrary data over TCP.
You can serialize the image to JSON and send it as text, or send the image as one or more `TypedArray`s.
Then reconstruct the data per your specification on the other side of the socket.
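Since your code is presumably JavaScript, the same pattern applies with Node's `net` module and `Buffer`; the Python sketch below (an illustration, not your code) shows the base64-in-JSON idea end to end, with a connected socket pair standing in for a real TCP connection and a length prefix marking where the message ends:

```python
import base64
import json
import socket

# Pretend these bytes are an image read from disk.
image_bytes = bytes(range(256)) * 4

# Wrap the binary data in JSON by base64-encoding it first.
payload = json.dumps({
    "name": "photo.png",
    "data": base64.b64encode(image_bytes).decode("ascii"),
}).encode()

# A connected socket pair stands in for a real client/server TCP connection.
sender, receiver = socket.socketpair()
sender.sendall(len(payload).to_bytes(4, "big") + payload)  # length prefix, then data

# Receiving side: read the length, then exactly that many bytes.
size = int.from_bytes(receiver.recv(4), "big")
chunks = b""
while len(chunks) < size:
    chunks += receiver.recv(size - len(chunks))

message = json.loads(chunks)
restored = base64.b64decode(message["data"])
print(restored == image_bytes)  # True

sender.close()
receiver.close()
```

Sending raw `TypedArray` bytes instead of JSON skips the base64 overhead, but you still need some framing (like the length prefix above) so the receiver knows where one image ends and the next begins.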
I'm in the process of upgrading Cypress from version 8 to version 13. In version 8, I was able to do something like this:
cy.getBySel('mealList-totalCalories').invoke('text').as('totalCals');
This captures the original text of a value that will change later on, once a new item is added to the meal list. Once the calorie value changes, I want to ensure the new value equals the previous value plus the caloric value of the added item. This used to work:
cy.get('@totalCals').then((totalCals) => {
console.log(totalCals) // This used to equal the original value, 2020, now is 2300.
const expected = parseInt(totalCals) + 280;
cy.getBySel('mealList-totalCalories').should('have.text', expected);
});
It appears that previously Cypress was storing the original text value of `totalCals`, which I could later access. However, now the `totalCals` value is already equal to the previous value + 280, which makes me think Cypress is just running the previous query again to access the current value.
Is there a new way to store the text of an element in Cypress for later access? |
How to access the previous value of text in Cypress that you expect to change? |
|cypress| |
A typed property can only be assigned values of its declared type. A value of another type would either be silently converted or cause an error. As such, the type is guaranteed to be correct, **which makes the `Type` assertion useless**.
Also, note that it is better to validate data before they make their way inside your entities. Otherwise, it means your entities can be invalid and you're just one missed validator call away from persisting wrong data.
I have a Python script to download an image from a URL and upload it to AWS S3. This script works perfectly when I run it on my local machine. However, when I deploy and run the same script on an AWS EC2 instance, I encounter a `ReadTimeout` error.
The error I'm receiving is as follows:
```
requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='www.net-a-porter.com', port=443): Read timed out. (read timeout=100)
```
Below is the relevant part of my code:
```python
import requests
import tempfile
import os
def upload_image_to_s3_from_url(self, image_url, filename, download_timeout=120):
"""
Downloads an image from the given URL to a temporary file and uploads it to AWS S3,
then returns the S3 file URL.
"""
try:
headers = {
"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.96 Safari/537.36",
'Accept': 'image/avif,image/webp,image/apng,image/*,*/*;q=0.8'
}
# Request the image
response = requests.get(image_url, timeout=download_timeout, stream=True, headers=headers)
response.raise_for_status()
# Determine the content type
content_type = response.headers.get('Content-Type', 'image/jpeg') # Default to image/jpeg
# Create a temporary file
with tempfile.NamedTemporaryFile(delete=False) as tmp_file:
# Write the response content to the temporary file
for chunk in response.iter_content(chunk_size=8192):
tmp_file.write(chunk)
# Now that we have the image locally, upload it to S3 with the correct content type
file_url = self.upload_image_to_s3(tmp_file.name, filename, content_type)
# Optionally, delete the temporary file here if you set delete=False
os.unlink(tmp_file.name)
return file_url
except requests.RequestException as e:
raise Exception(f"Failed to download or upload image. Error: {e}")
# Example URL causing issues
image_url = "https://www.net-a-porter.com/variants/images/1647597326276381/in/w1365_a3-4_q60.jpg"
```
This issue occurs when trying to download an image from `www.net-a-porter.com`. The timeout is set to 120 seconds, which I assumed would be more than enough.
What I've tried so far:
- Increasing the timeout duration
- Changing the `User-Agent` in the request headers
- Running the script at different times of the day to rule out server load issues
Any insights or suggestions on how to resolve this issue would be greatly appreciated. |
ReadTimeout error when downloading images on AWS EC2 but not locally |
|python-3.x|amazon-ec2|request|timeoutexception| |
I debugged the code using `{{ dd($order->id) }}` and got the order id value 100238. But when I write
<form action="{{ route('admin.pos.update_order', ['id' => $order->id]) }}" method="post">
then it's not working. Yet when I write `{{ route('admin.pos.update_order', ['id' => 100238]) }}` it works great.
I cannot figure out this abnormal behaviour. Can anyone guide me on what the actual issue is?
Since debugging provides the order id, it should also work in the action code as `$order->id`.
Route is:
Route::group(['middleware' => ['admin']], function () {
Route::group(['prefix' => 'pos', 'as' => 'pos.', 'middleware' => ['module:pos_management']], function () {
Route::post('update-cart-items', 'POSController@update_cart_items')->name('update_cart_items');
});
});
Controller is:
public function update_order(Request $request, $id): RedirectResponse
{
$order = $this->order->find($order_id);
if (!$order) {
Toastr::error(translate('Order not found'));
return back();
}
$order_type = $order->order_type;
if ($order_type == 'delivery') {
Toastr::error(translate('Cannot update delivery orders'));
return back();
}
$delivery_charge = 0;
if ($order_type == 'home_delivery') {
if (!session()->has('address')) {
Toastr::error(translate('Please select a delivery address'));
return back();
}
$address_data = session()->get('address');
$distance = $address_data['distance'] ?? 0;
$delivery_type = Helpers::get_business_settings('delivery_management');
if ($delivery_type['status'] == 1) {
$delivery_charge = Helpers::get_delivery_charge($distance);
} else {
$delivery_charge = Helpers::get_business_settings('delivery_charge');
}
$address = [
'address_type' => 'Home',
'contact_person_name' => $address_data['contact_person_name'],
'contact_person_number' => $address_data['contact_person_number'],
'address' => $address_data['address'],
'floor' => $address_data['floor'],
'road' => $address_data['road'],
'house' => $address_data['house'],
'longitude' => (string)$address_data['longitude'],
'latitude' => (string)$address_data['latitude'],
'user_id' => $order->user_id,
'is_guest' => 0,
];
$customer_address = CustomerAddress::create($address);
}
// Update order details
$order->coupon_discount_title = $request->coupon_discount_title == 0 ? null : 'coupon_discount_title';
$order->coupon_code = $request->coupon_code ?? null;
$order->payment_method = $request->type;
$order->transaction_reference = $request->transaction_reference ?? null;
$order->delivery_charge = $delivery_charge;
$order->delivery_address_id = $order_type == 'home_delivery' ? $customer_address->id : null;
$order->updated_at = now();
try {
// Save the updated order
$order->save();
// Clear session data if needed
session()->forget('cart');
session(['last_order' => $order->id]);
session()->forget('customer_id');
session()->forget('branch_id');
session()->forget('table_id');
session()->forget('people_number');
session()->forget('address');
session()->forget('order_type');
Toastr::success(translate('Order updated successfully'));
//send notification to kitchen
//if ($order->order_type == 'dine_in') {
$notification = $this->notification;
$notification->title = "You have a new update in order " . $order_id . " from POS - (Order Confirmed). ";
$notification->description = $order->id;
$notification->status = 1;
try {
Helpers::send_push_notif_to_topic($notification, "kitchen-{$order->branch_id}", 'general');
Toastr::success(translate('Notification sent successfully!'));
} catch (\Exception $e) {
Toastr::warning(translate('Push notification failed!'));
}
//}
//send notification to customer for home delivery
if ($order->order_type == 'delivery'){
$value = Helpers::order_status_update_message('confirmed');
$customer = $this->user->find($order->user_id);
$fcm_token = $customer?->fcm_token;
if ($value && isset($fcm_token)) {
$data = [
'title' => translate('Order'),
'description' => $value,
'order_id' => $order_id,
'image' => '',
'type' => 'order_status',
];
Helpers::send_push_notif_to_device($fcm_token, $data);
}
//send email
$emailServices = Helpers::get_business_settings('mail_config');
if (isset($emailServices['status']) && $emailServices['status'] == 1) {
Mail::to($customer->email)->send(new \App\Mail\OrderPlaced($order_id));
}
}
// Redirect back to wherever needed
return redirect()->route('admin.pos.index');
} catch (\Exception $e) {
info($e);
Toastr::warning(translate('Failed to update order'));
return back();
}
}
error is:
> POST
> https://fd.sarmadengineeringsolutions.com/admin/pos/update-cart-items
> 500 (Internal Server Error) |
You should never have written `b.__breadth = 5`. The intent of the double underscore is that this variable is private to the class and should never be accessed from outside it. If you need to modify `__breadth` from outside, you should have a `set_breadth` method inside the class.
```
class Rectangle:
    ...
    def set_breadth(self, value):
        self.__breadth = value

b.set_breadth(5)
```
Alternatively, create a `breadth` property. Google "Python Properties" to learn how to do this. |
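A minimal sketch of that property approach, assuming a simple `Rectangle` with just a length and a breadth, might look like:

```python
class Rectangle:
    def __init__(self, length, breadth):
        self.__length = length
        self.__breadth = breadth

    @property
    def breadth(self):              # read access: b.breadth
        return self.__breadth

    @breadth.setter
    def breadth(self, value):       # write access: b.breadth = 5
        self.__breadth = value

    def area(self):
        return self.__length * self.__breadth

b = Rectangle(10, 3)
b.breadth = 5                       # goes through the setter, no name-mangling issues
print(b.area())  # 50
```

The attribute syntax stays the same as a plain field, but every read and write goes through the property methods, so you can later add validation in one place.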
I got a new Laravel project, but it is showing the following error and not letting me run it using `php artisan serve`. I tried `composer self-update` too, but it is not working.
Here's the error:
php artisan serve
PHP Fatal error: During inheritance of ArrayAccess: Uncaught ErrorException: Return type of Illuminate\Support\Collection::offsetExists($key) should either be compatible with ArrayAccess::offsetExists(mixed $offset): bool, or the #[\ReturnTypeWillChange] attribute should be used to temporarily suppress the notice in C:\xampp\htdocs\fitway\vendor\laravel\framework\src\Illuminate\Support\Collection.php:1349
Stack trace:
#0 C:\xampp\htdocs\fitway\vendor\laravel\framework\src\Illuminate\Support\Collection.php(11): Illuminate\Foundation\Bootstrap\HandleExceptions->handleError(8192, 'Return type of ...', 'C:\\xampp\\htdocs...', 1349)
#1 C:\xampp\htdocs\fitway\vendor\composer\ClassLoader.php(576): include('C:\\xampp\\htdocs...')
#2 C:\xampp\htdocs\fitway\vendor\composer\ClassLoader.php(427): Composer\Autoload\{closure}('C:\\xampp\\htdocs...')
#3 C:\xampp\htdocs\fitway\vendor\laravel\framework\src\Illuminate\Support\helpers.php(110): Composer\Autoload\ClassLoader->loadClass('Illuminate\\Supp...')#4 C:\xampp\htdocs\fitway\vendor\laravel\framework\src\Illuminate\Foundation\PackageManifest.php(130): collect(Array)
#5 C:\xampp\htdocs\fitway\vendor\laravel\framework\src\Illuminate\Foundation\PackageManifest.php(106): Illuminate\Foundation\PackageManifest->build()
#6 C:\xampp\htdocs\fitway\vendor\laravel\framework\src\Illuminate\Foundation\PackageManifest.php(89): Illuminate\Foundation\PackageManifest->getManifest()
#7 C:\xampp\htdocs\fitway\vendor\laravel\framework\src\Illuminate\Foundation\PackageManifest.php(78): Illuminate\Foundation\PackageManifest->config('aliases')
#8 C:\xampp\htdocs\fitway\vendor\laravel\framework\src\Illuminate\Foundation\Bootstrap\RegisterFacades.php(26): Illuminate\Foundation\PackageManifest->aliases()
#9 C:\xampp\htdocs\fitway\vendor\laravel\framework\src\Illuminate\Foundation\Application.php(230): Illuminate\Foundation\Bootstrap\RegisterFacades->bootstrap(Object(Illuminate\Foundation\Application))
#10 C:\xampp\htdocs\fitway\vendor\laravel\framework\src\Illuminate\Foundation\Console\Kernel.php(310): Illuminate\Foundation\Application->bootstrapWith(Array)
#11 C:\xampp\htdocs\fitway\vendor\laravel\framework\src\Illuminate\Foundation\Console\Kernel.php(127): Illuminate\Foundation\Console\Kernel->bootstrap()
#12 C:\xampp\htdocs\fitway\artisan(37): Illuminate\Foundation\Console\Kernel->handle(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))
#13 {main} in C:\xampp\htdocs\fitway\vendor\laravel\framework\src\Illuminate\Support\Collection.php on line 11
In Collection.php line 11:
During inheritance of ArrayAccess: Uncaught ErrorException: Return type of Illuminate\Support\Collection::offsetExists($key) should either be compatib
le with ArrayAccess::offsetExists(mixed $offset): bool, or the #[\ReturnTypeWillChange] attribute should be used to temporarily suppress the notice in
C:\xampp\htdocs\fitway\vendor\laravel\framework\src\Illuminate\Support\Collection.php:1349
Stack trace:
#0 C:\xampp\htdocs\fitway\vendor\laravel\framework\src\Illuminate\Support\Collection.php(11): Illuminate\Foundation\Bootstrap\HandleExceptions->handle
Error(8192, 'Return type of ...', 'C:\\xampp\\htdocs...', 1349)
#1 C:\xampp\htdocs\fitway\vendor\composer\ClassLoader.php(576): include('C:\\xampp\\htdocs...')
#2 C:\xampp\htdocs\fitway\vendor\composer\ClassLoader.php(427): Composer\Autoload\{closure}('C:\\xampp\\htdocs...')
#3 C:\xampp\htdocs\fitway\vendor\laravel\framework\src\Illuminate\Support\helpers.php(110): Composer\Autoload\ClassLoader->loadClass('Illuminate\\Supp
...')
#4 C:\xampp\htdocs\fitway\vendor\laravel\framework\src\Illuminate\Foundation\PackageManifest.php(130): collect(Array)
#5 C:\xampp\htdocs\fitway\vendor\laravel\framework\src\Illuminate\Foundation\PackageManifest.php(106): Illuminate\Foundation\PackageManifest->build()
#6 C:\xampp\htdocs\fitway\vendor\laravel\framework\src\Illuminate\Foundation\PackageManifest.php(89): Illuminate\Foundation\PackageManifest->getManife
st()
#7 C:\xampp\htdocs\fitway\vendor\laravel\framework\src\Illuminate\Foundation\PackageManifest.php(78): Illuminate\Foundation\PackageManifest->config('a
liases')
#8 C:\xampp\htdocs\fitway\vendor\laravel\framework\src\Illuminate\Foundation\Bootstrap\RegisterFacades.php(26): Illuminate\Foundation\PackageManifest-
>aliases()
#9 C:\xampp\htdocs\fitway\vendor\laravel\framework\src\Illuminate\Foundation\Application.php(230): Illuminate\Foundation\Bootstrap\RegisterFacades->bo
otstrap(Object(Illuminate\Foundation\Application))
#10 C:\xampp\htdocs\fitway\vendor\laravel\framework\src\Illuminate\Foundation\Console\Kernel.php(310): Illuminate\Foundation\Application->bootstrapWit
h(Array)
#11 C:\xampp\htdocs\fitway\vendor\laravel\framework\src\Illuminate\Foundation\Console\Kernel.php(127): Illuminate\Foundation\Console\Kernel->bootstrap
()
#12 C:\xampp\htdocs\fitway\artisan(37): Illuminate\Foundation\Console\Kernel->handle(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony
\Component\Console\Output\ConsoleOutput))
#13 {main}
Please tell me how I can resolve it.
I have tried changing the PHP version in the `.env` file as well, but it is not working either.
ChatGPT and Gemini have not been much help either.
The easiest way to find the correct image is to check [Ubuntu on Azure][1].
For example, for `Ubuntu 22.04 LTS - Jammy Jellyfish` the image is going to be something like this:
image:
offer: "0001-com-ubuntu-server-jammy"
publisher: "Canonical"
sku: "22_04-lts"
version: "latest"
[1]: https://canonical-azure.readthedocs-hosted.com/en/latest/azure-how-to/instances/find-ubuntu-images/ |
>Is there something similar to the Azure SQL database deployment for SQL Server? Conceptually, I'm thinking running a task that executes Update-Database as a deployment task. Am I on the right track?
The `Azure SQL database deployment` task uses SqlPackage.exe and the Invoke-Sqlcmd cmdlet to deploy DACPACs or SQL Server scripts to an Azure SQL server. For SQL Server, the similar task is [SQL Server database deploy][1].
You are on the right track by using Entity Framework Core (EF Core) to manage your database schema. Instead of `Update-Database`, you can use `dotnet ef migrations script --idempotent`, which is equivalent; please check the similar [ticket][2] for reference.
[1]: https://github.com/microsoft/azure-pipelines-tasks/blob/master/Tasks/SqlDacpacDeploymentOnMachineGroupV0/README.md
[2]: https://stackoverflow.com/questions/58027720/entity-framework-core-migrations-through-ci-and-cd-in-azure-devops |
I am getting audio from the mic using AVAudioEngine, but when a call comes in and causes an interruption, I want to stop and then restart the audio capture. However, when I try the handleInterruption method (shared below), I get the error shown. How can I stop and restart AVAudioEngine after an interruption?
AURemoteIO.cpp:1702 AUIOClient_StartIO failed (561145187)
AVAEInternal.h:109 [AVAudioEngineGraph.mm:1545:Start: (err = PerformCommand(*ioNode, kAUStartIO, NULL, 0)): error 561145187
Can't start the engine: Error Domain=com.apple.coreaudio.avfaudio Code=561145187 "(null)" UserInfo={failed call=err = PerformCommand(*ioNode, kAUStartIO, NULL, 0)}
@objc func handleInterruption(notification: Notification) {
guard let userInfo = notification.userInfo,
let typeValue = userInfo[AVAudioSessionInterruptionTypeKey] as? UInt,
let type = AVAudioSession.InterruptionType(rawValue: typeValue) else {
return
}
// Switch over the interruption type.
switch type {
case .began:
print("interrupt begin")
socket.close()
socket=nil
self.socket = GCDAsyncUdpSocket(delegate: self, delegateQueue: DispatchQueue.main)
audioEngine.stop()
print("Audio player stopped")
case .ended:
print("interrupt end")
self.audioEngine = AVAudioEngine()
self.audioPlayer = AVAudioPlayerNode()
self.mixer = AVAudioMixerNode()
do {
try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .voiceChat, options: [.allowBluetooth, .allowBluetoothA2DP])
print("Audio session category set to playback")
} catch {
print("Setting category to AVAudioSessionCategoryPlayback failed: \(error)")
}
self.mixer = AVAudioMixerNode()
self.mixer.volume = 0
self.audioEngine.attach(audioPlayer)
self.audioEngine.attach(mixer)
try! self.audioEngine.inputNode.setVoiceProcessingEnabled(true)
try! AVAudioSession.sharedInstance().setActive(true)
DispatchQueue.global(qos: .background).async { [weak self] in
guard let self = self else { return }
do {
self.socket.setIPv4Enabled(true)
self.socket.setIPv6Enabled(false)
try self.socket.connect(toHost:"239.10.10.100" ?? "", onPort: 4545 ?? 0)
try self.socket.beginReceiving()
print("Socket started")
} catch {
print("Socket Started Error: \(error)")
}
}
audioEngine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: audioEngine.inputNode.inputFormat(forBus: 0)) {
(buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
do {
let inputBlock: AVAudioConverterInputBlock = { _, outStatus in
outStatus.pointee = AVAudioConverterInputStatus.haveData
return buffer
}
let frameCapacity =
AVAudioFrameCount(self.outputFormat.sampleRate) * buffer.frameLength
/ AVAudioFrameCount(buffer.format.sampleRate)
let outputBuffer = AVAudioPCMBuffer(
pcmFormat: self.outputFormat,
frameCapacity: frameCapacity
)!
var error: NSError?
self.converter.convert(to: outputBuffer, error: &error, withInputFrom: inputBlock)
//let data = Data(bytes: (outputBuffer.int16ChannelData![0]), count: Int(outputBuffer.frameLength))
let data = Data(buffer: UnsafeBufferPointer(start: outputBuffer.int16ChannelData![0], count: Int(outputBuffer.frameLength)))
print(data)
DispatchQueue.global(qos: .background).async { [weak self] in
guard let self = self else { return }
do {
self.socket.send(data, withTimeout: 0, tag: 0)
} catch {
print("Socket send Error: \(error)")
}
}
} catch {
print(error)
}
}
audioEngine.prepare()
do {
try audioEngine.start()
print("Audio player started")
} catch {
print("Can't start the engine: \(error)")
}
default:
print("Default print")
break
}
} |
|javascript|hoisting|function-expression| |
I disagree; when you've learnt 5 or so programming languages, you usually come to the point where you realize it's more or less the same, because *program design* is "language agnostic" and you (ought to) design programs in the same way no matter if you code in assembler or C#. What's actually most different isn't so much the languages, as the various libs, frameworks and tool chains.
What really sucks for creativity is when you are struggling with the language itself, rather than the application/actual problem you are trying to solve. C++ being the perfect example - the language makes it super-easy to derail into some meta programming practices that have absolutely zero to do with the actual application. Or when you try to debug something and get a 10 line long incomprehensible compiler warning just because the language is ridiculously bloated with extra complexity that you don't need.
These days I almost only code in C, and it's completely effortless since I know the language inside out: I don't have to waste time looking up language details or interpreting the meaning of compiler errors. I don't even need to think about how to design the program; it's all on autopilot, to the point where I can focus solely on whatever the program is supposed to be doing. That makes one very productive, and that is a good thing.
null |
I'm using Anaconda3-2024.02-1-Windows-x86_64 (yes, Windows 64-bit) on Windows 11, with Python 3.11.7. I was trying to build an offline translator and I forgot what led to what... Eventually I was trying to install EasyNMT, and it failed because of the fasttext package. Then I decided to install fasttext separately, but that failed too, and I can't seem to figure out what the problem is. I am using pip in the Anaconda Prompt to install almost all my packages, and I have a weak internet connection. The output is below:
```
(base) C:\Users\pc>pip install fasttext
Collecting fasttext
Using cached fasttext-0.9.2.tar.gz (68 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [28 lines of output]
C:\Users\pc\anaconda3\python.exe: No module named pip
Traceback (most recent call last):
File "<string>", line 38, in __init__
ModuleNotFoundError: No module named 'pybind11'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\pc\anaconda3\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\Users\pc\anaconda3\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\anaconda3\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Local\Temp\pip-build-env-31ahp4pz\overlay\Lib\site-packages\setuptools\build_meta.py", line 325, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\pc\AppData\Local\Temp\pip-build-env-31ahp4pz\overlay\Lib\site-packages\setuptools\build_meta.py", line 295, in _get_build_requires
self.run_setup()
File "C:\Users\pc\AppData\Local\Temp\pip-build-env-31ahp4pz\overlay\Lib\site-packages\setuptools\build_meta.py", line 487, in run_setup
super().run_setup(setup_script=setup_script)
File "C:\Users\pc\AppData\Local\Temp\pip-build-env-31ahp4pz\overlay\Lib\site-packages\setuptools\build_meta.py", line 311, in run_setup
exec(code, locals())
File "<string>", line 72, in <module>
File "<string>", line 41, in __init__
RuntimeError: pybind11 install failed.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
```
I have just updated my MS Build Tools for Visual Studio because of another project.
I tried to install pybind11 on its own, and it seems it was already installed:

```
pip install pybind11
Requirement already satisfied: pybind11 in c:\users\pc\anaconda3\lib\site-packages (2.11.1)
```

So I don't know what to do. Any help is appreciated!
|
If you are using InteractiveServer render mode per page/component, it will not work. You need to switch `App.razor` to global InteractiveServer mode:
```
<Routes @rendermode="InteractiveServer" />
```
If you are using standalone Blazor WebAssembly, make sure the following steps have been completed before using `MudMenu`:
1. `builder.Services.AddMudServices();` is called in `Program.cs`.
2. The following lines are added to `wwwroot/index.html`:
```
...
<link href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700&display=swap" rel="stylesheet" />
<link href="_content/MudBlazor/MudBlazor.min.css" rel="stylesheet" />
...
<script src="_content/MudBlazor/MudBlazor.min.js"></script>
```
3. The following component is added to `MainLayout.razor`:
```
@inherits LayoutComponentBase
<MudThemeProvider />
...
```
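If the package itself is not yet installed, it can also be added from the command line; a quick sketch, assuming the .NET SDK is available and the command is run from the project directory:

```shell
# Add the MudBlazor NuGet package to the project in the current directory
dotnet add package MudBlazor
```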
Then it will work.<br>
Reference: https://www.mudblazor.com/getting-started/installation#manual-install-add-components |
Where do you store the user credentials: in a cookie or in the session? Which authentication scheme are you using? From your description, it seems the user credentials expired, which forced your application to log in again.
If you just want to know how to modify the IIS application pool idle timeout, you can refer to the steps and images below:

1. Locate the application pool:
[![enter image description here][1]][1]
2. Modify the advanced settings:
[![enter image description here][2]][2]
[1]: https://i.stack.imgur.com/8L0P5.png
[2]: https://i.stack.imgur.com/sPZhr.png |
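The idle timeout can also be changed from the command line instead of the GUI; a minimal sketch using the built-in `appcmd` tool (the pool name `MyAppPool` is an assumption - replace it with your actual pool name, and run from an elevated prompt):

```shell
REM Set the application pool's idle timeout to 0 (never recycle due to idling).
%windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /processModel.idleTimeout:00:00:00
```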
When I run `rake -T` in production mode, Rails throws an error about the missing `pry` gem. Can anyone explain why this happens?
My command: `RAILS_ENV=production bundle exec rake -T`
Ruby: "3.3.0"
Rails version: 7.1.3.2
Bundle commands:
```
bundle config set without 'development test'
bundle install -j 20 -r 5 && bundle clean --force
```
Error Stacktrace:
```sh
RAILS_ENV=production bundle exec rake -T:
0.240 bundler: failed to load command: rake (/usr/local/bin/rake)
0.240 /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/resolver.rb:332:in `raise_not_found!': Could not find gem 'pry' in locally installed gems. (Bundler::GemNotFound)
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/resolver.rb:392:in `block in prepare_dependencies'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/resolver.rb:377:in `each'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/resolver.rb:377:in `map'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/resolver.rb:377:in `prepare_dependencies'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/resolver.rb:61:in `setup_solver'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/resolver.rb:28:in `start'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/definition.rb:626:in `start_resolution'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/definition.rb:311:in `resolve'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/definition.rb:579:in `materialize'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/definition.rb:203:in `specs'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/definition.rb:270:in `specs_for'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/runtime.rb:18:in `setup'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler.rb:162:in `setup'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/setup.rb:26:in `block in <top (required)>'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/ui/shell.rb:159:in `with_level'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/ui/shell.rb:111:in `silence'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/setup.rb:26:in `<top (required)>'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/cli/exec.rb:56:in `require_relative'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/cli/exec.rb:56:in `kernel_load'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/cli/exec.rb:23:in `run'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/cli.rb:451:in `exec'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/vendor/thor/lib/thor/command.rb:28:in `run'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/vendor/thor/lib/thor/invocation.rb:127:in `invoke_command'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/vendor/thor/lib/thor.rb:527:in `dispatch'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/cli.rb:34:in `dispatch'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/vendor/thor/lib/thor/base.rb:584:in `start'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/cli.rb:28:in `start'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/exe/bundle:28:in `block in <top (required)>'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/lib/bundler/friendly_errors.rb:117:in `with_friendly_errors'
0.240   from /usr/local/bundle/gems/bundler-2.5.7/exe/bundle:20:in `<top (required)>'
0.240   from /usr/local/bundle/bin/bundle:25:in `load'
0.240   from /usr/local/bundle/bin/bundle:25:in `<main>'
-
failed to solve: process "/bin/sh -c test -z "$RAILS_MASTER_KEY" || RAILS_ENV=production bundle exec rake -T" did not complete successfully: exit code: 1
```
**My Gemfile**
```
group :development, :test do
# See https://guides.rubyonrails.org/debugging_rails_applications.html#debugging-with-the-debug-gem
gem "debug", platforms: %i[mri windows]
gem 'pry'
gem 'pry-doc', '>= 0.6.0'
# Help to kill N+1 queries and unused eager loading
gem 'bullet'
end
```
Can anyone explain why Rails throws this error?