I can't say for sure, but I would expect it would be some kind of JIT optimization.
At a guess: the JIT [does have logic][1] to do bounds-check elision, but it could be that it doesn't work if you refer to the field twice rather than to a local variable.
The function [`ThrowForEmptyStack`][2] is known to be non-returning:
```cs
private void ThrowForEmptyStack()
{
Debug.Assert(_size == 0);
throw new InvalidOperationException(SR.InvalidOperation_EmptyStack);
}
```
So when the code does the check
```cs
if ((uint)size >= (uint)array.Length)
{
ThrowForEmptyStack();
}
```
the JIT now knows that the array index cannot go out-of-bounds, because the bounds check has already been done. So it can elide the normal array bounds check in the generated machine code.
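To illustrate, here is a minimal sketch (simplified from the real runtime source) of the pattern `Stack<T>.Pop` uses: copying the fields into locals before the check is what lets the JIT prove the later index is in range:
```cs
public T Pop()
{
    int size = _size - 1;  // copy fields to locals
    T[] array = _array;
    if ((uint)size >= (uint)array.Length)
    {
        ThrowForEmptyStack();  // non-returning, so past this point the index is valid
    }
    _size = size;
    return array[size];  // bounds check can be elided: array and size are locals
}
```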
But it could be that if you use the field directly then it can't make that assumption. Why, I'm not sure, but possibly because another thread could change the field between the bounds check and the array access.
[1]: https://learn.microsoft.com/en-gb/archive/blogs/clrcodegeneration/array-bounds-check-elimination-in-the-clr
[2]: https://github.com/dotnet/runtime/blob/8fbfb3204abba1eb7b42c617a656fa6aff8df84c/src/libraries/System.Collections/src/System/Collections/Generic/Stack.cs#L346 |
In my Angular project, my unit tests fail with the following error:
> TypeError: Cannot read properties of undefined (reading
> 'reRenderOnLangChange')
> at shouldListenToLangChanges (node_modules/@ngneat/transloco/fesm2022/ngneat-transloco.mjs:385:33)
> at TranslocoPipe.transform (node_modules/@ngneat/transloco/fesm2022/ngneat-transloco.mjs:1313:36)
> at ɵɵpipeBind1 (node_modules/@angular/core/fesm2022/core.mjs:27718:22)
> at templateFn (ng:///DocumentToPrintRowComponent.js:68:34)
> at executeTemplate (node_modules/@angular/core/fesm2022/core.mjs:12159:9)
> at refreshView (node_modules/@angular/core/fesm2022/core.mjs:13392:13)
> at detectChangesInView (node_modules/@angular/core/fesm2022/core.mjs:13617:9)
> at detectChangesInViewIfAttached (node_modules/@angular/core/fesm2022/core.mjs:13580:5)
> at detectChangesInComponent (node_modules/@angular/core/fesm2022/core.mjs:13569:5)
> at detectChangesInChildComponents (node_modules/@angular/core/fesm2022/core.mjs:13630:9)
|
|excel|vba| |
null |
Unable to set database URL dynamically in Sails.js |
If you are OK with using `Invoke-WebRequest`, just use it like so (assuming you did not `Remove-Item alias:curl`):
curl -Uri http://<some-address>:<some-port>/ -Method POST -Body @{param1='my param';param2='420'}
You can also (as suggested [here][1]) do the following:
$postParams = @{param1='my param';param2='420'}
curl -Uri http://<some-address>:<some-port>/ -Method POST -Body $postParams
[1]: https://stackoverflow.com/a/17330952/3303532 |
1. `char c[2]` is needed to hold a string with one character plus the terminating `'\0'`.
You could also use `char c[1]` or just `char c` and read a single non-space character with ` %c` (space prefix is significant). When you print the character you then need to use the `%c` (instead of `%s`) conversion specifier.
1. Always use a maximum field width when reading a string with `scanf()` to avoid buffer overflow.
1. The format string `%s` requires a matching `char *` which you write as just `c`.
1. On success `scanf()` returns the number of items successfully matched and assigned. If you don't check for that you may be operating on uninitialized variables.
1. (Not fixed) If you only enter two numbers, `scanf()` will be waiting for the operator. If you don't want this behavior use `fgets()` or `getline()` then use `sscanf()` or lower level functions like `strtol()` to parse the line of input.
```
#include <stdio.h>

int main(void) {
    int a;
    int b;
    char c[2];
    if (scanf("%i%i%1s", &a, &b, c) != 3) {
        printf("scanf failed\n");
        return 1;
    }
    printf("%i %i %s\n", a, b, c);
}
```
Example run:
```
$ ./a.out
1 x
scanf failed
$ ./a.out
1 2 +
1 2 +
```
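For item 6, a minimal sketch of the line-based alternative (assuming one expression per line) could look like this:
```
#include <stdio.h>

int main(void) {
    char line[128];
    int a;
    int b;
    char c[2];
    /* Read a whole line first, then parse it, so a missing operator
       fails immediately instead of leaving scanf() waiting for more input. */
    if (!fgets(line, sizeof line, stdin) ||
        sscanf(line, "%i%i%1s", &a, &b, c) != 3) {
        printf("parse failed\n");
        return 1;
    }
    printf("%i %i %s\n", a, b, c);
}
```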
|
While creating an HTML email template, I ran into a problem making large background text beneath the content using tables.
For example:
```
<table>
<td>bg-text-bottom-layered </td>
<td>top-layer-content. </td>
</table>
```
Is it possible to put the top-layer content above the background text without using z-index?
It would really help me if you could answer this question. |
Can we achieve z-index-like functionality without using z-index in an HTML email template? |
|css|html-table|html-email|email-templates|html4| |
null |
I debugged the code using `{{ dd($order->id) }}` and got the order id value 100238. But when I write
<form action="{{ route('admin.pos.update_order', ['id' => $order->id]) }}" method="post">
then it's not working. But when I write `{{ route('admin.pos.update_order', ['id' => 100238]) }}` it works great.
I cannot figure out this abnormal behaviour. Can anyone tell me what the actual issue is?
Since debugging shows the order id, it should also work in the form action as `$order->id`.
Route is:
Route::group(['middleware' => ['admin']], function () {
Route::group(['prefix' => 'pos', 'as' => 'pos.', 'middleware' => ['module:pos_management']], function () {
Route::post('update_order/{id}', 'POSController@update_order')->name('update_order');
});
});
Controller is:
public function update_order(Request $request, $id): RedirectResponse
{
$order = $this->order->find($order_id);
if (!$order) {
Toastr::error(translate('Order not found'));
return back();
}
$order_type = $order->order_type;
if ($order_type == 'delivery') {
Toastr::error(translate('Cannot update delivery orders'));
return back();
}
$delivery_charge = 0;
if ($order_type == 'home_delivery') {
if (!session()->has('address')) {
Toastr::error(translate('Please select a delivery address'));
return back();
}
$address_data = session()->get('address');
$distance = $address_data['distance'] ?? 0;
$delivery_type = Helpers::get_business_settings('delivery_management');
if ($delivery_type['status'] == 1) {
$delivery_charge = Helpers::get_delivery_charge($distance);
} else {
$delivery_charge = Helpers::get_business_settings('delivery_charge');
}
$address = [
'address_type' => 'Home',
'contact_person_name' => $address_data['contact_person_name'],
'contact_person_number' => $address_data['contact_person_number'],
'address' => $address_data['address'],
'floor' => $address_data['floor'],
'road' => $address_data['road'],
'house' => $address_data['house'],
'longitude' => (string)$address_data['longitude'],
'latitude' => (string)$address_data['latitude'],
'user_id' => $order->user_id,
'is_guest' => 0,
];
$customer_address = CustomerAddress::create($address);
}
// Update order details
$order->coupon_discount_title = $request->coupon_discount_title == 0 ? null : 'coupon_discount_title';
$order->coupon_code = $request->coupon_code ?? null;
$order->payment_method = $request->type;
$order->transaction_reference = $request->transaction_reference ?? null;
$order->delivery_charge = $delivery_charge;
$order->delivery_address_id = $order_type == 'home_delivery' ? $customer_address->id : null;
$order->updated_at = now();
try {
// Save the updated order
$order->save();
// Clear session data if needed
session()->forget('cart');
session(['last_order' => $order->id]);
session()->forget('customer_id');
session()->forget('branch_id');
session()->forget('table_id');
session()->forget('people_number');
session()->forget('address');
session()->forget('order_type');
Toastr::success(translate('Order updated successfully'));
//send notification to kitchen
//if ($order->order_type == 'dine_in') {
$notification = $this->notification;
$notification->title = "You have a new update in order " . $order_id . " from POS - (Order Confirmed). ";
$notification->description = $order->id;
$notification->status = 1;
try {
Helpers::send_push_notif_to_topic($notification, "kitchen-{$order->branch_id}", 'general');
Toastr::success(translate('Notification sent successfully!'));
} catch (\Exception $e) {
Toastr::warning(translate('Push notification failed!'));
}
//}
//send notification to customer for home delivery
if ($order->order_type == 'delivery'){
$value = Helpers::order_status_update_message('confirmed');
$customer = $this->user->find($order->user_id);
$fcm_token = $customer?->fcm_token;
if ($value && isset($fcm_token)) {
$data = [
'title' => translate('Order'),
'description' => $value,
'order_id' => $order_id,
'image' => '',
'type' => 'order_status',
];
Helpers::send_push_notif_to_device($fcm_token, $data);
}
//send email
$emailServices = Helpers::get_business_settings('mail_config');
if (isset($emailServices['status']) && $emailServices['status'] == 1) {
Mail::to($customer->email)->send(new \App\Mail\OrderPlaced($order_id));
}
}
// Redirect back to wherever needed
return redirect()->route('admin.pos.index');
} catch (\Exception $e) {
info($e);
Toastr::warning(translate('Failed to update order'));
return back();
}
}
error is:
> POST
> https://fd.sarmadengineeringsolutions.com/admin/pos/update-cart-items
> 500 (Internal Server Error) |
I'm looking for guidance on how to efficiently handle a large number of API calls using Delphi 10.4 along with the OmniThreadLibrary.
Specifically, I need to make approximately 3,000 API calls to a server. To optimize performance, I'd like to limit the number of concurrent threads to 5. Once one thread finishes processing a call, I want to start another thread until all 3,000 calls are completed.
Could someone provide a basic example or offer advice on how to accomplish this task effectively using Delphi and OmniThreadLibrary? Thank you for your assistance!
I attempted to utilize the scheduling mechanism, but it did not provide a satisfactory solution. |
Efficiently Handling Large Number of API Calls with Delphi 10.4 and OmniThreadLibrary |
|multithreading|rest|delphi|threadpool|omnithreadlibrary| |
null |
I am new to R and I'm trying to run code written by Feng et al. (2019). They already provide the code in their paper, and I prepared all the text files that are needed for it, but for some reason the code keeps running on the results the authors provide (the example data). I can't figure it out and I would appreciate any advice you can give me. Thanks for your time!
The code:
setwd("C:/Users/ElinD/OneDrive/Desktop/ELISAtools/extdata")
#R code to run 5-parameter logistic regression on ELISA data
#load the library
library(ELISAtools)
system.file("extdata", package="ELISAtools")
utils:::menuInstallPkgs()
###
#get file folder
dir_file<-system.file("extdata", package="ELISAtools")
dir_file<-("C://Users//ElinD//OneDrive//Desktop//ELISAtools//extdata")
#setwd(dir_file)
batches<-loadData(file.path(dir_file,"mytextfile2.txt"))
head(batches)
#******IMPORTANT***********
#now set the working directory to somewhere you have permission to write
#*************
#now add
reportHtml(batches,file.dir=tempdir());
#make a guess for the parameters, the other two parameters a and d
#will be estimated based on data.
model<-"5pl"
pars<-c(7.2,0.5, 0.015) #5pl inits
names(pars)<-c("xmid", "scal", "g")
#model<-"4pl"
#pars<-c(7.2,0.9) #4pl inits
#names(pars)<-c("xmid", "scal")
#do fitting. model will be written into data set.
batches<-runFit(pars=pars, batches=batches, refBatch.ID=1, model=model )
#batches<-runFit(pars=pars, batches=batches, refBatch.ID=1, model="4pl" )
#now call to do predications based on the model.
batches<-predictAll(batches);
#reporting.
reportHtml(batches, file.name="report_ana",file.dir=tempdir())
#now saving the combine data.
saveDB(batches, file.path(tempdir(),"elisa_tool1.rds"));
batches.old<-loadDB(file.path(tempdir(),"elisa_tool1.rds"));
#now suppose want to join/combine the two batches, old and new
batches.com<-combineData(batches, batches.old);
reportHtml(batches.com, file.name="report_com",file.dir=tempdir());
batches.com<-runFit(pars=pars, batches=batches.com, refBatch.ID=1 ,model=model)
#now call to do predications based on the model.
batches.com<-predictAll(batches.com);
reportHtml(batches.com,file.name="report_com_ana", file.dir=tempdir());
the output in the console:
HTML> setwd("C:/Users/ElinD/OneDrive/Desktop/ELISAtools/extdata")
HTML> #load the library
HTML> library(ELISAtools)
HTML> ###
HTML> #get file folder
HTML> dir_file<-system.file("extdata", package="ELISAtools")
HTML> dir_file<-("C://Users//ElinD//OneDrive//Desktop//ELISAtools//extdata")
HTML> #setwd(dir_file)
HTML> batches<-loadData(file.path(dir_file,"mytextfile2.txt"))
Reading Data for Batch: 1 -- Batch1
Experiment: 1 -- Exp1
Reading Data for Batch: 2 -- Batch2
Experiment: 1 -- Exp2
Experiment: 2 -- Exp3
Reading Data for Batch: 3 -- Batch3
Experiment: 1 -- Exp4
Done!!!
Warning messages:
1: In read.plates(fileName = file.path(dir_design, dfile[ind[j], ]$FileName), :
not enough plates found in OD data file (no starting point), please check
2: In read.plates(fileName = file.path(dir_design, dfile[ind[j], ]$FileName), :
not enough plates found in OD data file (no starting point), please check
3: In read.plates(fileName = file.path(dir_design, dfile[ind[j], ]$FileName), :
not enough plates found in OD data file (no starting point), please check
HTML> #now add
HTML> reportHtml(batches,file.dir=tempdir());
*** Output redirected to directory: C:\Users\ElinD\AppData\Local\Temp\RtmpeSTxNM
*** Use HTMLStop() to end redirection.Error in batches[[i]]@runs[[j]]@plates[[k]] : subscript out of bounds
HTML> #make a guess for the parameters, the other two parameters a and d
HTML> #will be estimated based on data.
HTML> model<-"5pl"
HTML> pars<-c(7.2,0.5, 0.015) #5pl inits
HTML> names(pars)<-c("xmid", "scal", "g")
HTML> #do fitting. model will be written into data set.
HTML> batches<-runFit(pars=pars, batches=batches, refBatch.ID=1, model=model )
Error in erun@plates[[k]] : subscript out of bounds
HTML> #batches<-runFit(pars=pars, batches=batches, refBatch.ID=1, model="4pl" )
HTML> #now call to do predications based on the model.
HTML> batches<-predictAll(batches);
Error in batch@runs[[i]]@plates[[j]] : subscript out of bounds
HTML> #reporting.
HTML> reportHtml(batches, file.name="report_ana",file.dir=tempdir())
*** Output redirected to directory: C:\Users\ElinD\AppData\Local\Temp\RtmpeSTxNM
*** Use HTMLStop() to end redirection.Error in batches[[i]]@runs[[j]]@plates[[k]] : subscript out of bounds
HTML> #now saving the combine data.
HTML> saveDB(batches, file.path(tempdir(),"elisa_tool1.rds"));
the specified database file exists, and will be overwritten ***saving ELISA data set: C:\Users\ElinD\AppData\Local\Temp\RtmpeSTxNM/elisa_tool1.rds
***success!!
NULL
HTML> batches.old<-loadDB(file.path(tempdir(),"elisa_tool1.rds"));
***loading ELISA data set: C:\Users\ElinD\AppData\Local\Temp\RtmpeSTxNM/elisa_tool1.rds
***Success!!
HTML> #now suppose want to join/combine the two batches, old and new
HTML> batches.com<-combineData(batches, batches.old);
Error in `*tmp*`[[j]] : subscript out of bounds
HTML> reportHtml(batches.com, file.name="report_com",file.dir=tempdir());
*** Output redirected to directory: C:\Users\ElinD\AppData\Local\Temp\RtmpeSTxNM
*** Use HTMLStop() to end redirection.the specified file for saving analysis results exists. It will be overwritten
An html reprot," report_com.html", has been generate.
A text file," report_com.txt", has been gerated.
HTML> batches.com<-runFit(pars=pars, batches=batches.com, refBatch.ID=1 ,model=model)
It. 0, RSS = 889.308, Par. = 7.2 0.5 0.015 0.0719858 -0.0266009 0.0719858 0 -0.0266009 0.561701 0.384977 0.561701 0.384977
It. 1, RSS = 139.158, Par. = 35.8813 3.97873 0.133009 -0.270759 -0.452295 -0.270759 5.97425e-06 -0.452295 0.334172 0.480591 0.334172 0.480591
It. 2, RSS = 112.986, Par. = 26.1378 2.36643 0.174916 4.10825 1.81756 4.10825 2.74169 1.81756 8.65759 7.12208 8.65759 7.12208
It. 3, RSS = 105.515, Par. = 24.7934 1.97934 0.180712 5.09848 3.543 5.09848 4.37846 3.543 7.54402 6.80119 7.54402 6.80119
It. 4, RSS = 93.4139, Par. = 21.249 1.45113 0.189637 6.26388 5.3633 6.26388 5.87884 5.3633 7.6974 7.28614 7.6974 7.28614
It. 5, RSS = 56.8623, Par. = 16.4689 0.674741 0.205784 6.97741 6.58967 6.97741 6.78527 6.58967 7.54321 7.36921 7.54321 7.36921
It. 6, RSS = 34.5791, Par. = 12.2045 0.162262 0.211635 4.60506 4.49657 4.60505 4.54929 4.49657 4.77532 4.69369 4.77532 4.69369
It. 7, RSS = 24.1277, Par. = 10.2342 0.355997 0.390465 2.41104 2.27346 2.41104 2.34386 2.27346 2.92567 2.59445 2.92567 2.59445
It. 8, RSS = 13.4201, Par. = 9.23399 0.432692 0.376411 1.38052 1.20043 1.38052 1.28085 1.20043 1.84804 1.55157 1.84804 1.55157
It. 9, RSS = 8.26105, Par. = 7.14944 0.473578 0.468126 -0.538793 -0.717473 -0.538793 -0.631408 -0.717473 -0.100735 -0.347507 -0.100735 -0.347507
It. 10, RSS = 5.00907, Par. = 7.67438 0.517169 0.541405 0.0597223 -0.113151 0.0597224 -0.0270897 -0.113151 0.503446 0.263039 0.503446 0.263039
It. 11, RSS = 4.96774, Par. = 7.76539 0.4754 0.487626 0.0857056 -0.0852621 0.0857054 0.000585553 -0.0852621 0.52106 0.291157 0.52106 0.291157
It. 12, RSS = 4.96571, Par. = 7.75928 0.474743 0.49038 0.0877858 -0.0848344 0.0877857 -4.99128e-06 -0.0848344 0.52828 0.287807 0.52828 0.287807
It. 13, RSS = 4.96554, Par. = 7.76666 0.471071 0.484697 0.0874921 -0.0845127 0.0874921 0.000353075 -0.0845127 0.528853 0.287704 0.528853 0.287704
It. 14, RSS = 4.96552, Par. = 7.76644 0.470852 0.484619 0.0876323 -0.0846264 0.0876322 -3.03794e-06 -0.0846264 0.52935 0.287241 0.52935 0.287241
It. 15, RSS = 4.96552, Par. = 7.76641 0.470706 0.484426 0.087351 -0.0848635 0.0873509 -0.00050396 -0.0848635 0.529169 0.286958 0.529169 0.286958
It. 16, RSS = 4.96552, Par. = 7.76677 0.470642 0.484334 0.0876079 -0.0846141 0.0876079 -5.90993e-06 -0.0846141 0.529443 0.287189 0.529443 0.287189
It. 17, RSS = 4.96552, Par. = 7.76677 0.470642 0.484334 0.0876079 -0.0846141 0.0876079 -5.90993e-06 -0.0846141 0.529443 0.287189 0.529443 0.287189
HTML> #now call to do predications based on the model.
HTML> batches.com<-predictAll(batches.com);
HTML> reportHtml(batches.com,file.name="report_com_ana", file.dir=tempdir());
*** Output redirected to directory: C:\Users\ElinD\AppData\Local\Temp\RtmpeSTxNM
*** Use HTMLStop() to end redirection.the specified file for saving analysis results exists. It will be overwritten
An html reprot," report_com_ana.html", has been generate.
A text file," report_com_ana.txt", has been gerated.
|
How to run a computational solution for ELISA long-term projects in R using new data files? |
|r| |
I'm using an intermediate certificate to sign the client certificates,
and I'm trying to enable client certificate validation on the server side using Python 3.10 with the following code:
```
ssl_context = ssl.create_default_context(purpose=ssl.Purpose.CLIENT_AUTH)
ssl_context.load_cert_chain(certfile=settings.CERTS_TLS_SERVER_CERT,
keyfile=settings.CERTS_TLS_SERVER_KEY,
password=settings.CERTS_TLS_SERVER_CERT_PASSWORD)
ssl_context.load_verify_locations(cafile=settings.CERTS_CA_CERT)
ssl_context.verify_mode = ssl.CERT_REQUIRED
```
but this throws the following error when the client connects
Error
```
transport: <asyncio.sslproto._SSLProtocolTransport object at 0x7f657f115fc0>
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/asyncio/selector_events.py", line 213, in _accept_connection2
    await waiter
  File "/usr/local/lib/python3.10/asyncio/sslproto.py", line 534, in data_received
    ssldata, appdata = self._sslpipe.feed_ssldata(data)
  File "/usr/local/lib/python3.10/asyncio/sslproto.py", line 188, in feed_ssldata
    self._sslobj.do_handshake()
  File "/usr/local/lib/python3.10/ssl.py", line 975, in do_handshake
    self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1007)
```
The Client context
```
context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.verify_mode = ssl.CERT_REQUIRED
context.load_verify_locations(cafile='certs/generated/ca.crt')
context.load_cert_chain(certfile="certs/generated/client.crt",
keyfile="certs/generated/client.key")
```
I verified:
- CA cert availability in the server SSL context; the CA cert contains both the Root CA and the Intermediate CA
- Setting `ssl_context.verify_flags = ssl.VERIFY_X509_PARTIAL_CHAIN`
No luck.
Any idea why this error is happening?
|
<!-- language-all: js -->
*You may try:*
=let(
data,A3:C5,
task,"test",
weekinfo,2,
countif(choosecols(data,weekinfo),task))
***OR*** *with a slight modification to your formula*
=LET(
SA,A3:A5,
SB,B3:B5,
SC,C3:C5,
task,"test",
weekinfo,2,
array,IF(weekinfo=1,SA,IF(weekinfo=2,SB,IF(weekinfo,SC))),
COUNTIF(array,task)
)
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/aM1Kb.png |
I'm currently working on integrating [Firebase Cloud Messaging (FCM)][1] for push notifications in my web application.
I have a question regarding the behavior of clicks on web-push notifications and how to define this behavior and gather information and analytics.
My question is somewhat similar to these questions:
- https://stackoverflow.com/questions/50148266/click-action-attribute-for-web-push-notification-through-fcm
- https://stackoverflow.com/questions/48457799/missing-click-action-key-for-webpush-payload-on-message-sent-via-fcm-httpv1
- https://stackoverflow.com/questions/49177428/http-v1-api-click-action-for-webpush-notification/
However, there are some differences.
The [Firebase documentation][2] doesn't explicitly address whether to use `click_action` or `service worker` to handle click behavior.
I have read the [Message Types][3] article, which mentions two types of messages:
* Notification messages
* Data messages
But I didn't understand the difference between them.
Let's say I want end-users who receive notifications from my backend systems to be directed to `example.com` when they click on the notification.
I've achieved this using a service worker:
self.addEventListener("notificationclick", (event) => {
    const notificationData = event.notification.data;
    const targetPageUrl = notificationData && notificationData.tracking;
    return clients.openWindow(targetPageUrl);
});
As you can see, it parses the data field in the notification payload and then opens the URL.
Here's the payload I'm using:
"webpush": {
"notification": {
"title": "This is a test notification",
"body": "test test test",
"data": {
"tracking": "https://example.com?source=data_field"
},
"actions": [
{
"action": "yes",
"title": "Enter to the site",
},
{
"action": "no",
"title": "Ignore",
}
],
},
"fcm_options": {
"link": "https://example.com?source=fcm_options"
}
}
However, if I understand correctly from the [Firebase documentation][2], I shouldn't need to do this. I can implement the URL in the payload as part of `fcm_options.link`, and this should handle the click operation without the need to handle it in the service worker.
But for some reason, it doesn't work for me - nothing happens when I click on the notification (if I remove the handling of the `onclick` from the service worker).
I would appreciate some assistance and clarification on how to handle the click behavior of web-push notifications by using the `fcm_options.link` field and without the need of the `service worker`.
[1]: https://firebase.google.com/docs/cloud-messaging/js/client
[2]: https://firebase.google.com/docs/cloud-messaging/js/receive#setting_notification_options_in_the_send_request
[3]: https://firebase.google.com/docs/cloud-messaging/concept-options#notifications_and_data_messages |
How should I open links on web-push notification click - service worker or "click_action"? |
|firebase|browser|push-notification|firebase-cloud-messaging|service-worker| |
> Regex Match a pattern that only contains one set of numerals, and not more
I would start by writing a _grammar_ for the "forgiving parser" you are coding. It is not clear from your examples, for instance, whether `<2112` is acceptable. Must the brackets be paired? Ditto for quotes, etc.
Assuming that brackets and quotes do not need to be paired, you might have the following grammar:
##### _sign_
`+` | `-`
##### _digit_
`0` | `1` | `2` | `3` | `4` | `5` | `6` | `7` | `8` | `9`
##### _non-digit_
_any-character-that-is-not-a-digit_
##### _integer_
[ _sign_ ] _digit_ { _digit_ }
##### _prefix_
_any-sequence-without-a-sign-or-digit_
[ _prefix_ ] _sign_ _non-digit_ [ _any-sequence-without-a-sign-or-digit_ ]
##### _suffix_
_any-sequence-without-a-digit_
##### _forgiving-integer_
[ _prefix_ ] _integer_ [ _suffix_ ]
Notes:
- Items within square brackets are optional. They may appear either 0 or 1 time.
- Items within curly braces are optional. They may appear 0 or more times.
- Items separated by `|` are alternatives from which 1 must be chosen
- Items on separate lines are alternatives from which 1 must be chosen
One subtlety of this grammar is that integers can have only one sign. When more than one sign is present, all except the last are treated as part of the _prefix_, and, thus, are ignored.
Are the following interpretations acceptable? If not, then the grammar must be altered.
- `++42` parses as `+42`
- `--42` parses as `-42`
- `+-42` parses as `-42`
- `-+42` parses as `+42`
Another subtlety is that whitespace following a sign causes the sign to be treated as part of the prefix, and, thus, to be ignored. This is perhaps counterintuitive, and, frankly, may be unacceptable. Nevertheless, it is how the grammar works.
In the example below, the negative sign is ignored, because it is part of the prefix.
- `- 42` parses as `42`
### A solution without `std::regex`
With a grammar in hand, it should be easier to figure out an appropriate regular expression.
My solution, however, is to avoid the inefficiencies of `std::regex`, in favor of coding a simple "parser."
In the following program, function `validate_integer` implements the foregoing grammar. When `validate_integer` succeeds, it returns the integer it parsed. When it fails, it throws a `std::runtime_error`.
Because `validate_integer` uses `std::from_chars` to convert the integer sequence, it will not convert the test case `2112.0` from the OP. The trailing `.0` is treated as a second integer. All the other test cases work as expected.
The only tricky part is the initial loop that skips over non-numeric characters. When it encounters a sign (`+` or `-`), it has to check the following character to decide whether the sign should be interpreted as the start of a numeric sequence. That is reflected in the "tricky" grammar for _prefix_ given above, where a _sign_ must be followed by a _non-digit_, if it is to be treated as part of the prefix.
```lang-cpp
// main.cpp
#include <cctype>
#include <charconv>
#include <iomanip>
#include <iostream>
#include <stdexcept>
#include <string>
#include <string_view>
bool is_digit(unsigned const char c) {
return std::isdigit(c);
}
bool is_sign(const char c) {
return c == '+' || c == '-';
}
int validate_integer(std::string const& s)
{
enum : std::string::size_type { one = 1u };
std::string::size_type i{};
// skip over prefix
while (i < s.length())
{
if (is_digit(s[i]) || is_sign(s[i])
&& i + one < s.length()
&& is_digit(s[i + one]))
break;
++i;
}
// throw if nothing remains
if (i == s.length())
throw std::runtime_error("validation failed");
// parse integer
// due to foregoing checks, this cannot fail
if (s[i] == '+')
++i; // `std::from_chars` does not accept leading plus sign.
auto const first{ &s[i] };
auto const last{ &s[s.length() - one] + one };
int n;
auto [end, ec] { std::from_chars(first, last, n)};
i += end - first;
// skip over suffix
while (i < s.length() && !is_digit(s[i]))
++i;
// throw if anything remains
if (i != s.length())
throw std::runtime_error("validation failed");
return n;
}
void test(std::ostream& log, bool const expect, std::string s)
{
std::streamsize w{ 46 };
try {
auto n = validate_integer(s);
log << std::setw(w) << s << " : " << n << '\n';
}
catch (std::exception const& e) {
log << std::setw(w) << s << " : " << e.what()
<< ( expect ? "" : " (as expected)")
<< '\n';
}
}
int main()
{
auto& log{ std::cout };
log << std::left;
test(log, true, "<2112>");
test(log, true, "[(2112)]");
test(log, true, "\"2112, \"");
test(log, true, "-2112");
test(log, true, ".2112");
test(log, true, "<span style = \"numeral\">2112</span>");
log.put('\n');
test(log, true, "++42");
test(log, true, "--42");
test(log, true, "+-42");
test(log, true, "-+42");
test(log, true, "- 42");
log.put('\n');
test(log, false, "2112.0");
test(log, false, "");
test(log, false, "21,12");
test(log, false, "\"21\",\"12, \"");
test(log, false, "<span style = \"font - size:18.0pt\">2112</span>");
log.put('\n');
return 0;
}
// end file: main.cpp
```
### Output
The "hole" in the output, below the entry for 2112.0, is the failed conversion of the null-string.
```lang-none
<2112> : 2112
[(2112)] : 2112
"2112, " : 2112
-2112 : -2112
.2112 : 2112
<span style = "numeral">2112</span> : 2112
++42 : 42
--42 : -42
+-42 : -42
-+42 : 42
- 42 : 42
2112.0 : validation failed (as expected)
: validation failed (as expected)
21,12 : validation failed (as expected)
"21","12, " : validation failed (as expected)
<span style = "font - size:18.0pt">2112</span> : validation failed (as expected)
```
|
{"Voters":[{"Id":10871073,"DisplayName":"Adrian Mole"},{"Id":722804,"DisplayName":"Terry Jan Reedy"},{"Id":466862,"DisplayName":"Mark Rotteveel"}]} |
I suggest you use Django's built-in auth system.
You import it like below.
```python
from django.contrib.auth import authenticate, login
```
And then you use it as below.
```python
user = authenticate(request, username=student_id, password=password)
```
These are built-in methods.
This `authenticate` function doesn't use the password field in your model (unless you already wrote code for a custom authentication backend and didn't post it here).
So, when you try to log in, that user doesn't exist in Django's built-in auth system.
Django already has an auth system, which you can customise for your needs.
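For completeness, a minimal sketch of the full flow (the view and field names here are assumptions, not from your code):
```python
from django.contrib.auth import authenticate, login

def login_view(request):
    # field names assumed; adapt them to your form
    user = authenticate(request,
                        username=request.POST["student_id"],
                        password=request.POST["password"])
    if user is not None:
        login(request, user)  # attaches the authenticated user to the session
```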
|
TL;DR **Get definitions straight.**
---
**To know whether a case violates 3NF you have to look at the criteria used in some definition.**
Your question is rather like asking, I know an even number is one that is divisible by 2 or one whose decimal representation ends in 0, 2, 4, 6 or 8, but what if it's three times a square? Well, you have to *use the definition*--show that the given conditions imply that it's divisible by two or that its decimal representation ends in one of those digits. Why do you even care about other properties than the ones in the definition?
When some FDs (functional dependencies) hold, others must also hold. We say the latter are implied by the former. So when given FDs hold usually tons of others also hold. So one or more arbitrary FDs holding doesn't necessarily tell you anything about which normal forms might hold. Eg when U is a superset of V, U → V must hold; such FDs are called *trivial* because they are implied by any collection of FDs. Eg when U → V, every superset of U determines every subset of V. *Armstrong's axioms* are some rules that can be mechanically applied to find all FDs that hold. There are algorithms to find a *canonical/minimal/irreducible cover* for a given set, a set of FDs that imply all those in it with no proper subset that does. There are also algorithms to determine whether a relation satisfies certain NFs (normal forms), and to decompose them into components with higher NFs when they're not.
**Sometimes we think there is a case that the definition doesn't handle but really we have got the definition wrong.**
The definition you are trying to refer to for a relation being in 3NF actually requires that there be no transitive functional dependence of a non-prime attribute on a candidate key.
In your non-3NF example you should say *there is* a transitive FD, not "this is a transitive FD", because the violating FD is of the form *CK* → A not Y → A. Also, U → V is transitive when there is an X where U → X AND X → V AND NOT X → U. It doesn't matter whether X is a prime attribute.
**PS** It's not very helpful to ask "why" something is or isn't so in mathematics. We describe a situation in terms of some givens, and a bunch of things follow. We can say that if certain of the givens weren't so then that thing wouldn't be so. But if certain *other givens* weren't so then it might also not be so. We can give a proof that something is or isn't so as "why" but it's not the only proof. |
TypeError: Cannot read properties of undefined (reading 'reRenderOnLangChange') |
|angular|typescript|angular-test|transloco| |
I fixed the problem by importing the `TranslocoTestingModule` into my tests' setup.
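A minimal sketch of that setup (the component name is taken from the stack trace; the language fixtures are assumptions):
```ts
import { TestBed } from '@angular/core/testing';
import { TranslocoTestingModule } from '@ngneat/transloco';

beforeEach(() => {
  TestBed.configureTestingModule({
    imports: [
      // Provides the transloco config the pipe needs at test time
      TranslocoTestingModule.forRoot({
        langs: { en: {} }, // translation fixtures (empty here)
        translocoConfig: { availableLangs: ['en'], defaultLang: 'en' },
      }),
    ],
    declarations: [DocumentToPrintRowComponent],
  });
});
```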
More info about this module here: https://jsverse.github.io/transloco/docs/unit-testing |
I have a MongoDB with my data stored (currently as application/json) and I want my frontend to have a search bar where users start searching for a name, e.g. typing "Tho" when they want to filter on the name "Thomas"; by typing "Tho" they should get all entries containing "Tho" in their name.
I'm using DaprClient's QueryStateAsync to query my mongodb with a jsonQuery like this:
StateQueryResponse<Employee> response = await _daprClient.QueryStateAsync<Employee>(DB_NAME, jsonQuery);
The jsonQuery is created on frontend and passed through the entire backend.
Is this possible using the DaprClient or is another client needed for this type of filtering?
ref Dapr documentation: [Dapr - How to: Query State][1]
[1]: https://docs.dapr.io/developing-applications/building-blocks/state-management/howto-state-query-api/ |
Contains filter in Dapr's QueryStateAsync |
|c#|.net|mongodb|mongodb-.net-driver|dapr| |
I have the following code written in ECharts, and you can paste it into the editor at this link: https://echarts.apache.org/examples/en/editor.html?c=pie-rich-text&lang=ts
```ts
import * as echarts from 'echarts';
type EChartsOption = echarts.EChartsOption;
var chartDom = document.getElementById('main')!;
var myChart = echarts.init(chartDom);
var option: EChartsOption;
const values = [0, 0, 0, 0, 1];
const names = ['A', 'B', 'C', 'D', 'E'];
const initialValues = values.map((v, i) => ({ name: names[i], value: v }));
option = {
tooltip: {
trigger: 'item',
formatter: '{a} <br/>{b} : {c} ({d}%)'
},
legend: {
bottom: 10,
left: 'center',
data: names
},
series: [
{
type: 'pie',
radius: '65%',
center: ['50%', '50%'],
selectedMode: 'single',
data: initialValues,
emphasis: {
itemStyle: {
shadowBlur: 10,
shadowOffsetX: 0,
shadowColor: 'rgba(0, 0, 0, 0.5)'
}
}
}
]
};
option && myChart.setOption(option);
```
The problem occurs when I have only one non-zero `value`.
```ts
const values = [0, 0, 0, 0, 1];
```
When I deselect item E in the legend, the remaining items each appear as 1/4 of the pie, even though their values are zero.
[](https://i.stack.imgur.com/eyIpv.png)
My question is how to ensure that none of these zero-value items appear and that item E shows 0%.
Something similar to this
[](https://i.stack.imgur.com/tLF9x.png) |
Problem generating pie charts in Vue (ECharts) |
|typescript|vue.js|pie-chart|echarts| |
null |
I want to obtain the token addresses of new tokens released on Solana that are compatible for swapping on Raydium.
I've explored various API providers, but unfortunately, none of them offer this specific data.
I experimented with Helius using the mainnet.helius-rpc.com endpoint, and I believe I'm headed in the right direction. However, I'm uncertain about which parameter I should input for my scope.
Is there any solution? |
|mysql|best-first-search| |
I am building Docker images of a web app locally on my Mac and then running them via Docker Compose on a remote server that's running Ubuntu AMD64. I get an error that the platform isn't compatible, and I'm curious what's the best practice in real production environments to tackle this issue.
Do I support 2 image types? Do I specify something in the Dockerfile that allows the image to run on all platforms?
I was able to bypass this by re-building the image on the remote server, but not sure if this is a best practice.
Thanks. |
|php|wordpress| |
null |
I am trying to create a Static Web App in Azure Portal and deploy React code written in Visual Studio Code from github.
The instructions I am following are here:
[Quickstart: Building your first static site with Azure Static Web Apps][1]
Steps:
1. Create Static Web App -> my-react-project
[![enter image description here][2]][2]
2. Select a Region -> East US 2
[![enter image description here][3]][3]
3. Choose build preset -> React
[![enter image description here][4]][4]
4. Enter location of app code -> /
[![enter image description here][5]][5]
5. Enter the location of the build output -> build
[![enter image description here][6]][6]
6. At this point I receive this message:
[![enter image description here][7]][7]
7. Then the build deploy fails:
The content server has rejected the request with: BadRequest
Reason: No matching Static Web App was found or the api key was invalid.
[![enter image description here][8]][8]
Run Azure/static-web-apps-deploy@v1
with:
azure_static_web_apps_api_token: ***
repo_token: ***
action: upload
app_location: /
api_location: api
output_location: build
/usr/bin/docker run --name b469e5e693774fc22146d1ae8f22f864953d6a_86588a --label b469e5 --workdir /github/workspace --rm -e "INPUT_AZURE_STATIC_WEB_APPS_API_TOKEN" -e "INPUT_REPO_TOKEN" -e "INPUT_ACTION" -e "INPUT_APP_LOCATION" -e "INPUT_API_LOCATION" -e "INPUT_OUTPUT_LOCATION" -e "INPUT_API_BUILD_COMMAND" -e "INPUT_APP_ARTIFACT_LOCATION" -e "INPUT_APP_BUILD_COMMAND" -e "INPUT_ROUTES_LOCATION" -e "INPUT_SKIP_APP_BUILD" -e "INPUT_CONFIG_FILE_LOCATION" -e "INPUT_SKIP_API_BUILD" -e "INPUT_PRODUCTION_BRANCH" -e "INPUT_DEPLOYMENT_ENVIRONMENT" -e "INPUT_IS_STATIC_EXPORT" -e "INPUT_DATA_API_LOCATION" -e "HOME" -e "GITHUB_JOB" -e "GITHUB_REF" -e "GITHUB_SHA" -e "GITHUB_REPOSITORY" -e "GITHUB_REPOSITORY_OWNER" -e "GITHUB_REPOSITORY_OWNER_ID" -e "GITHUB_RUN_ID" -e "GITHUB_RUN_NUMBER" -e "GITHUB_RETENTION_DAYS" -e "GITHUB_RUN_ATTEMPT" -e "GITHUB_REPOSITORY_ID" -e "GITHUB_ACTOR_ID" -e "GITHUB_ACTOR" -e "GITHUB_TRIGGERING_ACTOR" -e "GITHUB_WORKFLOW" -e "GITHUB_HEAD_REF" -e "GITHUB_BASE_REF" -e "GITHUB_EVENT_NAME" -e "GITHUB_SERVER_URL" -e "GITHUB_API_URL" -e "GITHUB_GRAPHQL_URL" -e "GITHUB_REF_NAME" -e "GITHUB_REF_PROTECTED" -e "GITHUB_REF_TYPE" -e "GITHUB_WORKFLOW_REF" -e "GITHUB_WORKFLOW_SHA" -e "GITHUB_WORKSPACE" -e "GITHUB_ACTION" -e "GITHUB_EVENT_PATH" -e "GITHUB_ACTION_REPOSITORY" -e "GITHUB_ACTION_REF" -e "GITHUB_PATH" -e "GITHUB_ENV" -e "GITHUB_STEP_SUMMARY" -e "GITHUB_STATE" -e "GITHUB_OUTPUT" -e "RUNNER_OS" -e "RUNNER_ARCH" -e "RUNNER_NAME" -e "RUNNER_ENVIRONMENT" -e "RUNNER_TOOL_CACHE" -e "RUNNER_TEMP" -e "RUNNER_WORKSPACE" -e "ACTIONS_RUNTIME_URL" -e "ACTIONS_RUNTIME_TOKEN" -e "ACTIONS_CACHE_URL" -e "ACTIONS_RESULTS_URL" -e GITHUB_ACTIONS=true -e CI=true -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/home/runner/work/_temp/_github_home":"/github/home" -v "/home/runner/work/_temp/_github_workflow":"/github/workflow" -v "/home/runner/work/_temp/_runner_file_commands":"/github/file_commands" -v "/home/runner/work/my-react-project/my-react-project":"/github/workspace" b469e5:e693774fc22146d1ae8f22f864953d6a
DeploymentId: 04ac8c8e-306d-498e-890f-885b056fb322
Try to validate location at: '/github/workspace'.
App Directory Location: '/' was found.
Try to validate location at: '/github/workspace/swa-db-connections'.
Looking for event info
The content server has rejected the request with: BadRequest
Reason: No matching Static Web App was found or the api key was invalid.
For further information, please visit the Azure Static Web Apps documentation at https://docs.microsoft.com/en-us/azure/static-web-apps/
If you believe this behavior is unexpected, please raise a GitHub issue at https://github.com/azure/static-web-apps/issues/
Exiting
The repo is here:
[REPO][9]
Thanks in advance for any guidance.
[1]: https://learn.microsoft.com/en-us/azure/static-web-apps/getting-started?tabs=react
[2]: https://i.stack.imgur.com/9kj7k.png
[3]: https://i.stack.imgur.com/hASBi.png
[4]: https://i.stack.imgur.com/cTdlI.png
[5]: https://i.stack.imgur.com/dtbVK.png
[6]: https://i.stack.imgur.com/fNDuN.png
[7]: https://i.stack.imgur.com/dbb7B.png
[8]: https://i.stack.imgur.com/HPhXv.png
[9]: https://github.com/drdexter33/my-react-project |
I simply need to calculate a new date field in the select in Eloquent... but it does not seem to work, although if I copy the MySQL query into phpMyAdmin it works.
SQLSTATE[42S22]: Column not found: 1054 Unknown column
'IF(date_start<'2024-03-10','2024-03-10',date_start)' in 'field list'
select
*,
`IF(date_start<'2024-03-10','2024-03-10',date_start)` as `date`.
from
`table`
where
(project = 'xxx')
order by
id asc
Of course, I can do it outside Eloquent... but I want to learn... Is it possible? |
Laravel eloquent select not accepting if function |
|mysql|laravel-5|select|eloquent| |
I was working on my spreadsheet to keep track of my goals.
I'm stuck on a strange problem I can't find any fix for; I'm not so advanced in Google Sheets yet :(
I made a test sheet just to show the problem in isolation. https://docs.google.com/spreadsheets/d/19xUOeLoXTPH3heVFMssya5sh8ZeOaIlIBpHnypaDx-A/edit?usp=sharing
I need to count the occurrences of a specific task in the arrays (weekdays) by a specific value.
I have weekinfo, which checks what day is on the calendar by position - in the test table I just made it equal 2.
And then I have an IFS function which assigns a specific array by the weekinfo number - SA, SB, SC, which are just arrays under variables like $A$3:$A.
But for some reason none of the IFS-family functions - COUNTIFS, IFS etc. - can give arrays as an outcome; they just can't process them and give an error that IFS has mismatched range sizes.
Does anyone know what might be a solution for that?
|+|A|B|C|D|E|F|
|:-|:-|:-|:-|:-|:-|:-|
|1|Β |Β |Β |Β |Β |Β |
|2|array1|array2|array3|Β |Β |Β |
|3|test|5|test|Β |Β |Β |
|4|test|6|test|Β |Β |Β |
|5|3|test|test|Β |Β |Β |
|6|Β |Β |Β |Β |Β |Β |
|7|Β |Β |Β |Β |0|- should be 1|
**formula** (`tried in Cell_E7`)
=LET(
SA,A3:A5,
SB,B3:B5,
SC,C3:C5,
task,"test",
weekinfo,2,
array,IFS(weekinfo=1,SA,weekinfo=2,SB,weekinfo,SC),
COUNTIF(array,task)
) |
OK, so I've been working on this project where I am trying to detect an anomaly and relate it to a certain phenomenon. I know that pandas has built-in functions, i.e. `pd.rolling(window=frequency).statistics_of_my_choice()`, but for some reason I am not getting the desired results. I have calculated the rolling mean, rolling median, and upper & lower bounds = mean ± 1.6 × rolling std.
But when I plot it, the upper and lower bounds are always above the data. I don't know what's happening here; it doesn't make sense. Please take a look at the figure for a better understanding.
Here's what I am getting:

and here's what I want to achieve:

Here's the paper that I am trying to implement:
https://www.researchgate.net/publication/374567172_Analysis_of_Ionospheric_Anomalies_before_the_Tonga_Volcanic_Eruption_on_15_January_2022/figures
Here's my code snippet
```
def gen_features(df):
df["ma"] = df.TEC.rolling(window="h").mean()
df["mstd"] = df.TEC.rolling(window="h").std()
df["upper"] = df["ma"] + (1.6* df.mstd)
df["lower"] = df["ma"] - (1.6* df.mstd)
return df
```
|
|sap-commerce-cloud|cdc|gigya|sap-cloud-identity-services| |
In VS Code, in C++, when I declare a pointer and then print the size of the pointer, it reports 4 bytes instead of 8, even though my system and compiler are both 64-bit.
|
Parent-Child vs `Range.IndentLevel`
-
[![enter image description here][1]][1]
**The Calling Procedure (Example)**
<!-- language: lang-vb -->
Sub RunParentChild()
Dim ws As Worksheet: Set ws = ThisWorkbook.Sheets("Target")
Dim rg As Range: Set rg = ws.Range("A1", ws.Cells(ws.Rows.Count, "A").End(xlUp))
Dim IndentLevels() As Long: IndentLevels = GetIndentLevels(rg)
With rg.EntireRow
.Columns("B").Value = IndentLevels ' not necessary
.Columns("C") _
.Value = GetIndentedParentChildFromColumn(rg, IndentLevels, "SUB")
End With
End Sub
**The Called (Helper) Procedures**
<!-- language: lang-vb -->
Function GetIndentLevels(rg As Range) As Long()
Dim rCount As Long: rCount = rg.Rows.Count
Dim cCount As Long: cCount = rg.Columns.Count
Dim Data() As Long: ReDim Data(1 To rCount, 1 To cCount)
Dim r As Long, c As Long
For r = 1 To rCount
For c = 1 To cCount
Data(r, c) = rg.Cells(r, c).IndentLevel
Next c
Next r
GetIndentLevels = Data
End Function
<!-- language: lang-vb -->
Function GetIndentedParentChildFromColumn( _
rg As Range, _
IndentLevels() As Long, _
ChildBeginsWith As String, _
Optional ColumnIndex As Long = 1) _
As Variant
Dim cData As Variant: cData = GetRange(rg.Columns(ColumnIndex))
Dim rCount As Long: rCount = UBound(cData, 1)
Dim r As Long, i As Long, IsFirstFound As Boolean
For r = 1 To rCount
If IsFirstFound Then
i = IndentLevels(r, 1)
Select Case IndentLevels(r - 1, 1)
Case Is < i
If r <> rCount Then
If IndentLevels(r + 1, 1) > i Then
cData(r, 1) = "P"
Else
cData(r, 1) = "C"
End If
Else
cData(r, 1) = "C"
End If
Case i
If r = rCount Then
If InStr(1, CStr(cData(r, 1)), ChildBeginsWith, _
vbTextCompare) = 1 Then
cData(r, 1) = "C"
Else
cData(r, 1) = "P"
End If
Else
If IndentLevels(r + 1, 1) > i Then
cData(r, 1) = "P"
Else
cData(r, 1) = "C"
End If
End If
Case Is > i
If InStr(1, CStr(cData(r, 1)), ChildBeginsWith, _
vbTextCompare) = 1 Then
cData(r, 1) = "C"
Else
cData(r, 1) = "P"
End If
End Select
Else
cData(r, 1) = "P"
IsFirstFound = True
End If
Next r
GetIndentedParentChildFromColumn = cData
End Function
<!-- language: lang-vb -->
Function GetRange(rg As Range) As Variant()
If rg.Rows.Count + rg.Columns.Count = 2 Then
ReDim Data(1 To 1, 1 To 1): Data(1, 1) = rg.Value: GetRange = Data
Else
GetRange = rg.Value
End If
End Function
[1]: https://i.stack.imgur.com/n6JBT.jpg
|
Get new tokens on Solana (realtime) |
|solana|solana-web3js|solana-cli|solana-program-library| |
I have a `FloatingActionButton` that changes its icon to a pause icon when clicked. How can I toggle the icon back to the play icon when the button is clicked again?
**Code:**
Row(
mainAxisAlignment: MainAxisAlignment.center,
crossAxisAlignment: CrossAxisAlignment.center,
children: [
AnimatedBuilder(
animation: controller,
builder: (context, child) {
return FloatingActionButton.extended(
onPressed: () {
if (controller.isAnimating)
controller.stop();
else {
controller.reverse(
from: controller.value == 0.0
? 1.0
: controller.value);
}
},
icon: Icon(
controller.isAnimating
? Icons.pause
: Icons.play_arrow ,
color: Color(0xffF2F2F2),),
label: Text(
controller.isAnimating ? "Pause" : "Play",));
}),
SizedBox(width: 20,),
AnimatedBuilder(
animation: controller,
builder: (context, child) {
return FloatingActionButton.extended(
onPressed: () {
if (controller.isAnimating)
controller.reset();
},
icon: Icon(Icons.refresh,
color: Color(0xffF2F2F2),),
label: Text("Refresh",),);
}),
],
)
**Explanation:**
I want to be able to switch seamlessly between a play icon (`Icons.play_arrow`) and a pause icon (`Icons.pause`) on my button with each click. |
I'm trying to parse an XML which contains a value with big numbers, but it is returning me:
\~only examples\~
```
2.9999903656804e+43
```
but the expected is:
```
2999999656804999999550999008999491999425661
```
my actual config is:
```ts
const parser = new XMLParser({
ignoreAttributes: false,
numberParseOptions: {
// with this option, if the value has leading zeros, it becomes a string.
leadingZeros: false,
},
});
```
I read the documentation, but didn't find an explanation. |
fast-xml-parser dealing with large numbers |
|javascript|node.js|fast-xml-parser| |
null |
Make sure to call `setDisplayHomeAsUpEnabled(true)` and `setSupportActionBar(toolbar)` inside `onCreate()`.
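For example, a minimal sketch (assuming your activity extends AppCompatActivity; the layout and toolbar ids are assumptions):

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main); // layout name assumed
    Toolbar toolbar = findViewById(R.id.toolbar); // androidx.appcompat.widget.Toolbar, id assumed
    setSupportActionBar(toolbar);
    if (getSupportActionBar() != null) {
        getSupportActionBar().setDisplayHomeAsUpEnabled(true); // shows the back arrow
    }
}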
@Override
public boolean onOptionsItemSelected(MenuItem item) {
switch (item.getItemId()) {
case android.R.id.home:
onBackPressed();
return true;
}
return super.onOptionsItemSelected(item);
}
Add the above code to close the screen when the back icon is pressed. |
The following are the steps I use to deploy SQL Server using a DevOps pipeline via a DACPAC file, for your reference.
1. Create a SQL Server DB project (SSDT) in VS and import my DB into the project. You can make changes to the DB in the project.
Right click on the project -> Import -> Database
[![enter image description here][1]][1]
2. Create a repo in Azure DevOps and push the changes.
[![enter image description here][2]][2]
3. Build the project to get the DACPAC file in a pipeline.
```
trigger:
- none
pool:
vmImage: 'windows-latest'
variables:
solution: '**/*.sln'
buildPlatform: 'Any CPU'
buildConfiguration: 'Release'
steps:
- task: VSBuild@1
inputs:
solution: '$(solution)'
platform: '$(buildPlatform)'
configuration: '$(buildConfiguration)'
- task: CopyFiles@2
inputs:
SourceFolder: '$(system.defaultworkingdirectory)'
Contents: '**\bin\$(BuildConfiguration)\**'
TargetFolder: '$(build.artifactstagingdirectory)'
- task: PublishPipelineArtifact@1
inputs:
targetPath: '$(build.artifactstagingdirectory)'
artifact: 'drop'
publishLocation: 'pipeline'
```
4. Create a classic release pipeline to deploy the DACPAC file to your target DB.
- Select the build pipeline above as an artifact.
- Deploy using [SQL Server database deploy](https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/reference/sql-dacpac-deployment-on-machine-group-v0?view=azure-pipelines) task. (For Azure SQL, you can use [Azure SQL Database deployment](https://learn.microsoft.com/en-us/azure/devops/pipelines/targets/azure-sqldb?view=azure-devops&tabs=yaml) task. For Azure SQL Data Warehouse, you can use [Azure SQL Data Warehouse deployment](https://marketplace.visualstudio.com/items?itemName=ms-sql-dw.SQLDWDeployment) task.) A YAML sketch for the Azure SQL case is shown after this list.
[![enter image description here][3]][3]
- Ensure the user you use to log in has enough permissions to make the changes you define in your SSDT project.
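If you prefer YAML over the classic release pipeline for the Azure SQL case, a minimal sketch (the service connection, server, database, and variable names are assumptions) might look like:
```
- task: SqlAzureDacpacDeployment@1
  inputs:
    azureSubscription: 'MyServiceConnection' # assumed
    ServerName: 'myserver.database.windows.net' # assumed
    DatabaseName: 'MyDb' # assumed
    SqlUsername: '$(sqlUser)'
    SqlPassword: '$(sqlPassword)'
    DacpacFile: '$(Pipeline.Workspace)/drop/**/*.dacpac'
```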
[1]: https://i.stack.imgur.com/hOPdX.png
[2]: https://i.stack.imgur.com/5S3IV.png
[3]: https://i.stack.imgur.com/h0GfI.png |
I am using jsPDF to build an export functionality, but the scripts required for it are not enqueuing. The following is the function in which the handling of scripts is done:
```
public function admin_scripts( $page ) {
wp_enqueue_script( 'my-plugin-pro', plugins_url( 'assets/js/my-plugin-pro.js', dirname( __FILE__ ) ), false, ANALYTIFY_PRO_VERSION );
wp_localize_script( 'my-plugin-pro', 'MY_Plugin', array(
'ajaxurl' => admin_url( 'admin-ajax.php' ),
'exportUrl' => esc_url_raw( add_query_arg( array( 'action' => 'my_plugin_export' ), admin_url( 'admin-ajax.php' ) ) ),
'export_nonce' => wp_create_nonce( 'analytify_export_nonce' ),
) );
}
```
I added the following code inside the function
```
wp_enqueue_script('jspdf', 'https://cdnjs.cloudflare.com/ajax/libs/jspdf/2.5.1/jspdf.umd.min.js', array(), null, true);
wp_enqueue_script('html2canvas', 'https://cdnjs.cloudflare.com/ajax/libs/html2canvas/1.4.1/html2canvas.min.js', array(), null, true);
```
The scripts are present when viewing the page source, but in JavaScript I get the following error:
**jsPDF library is not available.**
And I am using the code in javascript as below:
```
$(document).on('click', '.my_plugin_export_pdf_btn', function(e) {
e.preventDefault();
const { jsPDF } = window.jspdf;
let doc = new jsPDF('l', 'mm', [1500, 1400]);
let pdfjs = document.querySelector('.analytify_wraper');
doc.html(pdfjs, {
callback: function(doc) {
doc.save("newpdf.pdf");
},
x: 12,
y: 12
});
});
```
How can I fix this? Or how can I enqueue them separately? |
|php|laravel|laravel-5| |
null |
I am about to try to create a simple 2D game using Lua. I have watched a couple of videos and tutorials and also checked the syntax, but I am confused because multiple developers use different approaches to OOP.
Some use metatables; some consider them too complex and create objects using functions that return a table.
And I am lost... It is not straightforward what I should use.
I want to use real OOP like, for example, Java or C# have.
I saw many videos of people creating 2D games where they don't use metatables because of how difficult they are to read and work with.
What shall I choose though? |
Using metatables in Lua or functions for OOP |
|oop|lua|2d| |
One possible explanation for this behavior is that the file gets created at the end of the run. The current directory of the process may at that point be a place you don't expect (or even a directory where the file can't be created). The quickest way to see if this is your problem is to leave out the `-o output.file`. cProfile then writes the profile to stdout. You can also try specifying an absolute path, e.g. `-o /var/tmp/output.file`. |
Your code is using a single-thread executor task which submits another task to the same single-thread executor, and then waits for that sub-task to finish. It is the same as this example, which would print "ONE" and "THREE", and never print "TWO", "cf" or "FOUR":
ExecutorService executor = Executors.newSingleThreadExecutor();
CompletableFuture<Void> future = CompletableFuture.runAsync(() -> {
log("ONE");
// This subtask can never start while current task is still running:
Future<?> cf = executor.submit(() -> log("TWO"));
log("THREE");
try
{
// Blocks forever if run from single thread executor:
log("cf"+cf.get());
}
catch (Exception e)
{
throw new RuntimeException("It failed");
}
log("FOUR");
}, executor);
The subtask could only run after the main task exits - if "FOUR" was printed - but the main task is stuck awaiting `cf.get()`.
The solution is easy - you should run the initial task on a separate thread or executor from the one used by the sub-tasks, or chain each component of the sub-tasks with `.thenRun(...)`.
CompletableFuture<Void> future = CompletableFuture.runAsync(subtask1, executor)
.thenRun(subtask2);
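Or, as a sketch, give the sub-tasks their own executor so they never compete with the outer task for the single thread (reusing the `log` helper from above):

ExecutorService outer = Executors.newSingleThreadExecutor();
ExecutorService inner = Executors.newSingleThreadExecutor();
CompletableFuture<Void> future = CompletableFuture.runAsync(() -> {
    // The sub-task now runs on a different thread, so get() cannot deadlock
    Future<?> cf = inner.submit(() -> log("TWO"));
    try
    {
        log("cf" + cf.get());
    }
    catch (Exception e)
    {
        throw new RuntimeException("It failed");
    }
}, outer);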
|
I am building a website with 4 pages.
The homepage has 4 scripts injected, 3 of which execute once, but the last one loops on a 1.5-second interval.
When the route changes to another page, the looping script stays in memory and keeps executing, printing in the console that it can't find a specific element, as expected.
I have tried to force a redirect where the entire page is requested.
Although I don't have a problem with 1 page doing this, and adding a small delay, I believe there has to be a better way to do this.
I am using the NextJS Script component to load the scripts. When changing pages, the scripts remain in the HTML body. Even after unmounting them from the document, they remain running in memory. |
I'm (unfortunately) suspecting that what I'd like to do is impossible, so I'm very open to best-practice workarounds.
In short, I've got a setup like the following:
```rust
pub struct OffsetMod<'a> {
kind: OffsetKind,
composition: ChemicalComposition<'a>,
}
```
Most of the time, that struct should indeed be the owner of that `ChemicalComposition`, but sometimes this data is stored differently, and the struct is peeled apart into something approximating `HashMap<ChemicalComposition<'a>, OffsetKind>`, then I have a method that iterates over this `HashMap` and returns a series of reconstituted `OffsetMod<'a>`s. Well, sorta `OffsetMod<'a>`s... Obviously the only way I could get those owned `ChemicalComposition`s back (without some very expensive `.clone()`ing) is to drain / move the `HashMap`, but that's not something I want to do.
Downstream, I actually only need a reference to those `ChemicalCompositions`, so really what I want to return is a sort of borrowed `OffsetMod<'a>` β something like:
```rust
pub struct BorrowedOffsetMod<'a> {
kind: OffsetKind,
composition: &'a ChemicalComposition<'a>,
}
```
The issue, however, with creating that second struct, is that I'd now have to duplicate all of the methods / trait impls I have for `OffsetMod` for `BorrowedOffsetMod`!
Elsewhere in my code, when encountering a similar case of needing a struct to come in a form that borrows its contents, and another that doesn't, I wrote something like this:
```rust
pub struct Target<S = String> {
pub group: S,
pub location: Option<S>,
pub residue: Option<S>,
}
```
By default, it owns its data (`String`s, in this case), but my impl blocks are written as:
```rust
impl<S: Borrow<str>> Target<S> {
// β snip β
}
```
In this way, I can typically assume that `Target` owns its data, but I can also "borrow" `Target` as `Target<&'a str>`, and all of the same methods will work fine because of the `Borrow<str>` bound.
**Now here comes my very troubling issue:**
If I try the same with my `OffsetMod` struct, the pain begins:
```rust
pub struct OffsetMod<'a, C = ChemicalComposition<'a>> {
kind: OffsetKind,
composition: C,
}
```
Which hits me with the ever-awful
```
error[E0392]: parameter `'a` is never used
help: consider removing `'a`, referring to it in a field, or using a marker such as `PhantomData`
```
Now, to open with the "best" solution I've found so far, I'm a bit perplexed at why this is so easy and works fine:
```rust
pub type OffsetMod<'a> = OffsetModInner<ChemicalComposition<'a>>;
struct OffsetModInner<C> {
kind: OffsetKind,
composition: C,
}
```
When that feels quite a lot like what I'd expect `OffsetMod<'a, C = ChemicalComposition<'a>>` to do behind the scenes... The reason this solution isn't quite cutting it for me, is that it now "splits" my type in two: my parameters may be able to use `OffsetMod<'a>` just fine, but for the borrowed version, I can't do `OffsetMod<&ChemicalComposition<'a>>`, but I need to use that second "hidden" `OffsetModInner<&ChemicalComposition<'a>>` or make another alias like `BorrowedOffsetMod<'a>`. Whilst this is the best I can land on, it still feels a bit messy.
Now, I understand that Rust wants this `'a` bound to show up in a field somewhere, so that if you're _not_ using the default type, it still knows how to populate that lifetime. Perhaps we could claim that `C` always lives as long as `'a`, since `C` always shows up in the `composition` field.
```rust
pub struct OffsetMod<'a, C: 'a = ChemicalComposition<'a>> {
kind: OffsetKind,
composition: C,
}
```
But no dice there either, same error as before. Though I've tried a multitude of other things, I've not found one I'm pleased with:
1. `PhantomData` looks and feels hacky, and I'm not certain if it's semantically correct either? I don't know, perhaps I'm more open to this one than I thought...
2. Something like `Cow<'a, ChemicalComposition<'a>>` feels a bit nasty and adds runtime overhead where there really doesn't need to be. I'll only ever need references (no need to copy), it's just that these `OffsetMod` structs are sometimes the only place the underlying data has to live (e.g. for `ChemicalComposition`s that _aren't_ stored in that `HashMap`)!
3. I tried some unholy GAT stuff with a `Borrow`esque trait that looked like this:
```rust
pub trait NestedBorrow<T> {
type Content<'a>
where
Self: 'a;
fn borrow(&self) -> &T;
}
impl<T> NestedBorrow<T> for T {
type Content<'a> = T
where
Self: 'a;
fn borrow(&self) -> &T {
self
}
}
impl<T> NestedBorrow<T> for &T {
type Content<'a> = T
where
Self: 'a;
fn borrow(&self) -> &T {
&**self
}
}
#[derive(Clone, PartialEq, Eq, Hash, Debug, Serialize)]
pub struct OffsetMod<'a, C: NestedBorrow<ChemicalComposition<'a>> + 'a = ChemicalComposition<'a>> {
kind: OffsetKind,
composition: C::Content<'a>,
}
```
But that leaks `NestedBorrow` into the public API for `OffsetMod` (see https://stackoverflow.com/a/66369912), caused problems with `&ChemicalComposition<'_>` not being accepted by `OffsetKind` (it was always looking for the owned `ChemicalComposition<'_>` version, something I never figured out), and it's generally revolting.
What do you think, is it possible to do better than the `type` aliases? Is there some Rust pattern that's better suited for structs that sometimes own, and sometimes borrow data, whilst keeping my `impl` blocks unified with `impl<T: Borrow<ChemicalComposition<'a>>> OffsetMod<'a, T>`?
You need to provide a User-Agent HTTP header.
For example:
    import requests
    import json

    url = "https://www.tablebuilder.singstat.gov.sg/api/table/resourceid"

    params = {
        "isTestApi": "true",
        "keyword": "manufacturing",
        "searchoption": "all"
    }

    headers = {
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36"
    }

    with requests.get(url, params=params, headers=headers) as response:
        response.raise_for_status()
        print(json.dumps(response.json(), indent=2))
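If you make several requests, you can also set the header once on a `requests.Session` (standard `requests` API; this reuses the `url` and `params` from above):

    import requests

    session = requests.Session()
    session.headers["User-Agent"] = (
        "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 "
        "(KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36"
    )

    response = session.get(url, params=params)
    response.raise_for_status()
    print(response.json())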
In case performance matters and you do not want to reinvent the wheel (or deal with pitfalls and issues such as "*what if there are duplicate keys in either list?*"):

The [`Join-Object`](https://www.powershellgallery.com/packages/Join) cmdlet (see also [In Powershell, what's the best way to join two tables into one?](https://stackoverflow.com/questions/1848821/in-powershell-whats-the-best-way-to-join-two-tables-into-one)) is based on the approach [@Mathias R. Jessen](https://stackoverflow.com/users/712649/mathias-r-jessen) mentioned and takes **less than a second** to join the two `csv` files of the given sizes:
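To reproduce the timing below, the module needs to be installed first (assuming the package name matches the linked gallery page):

    Install-Module -Name Join -Scope CurrentUser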
    1..2600 | ForEach-Object {
        [pscustomobject]@{HOST_NAME = 'head{0:00}.com' -f ($_ % 100); IPAddress = 10, 16, [math]::floor($_ / 256), ($_ % 256) -Join '.'; HOST_ID = 'ABI'; SERVER_TYPE = 'WB'}
    } | Export-Csv .\csv1.csv

    1..850 | ForEach-Object {
        [pscustomobject]@{lastboot = Get-Date -f s; IPAddress = 10, 16, [math]::floor(3 * $_ / 256), (3 * $_ % 256) -Join '.'; SystemUpTime = Get-Random 999999; OSType = 'Unix'}
    } | Export-Csv .\csv2.csv

    Measure-Command {
        Import-Csv .\csv1.csv | Join (Import-Csv .\csv2.csv) -On IPAddress -Property HOST_NAME, IPAddress, HOST_ID, SERVER_TYPE, SystemUpTime | Export-Csv .\Result.Csv
    }
* ***Note1:*** to keep the PowerShell pipeline flowing (which preserves memory), you should try to avoid assigning the file content to a variable (or using brackets `(...)`)
* ***Note2:*** Putting the larger file at the left side of the `Join-Object` command is usually a little faster than the other way around.
Hello awesome developers! As you can see below, I'd like to ensure my application starts when the phone boots up. Everything else is functioning correctly: I receive the toast and background processes are working. However, my application fails to launch. What could be the issue, and how can I fix it? Thank you.
```
class BootCompleteReceiver : BroadcastReceiver() {
@RequiresApi(Build.VERSION_CODES.O)
override fun onReceive(context: Context?, intent: Intent?) {
if (intent?.action == Intent.ACTION_BOOT_COMPLETED) {
Toast.makeText(context, "Boot Completed", Toast.LENGTH_LONG).show()
val mainIntent = Intent(context, MainActivity::class.java)
mainIntent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
context?.startForegroundService(mainIntent)
}
}
}
```
```
<uses-permission android:name="android.permission.RECEIVE_BOOT_COMPLETED" />
<receiver
android:name=".BootCompleteReceiver"
android:exported="false"
android:permission="android.permission.RECEIVE_BOOT_COMPLETED">
<intent-filter android:priority="1000">
<action android:name="android.intent.action.BOOT_COMPLETED" />
<category android:name="android.intent.category.DEFAULT" />
</intent-filter>
</receiver>
```
```
class MainActivity : ComponentActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContent {
AutoBootTheme {
// A surface container using the 'background' color from the theme
Surface(
modifier = Modifier.fillMaxSize(),
color = MaterialTheme.colorScheme.background
) {
Greeting("Android")
}
}
}
}
}
@Composable
fun Greeting(name: String, modifier: Modifier = Modifier) {
Text(
text = "Hello $name!",
modifier = modifier
)
}
@Preview(showBackground = true)
@Composable
fun GreetingPreview() {
AutoBootTheme {
Greeting("Android")
}
}
``` |
> Please make sure that the backend API sets the file name in the response header. Usually, the file name is included in the content-disposition header.

- And if you are using CORS, make sure that you are exposing that header.

I found out, thanks to [this][1] answer and also [this][2] one, that the CORS policy doesn't allow Angular to see all the headers. I'm using Java as the backend, and this is how I solved the issue:
Angular:
    this.myService.downloadBlob(BlobID).subscribe(
      (response: HttpResponse<Blob>) => {
        // same code as andsilver
        // Extract the content-disposition header
        const contentDisposition = response.headers.get('content-disposition');
        // Rest of your code to extract the filename using contentDisposition
        // Extract the file name
        const filename = contentDisposition
          .split(';')[1]
          .split('filename')[1]
          .split('=')[1]
          .trim();
        this.downloadBlob(new Blob([response.body], { type: 'text/plain' }), filename);
      }
    );
Service Angular:
    downloadBlob(BlobID: number): Observable<HttpResponse<Blob>> {
      return this.http.get(myUrl + '?BlobID=' + BlobID, { observe: 'response', responseType: 'blob' });
    }
Server side (Java in my case):
    @GetMapping("/downloadBlob")
    public ResponseEntity<byte[]> downloadBlob(@RequestParam("BlobID") BigDecimal BlobID) {
        BlobAndName blobAndName = service.getDocumento(BlobID);
        byte[] blobAsBytes = Utils.blob2ByteArray(blobAndName.getBlob());
        HttpHeaders head = new HttpHeaders();
        head.add("content-disposition", "attachment; filename=" + blobAndName.getName());
        ArrayList<String> exposedHead = new ArrayList<>();
        exposedHead.add("content-disposition");
        head.setAccessControlExposeHeaders(exposedHead);
        return ResponseEntity.ok().headers(head).body(blobAsBytes);
    }
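As a side note, the `split` chain in the Angular snippet breaks if the server quotes the filename (`filename="report.txt"`). A slightly more defensive extraction, using only plain TypeScript (the function name is made up), could look like this:

    // Sketch: tolerate both filename=report.txt and filename="report.txt"
    function filenameFromDisposition(contentDisposition: string | null): string | null {
      if (!contentDisposition) {
        return null;
      }
      const match = /filename=(?:"([^"]+)"|([^;]+))/i.exec(contentDisposition);
      return match ? (match[1] ?? match[2]).trim() : null;
    }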
I must say that, due to security concerns from my colleagues (which were not explained to me, so I don't even know what they are), this is not the solution I implemented. Instead, I got the name and the blob in separate methods, as I had the chance to recycle a previous method that already returned other information I needed and which now gives me the blob name too.
[1]: https://stackoverflow.com/questions/52443706/angular-httpclient-missing-response-headers
[2]: https://stackoverflow.com/questions/29934703/access-control-expose-headers-configuration-for-custom-response-headers-angularj |
I want to validate the text of a toast message. I already used the accessibility id, but I'm still facing this issue: `element ("~toast message") still not displayed after 10000ms`.

The accessibility id given by the dev is: `toast message`
|
How to verify a toast message on iOS using Appium
|toast|webdriver-io|appium-ios| |
A common method of creating private methods (of sorts) in JavaScript is this:
Class = function (arg0, arg1) {
var private_member = 0;
var privateMethod = function () {
return private_member;
};
}
The above example could also have been done with a function declaration instead of a function expression:
Class = function (arg0, arg1) {
var private_member = 0;
function privateMethod () {
return private_member;
}
}
In what ways are these two ways of declaring a private method different? (Outside of one being an expression and the other being a declaration)
For example, the expression obviously generates a new function every time the constructor gets called. Does this happen with the function declaration as well, or does it only get evaluated once because function declarations are evaluated at parse time? (As opposed to function expressions which are evaluated at execution time, you get the idea.)
I'm aware that JavaScript doesn't *really* have private methods. I'm using the term loosely.
Not a duplicate of [var functionName = function() {} vs function functionName() {}][1], if anything a duplicate of [function expression vs function declaration with regard to javascript 'classes'][2]. My question isn't about the difference between function expressions and function declarations in general, but their differences specifically in regards to "private members" in JavaScript "classes".
[1]: https://stackoverflow.com/questions/336859/var-functionname-function-vs-function-functionname
[2]: https://stackoverflow.com/questions/6327213/function-expression-vs-function-declaration-with-regard-to-javascript-classes |
Is there a way to unload a script from memory in NextJS? |
|javascript|next.js|frontend|javascript-framework| |
null |
|php|woocommerce| |
null |
|r|ggplot2|label|facet-wrap| |
null |
I am working on a Java application where I need to execute a batch of SQL queries using JDBC's PreparedStatement. I have encountered an issue where the PreparedStatement seems to clear the batch every time it is initialized with a new query. Here is the relevant portion of my code:
```java
@Transactional
public int[] executeBatchQuery(String branchCode, List<String> queries, List<List<String>> parametersList) throws Exception {
Map<String, String> errorMap = new HashMap<>();
int[] count = {};
if (queries.size() != parametersList.size()) {
logger.info("Both lists length is not equal");
return null;
}
PreparedStatement pstmt = null;
Connection connection = null;
try {
connection = dataSource.getConnectionByBranchCode(branchCode);
for (int i = 0; i < queries.size(); i++) {
String query = queries.get(i);
List<String> parameters = parametersList.get(i);
pstmt = connection.prepareStatement(query);
// Set parameters for the prepared statement
for (int j = 0; j < parameters.size(); j++) {
pstmt.setString(j + 1, parameters.get(j));
}
pstmt.addBatch();
}
if (pstmt != null) {
count = pstmt.executeBatch();
}
else {
throw new SQLException();
}
}
catch (SQLException exception) {
logger.info("Roll back successfully executed");
logger.info("printing StackTrace: ");
exception.printStackTrace();
errorMap.put("mBoolean", "true");
errorMap.put("errorMessage", exception.getMessage());
}
finally {
try {
if (pstmt != null) {
pstmt.close();
}
if (connection != null) {
connection.close();
}
} catch (SQLException e) {
e.printStackTrace();
}
}
return count;
}
```
> In the list of queries, each query is different, with varying columns, WHERE clauses, and sometimes different tables
I have noticed that every time I initialize the PreparedStatement with a new query, it clears the batch. This behavior is not desirable for my application, as I need to execute multiple queries in a batch. Can someone suggest a workaround or an alternative approach to prevent the PreparedStatement from clearing the batch every time it is initialized with a new query?
I attempted to execute a batch of SQL queries using JDBC's PreparedStatement in a Java application. The goal was to add multiple queries to the batch and execute them together for efficiency.
Specifically, I initialized a PreparedStatement object and iterated through a list of queries, adding each query to the batch using the addBatch() method. I then expected the PreparedStatement to retain all the queries in the batch, allowing me to execute them in one go.
However, despite adding multiple queries to the batch, I observed that only the last query added to the batch was being executed. It seems that each time the PreparedStatement was initialized with a new query, it cleared the batch, resulting in only the latest query being executed.
I expected all the queries in the batch to be executed sequentially, but this was not the case. Instead, only the last query in the batch was executed, disregarding the previous queries. |
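For reference, the kind of restructuring I'm wondering about would keep one PreparedStatement per distinct SQL string, so that each statement accumulates its own batch. A hypothetical, untested sketch (using `java.sql.*` and `java.util.*` imports, and the same variable names as my code above):

```java
// Hypothetical, untested sketch: one batch per distinct SQL string,
// so addBatch() accumulates instead of the statement being replaced.
private int[][] executeGroupedBatches(Connection connection,
                                      List<String> queries,
                                      List<List<String>> parametersList) throws SQLException {
    Map<String, PreparedStatement> statements = new LinkedHashMap<>();
    try {
        for (int i = 0; i < queries.size(); i++) {
            PreparedStatement ps = statements.get(queries.get(i));
            if (ps == null) {
                ps = connection.prepareStatement(queries.get(i));
                statements.put(queries.get(i), ps);
            }
            List<String> parameters = parametersList.get(i);
            for (int j = 0; j < parameters.size(); j++) {
                ps.setString(j + 1, parameters.get(j));
            }
            ps.addBatch();
        }
        int[][] results = new int[statements.size()][];
        int k = 0;
        for (PreparedStatement ps : statements.values()) {
            results[k++] = ps.executeBatch(); // one result array per distinct SQL
        }
        return results;
    } finally {
        for (PreparedStatement ps : statements.values()) {
            ps.close();
        }
    }
}
```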
How to automatically launch an app when an Android phone boots up in Kotlin
|java|android|kotlin|android-jetpack-compose|android-jetpack| |
null |
After you have applied your index, you need to call MyClientDataSet.First to go to the start of the records, to be sure you are looping over them all. At the time you apply your index, the current record is re-sorted and is no longer the first in the dataset, so you are starting the loop from halfway through and missing all the records above it.
I have trouble using PEAR on a local file while trying to export data to an Excel file.
This is the stack trace:

```
Notice: tempnam(): file created in the system's temporary directory in /Applications/XAMPP/xamppfiles/htdocs/_test/vendor/pear/ole/OLE/PPS/File.php on line 105

Fatal error: Uncaught ValueError: Path cannot be empty in /Applications/XAMPP/xamppfiles/htdocs/_test/vendor/pear/ole/OLE/PPS/File.php:106
Stack trace:
#0 /Applications/XAMPP/xamppfiles/htdocs/_test/vendor/pear/ole/OLE/PPS/File.php(106): fopen('', 'w+b')
#1 /Applications/XAMPP/xamppfiles/htdocs/_test/vendor/pear/spreadsheet_excel_writer/Spreadsheet/Excel/Writer/Workbook.php(599): OLE_PPS_File->init()
#2 /Applications/XAMPP/xamppfiles/htdocs/_test/vendor/pear/spreadsheet_excel_writer/Spreadsheet/Excel/Writer/Workbook.php(576): Spreadsheet_Excel_Writer_Workbook->_storeOLEFile()
#3 /Applications/XAMPP/xamppfiles/htdocs/_test/vendor/pear/spreadsheet_excel_writer/Spreadsheet/Excel/Writer/Workbook.php(235): Spreadsheet_Excel_Writer_Workbook->_storeWorkbook()
#4 /Applications/XAMPP/xamppfiles/htdocs/_test/content/xls/xls_common.php(292): Spreadsheet_Excel_Writer_Workbook->close()
#5 /Applications/XAMPP/xamppfiles/htdocs/_test/content/showcontent.php(439): require_once('/Applications/X...')
#6 {main}
  thrown in /Applications/XAMPP/xamppfiles/htdocs/_test/vendor/pear/ole/OLE/PPS/File.php on line 106
```

It seems to be a permission case but I can't get rid of it.
I have modified the folder's permissions in pear/ole/OLE/PPS to allow both reading and writing, but it didn't work. Besides, I'm not sure this is the correct path, nor the right thing to do.
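I also wondered whether pointing the writer at a temp directory I know is writable would help; something like this (I'm not sure `setTempDir` is even the right call here, I'm going off the package docs):

    $workbook = new Spreadsheet_Excel_Writer();
    // Hypothetical attempt: use a temp dir the web server can definitely write to
    $workbook->setTempDir('/Applications/XAMPP/xamppfiles/temp');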
Any help would be greatly appreciated.
I'm using XAMPP 8.2 with PHP 8 and MySQL 8 on a Mac.
PEAR Spreadsheet Excel issue when writing a local file
|php|export-to-excel|pear| |
null |
Try to find missing curly braces `{}`, unclosed comments `/* ... */`, or misplaced code in your file.
Current API
```javascript
const { stageid, subjectid, boardid, scholarshipid, childid } = req.params;
edcontentmaster
.aggregate([
{
$match: {
stageid: stageid,
subjectid: subjectid,
boardid: boardid,
// scholarshipid: scholarshipid,
},
},
{
$addFields: {
convertedField: {
$cond: {
if: { $eq: ["$slcontent", ""] },
then: "$slcontent",
else: { $toInt: "$slcontent" },
},
},
},
},
{
$sort: {
slcontent: 1,
},
},
{
$lookup: {
from: "edchildrevisioncompleteschemas",
let: { childid: childid, subjectid:subjectid,topicid:"$topicid" },
pipeline: [
{
$match: {
$expr: {
$and: [
{
$eq: [
"$childid",
"$$childid"
]
},
{
$in: [
"$$subjectid",
"$subjectDetails.subjectid"
]
},
{
$in: [
"$$topicid",
{
$reduce: {
input: "$subjectDetails",
initialValue: [],
in: {
$concatArrays: [
"$$value",
"$$this.topicDetails.topicid"
]
}
}
}
]
}
]
}
}
},
{
$project: {
_id: 1,
childid: 1
}
}
],
as: "studenttopic",
},
},
{
$group: {
_id: "$topic",
topicimage: { $first: "$topicimage" },
topicid: { $first: "$topicid" },
sltopic: { $first: "$sltopic" },
studenttopic: { $first: "$studenttopic" },
reviewquestionsets: {
$push: {
id: "$_id",
sub: "$sub",
topic: "$topic",
contentset: "$contentset",
stage: "$stage",
timeDuration: "$timeDuration",
contentid: "$contentid",
studentdata: "$studentdata",
subjectIamge: "$subjectIamge",
topicImage: "$topicImage",
contentImage: "$contentImage",
isPremium: "$isPremium",
},
},
},
},
{
$project: {
_id: 0,
topic: "$_id",
topicimage: 1,
topicid: 1,
sltopic: 1,
studenttopic:1,
contentid: "$contentid",
reviewquestionsets: 1,
},
},
])
.sort({ sltopic: 1 })
.collation({
locale: "en_US",
numericOrdering: true,
})
```
In this API I am getting subjectid from the user. Now the user will only pass stageid, boardid, and scholarshipid. From these values I first need to get all subjects from the 'subject' table; then, for each subject, the above query stays the same: each subject will have all the topics related to that subjectid, each topic will have all the content related to its topicid, and each topic should have child data just like in the above query.
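What I have in mind is roughly the following (hypothetical, untested sketch; `edsubject`, `subjectname`, and `runExistingAggregation` are assumed names, the last one being the pipeline above factored into a helper function):

```javascript
// Hypothetical sketch: fetch the subjects for the given filters first,
// then run the existing aggregation once per subject.
const subjects = await edsubject.find({ stageid, boardid, scholarshipid }).lean();

const results = await Promise.all(
  subjects.map(async (subject) => ({
    subjectid: subject.subjectid,
    subject: subject.subjectname,
    topics: await runExistingAggregation({
      stageid,
      boardid,
      childid,
      subjectid: subject.subjectid,
    }),
  }))
);
```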
Preventing PreparedStatement from Clearing Batch in Java JDBC |
|java|jdbc|prepared-statement|batch-processing| |