Error Required Param value not set
Code:
SELECT id_vst,
CASE
WHEN id_sys = 155 THEN CAST('Visible' AS varchar(80))
ELSE CAST(' ' AS varchar(80))
END AS result
FROM DOC_ACC_CNT
|
|sql|fastreport| |
I'm working on a C/C++ project built with CMake...
During the build I have to generate configuration headers and sources from a configuration file written in JSON.
I achieved the desired result by using a JavaScript script, invoked from CMake with this command:
```
add_custom_target( BuildStructure ALL node ${CMAKE_CURRENT_SOURCE_DIR}/util/buildProjectStructCfg.js ...)
```
OK, it is working fine, but...
Is it a portable solution?
Are there better solutions to try?
I adopted JavaScript because, for me, it is the best way to handle JSON files...
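One direction I am considering for portability is to locate node explicitly (a minimal sketch; `find_program` is standard CMake, `NODE_EXECUTABLE` is just a variable name I chose, and the script path is from my project):
```
find_program(NODE_EXECUTABLE node)
if(NOT NODE_EXECUTABLE)
  message(FATAL_ERROR "node is required to generate the project structure")
endif()
add_custom_target(BuildStructure ALL
  ${NODE_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/util/buildProjectStructCfg.js)
```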
Thank you |
CMAKE running Javascript utility |
|javascript|json|cmake| |
I'm developing a web app using React.js and Next.js. When I run it in dev mode, my RAM usage hits 100%. I have 12 GB of RAM, and almost 6 GB is used by the Node.js runtime. My colleague has 24 GB of RAM, and when he runs it, Node.js uses almost 15 GB. When a modification is made to the code, the Node.js runtime hits 100% and everything else crashes.
The versions are as follows (as per package.json);
"next": "^13.2.1",
"react": "^18.2.0"
In the terminal, the below error is displayed.
`<w> [webpack.cache.PackFileCacheStrategy] Caching failed for pack: RangeError: Array buffer allocation failed`
[Terminal log](https://i.stack.imgur.com/bSL96.png)
I've added the following code block to next.config.js:
```
webpack(config, { webpack }) {
config.infrastructureLogging = { debug: /PackFileCache/ }; // Define infrastructureLogging inside the webpack function
return config;
}
```
According to those logs, when the web app starts, it creates a cache file whose size is almost 10 GB. Every time a change is made, the cache file is recreated.
[Infrastructure Log][1]
[1]: https://i.stack.imgur.com/wy9aS.png |
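To rule the cache out, one change worth testing (a sketch; `cache: false` is webpack's documented switch for disabling its cache, including the filesystem PackFileCache):
```
webpack(config, { dev }) {
  if (dev) {
    config.cache = false; // no PackFileCache is written in dev mode
  }
  return config;
}
```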
While comparing, convert var1 to string:
var1 = int(input("enter a number:"))
num = str(var1)[::-1]
if str(var1) == num:
print("its a palindrome")
else:
print("its not a palindrome") |
SQL Data entry - finding sequence to enter info |
|sql|sql-server|sql-server-2008| |
Compare elements by their first coordinates; if the first coordinates are equal, sort by the second coordinates.
Arrays.sort(contest,(a,b)->{
if(a[0]==b[0]){
return Integer.compare(a[1],b[1]);
}
return Integer.compare(a[0],b[0]);
});
This works for me. |
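For what it's worth, the same ordering can be written with comparator combinators (an equivalent sketch; the example data is my own, and `Integer.compare` in the original already avoids the overflow risk of subtraction-based comparators):
```
import java.util.Arrays;
import java.util.Comparator;

class SortDemo {
    public static void main(String[] args) {
        int[][] contest = {{1, 2}, {1, 1}, {0, 5}}; // example data
        // sort by first coordinate, then by second on ties
        Arrays.sort(contest, Comparator.<int[]>comparingInt(a -> a[0])
                                       .thenComparingInt(a -> a[1]));
        System.out.println(Arrays.deepToString(contest)); // [[0, 5], [1, 1], [1, 2]]
    }
}
```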
I had to get the data from a Choice field using a lookup column. I followed the tutorial below to resolve the issue.
The trick is simple: create a new Calculated Column and copy the data from the original Choice column into the newly created column. You can then use the newly created column instead of the Choice column.
Please see this article: https://www.sharepointdiary.com/2017/03/sharepoint-lookup-on-choice-field.html
I'm currently working on a Kotlin project to automate the upkeep of emergency location data. As part of that project I need to compare two collections of Json objects to determine what data needs to be added/updated/deleted. Collection A is the collection of existing data. Collection B is the incoming data from the job. While the objects in the collections are not identical, they have a unique id that can be used to link them.
For each item that exists in A but not in B I need to make a delete call. For each item that is in B but not in A I need to do a create. For each item that exists in A and B I need to do an update.
I need to figure out a way to determine the needed operations, each of which will involve an HTTP request to a 3rd party API, and execute the needed operations as efficiently as possible. I know I could simply brute force the problem by iterating over all the items in each collection. However since this is going to be part of an AWS Lambda, I don't think that is going to cut it.
What is the most efficient approach to a problem like this using Kotlin?
**Update:**
I ended up going with the solution from @broot, however it required an extra step. Since the objects in each list are from two different classes, I needed to create an interface to define their common properties. When I was done, my classes looked like this:
interface HasUid {
    val uid: String
}
data class OldData(val id: String, val name: String, val mac: String) : HasUid {
    override val uid: String
        get() = name
}
data class NewData(val identifier: String, val display_name: String, val mac: String) : HasUid {
    override val uid: String
        get() = display_name
}
and I called the function with
val(toDelete,toAdd,toUpdate) = diffSetsBy(oldDataList, newDataList, HasUid::uid)
println("to update: " + toUpdate.size)
println("to add: " + toAdd.size)
println("to delete: " + toDelete.size) |
Reading bytes from a stream does not guarantee that it will read _all_ the bytes you request. try `readNBytes()` instead. (or you can use `readAllBytes()` as mentioned by @DavidConrad). |
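A minimal illustration (a sketch; `readExactly` is my own helper name, and `readNBytes` requires Java 11+):
```
import java.io.IOException;
import java.io.InputStream;

class StreamUtil {
    // InputStream.read(byte[]) may return fewer bytes than requested;
    // readNBytes(len) keeps reading until len bytes arrive or EOF is hit
    static byte[] readExactly(InputStream in, int len) throws IOException {
        byte[] buf = in.readNBytes(len);
        if (buf.length < len) {
            throw new IOException("stream ended after " + buf.length + " of " + len + " bytes");
        }
        return buf;
    }
}
```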
I have an API in C#, and with Vue.js I am trying to upload an image by calling the API. I tried with fetch and axios, and nothing changes; I also tried the Content-Type headers application/json, application/octet-stream, and multipart/form-data. I receive a 500 error.
```
async uploadImage() {
try {
const formData = new FormData();
formData.append('image', this.newActivity.selectedImage);
const response = await axios.post('/api/v1/Images', formData, {
headers: {
'Content-Type': 'application/json'
}
});
handleImageChange(event) {
const file = event.target.files[0];
this.newActivity.selectedImage = file;
const reader = new FileReader();
reader.onload = () => {
this.newActivity.selectedImageURL = reader.result;
};
reader.readAsDataURL(file);
},
```
```
[HttpPost]
public async Task<ActionResult> SetImages(ImagesModel imagesModel)
{
var currentimage = await _imagesRepository.FindByIdAsync(imagesModel.Id);
if (currentimage == null)
{
ImagesEntity imagesEntity = new()
{
Name = imagesModel.Name ?? string.Empty,
CategoriesId = imagesModel.Category_id,
Created_by = imagesModel.Modified_by ?? string.Empty,
Created_date = DateTimeOffset.Now,
Description = imagesModel.Description ?? string.Empty,
Blob = imagesModel.Blob ?? string.Empty,
Path = imagesModel.Path ?? string.Empty,
Url = imagesModel.Url ?? string.Empty,
Active = imagesModel.Active,
};
await _imagesRepository.AddAsync(imagesEntity);
await _imagesRepository.SaveAsync();
return Ok(imagesEntity);
```
|
|c++|vector|floating-point|operator-overloading| |
* Formula that could work in a table, please.
I have some values 1 and 2 in a range of 7 columns, and I want to arrange the values in the order they appear. For example, if a1=0, b1=1, c1=2, d1=1, e1=0, f1=0, g1=1, then I want i1=1, j1=2, k1=1, l1=1, m1=0, n1=0, o1=0. You can also see the image with some examples. Thank you.
I tried with a small function, but it didn't work. |
{"OriginalQuestionIds":[66289122],"Voters":[{"Id":8690857,"DisplayName":"Drew Reese","BindingReason":{"GoldTagBadge":"reactjs"}}]} |
|reactjs| |
My team and I are building an application for our company and we need to use Java Money (JSR-354) and its Reference Implementation to represent monetary values. We are trying to build this application as per the guidelines of DDD. To that end, we wanted to understand if it is acceptable to use value objects such as `javax.money.MonetaryAmount` from inside a domain entity (example `Bill`).
Since this couples the domain model to a third-party library, is this practice considered normal in DDD parlance?
While we are using this arrangement at the moment, we wanted to understand whether it is considered an anti-pattern, and whether it would be ideal to use our own value object to represent monetary values and map it to `javax.money.MonetaryAmount` during persistence (inside adapters). |
Is usage of value objects from third-party libraries in domain entities acceptable as per Domain-Driven Design? |
|java|oop|design-patterns|domain-driven-design|java-money| |
I've written the following code to compare the theoretical alpha = 0.05 with the empirical one from the built-in t.test in RStudio:
```
set.seed(1)
N <- 1000
n <- 20
k <- 500
poblacion <- rnorm(N, 10, 10) # population to sample from
mu.pob <- mean(poblacion)
sd.pob <- sd(poblacion)
p <- vector(length=k)
for (i in 1:k) {
muestra <- poblacion[sample(1:N, n)]
p[i] <- t.test(muestra, mu=mu.pob)$p.value
}
a_teo <- 0.05
a_emp <- length(p[p < a_teo])/k
sprintf("alpha_teo = %.3f <-> alpha_emp = %.3f", a_teo, a_emp)
```
And it works, printing both the theoretical and empirical values. Now I want to make it more general, for different values of 'n', so I wrote this:
```
set.seed(1)
N <- 1000
n <- 20
k <- 500
z <-c()
for (i in n){
poblacion <- rnorm(N, 10, 10)
mu.pob <- mean(poblacion)
sd.pob <- sd(poblacion)
p <- vector(length=k)
for (j in 1:k){
muestra <- poblacion[sample(1:N, length(n))]
p[j] <- t.test(muestra, mu = mu.pob)$p.value
}
a_teo = 0.05
a_emp = length(p[p<a_teo])/k
append(z, a_emp)
print(sprintf("alpha_teo = %.3f <-> alpha_emp = %.3f", a_teo, a_emp))
}
plot(a_teo, z)
```
Finally, I want to plot in one single graph the empirical (a_emp) values against the 0.05. I've made a 'z' vector to store the empirical values, but I get only one point in the graph. Any ideas? |
Linux users can set the `DOTNET_ROOT` environment variable to point to their .NET SDK installation directory:
**1. Choose an Editing Method:**
There are two main ways to set environment variables in Linux:
* **Editing Shell Profile:** This method makes the change persistent across terminal sessions. It involves modifying a configuration file like `.bashrc` or `.zshrc`.
* **Setting in Terminal:** This method only affects the current terminal session. You can directly set the variable using the `export` command in the terminal.
**2. Editing Shell Profile (Persistent):**
* Open your shell profile using a text editor. Common profiles include:
* Bash: `.bashrc` (located in your home directory)
* Zsh: `.zshrc` (located in your home directory)
* Add the following line to the end of the file, replacing `$HOME` with your actual home directory path:
```bash
export DOTNET_ROOT=$HOME/.dotnet
```
* Save the changes and close the editor.
* **Source the Profile (Optional):**
- To make the changes take effect immediately in the current terminal session, run the following command:
```bash
source ~/.bashrc # For bash profile
source ~/.zshrc # For zsh profile
```
**3. Setting in Terminal (Current Session):**
* Open a terminal window.
* Run the following command, replacing `$HOME` with your actual home directory path:
```bash
export DOTNET_ROOT=$HOME/.dotnet
```
This sets the `DOTNET_ROOT` variable only for the current terminal session. Once you close the terminal, the variable will be unset.
**Important Notes:**
* Make sure to replace `$HOME` with your actual home directory path if it's different from the default location.
* You might need to restart any running applications that rely on the .NET SDK for the changes to take effect fully. |
class GCNModel(nn.Module):
    def __init__(self, in_channels, hidden_dim, out_channels, edge_dim):  # 5, 64, 6, 3
        super(GCNModel, self).__init__()
        self.conv1 = Edge_GCNConv(in_channels=in_channels, out_channels=hidden_dim, edge_dim=edge_dim)
        self.conv2 = Edge_GCNConv(in_channels=hidden_dim, out_channels=out_channels, edge_dim=edge_dim)
        self.batch_norm1 = nn.BatchNorm1d(hidden_dim)
        self.batch_norm2 = nn.BatchNorm1d(out_channels)
        self.linear = nn.Linear(out_channels, out_channels)

    def forward(self, x, edge_index, edge_attr):
        x1 = self.conv1(x, edge_index, edge_attr)
        x1 = self.batch_norm1(x1)
        x1 = F.relu(x1)
        x1 = F.dropout(x1, p=0.1, training=self.training)
        # print("GCNModel x1:", x1)
        # print("GCNModel x1.shape:", x1.shape)  # (24, 64)
        x2 = self.conv2(x1, edge_index, edge_attr)
        x2 = self.batch_norm2(x2)
        x2 = F.relu(x2)
        # print("GCNModel x2:", x2)
        # print("GCNModel x2.shape:", x2.shape)  # (24, 6)
        x2 = F.dropout(x2, p=0.1, training=self.training)
        out = self.linear(x2)
        print("GCNModel out:", out)
        print("GCNModel out.shape:", out.shape)  # (24, 6)
        return out
def train_model(train_loader, val_loader, model, optimizer, output_dim, threshold_value, num_epochs=100, early_stopping_rounds=50, batch_size=4):
    """
    Train the GNN model using k-fold cross-validation.
    Args:
        train_loader: training data loader
        val_loader: validation data loader
        model: GNN model
        optimizer: optimizer
        num_epochs: number of training epochs (default: 100)
        early_stopping_rounds: early stopping rounds (default: 50)
    """
    best_val_loss = float('inf')
    best_accuracy = 0  # Track best validation accuracy
    rounds_without_improvement = 0
    # create the loss function
    criterion = nn.CrossEntropyLoss()
    # criterion = nn.BCEWithLogitsLoss()  # binary classification
    for epoch in range(num_epochs):
        model.train()
        total_loss = 0
        correct = 0
        total = 0
        for data in train_loader:
            optimizer.zero_grad()
            #################### error #################
            out = model(data.x, data.edge_index, data.edge_attr)
            # convert data.y to multi-hot encoding
            one_hot_labels = convert_to_one_hot(data.y, output_dim)
            print("train_model out.shape:", out.shape)  # (24, 6)
            print("train_model data.y.shape:", data.y.shape)  # (18, 2)
            print("train_model data.edge_attr.shape:", data.edge_attr.shape)  # (18, 3)
            print("train_model data.edge_attr:", data.edge_attr)
            print("train_model one_hot_labels.shape:", one_hot_labels.shape)  # (18, 6)
            loss = criterion(out, one_hot_labels)
            #################################################
            # print("train_model loss:", loss)
            total_loss += loss.item()
            # print("torch.sigmoid(out):", torch.sigmoid(out))
            predicted = (torch.sigmoid(out) >= threshold_value).long()
            # print("predicted:", predicted)
            correct += (predicted == one_hot_labels).all(dim=1).sum().item()
            # print("correct:", correct)
            total += len(data.y)
            # print("total:", total)
            loss.backward()
            optimizer.step()
        avg_train_loss = total_loss / len(train_loader)
        train_accuracy = correct / total
The shape of `out` comes from `data.x` (one row per node), while `data.y` is derived from `data.edge_attr` (one row per edge).
What can I do to fix the mismatch between the sizes of `out` and `one_hot_labels`?
Should I modify the model, or modify the dimensions of the output? |
Expected input batch_size (24) to match target batch_size (18) |
|python| |
```
$ cat inputfile| while read i ;do echo ${#i}; done
12550
12972
13035
... snip
0
$ for i in {1..21} ; do sed -n "$i"p inputfile |wc -c ; done
13226
13680
13759
... snip
1
```
The input file contains 21 JSON objects concatenated together. Trying to feed each line to jq errors out with the first loop, while it works with the second. There's clearly a difference, but what is it?
(The output of both loops is 21 lines long.) |
Difference in printing the Nth line with sed vs while ... read? |
|bash| |
I debugged the code using `{{ dd($order->id) }}` and got 100238 as the order id value. But when I write
<form action="{{ route('admin.pos.update_order', ['id' => $order->id]) }}" method="post">
it does not work. When I write `{{ route('admin.pos.update_order', ['id' => 100238]) }}` instead, it works fine.
I cannot explain this abnormal behaviour. Can anyone tell me what the actual issue is?
Since dd() shows the order id during debugging, `$order->id` should also work in the form action.
Route is:
Route::group(['middleware' => ['admin']], function () {
Route::group(['prefix' => 'pos', 'as' => 'pos.', 'middleware' => ['module:pos_management']], function () {
Route::post('update-cart-items', 'POSController@update_cart_items')->name('update_cart_items');
});
});
Controller is:
public function update_order(Request $request, $id): RedirectResponse
{
$order = $this->order->find($order_id);
if (!$order) {
Toastr::error(translate('Order not found'));
return back();
}
$order_type = $order->order_type;
if ($order_type == 'delivery') {
Toastr::error(translate('Cannot update delivery orders'));
return back();
}
$delivery_charge = 0;
if ($order_type == 'home_delivery') {
if (!session()->has('address')) {
Toastr::error(translate('Please select a delivery address'));
return back();
}
$address_data = session()->get('address');
$distance = $address_data['distance'] ?? 0;
$delivery_type = Helpers::get_business_settings('delivery_management');
if ($delivery_type['status'] == 1) {
$delivery_charge = Helpers::get_delivery_charge($distance);
} else {
$delivery_charge = Helpers::get_business_settings('delivery_charge');
}
$address = [
'address_type' => 'Home',
'contact_person_name' => $address_data['contact_person_name'],
'contact_person_number' => $address_data['contact_person_number'],
'address' => $address_data['address'],
'floor' => $address_data['floor'],
'road' => $address_data['road'],
'house' => $address_data['house'],
'longitude' => (string)$address_data['longitude'],
'latitude' => (string)$address_data['latitude'],
'user_id' => $order->user_id,
'is_guest' => 0,
];
$customer_address = CustomerAddress::create($address);
}
// Update order details
$order->coupon_discount_title = $request->coupon_discount_title == 0 ? null : 'coupon_discount_title';
$order->coupon_code = $request->coupon_code ?? null;
$order->payment_method = $request->type;
$order->transaction_reference = $request->transaction_reference ?? null;
$order->delivery_charge = $delivery_charge;
$order->delivery_address_id = $order_type == 'home_delivery' ? $customer_address->id : null;
$order->updated_at = now();
try {
// Save the updated order
$order->save();
// Clear session data if needed
session()->forget('cart');
session(['last_order' => $order->id]);
session()->forget('customer_id');
session()->forget('branch_id');
session()->forget('table_id');
session()->forget('people_number');
session()->forget('address');
session()->forget('order_type');
Toastr::success(translate('Order updated successfully'));
//send notification to kitchen
//if ($order->order_type == 'dine_in') {
$notification = $this->notification;
$notification->title = "You have a new update in order " . $order_id . " from POS - (Order Confirmed). ";
$notification->description = $order->id;
$notification->status = 1;
try {
Helpers::send_push_notif_to_topic($notification, "kitchen-{$order->branch_id}", 'general');
Toastr::success(translate('Notification sent successfully!'));
} catch (\Exception $e) {
Toastr::warning(translate('Push notification failed!'));
}
//}
//send notification to customer for home delivery
if ($order->order_type == 'delivery'){
$value = Helpers::order_status_update_message('confirmed');
$customer = $this->user->find($order->user_id);
$fcm_token = $customer?->fcm_token;
if ($value && isset($fcm_token)) {
$data = [
'title' => translate('Order'),
'description' => $value,
'order_id' => $order_id,
'image' => '',
'type' => 'order_status',
];
Helpers::send_push_notif_to_device($fcm_token, $data);
}
//send email
$emailServices = Helpers::get_business_settings('mail_config');
if (isset($emailServices['status']) && $emailServices['status'] == 1) {
Mail::to($customer->email)->send(new \App\Mail\OrderPlaced($order_id));
}
}
// Redirect back to wherever needed
return redirect()->route('admin.pos.index');
} catch (\Exception $e) {
info($e);
Toastr::warning(translate('Failed to update order'));
return back();
}
}
error is:
> POST
> https://fd.sarmadengineeringsolutions.com/admin/pos/update-cart-items
> 500 (Internal Server Error) |
Duplicate slugs in categories |
Move the filtering to a subquery:
~~~sql
SELECT
tblDailyData.TradeableDate,
tblDailyData.TradeableCode
FROM
tblDailyData
WHERE
(tblDailyData.TradeableCode = "MSFT")
OR
(tblDailyData.TradeableCode IS NULL
AND
tblDailyData.TradeableDate IS NULL);
~~~
Save it as, say, `qryDailyData`.
Then run this query:
~~~sql
SELECT
tblCalendar.dDate,
tblCalendar.dDay,
qryDailyData.TradeableDate,
qryDailyData.TradeableCode
FROM
tblCalendar
LEFT JOIN
qryDailyData ON tblCalendar.[dDate] = qryDailyData.[TradeableDate]
ORDER BY
tblCalendar.dDate;
~~~
|
Error when I try to post an image to the API |
|javascript|c#|vue.js| |
I am using Terraform to create an ECS cluster with an EC2 instance. My goal is to have a single task running on only one EC2 instance. I am managing both the capacity provider and auto-scaling for this cluster.
Initially, the deployment of a task to an EC2 instance runs smoothly. However, when I try to deploy a new task definition to replace the existing task, the ECS keeps the task in a PROVISIONING state. The task stays in this state until I change the auto-scaling group's max_size from 1 to 2.
Once I do that, the new task is deployed on a new EC2 instance, and the previous instance is not removed (I expected this "idle" instance to be removed).
For now, in my non-production environment, I want to keep only one instance as part of my cluster and allow multiple deployments of the same task on the same instance.
Terraform code:
```
# ECS service
resource "aws_ecs_service" "this" {
name = "cluster"
iam_role = aws_iam_role.ecs_role.arn
cluster = aws_ecs_cluster.cluster.id
task_definition = aws_ecs_task_definition.task_definition.arn
desired_count = 1
force_new_deployment = true
load_balancer {
target_group_arn = aws_alb_target_group.lb.arn
container_name = aws_ecs_task_definition.task_definition.family
container_port = 80
}
ordered_placement_strategy {
type = "binpack"
field = "memory"
}
capacity_provider_strategy {
capacity_provider = aws_ecs_capacity_provider.ecs_capacity_provider.name
base = 1
weight = 100
}
lifecycle {
create_before_destroy = true
ignore_changes = [
desired_count
]
}
}
```
```
# Auto scaling
resource "aws_autoscaling_group" "ecs_asg" {
name = "asg"
vpc_zone_identifier = [for subnet in var.public_subnet_ids : subnet]
max_size = 1
min_size = 1
desired_capacity = 1
health_check_type = "EC2"
protect_from_scale_in = false
launch_template {
id = aws_launch_template.template.id
version = "$Latest"
}
instance_refresh {
strategy = "Rolling"
}
lifecycle {
create_before_destroy = true
}
}
```
```
# capacity provider
resource "aws_ecs_capacity_provider" "ecs_capacity_provider" {
name = "ecs_capacity_provider"
auto_scaling_group_provider {
auto_scaling_group_arn = aws_autoscaling_group.ecs_asg.arn
managed_termination_protection = "DISABLED"
managed_scaling {
maximum_scaling_step_size = 2
minimum_scaling_step_size = 1
status = "ENABLED"
target_capacity = 100
}
}
}
resource "aws_ecs_cluster_capacity_providers" "ecs_capacity_providers" {
cluster_name = aws_ecs_cluster.cluster.name
capacity_providers = [aws_ecs_capacity_provider.ecs_capacity_provider.name]
default_capacity_provider_strategy {
base = 1
weight = 100
capacity_provider = aws_ecs_capacity_provider.ecs_capacity_provider.name
}
}
```
```
# ECS Task
resource "aws_ecs_task_definition" "task" {
family = "task"
container_definitions = jsonencode([
{
name = "task",
image = "test",
cpu = "768",
memory = "4096",
essential = true
portMappings = [
{
containerPort = 80
hostPort = 80
protocol = "tcp"
}
]
logConfiguration = {
logDriver = "awslogs",
options = {
"awslogs-group" = aws_cloudwatch_log_group.logs.name,
"awslogs-region" = var.region,
"awslogs-stream-prefix" = "app"
}
}
}
])
execution_role_arn = aws_iam_role.ecs_exec.arn
task_role_arn = aws_iam_role.ecs_task.arn
}
```
I noticed there is a related CloudWatch alarm, linked to the auto scaling, that is triggered when I try the second task deployment:
Alarm: "TargetTracking-test-ecs-asg-AlarmHigh-e5a4556-5686-5669-26546-e745a5ed90cb"
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/0knxW.png |
I am currently trying to create a cluster of 2 VMs with Ubuntu and an arbitrator node. The cluster is running in such a way that Node 1 and 2 synchronize the databases. When I restart Node 2, it immediately rejoins the cluster, and the same applies to the arbitrator. However, when Node 1 (the node with the cluster creation command "galera_new_cluster") reboots, the cluster needs to be "recreated" each time using the same command.
Is there a possibility or does anyone know of a way to configure Node 1 so that it can rejoin the cluster?
**Node1 Settings:**
```
[galera]
# Mandatory settings
wsrep_on = ON
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name = "MariaDB Galera Cluster"
wsrep_cluster_address = "gcomm://IP Node1, IP Node2, IP Arbitrator"
wsrep_sst_method = rsync
binlog_format = row
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
innodb_force_primary_key = 1
innodb_doublewrite = 1
# Allow server to accept connections on all interfaces.
bind-address = 0.0.0.0
# Optional settings
wsrep_slave_threads = 4
innodb_flush_log_at_trx_commit = 0
wsrep_node_name = MyNode1
wsrep_node_address = "IP Node1"
# Logfile
log_error = /var/log/mysql/error.log
```
**Node2 Settings:**
```
[galera]
# Mandatory settings
wsrep_on = ON
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name = "MariaDB Galera Cluster"
wsrep_cluster_address = "gcomm://IP Node1, IP Node2, IP Arbitrator"
wsrep_sst_method = rsync
binlog_format = row
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
innodb_force_primary_key = 1
innodb_doublewrite = 1
# Allow server to accept connections on all interfaces.
bind-address = 0.0.0.0
# Optional settings
wsrep_slave_threads = 4
innodb_flush_log_at_trx_commit = 0
wsrep_node_name = MyNode2
wsrep_node_address = IP Node2
# Logfile
log_error = /var/log/mysql/error.log
```
Since my knowledge of MariaDB and Linux is rather rudimentary, I have attempted to find a solution through internet research. The problem primarily lies in the fact that it fails only on the first node, and I cannot narrow down the issue further. The suspicion that arises after reviewing the search results is that Node 1 acts as a kind of master and therefore cannot reconnect.
I tried to add:
```
'bootstrap'
# Bootstrap the cluster, start the first node
# that initiate the cluster
echo $echo_n "Bootstrapping the cluster"
$0 start $other_args --wsrep-new-cluster
;;
```
into **/usr/bin/galera_new_cluster** without any positive result.
I still had to manually create the cluster "new" with the command galera_new_cluster and received this error message:
```
Failed to restart start.service: Unit start.service not found.
/usr/bin/galera_new_cluster: 31: bootstrap: not found
Bootstrapping the cluster
```
|
As stated in the comments, I don't know whether there's an option to make things work exactly the way you want them, but you can use a directory structure like this to separate code from other data:
```
\ (Project root)
-- Awesomeproject.sln
-- \build
-- \documentation
-- Specification.doc
-- Readme.txt
-- \src
-- \AwesomeProject1
-- AwesomeProject1.csproj
-- \namespace1
-- \namespace2
-- \AwesomeLibrary1
-- AwesomeLibrary1.csproj
-- \namespace...
-- \bin
-- \obj
```
There can be any number of folders between a solution and its projects (and the projects can be spread across many folders), but not between a project file and its subfolders.
|
Assuming you have your flights table created (... "So we have a flights table. ...") with a BOOLEAN last column...
If you have your csv data imported into a database table, or you have it as a dataset like:
~~~sql
/*
FID A_DATE ORIGINAL_AIRPORT AIRLINE_NAME AIRCRAFT_TYPE IS_HEAVY
--- ---------- ------------------ -------------- -------------- ---------
1 2024-02-25 101 Ryanair A-320 no
2 2024-02-25 102 AirFrance A-380 yes
3 2024-02-25 103 Vueling A-319 no */
~~~
... you could use it to insert data into the flights table, using a CASE expression that converts 'yes' to 1 (or TRUE with no quotes) and 'no' to 0 (or FALSE with no quotes):
~~~sql
INSERT INTO flights
( Select FID, A_DATE, ORIGINAL_AIRPORT, AIRLINE_NAME, AIRCRAFT_TYPE,
Case When IS_HEAVY = 'yes' Then 1 -- or TRUE instead of 1
Else 0 -- or FALSE instead of 0
End as IS_HEAVY
From csv_flights
);
~~~
~~~sql
-- test it
Select * From flights;
/* R e s u l t :
FID A_DATE ORIGINAL_AIRPORT AIRLINE_NAME AIRCRAFT_TYPE IS_HEAVY
--- ---------- ------------------ -------------- -------------- ---------
1 2024-02-25 101 Ryanair A-320 0
2 2024-02-25 102 AirFrance A-380 1
3 2024-02-25 103 Vueling A-319 0 */
~~~
If you are doing it while reading the csv outside the db and passing commands to a database, do the same: convert 'yes' to 1 (TRUE) and 'no' to 0 (FALSE) while creating the command to be executed in the db. |
cucumber.api.cli.Main run
WARNING: You are using deprecated Main class. Please use io.cucumber.core.cli.Main
0 Scenarios
0 Steps
0m0.014s
I'm not getting snippets for my steps.
Exception in thread "main" io.cucumber.core.exception.CompositeCucumberException: There were 2 exceptions. The details are in the stacktrace below.
at io.cucumber.core.runtime.RethrowingThrowableCollector.getThrowable(RethrowingThrowableCollector.java:57)
and my login feature file
Feature: Login
Scenario: Successful Login with valid credentials
Given User Launch Chrome browser
When User Opens URL "http://admin-demo.nopcommerce.com/login"
And User enters email as "admin@yourstore.com" and Password "admin"
And Click on Login
Then Page Title Should be "Dashboard / nopcommerce administration"
When User Click on Log out Link
Then Page Title should be "Your store. Login"
And Close browser
I have added all necessary dependencies in pom.xml
like
|
I want to use a subclass of Paint with an additional attribute "dashed".
As a minimal example, I just added the following lines to the standard flutter-create code:
```
class MyPaint extends Paint {
bool dashed;
MyPaint()
: dashed = false,
super();
}
```
Everything works fine if I use `flutter build linux`, but with `flutter build web` I get the following error:
```
Target dart2js failed: ProcessException: Process exited abnormally:
lib/main.dart:7:7:
Error: The non-abstract class 'MyPaint' is missing implementations for these members:
- Paint.blendMode
- Paint.blendMode=
- Paint.color
- Paint.color=
- ...
class MyPaint extends Paint {
^^^^^^^
...
lib/main.dart:11:9:
Error: Superclass has no constructor named 'Paint'.
super();
^^^^^
Error: Compilation failed.
Command: /home/XXX/snap/flutter/common/flutter/bin/cache/dart-sdk/bin/dart --disable-dart-dev
/home/XXX/snap/flutter/common/flutter/bin/cache/dart-sdk/bin/snapshots/dart2js.dart.snapshot
--platform-binaries=/home/XXX/snap/flutter/common/flutter/bin/cache/flutter_web_sdk/kernel --invoker=flutter_tool
-Ddart.vm.product=true -DFLUTTER_WEB_AUTO_DETECT=true
-DFLUTTER_WEB_CANVASKIT_URL=https://www.gstatic.com/flutter-canvaskit/3f3e560236539b7e2702f5ac790b2a4691b32d49/
--native-null-assertions --no-source-maps -o
/data/XXX/flutter/test/.dart_tool/flutter_build/232123a258e4d852bd909407e7ea3a68/app.dill
--packages=.dart_tool/package_config.json --cfe-only
/data/XXX/flutter/test/.dart_tool/flutter_build/232123a258e4d852bd909407e7ea3a68/main.dart
```
Is there some wrong usage of class inheritance?
Or some strange problem in dart2js which does not appear with other build targets?
I am using Flutter (Channel stable, 3.16.5, on Ubuntu 22.04.2 LTS 6.5.0-25-generic, locale de_DE.UTF-8).
|
First of all, let me explain the problem.
The following happens when I make categories with similar names:
- https://www.example.com/cars/ford/parts
- https://www.example.com/cars/toyota/parts1
Why is this the case? The parents are different, so why would WordPress add a 1 to the second parts category?
**Is there a workaround for this?**
I saw numerous people with the same issue. Would be nice if there was a way for them to have the same slugs when the parent's slug is different.
|
You should specify the table:
var languages []Language
err := db.Table("TableOfLanguages").Find(&languages).Error
|
Unable to launch WebDriverAgent. Original error: xcodebuild failed with code 65. This usually indicates an issue with the local Xcode setup or WebDriverAgent project configuration or the driver-to-platform version mismatch
This happens whenever I start a session in Appium Inspector, while trying to inspect an element on my real iOS device. Can someone help? |
Unable to launch WebDriverAgent |
|automation| |
Because the variable types are not the same:
var1 = int(input("enter a number:"))
num = str(var1)[::-1]
var2 = int(num)
if var1 == var2:
print("its a palindrome")
else:
print("its not a palindrome")
You need to convert back to an integer type, or skip the conversion entirely:
var1 = input("enter a number:")
var2 = var1[::-1]
if var1 == var2:
print("its a palindrome")
else:
print("its not a palindrome") |
Looks like you missed the return statement.
try {
const response = await fetch(url, options);
const data = await response.json();
console.log(data);
return data;
} catch (err) {
console.error('error:' + err);
}
https://stackblitz.com/edit/stackblitz-starters-a63xvf |
> the returned .pmml file does only include the data dictionary without the specifications of the setted high/low values etc.
The domain specification gets stored in two places inside the PMML document.
First, the decision of whether some input value is valid or invalid gets encoded into the `/PMML/DataDictionary/DataField` element. For example, specifying that "Sepal.Length" only accepts values within the `[4.3, 7.9]` range:
```xml
<DataField name="Sepal_Length" displayName="Sepal length in cm" optype="continuous" dataType="double">
<Interval closure="closedClosed" leftMargin="4.3" rightMargin="7.9"/>
</DataField>
```
Second, there is the decision of how the pre-categorized input value (i.e. valid, invalid, or missing) should be handled by any given model. This is encoded into the `/PMML/<Model>/MiningSchema/MiningField` elements. For example, specifying that invalid and missing values are not permitted:
```xml
<MiningSchema>
<MiningField name="Sepal_Length" invalidValueTreatment="returnInvalid" missingValueTreatment="returnInvalid"/>
</MiningSchema>
```
Your pipeline does not contain a model object, so there is physically no place where the second part of the domain specification could be stored.
> Interestingly when I am putting a LogisticRegression inside the Pipeline I get a correct looking PMML but there I only have two outputs...
The SkLearn2PMML package performs PMML optimization, where unused field declarations are automatically removed in order to spare human and computer cognition efforts.
Looks like your `LogisticRegression` model object only uses two input fields then.
You could try replacing `LogisticRegression` with some other model type that uses all input fields. Some "greedy" algorithm such as random forest might be a good fit.
TLDR: You can't separate domain decorators such as `ContinuousDomain` from the actual model object. If you want to use pre-fitted domain decorators in multiple pipelines, then you should store them as pure Python objects in Pickle or Dill data formats. Alternatively, generate some utility function (and package it as a Python library) that you can import and activate conveniently. |
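A sketch of the Pickle route from the TLDR (assuming the `sklearn2pmml.decoration.ContinuousDomain` decorator discussed here; the file name is arbitrary):
```
import pickle

from sklearn2pmml.decoration import ContinuousDomain

domain = ContinuousDomain()  # pre-fit this as part of a pipeline, as in the question

# persist the fitted decorator as a plain Python object
with open("domain.pkl", "wb") as f:
    pickle.dump(domain, f)

# later, when assembling another pipeline, load it back
with open("domain.pkl", "rb") as f:
    domain = pickle.load(f)
```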
You need to delete the lock file in home/(user)/.config/JetBrains/(application name)/(version)/. Do this and it should start up fine. |
I encountered an interesting case while practicing writing Python algorithms on LeetCode. The task: "You are given two integer arrays nums1 and nums2, sorted in non-decreasing order, and two integers m and n, representing the number of elements in nums1 and nums2 respectively." The result should be stored inside nums1 (which has length m+n, where n represents the elements that should be ignored in nums1 and added from nums2).
I came up with the simplest solution. It gives the expected results in VS Code, but it does not pass the test cases designed on LeetCode. I am wondering why... What am I missing?
class Solution(object):
    def merge(self, nums1, m, nums2, n):
        """
        :type nums1: List[int]
        :type m: int
        :type nums2: List[int]
        :type n: int
        :rtype: None Do not return anything, modify nums1 in-place instead.
        """
        nums1 = sorted(nums1[0:m] + nums2[0:n])

nums1 = [1, 2, 3, 0, 0, 0]
m = 3
nums2 = [2, 5, 6]
n = 3
solution = Solution()
result = solution.merge(nums1, m, nums2, n)
print(result)
Please note I am a beginner; I am probably not taking something important into consideration. I guess it might be the unwise updating of nums1, because I am losing information about the array, but is there anything else?
I couldn't find code for test cases in LeetCode to understand why problem occurred. Maybe anyone could help and guide me where to look for it? |
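For reference, an in-place variant of the same idea (a sketch; slice assignment mutates the list object the caller passed in, instead of rebinding the local name):
```
class Solution(object):
    def merge(self, nums1, m, nums2, n):
        # nums1[:] = ... writes into the existing list, which is what
        # LeetCode's checker observes; nums1 = ... only rebinds the
        # local name inside merge() and the caller never sees it
        nums1[:] = sorted(nums1[:m] + nums2[:n])
```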
Merging sorted arrays with defined length in Python (good practices) |
|python|arrays|array-merge| |
null |
```
package com.example.movieflix.ui.adapters;
import static com.google.android.material.internal.ContextUtils.getActivity;
import android.content.Context;
import android.content.Intent;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.Button;
import android.widget.ImageView;
import android.widget.TextView;
import androidx.annotation.NonNull;
import androidx.recyclerview.widget.RecyclerView;
import com.bumptech.glide.Glide;
import com.bumptech.glide.request.RequestOptions;
import com.example.movieflix.R;
import com.google.android.ads.nativetemplates.TemplateView;
import com.google.android.gms.ads.AdListener;
import com.google.android.gms.ads.AdLoader;
import com.google.android.gms.ads.AdRequest;
import com.google.android.gms.ads.LoadAdError;
import com.google.android.gms.ads.nativead.NativeAdOptions;
import java.util.List;
public class MovieAdapter extends RecyclerView.Adapter<MovieAdapter.MyViewHolder> {
private final Context mContext;
private final List<VideoItems> mData;
private final RequestOptions option;
public MovieAdapter(Context mContext, List<VideoItems> mData) {
this.mContext = mContext;
this.mData = mData;
this.option = new RequestOptions().centerCrop().placeholder(R.drawable.loading_icon).error(R.drawable.loading_icon);
}
@NonNull
@Override
public MyViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
View view = LayoutInflater.from(mContext).inflate(R.layout.activity_video_items, parent, false);
return new MyViewHolder(view);
}
@Override
public void onBindViewHolder(@NonNull MyViewHolder holder, int position) {
if ((position + 1) % 3 == 0) { // Show native ad after every 3 items
holder.templateView.setVisibility(View.VISIBLE);
loadNativeAd(holder.templateView);
} else {
holder.templateView.setVisibility(View.GONE);
}
VideoItems item = mData.get(position);
holder.bind(item);
}
@Override
public int getItemCount() {
return mData.size();
}
private void loadNativeAd(TemplateView templateView) {
AdLoader adLoader = new AdLoader.Builder(mContext, "ca-app-pub-3940256099942544/2247696110")
.forNativeAd(nativeAd -> {
// Display the native ad
templateView.setNativeAd(nativeAd);
})
.withAdListener(new AdListener() {
private boolean adLoaded = false;
@Override
public void onAdLoaded() {
super.onAdLoaded();
// Mark ad as loaded
adLoaded = true;
}
@Override
public void onAdFailedToLoad(@NonNull LoadAdError loadAdError) {
super.onAdFailedToLoad(loadAdError);
}
})
.withNativeAdOptions(new NativeAdOptions.Builder().build())
.build();
adLoader.loadAd(new AdRequest.Builder().build());
}
public class MyViewHolder extends RecyclerView.ViewHolder {
TextView tv_name;
TextView tv_type;
TextView tv_detail;
ImageView img_thumbnail;
TemplateView templateView;
public MyViewHolder(@NonNull View itemView) {
super(itemView);
img_thumbnail = itemView.findViewById(R.id.movieThumbnail);
tv_name = itemView.findViewById(R.id.movieTitle);
tv_type = itemView.findViewById(R.id.movieType);
tv_detail = itemView.findViewById(R.id.movieDetail);
templateView = itemView.findViewById(R.id.my_template);
}
public void bind(VideoItems item) {
tv_name.setText(item.getName());
tv_type.setText(item.getType());
tv_detail.setText(item.getDetail());
// Load image with Glide
if (item.getImage_url() != null && !item.getImage_url().isEmpty()) {
Glide.with(mContext)
.load(item.getImage_url())
.apply(option)
.error(R.drawable.loading_icon) // Placeholder image in case of loading error
.into(img_thumbnail);
} else {
// If image URL is empty, you can set a default placeholder image
img_thumbnail.setImageResource(R.drawable.loading_icon);
}
// Set OnClickListener for the item
itemView.setOnClickListener(v -> {
Intent intent = new Intent(mContext, MoviePlayer.class);
intent.putExtra("video_url", item.getVideo_url());
mContext.startActivity(intent);
});
}
}
}
```
Please check the adapter. I have implemented native ads in it, and ads are loading and showing multiple times for the same position. A new ad loads every time I scroll back to a position. I think once an ad has been loaded and shown, it should not load and show a new ad again for the same position. Please help me; this repeated loading and showing of ads is making the app slow and laggy in behaviour.
[1]: https://i.stack.imgur.com/gFGQn.jpg |
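One common mitigation (a sketch; the `adCache` map, the extra `position` parameter, and the call change in `onBindViewHolder` are my additions, not library requirements) is to load each position's ad once and reuse it when the row is rebound:
```
// extra imports needed: java.util.HashMap, java.util.Map,
// com.google.android.gms.ads.nativead.NativeAd
private final Map<Integer, NativeAd> adCache = new HashMap<>();

// call as loadNativeAd(holder.templateView, position) from onBindViewHolder
private void loadNativeAd(TemplateView templateView, int position) {
    NativeAd cached = adCache.get(position);
    if (cached != null) {
        templateView.setNativeAd(cached); // rebinding reuses the already-loaded ad
        return;
    }
    AdLoader adLoader = new AdLoader.Builder(mContext, "ca-app-pub-3940256099942544/2247696110")
            .forNativeAd(nativeAd -> {
                adCache.put(position, nativeAd); // remember the ad for this position
                templateView.setNativeAd(nativeAd);
            })
            .withNativeAdOptions(new NativeAdOptions.Builder().build())
            .build();
    adLoader.loadAd(new AdRequest.Builder().build());
}
```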
I have some custom post types and I've made some custom WordPress user roles and want them to be able to create and update WordPress block patterns.
What capabilities do I need to add to make creating patterns available to the user?
As an administrator the option is available, as a user with only permissions to the custom post type the option is hidden. I've googled for the answer with no result. |
What capabilities does a WordPress user need to create and use patterns? |
|wordpress|design-patterns| |
null |
I have the following classes:
public class GetTestByIdQuery : ICqrsQuery<JustTestDto>
{
    public Guid JustTestId { get; set; }

    public GetTestByIdQuery(Guid justTestId)
    {
        JustTestId = justTestId;
    }
}
and
public class ListTestsQuery : ICqrsQuery<List<JustTestDto>>
{
}
How can I register them dynamically at run time using reflection, by finding all the classes that implement the interfaces `ICqrsQuery` and `ICqrsQuery<T>`?
I already wrote the following code:
public static IServiceCollection AddCqrsQueries(this IServiceCollection services)
{
    var assembly = Assembly.GetExecutingAssembly();
    var queries = assembly.GetTypes()
        .Where(t => t.IsClass && !t.IsAbstract &&
                    t.GetInterfaces().Any(i => i.IsGenericType &&
                                               i.GetGenericTypeDefinition() == typeof(ICqrsQuery<>)));
    foreach (var query in queries)
    {
        var interfaces = query.GetInterfaces();
        foreach (var @interface in interfaces)
        {
            if (@interface.IsGenericType && @interface.GetGenericTypeDefinition() == typeof(ICqrsQuery<>))
            {
                var constructor = query.GetConstructors().FirstOrDefault();
                var constParameters = constructor?.GetParameters() ?? null;
                var parameterValues = constParameters != null ? new object[constParameters.Length] : null;
                if (parameterValues?.Length > 0 && parameterValues != null)
                {
                    for (int i = 0; i < constParameters.Length; i++)
                    {
                        parameterValues[i] = services.BuildServiceProvider().GetService(constParameters[i].ParameterType);
                    }
                    services.AddTransient(serviceProvider =>
                    {
                        var instance = Activator.CreateInstance(query, parameterValues);
                        return instance;
                    });
                }
                else
                {
                    // no parameters
                    services.AddTransient(query);
                }
            }
        }
    }
    return services;
}
It works fine with classes that have no constructor parameters, such as `ListTestsQuery`, but it does not with classes that have constructor parameters, such as `GetTestByIdQuery`.
I have noticed that the problem is with `var instance = Activator.CreateInstance(query, parameterValues);`, since its return type is `object?` even though at run time it evaluates to `GetTestByIdQuery`; if I cast explicitly to `GetTestByIdQuery` it works, otherwise it doesn't.
So how can I solve this problem? One direction I am looking at is sketched below.
Thanks in advance. |
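A minimal sketch of that direction (the non-generic `AddTransient(Type, Func<IServiceProvider, object>)` overload exists in Microsoft.Extensions.DependencyInjection; because the factory is already typed as returning `object`, no cast of the `Activator.CreateInstance` result is needed, and the service is registered under the concrete `query` type rather than `object`):
```
// inside the loop from the question, replacing the untyped AddTransient call;
// `query` and `parameterValues` are the variables already built there
services.AddTransient(query, serviceProvider =>
{
    var instance = Activator.CreateInstance(query, parameterValues);
    return instance ?? throw new InvalidOperationException($"Could not create {query}");
});
```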
Dynamic Dependency Injection at The Run Time |
|.net|.net-core|dependency-injection|dynamic-programming| |
The issue is simple: you're confusing the `datasets` library with PyTorch's `Dataset` class. Your import statement should target PyTorch's dataset-handling facilities, not another library. Ensure you're using `from torch.utils.data import Dataset` to get the correct base class for your custom dataset. |
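A minimal sketch of the correct import and subclass skeleton (the class and field names are illustrative):
```
from torch.utils.data import Dataset  # PyTorch's Dataset, not the `datasets` library

class MyDataset(Dataset):
    def __init__(self, samples):
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx]
```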
|android|push-notification| |
A possible solution is to use the <code>writeData</code> method to write formulas rowwise instead of the <code>writeFormula</code> method:
```
# define formulas as row matrix
f <- matrix(data = f, ncol = length(f), byrow = TRUE)
# convert formula matrix to formula data.frame
f <- as.data.frame(f, stringsAsFactors = FALSE)
# add "formula" to class of the column vectors
for (i in seq_along(f)) class(f[, i]) <- c(class(f[, i]), "formula")
# use writeData to add the formula data.frame
writeData(wb, sheet = 1, x = f, startCol = 1, startRow = 5, colNames = FALSE)
```
|
If you call your back-end directly, like any 3rd party rest API, then you wouldn't be asking about the NextJS/amplify deployment, so I'll assume you're calling your back-end from within `/pages/api`.
When you do that, Amplify deploys your /api code as Lambda@Edge. (Note: it may be deployed to a different region. I deploy to Ohio, but the Lambda@Edge is put into N. Virginia.) The Lambda will have a timeout of 30 seconds. You may be able to increase this manually, although Lambda@Edge may have a maximum timeout of 30 seconds.
Another issue is that the front-end calls the Lambda@Edge function via API Gateway, which has a 29-second timeout that cannot be increased.
If you're getting `504 timeout` or similar at about the 30 second mark, it's likely API GW.
How to fix? If your backend is slow, then the front-end API should just "initiate a job" and return quickly, then use a GraphQL subscription or polling to get the job-completion event. |
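A rough sketch of that job-initiation pattern inside /pages/api (names like `enqueueBackendJob` and the route paths are placeholders, not a real API):
```
// pages/api/start-job.js
// placeholder: hand the slow work to something durable (SQS, Step
// Functions, a DB row a worker picks up) and return an id to poll with
async function enqueueBackendJob(payload) {
  return "job-123";
}

export default async function handler(req, res) {
  const jobId = await enqueueBackendJob(req.body); // returns well under the 29s API GW limit
  res.status(202).json({ jobId }); // client then polls e.g. /api/job-status?jobId=...
}
```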
I am trying to create a notification to my email (or a Teams channel email), but wanted to try it out on myself first in Azure.
However not sure where I am going wrong and I hope someone can point me in the correct direction.
I want to create an email notification that if any Work Item Type gets assigned to myself and the State is Changed to Ready to Test I get notified.
[![enter image description here][1]][1]
Any clues as to what I am clearly missing.
thanks!
I tried changing the @me to just my name and email address, looked at the delivery settings, and set it up to deliver to a custom email address.
[1]: https://i.stack.imgur.com/6W4Jg.png |
Azure Devops Notification - Subscription query |
|azure|azure-devops|notifications|subscription| |
In order to calculate the average of kilometers traveled in a year by a car, you should just divide the km traveled by the age of the car.
Example, given:
- current year: 2024
- year of registration: 1920
- total km traveled (ODO): 7 km
the average of km traveled in a year is calculated by:
7 km / (2024 - 1920) ~ 67.31 m/year
Code example:
func getOdoByYear(registration: Int, currentYear: Int, currentODO: Int) -> Double {
    // TODO: check parameters, i.e. currentYear > registration, currentODO >= 0
    let carAge = currentYear - registration
    let average: Double = Double(currentODO) / Double(carAge)
    return average
}
You should also handle the case `registration == currentYear`, otherwise you get a division by zero. This is up to you: you can just return `currentODO` (even though the first year is not completed yet) or throw an error, because the value might not make sense at all. |
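A sketch of the function with that guard added (returning `currentODO` for a car registered this year is one arbitrary choice):
```
func getOdoByYear(registration: Int, currentYear: Int, currentODO: Int) -> Double {
    let carAge = currentYear - registration
    guard carAge > 0 else { return Double(currentODO) } // avoid division by zero
    return Double(currentODO) / Double(carAge)
}
```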
I tried to launch the Chrome browser from Eclipse. I already added all the JAR files required for Selenium WebDriver, and the Maven dependencies resolved successfully. But when I try the code for launching the Chrome browser, it gives me a different type of error every time I click Run. I tried different websites to sort out this error.
I tried adding various JAR files to the Maven dependencies, but that did not work; then I cleaned the Eclipse build project and added external JAR files. When I executed it, it gave me an "editor does not contain a main type" error.
After some changes it gives "selection is not a main type".
I expect to be able to launch the Chrome browser, click all its buttons automatically, and learn how to work automatically in that browser.
This is my all Jar Files below:
C:\\Users\\Owner\\Downloads\\auto-service-annotations-1.1.1.jar
C:\\Users\\Owner\\Downloads\\byte-buddy-1.14.12.jar
C:\\Users\\Owner\\Downloads\\checker-qual-3.41.0.jar
C:\\Users\\Owner\\Downloads\\commons-exec-1.3.jar
C:\\Users\\Owner\\Downloads\\error_prone_annotations-2.23.0.jar
C:\\Users\\Owner\\Downloads\\failsafe-3.3.2.jar
C:\\Users\\Owner\\Downloads\\failureaccess-1.0.2.jar
C:\\Users\\Owner\\Downloads\\guava-33.0.0-jre.jar
C:\\Users\\Owner\\Downloads\\j2objc-annotations-2.8.jar
C:\\Users\\Owner\\Downloads\\jsr305-3.0.2.jar
C:\\Users\\Owner\\Downloads\\listenablefuture-9999.0-empty-to-avoid-conflict-with-guava.jar
C:\\Users\\Owner\\Downloads\\opentelemetry-api-1.35.0.jar
C:\\Users\\Owner\\Downloads\\opentelemetry-api-events-1.35.0-alpha.jar
C:\\Users\\Owner\\Downloads\\opentelemetry-context-1.35.0.jar
C:\\Users\\Owner\\Downloads\\opentelemetry-exporter-logging-1.35.0.jar
C:\\Users\\Owner\\Downloads\\opentelemetry-extension-incubator-1.35.0-alpha.jar
C:\\Users\\Owner\\Downloads\\opentelemetry-sdk-1.35.0.jar
C:\\Users\\Owner\\Downloads\\opentelemetry-sdk-common-1.35.0.jar
C:\\Users\\Owner\\Downloads\\opentelemetry-sdk-extension-autoconfigure-1.35.0.jar
C:\\Users\\Owner\\Downloads\\opentelemetry-sdk-extension-autoconfigure-spi-1.35.0.jar
C:\\Users\\Owner\\Downloads\\opentelemetry-sdk-logs-1.35.0.jar
C:\\Users\\Owner\\Downloads\\opentelemetry-sdk-metrics-1.35.0.jar
C:\\Users\\Owner\\Downloads\\opentelemetry-sdk-trace-1.35.0.jar
C:\\Users\\Owner\\Downloads\\opentelemetry-semconv-1.23.1-alpha.jar
C:\\Users\\Owner\\Downloads\\selenium-api-4.18.1.jar
C:\\Software\\selenium-chrome-driver-2.53.0.jar
C:\\Users\\Owner\\Downloads\\selenium-chrome-driver-4.18.1.jar
C:\\Users\\Owner\\Downloads\\selenium-chromium-driver-4.18.1.jar
C:\\Users\\Owner\\Downloads\\selenium-devtools-v120-4.18.1.jar
C:\\Users\\Owner\\Downloads\\selenium-devtools-v121-4.18.1.jar
C:\\Users\\Owner\\Downloads\\selenium-devtools-v122-4.18.1.jar
C:\\Users\\Owner\\Downloads\\selenium-devtools-v85-4.18.1.jar
C:\\Users\\Owner\\Downloads\\selenium-edge-driver-4.18.1.jar
C:\\Users\\Owner\\Downloads\\selenium-firefox-driver-4.18.1.jar
C:\\Users\\Owner\\Downloads\\selenium-http-4.18.1.jar
C:\\Users\\Owner\\Downloads\\selenium-ie-driver-4.18.1.jar
C:\\Users\\Owner\\Downloads\\selenium-java-4.18.1.jar
C:\\Users\\Owner\\Downloads\\selenium-json-4.18.1.jar
C:\\Users\\Owner\\Downloads\\selenium-manager-4.18.1.jar
C:\\Users\\Owner\\Downloads\\selenium-os-4.18.1.jar
C:\\Users\\Owner\\Downloads\\selenium-remote-driver-4.18.1.jar
C:\\Users\\Owner\\Downloads\\selenium-safari-driver-4.18.1.jar
C:\\Users\\Owner\\Downloads\\selenium-support-4.18.1.jar
This is my error below:
```
Exception in thread "main" java.lang.Error: Unresolved compilation problem:
at OpenApplication.main(OpenApplication.java:12)
```
This is my code below:
```
package JavaPackage;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
@SuppressWarnings("unused")
public class OpenApplication {
public static void main(String[] args) throws Exception {
//WebDriverManager.chromedriver().setup();
// 1. Setting the property of chrome browser and passing ChromeDriver path
System.setProperty("webdriver.chrome.driver",
"C:\\Software\\chromedriver-win64.exe");
// 2. Launching a Chrome browser instance
WebDriver driver = new ChromeDriver();
//Thread.sleep(5000);//Thread.sleep() causes the current thread to suspend execution for a specified period.
//3. Open Url using get() method
// get() is an abstract method: only a declaration, not a definition
// it is declared in the WebDriver interface and implemented in the ChromeDriver class
driver.get("https://www.facebook.com/");
// 4. Maximize the window
// Thread is a class name
// whenever a name appears in italics, it is a static member
Thread.sleep(2000);
driver.manage().window().maximize();
// 5. Delete all the cookies
Thread.sleep(2000);
driver.manage().deleteAllCookies();
// 6. Open Url using navigate() method
Thread.sleep(2000);
driver.navigate().to("https://www.google.com");
// 7. Refresh the page
Thread.sleep(2000);
driver.navigate().refresh();
// 8. Navigate to back
Thread.sleep(2000);
driver.navigate().back();
// 9. Navigate to forward
Thread.sleep(2000);
driver.navigate().forward();
// 10. Fetch the current URL
Thread.sleep(2000);
System.out.println(driver.getCurrentUrl());
// 11. Get Title of the web page
Thread.sleep(2000);
System.out.println(driver.getTitle());
// 12. Close the browser
driver.close();
// This is all WEB DRIVER commands as well as WEB DRIVER ELEMENTS
}
}
``` |
i get an error in the imports and in the chrome-driver and web-driver |
|java|eclipse|maven|selenium-webdriver|testing| |
null |
null |
Try setting the `na.last` argument of `rank` to `NA`. By default, the `NA` values are given ranks too:
```
> rank(temp_df$stat1)
[1] 5 3 2 10 11 7 8 9 4 12 1 6
```
whereas with `na.last = NA` the `NA` entries are dropped before ranking, so only the non-missing values are ranked:
```
> rank(temp_df$stat1, na.last = NA)
[1] 5 3 2 7 8 9 4 1 6
``` |
The CRM365 forms provide the `OnSave` event to be able to perform business validations.
If one of the validations fails, we can use `executionContext.getEventArgs().preventDefault()` to stop the record creation.
It is all good for validating the fields of the current form; however, there are situations where the validation must also execute a query against the entity records.
For example, creating a reservation of a room must check whether there is any other pre-existing reservation and stop if they overlap.
The problem is that the REST API call is asynchronous and needs time to execute and return results. By the time the information becomes available in the response, the `OnSave` function has long since finished and the record has been saved, essentially without validation.
My questions are as follows:
1. Is there any reverse of `executionContext.getEventArgs().preventDefault()`? We can stop the save operation, but is there an "allow save", so to speak?
I have tried `formContext.data.entity.save();`, but since I am in the `OnSave` event it created an infinite loop. It is almost unthinkable to me that this flag can be set but not reset.
2. Is there any effective way to stop the JavaScript, or make it "sleep", until the REST API data becomes available? Everything revolves around the `setTimeout` function, but that is a non-blocking function and my JavaScript just runs straight through it, of course.
I am sure I am not the only one running into this situation; there must be a pattern for solving these REST-API-based validations.
I should probably add that I am looking for a client-side solution; all of this would be relatively easy to implement in a plugin or custom workflow.
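For reference, the workaround pattern I have been experimenting with looks roughly like this; it is a sketch, not a supported recipe, and `new_reservation` is a placeholder entity name:
```
let validationPassed = false; // module-level flag shared across saves

function onSave(executionContext) {
    // If the async validation already succeeded, let this (second) save through.
    if (validationPassed) {
        validationPassed = false; // reset for the next user-initiated save
        return;
    }

    const formContext = executionContext.getFormContext();
    executionContext.getEventArgs().preventDefault(); // always block the first save

    // Hypothetical overlap check; adjust the entity and OData options to your schema.
    Xrm.WebApi.retrieveMultipleRecords("new_reservation", "?$select=new_name&$top=1")
        .then(function (result) {
            if (result.entities.length === 0) {
                validationPassed = true;
                formContext.data.entity.save(); // re-trigger save; the flag skips re-validation
            } else {
                Xrm.Navigation.openAlertDialog({ text: "Overlapping reservation found." });
            }
        });
}
```
The flag is the "allow save" I was looking for: instead of reversing `preventDefault()`, the first save is always cancelled and a second, flagged save is issued once the data is back.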
|
|javascript|rest|validation|dynamics-crm-365| |
Delete the `.next` folder and restart the dev server with `npm run dev`.
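On a typical setup that amounts to the following (assuming a POSIX shell; in Windows `cmd`, use `rmdir /s /q .next` instead):
```
rm -rf .next
npm run dev
```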
Instead of providing a set of keys, I would just parse the input data to recognise the "key: value" pattern.
Note that you could get ambiguity. For instance, if an input line were:
```
TEST: A B C: OK
```
Then, this could be interpreted as:
{
"TEST": "A",
"B C": "OK"
}
or as:
{
"TEST": "A B",
"C": "OK"
}
To break such ties, we could make the capture of the value *greedy*, so that in the above example the second output would be generated. If, however, we find a separation of at least three spaces, then we could interpret what follows as a new key/value pair, so that this input:
```
TEST: A   B C: OK
```
...would be interpreted as:
{
"TEST": "A",
"B C": "OK"
}
Secondly, if a value contains commas, you could turn that value into an array (except when the comma is part of a numeric value).
We can use the power of regular expressions to do this kind of parsing.
Here is a function `makeObject` and how that could work for your sample input:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
const multiple = arr => arr.length > 1 ? arr : arr[0];
const regex = /((?:[A-Z.]+ )*[A-Z.]+):((?: {0,2}(?!\S*:)\S+)*)/g;
const makeObject = data => Object.fromEntries(
Array.from(data.join("\n").matchAll(regex), ([, key, value]) => [
key,
multiple(value.split(/,(?!\d)/).map(val => val.trim()))
])
);
// Your sample data:
const data = ['-1911312-14668500FECHA: 15-12-25','NOMBRE Y APELLIDO: Jhon dee','C.I.: 20020202 EDAD: 45 ','DIRECCION: LA CASA','TLF: 55555555','CORREO: thiisatest@gmail',' HISTORIA CLINICA GINECO-OBSTETRICA','HO','NULIG','FUR','3-8-23','EG','','FPP','','GS','','GSP','','','MC: CONTROL GINECOLOGICO','HEA','','APP: NIEGA PAT, NIEGA ALER, QX NIEGA.','APF: MADRE HTA, ABUELA DM.','','AGO: MENARQUIA: 10 FUR: CICLO: 4/28 ',' TIPO: EUM',' MET ANTICONCEP: GENODERM DESDE HACE 3 AÑOS.','PRS: NPS: ITS: VPH LIE BAJO GRADO 2017 , BIOPSIA.','FUC: NOV 2022, NEGATIVA. COLPO NEGATIVA.','','','EMBARAZO','#/AÑO','TIPO DE PARTO','INDICACION','RN','SEXO','RN','PESO','OBSERVACIONES','','','','','','','','','','','','','','','','','','','','EXAMEN FISICO:','PESO: 80,1 TALLA: TA: MMHG FC: FR: ','','PIEL Y MUCOSA: DLN','CARDIOPULMONAR: DLN','','MAMAS: ','','ABDOMEN: ','GENITALES: CUELLO SIN SECRECION , COLPO SE EVDIENCIA DOS LEISONES HPRA 1 Y HORA 5','','EXTREMIDADES: DLN','NEUROLOGICO: DLN','',' IDX: LESION EN CUELLO UTERINO','','PLAN: DEFEROL OMEGA, CAUTERIZACION Y TIPIFICACION VIRAL','22-8-23','SE TOMA MUESTRA DE TIPIFICACION VIRAL.','','','','LABORATORIOS:','FECHA','HB/HTO','LEU/PLAQ','GLICEMIA','UREA','CREAT','HIV/VDRL','UROANALISIS','','','','','','','','',];
console.log(makeObject(data));
<!-- end snippet -->
You'll see in the output *all* keys it could find, even those that have an empty value (like `AGO`). Just extract from this object what you need. |
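In case the pattern itself is hard to read, here is my own annotation of the regex (informal, but it matches how the code behaves):
```
// ((?:[A-Z.]+ )*[A-Z.]+)     key: uppercase/dot words separated by single spaces
// :                          the colon terminating the key
// ((?: {0,2}(?!\S*:)\S+)*)   value: tokens separated by at most two spaces;
//                            a token containing a colon is rejected (it starts
//                            the next key), and three or more spaces end the value
```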
I am studying Jetpack Compose and learning from the official example: OwlApp (https://material.io/design/material-studies/owl.html). I can't understand the following code:
In MainActivity file:
```
class MainActivity : ComponentActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
// This app draws behind the system bars, so we want to handle fitting system windows
WindowCompat.setDecorFitsSystemWindows(window, false)
setContent {
            // ?? where did the finishActivity parameter go? i.e. OwlApp(finishActivity_) { finish() }
            OwlApp { finish() }
}
}
}
```
In OwlApp file:
```
@Composable
fun OwlApp(finishActivity: () -> Unit) {
ProvideWindowInsets {
BlueTheme {
val tabs = remember { CourseTabs.values() }
val navController = rememberNavController()
Scaffold(
backgroundColor = MaterialTheme.colors.primarySurface,
bottomBar = { OwlBottomBar(navController = navController, tabs) }
) { innerPaddingModifier ->
NavGraph(
finishActivity = finishActivity,
navController = navController,
modifier = Modifier.padding(innerPaddingModifier)
)
}
}
}
}
```
I could not understand the call `OwlApp { finish() }`, because I expected the argument to be passed inside the parentheses.
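From what I can tell this is Kotlin's trailing-lambda convention; a standalone sketch of what I believe is happening (my own toy example, not from the Owl source):
```
// When the last parameter of a function is a function type,
// the lambda argument may be written outside the parentheses.
fun owlApp(finishActivity: () -> Unit) {
    finishActivity() // invokes the passed-in lambda
}

fun main() {
    owlApp({ println("finish") }) // explicit argument form
    owlApp { println("finish") }  // equivalent trailing-lambda form
}
```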
Hoping for help. Thanks!
How to understand `OwlApp { finish() }` when the function is defined as `fun OwlApp(finishActivity: () -> Unit) {}`? |
|android-jetpack-compose| |
null |
You can check the chart status by listening to the load, redraw, or render events.
API reference: https://api.highcharts.com/highcharts/chart.events.load
Demo: https://jsfiddle.net/gh/get/library/pure/highcharts/highcharts/tree/master/samples/highcharts/chart/events-render/
    chart: {
      events: {
        load() {
          // fires once, when the chart has finished loading
          createLabel(this, 'load event', 80, colors[0]);
        },
        redraw() {
          // fires after every redraw (e.g. after updates or series changes)
          createLabel(this, 'redraw event', 80, colors[3]);
        },
        render() {
          // fires on the initial render and after every redraw
          createLabel(this, 'render event', 120, colors[1]);
        }
      }
    },
    // createLabel and colors are helper utilities defined in the linked demo
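If the goal is to check the status programmatically, one approach (a minimal sketch of my own, using only documented options) is to set a flag in the `load` handler:
```
let chartReady = false;

Highcharts.chart('container', {
    chart: {
        events: {
            load() {
                chartReady = true; // the chart is fully initialized from here on
            }
        }
    },
    series: [{ data: [1, 2, 3] }]
});
```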
Let me know if there is something I can clarify.
|
You should specify the table where your list of languages is stored:

    var languages []Language
    err := db.Table("TableOfLanguages").Find(&languages).Error
My question is about traversal: the traversal sequence does not come out as it should. I am using the standard logic for the inorder, preorder, and postorder traversals, but they print in the wrong order.
The expected output looks like this:
Initially, how many integers you want:
5
Enter the integers: 7 9 1 2 10
Press 1 for inorder traverse, 2 for preorder traverse, 3 for postorder traverse, 4 for inserting new item, 5
for deleting an item, or 6 for exit the program:
4
Enter the item for insert:
2
This item already exists.
Press 1 for inorder traverse, 2 for preorder traverse, 3 for postorder traverse, 4 for inserting new item, 5
for deleting an item, or 6 for exit the program:
4
Enter the item for insert:
8
This item is inserted.
Press 1 for inorder traverse, 2 for preorder traverse, 3 for postorder traverse, 4 for inserting new item, 5
for deleting an item, or 6 for exit the program:
5
Enter the item for delete:
12
This item not found.
Press 1 for inorder traverse, 2 for preorder traverse, 3 for postorder traverse, 4 for inserting new item, 5
for deleting an item, or 6 for exit the program:
5
Enter the item for delete:
7
This item is deleted.
Press 1 for inorder traverse, 2 for preorder traverse, 3 for postorder traverse, 4 for inserting new item, 5
for deleting an item, or 6 for exit the program:
1
**Inorder Traverse: 1 2 8 9 10**
Press 1 for inorder traverse, 2 for preorder traverse, 3 for postorder traverse, 4 for inserting new item, 5
for deleting an item, or 6 for exit the program:
2
**Preorder Traverse: 2 1 9 8 10**
Press 1 for inorder traverse, 2 for preorder traverse, 3 for postorder traverse, 4 for inserting new item, 5
for deleting an item, or 6 for exit the program:
3
**Postorder Traverse: 1 8 10 9 2**
Press 1 for inorder traverse, 2 for preorder traverse, 3 for postorder traverse, 4 for inserting new item, 5
for deleting an item, or 6 for exit the program:
6
Program terminated
But my program produces these traversals instead:
Inorder Traverse: 1 2 8 9 10
Preorder Traverse: 8 1 2 9 10
Postorder Traverse: 2 1 10 9 8
My code is given below:
```
#include <iostream>
using namespace std;
struct Node
{
int data;
Node *left;
Node *right;
Node(int value) : data(value), left(nullptr), right(nullptr) {}
};
class BST
{
private:
Node *root;
Node *insertRecursive(Node *root, int value)
{
if (root == nullptr)
{
return new Node(value);
}
if (value < root->data)
{
root->left = insertRecursive(root->left, value);
}
else if (value > root->data)
{
root->right = insertRecursive(root->right, value);
}
else
{
cout << "This item already exists." << endl;
}
return root;
}
Node *findMin(Node *root)
{
while (root->left != nullptr)
{
root = root->left;
}
return root;
}
Node *deleteRecursive(Node *root, int value)
{
if (root == nullptr)
{
cout << "Item not found." << endl;
return nullptr;
}
if (value < root->data)
{
root->left = deleteRecursive(root->left, value);
}
else if (value > root->data)
{
root->right = deleteRecursive(root->right, value);
}
else
{
if (root->left == nullptr)
{
Node *temp = root->right;
delete root;
return temp;
}
else if (root->right == nullptr)
{
Node *temp = root->left;
delete root;
return temp;
}
Node *temp = findMin(root->right);
root->data = temp->data;
root->right = deleteRecursive(root->right, temp->data);
}
return root;
}
void inorderTraversal(Node *root)
{
if (root == nullptr)
{
return;
}
inorderTraversal(root->left);
cout << root->data << " ";
inorderTraversal(root->right);
}
void preorderTraversal(Node *root)
{
if (root == nullptr)
{
return;
}
cout << root->data << " ";
preorderTraversal(root->left);
preorderTraversal(root->right);
}
void postorderTraversal(Node *root)
{
if (root == nullptr)
{
return;
}
postorderTraversal(root->left);
postorderTraversal(root->right);
cout << root->data << " ";
}
bool searchRecursive(Node *root, int value)
{
if (root == nullptr)
{
return false;
}
if (value < root->data)
{
return searchRecursive(root->left, value);
}
else if (value > root->data)
{
return searchRecursive(root->right, value);
}
else
{
return true;
}
}
public:
BST() : root(nullptr) {}
void insert(int value)
{
root = insertRecursive(root, value);
}
void remove(int value)
{
root = deleteRecursive(root, value);
}
bool search(int value)
{
return searchRecursive(root, value);
}
void inorder()
{
cout << "Inorder Traverse: ";
inorderTraversal(root);
cout << endl;
}
void preorder()
{
cout << "Preorder Traverse: ";
preorderTraversal(root);
cout << endl;
}
void postorder()
{
cout << "Postorder Traverse: ";
postorderTraversal(root);
cout << endl;
}
};
int main()
{
BST bst;
int n, choice, item;
cout << "Initially, how many integers do you want: ";
cin >> n;
cout << "Enter the integers: ";
for (int i = 0; i < n; i++)
{
cin >> item;
bst.insert(item);
}
cout << endl;
while (true)
{
cout << "Press 1 for inorder traverse, 2 for preorder traverse, "
<< "3 for postorder traverse, 4 for inserting new item, "
<< "5 for deleting an item, or 6 for exit the program: " << endl;
cin >> choice;
switch (choice)
{
case 1:
bst.inorder();
cout << endl;
break;
case 2:
bst.preorder();
cout << endl;
break;
case 3:
bst.postorder();
cout << endl;
break;
case 4:
cout << "Enter the item for insert: " << endl;
cin >> item;
bst.insert(item);
cout << endl;
break;
case 5:
cout << "Enter the item for delete: ";
cin >> item;
bst.remove(item);
cout << endl;
break;
case 6:
cout << "Program terminated." << endl;
return 0;
default:
cout << "Invalid choice. Please try again." << endl;
}
}
return 0;
}
```
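One thing I noticed while comparing the outputs: the expected preorder after deleting 7 starts with 2, which is what you get when the deleted node is replaced by its inorder predecessor (the largest value in the left subtree). My code replaces it with the inorder successor (`findMin` of the right subtree), which puts 8 at the root instead. A sketch of the predecessor variant, keeping the rest of the class unchanged:
```
// Hypothetical drop-in replacements using the inorder predecessor.
Node *findMax(Node *root)
{
    while (root->right != nullptr)
    {
        root = root->right;
    }
    return root;
}

Node *deleteRecursive(Node *root, int value)
{
    if (root == nullptr)
    {
        cout << "This item not found." << endl;
        return nullptr;
    }
    if (value < root->data)
    {
        root->left = deleteRecursive(root->left, value);
    }
    else if (value > root->data)
    {
        root->right = deleteRecursive(root->right, value);
    }
    else
    {
        if (root->left == nullptr)
        {
            Node *temp = root->right;
            delete root;
            return temp;
        }
        if (root->right == nullptr)
        {
            Node *temp = root->left;
            delete root;
            return temp;
        }
        Node *temp = findMax(root->left);   // inorder predecessor
        root->data = temp->data;
        root->left = deleteRecursive(root->left, temp->data);
    }
    return root;
}
```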
|