I solved the problem. With the **ACF Quick Edit Fields** plugin, you can edit the field values very easily.
https://wordpress.org/plugins/acf-quickedit-fields/ |
It looks like there are some problems with the latest releases of the NUnit package. Version 4.1.0 does not work for me either. I downgraded NUnit back to 3.13.3 and everything works again with the AreEqual method. |
I am working in an enterprise environment where no installations are allowed.
What I need is to create some kind of script to do several things:
1. Open a website in Internet Explorer (it must be this browser).
2. The website window must be clicked to activate it, then navigated with the TAB key at least 20 times to reach a link which opens a new (Java) window to enter values.
3. Those values are "user" and "password", and I can navigate between the fields using TAB. So I need to enter a string (or simulate keypresses to enter a word) for USER, press TAB, enter another string for PASSWORD, press TAB twice and then ENTER.
Is it possible?
I tried batch scripts, AutoHotkey and PhraseExpress with no results. |
Automating login in simulation with Internet Explorer |
|authentication|automation| |
***WARNING**: git filter-branch is [no longer officially recommended](https://git-scm.com/docs/git-filter-branch#_warning). The official recommendation is to use [git-filter-repo][1]; see [André Anjos' answer for details](https://stackoverflow.com/questions/10067848/remove-folder-and-its-contents-from-git-githubs-history/61544937#61544937).*
----
If you are here to copy-paste code:
This is an example which removes `node_modules` from history:
```
git filter-branch --tree-filter "rm -rf node_modules" --prune-empty HEAD
git for-each-ref --format="%(refname)" refs/original/ | xargs -n 1 git update-ref -d
echo node_modules/ >> .gitignore
git add .gitignore
git commit -m 'Removing node_modules from git history'
git gc
git push origin main --force
```
**What git actually does:**
The first command rewrites every commit reachable from HEAD (your current branch), running `rm -rf node_modules` against each commit's checked-out tree (`--tree-filter`). This command deletes the `node_modules` folder (`-r`; without it, `rm` won't delete directories) without prompting the user (`-f`). The added `--prune-empty` drops commits that become empty (change nothing) after the rewrite.
The second command deletes the backup references under `refs/original/` that `filter-branch` leaves behind.
The rest of the commands are relatively straightforward.
[1]: https://github.com/newren/git-filter-repo/ |
Well, the `key` is not actually used in your code, but without it the project won't run correctly, because React uses the `key` to tell the elements of an array apart.
If you never change the state of those elements, it doesn't matter much.
But if you want to change or remove an individual element, you must use the `key`.
Thank you for reading carefully. |
The way I've done things like this in the past is with [nohup](https://en.wikipedia.org/wiki/Nohup). I've tested it on Amazon Linux 2023, but it should work on pretty much any Linux distribution.
For instance, I tested starting a web server like this:
```
nohup uvicorn docker_webhook_server:app --host 0.0.0.0 --port 7000 &
```
I can then log out from EC2 and come back and the process is still running. Since it's a webserver, it's easy to check using an HTTP request, but you can also check your background process using:
```
ps aux | grep your_command_name
```
For instance, to see my webserver PID:
```
ps aux | grep uvicorn
```
|
> Regex Match a pattern that only contains one set of numerals, and not more
I would start by writing a _grammar_ for the "forgiving parser" you are coding. It is not clear from your examples, for instance, whether `<2112` is acceptable. Must the brackets be paired? Ditto for quotes, etc.
Assuming that brackets and quotes do not need to be paired, you might have the following grammar:
##### _sign_
`+` | `-`
##### _digit_
`0` | `1` | `2` | `3` | `4` | `5` | `6` | `7` | `8` | `9`
##### _integer_
[ _sign_ ] _digit_ { _digit_ }
##### _prefix_
_any-sequence-without-a-sign-or-digit_
[ _prefix_ ] { _sign_ } _any-sequence-without-a-sign-or-digit_
##### _suffix_
_any-sequence-without-a-digit_
##### _forgiving-integer_
[ _prefix_ ] _integer_ [ _suffix_ ]
Notes:
- Items within square brackets are optional. They may appear either 0 or 1 time.
- Items within curly braces are optional. They may appear 0 or more times.
- Items separated by `|` are alternatives from which 1 must be chosen
- Items on separate lines are alternatives from which 1 must be chosen
With a grammar in hand, it should be much easier to figure out an appropriate regular expression.
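For instance, here is a sketch of the whole *forgiving-integer* production as a single anchored pattern (in Python rather than C++, purely to illustrate the grammar; the names are mine):

```python
import re

# prefix: any non-digits, matched lazily so that a sign sitting directly
#         before the digits is left for the integer itself;
# then [sign] digits;
# then a suffix that may contain anything except digits.
FORGIVING_INT = re.compile(r"^[^0-9]*?([+-]?[0-9]+)[^0-9]*$")

def forgiving_parse(s: str) -> int:
    """Return the single embedded integer, or raise ValueError."""
    m = FORGIVING_INT.match(s)
    if m is None:
        raise ValueError("validation failed")
    return int(m.group(1))
```

Strings such as `2112.0` or `21,12` are rejected because the leftover digits can be absorbed by neither the prefix nor the suffix.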
### Program
My solution, however, is to avoid the inefficiencies of `std::regex` in favor of coding a simple "parser."
Function `validate_integer`, in the following program, implements the foregoing grammar. When `validate_integer` succeeds, it returns the integer it parsed. When it fails, it throws a `std::runtime_error` exception.
Because `validate_integer` uses `std::from_chars` to convert the integer sequence, it will not convert the test case `2112.0` from the OP. The trailing `.0` is treated as a second integer. All the other test cases work as expected.
The only tricky part is the initial loop that skips over non-numeric characters. When it encounters a sign (`+` or `-`), it has to check the following character to decide whether the sign should be interpreted as the start of a numeric sequence.
```lang-cpp
// main.cpp
#include <cctype>
#include <charconv>
#include <iomanip>
#include <iostream>
#include <stdexcept>
#include <string>
#include <string_view>
bool is_digit(unsigned const char c) {
return std::isdigit(c);
}
bool is_sign(const char c) {
return c == '+' || c == '-';
}
int validate_integer(std::string const& s)
{
enum : std::string::size_type { one = 1u };
std::string::size_type i{};
// skip over prefix
while (i < s.length())
{
if (is_digit(s[i]) || is_sign(s[i])
&& i + one < s.length()
&& is_digit(s[i + one]))
break;
++i;
}
// throw if nothing remains
if (i == s.length())
throw std::runtime_error("validation failed");
// parse integer
// due to foregoing checks, this cannot fail
auto const first{ &s[i] };
auto const last{ &s[s.length() - one] + one };
int n;
auto [end, ec] { std::from_chars(first, last, n)};
i += end - first;
// skip over suffix
while (i < s.length() && !is_digit(s[i]))
++i;
// throw if anything remains
if (i != s.length())
throw std::runtime_error("validation failed");
return n;
}
void test(std::ostream& log, bool const expect, std::string s)
{
std::streamsize w{ 46 };
try {
auto n = validate_integer(s);
log << std::setw(w) << s << " : " << n << '\n';
}
catch (std::exception const& e) {
auto const msg{ e.what() };
log << std::setw(w) << s << " : " << e.what()
<< ( expect ? "" : " (as expected)")
<< '\n';
}
}
int main()
{
auto& log{ std::cout };
log << std::left;
test(log, true, "<2112>");
test(log, true, "[(2112)]");
test(log, true, "\"2112, \"");
test(log, true, "-2112");
test(log, true, ".2112");
test(log, true, "<span style = \"numeral\">2112</span>");
log.put('\n');
test(log, false, "2112.0");
test(log, false, "");
test(log, false, "21,12");
test(log, false, "\"21\",\"12, \"");
test(log, false, "<span style = \"font - size:18.0pt\">2112</span>");
test(log, false, "21TwentyOne12");
log.put('\n');
return 0;
}
// end file: main.cpp
```
### Output
The "hole" in the output, below the entry for 2112.0, is the failed conversion of the empty string.
```lang-none
<2112> : 2112
[(2112)] : 2112
"2112, " : 2112
-2112 : -2112
.2112 : 2112
<span style = "numeral">2112</span> : 2112
2112.0 : validation failed (as expected)
: validation failed (as expected)
21,12 : validation failed (as expected)
"21","12, " : validation failed (as expected)
<span style = "font - size:18.0pt">2112</span> : validation failed (as expected)
21TwentyOne12 : validation failed (as expected)
```
|
I found my answer: you have to add a `-- <command>` flag. So in my case it would be: `pd login --shared-tmp debian --user mrdual -- display` |
Please pay attention to this simple program:
```
from time import sleep
from threading import Thread

def fun1():
    sleep(10)

thread1 = Thread(target=fun1)
thread1.start()
# sign
sleep(100)
print("hello")
```
How can I stop execution of the code below the `# sign` comment when `thread1` finishes?
Thank you for your help. |
`next()` is low-level and `StopIteration` is low-level too. Those are part of the iteration protocol. The other side must obey the rules too, a `StopIteration` must be caught (or it becomes a `RuntimeError` - PEP479).
The `StopIteration` exception is always raised at the end of the generator and if you don't see an error, the caller must have caught the exception directly (`try-except`) or indirectly, e.g.:
```
gen = sample()

# the for statement speaks the iteration protocol
# behind the scenes
for v in gen:
    print(f"got {v}")
```
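Here is a minimal, self-contained sketch of both sides of the protocol (the generator body below is invented for illustration; the question's `sample` may differ):

```python
def sample():
    # a tiny generator, just for illustration
    yield 1
    yield 2

gen = sample()
print(next(gen))  # prints 1
print(next(gen))  # prints 2

# The generator is now exhausted: the next call raises StopIteration,
# which a low-level caller must catch itself.
try:
    next(gen)
except StopIteration:
    print("done")
```

The `for` loop does exactly this catch for you, which is why iterating normally never surfaces the exception.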
|
First, you have to ensure you have entered the username and password correctly.
You can check your username with the help of [MySQL Shell][1] using the command below:
```
Select user();
```
You can check your port number using this command in MySQL Shell:
```
Select @@port;
```
Now you can establish the connection without any SSL warning, as shown below:
```
Connection connection = DriverManager.getConnection("jdbc:mysql://localhost:3306/YourDatabaseName?autoReconnect=true&useSSL=false", "root", "pass");
```
[1]: https://dev.mysql.com/doc/mysql-shell/8.3/en/ |
Using the `kotlin-kapt` plugin fixed my issue:
```kotlin
plugins {
    ...
    id("com.google.dagger.hilt.android")
    id("kotlin-kapt")
}
``` |
I have a .NET Framework 4.8 class library project (`ChildProject1`) in `Solution1`.
I need to use this class library in my .NET Core app (.NET 6.0 or .NET 8.0) in `Solution2`.
The requirement is such that I need to reference it as .NET Standard 2.0.
Note: I couldn't convert/upgrade to NS2.0 directly since `Solution1` is a WPF application.
So I created a new project (a class library targeting .NET Standard 2.0) in `Solution1` which has a project reference to `ChildProject1`. I named this new project `ChildProjectWrapper`, built it, and copied the DLL to the .NET Core app (I created a new folder in my project in `Solution2` and copied only `ChildProjectWrapper.dll`).
Now I added a DLL reference in `Solution2` pointing to this folder and tried to reference the methods in `ChildProject1`. I could reference the `ChildProjectWrapper` namespace, but couldn't reference the methods or namespaces in `ChildProject1`.
What am I missing?
Thanks,
PRI |
I'm currently working on a UI Selenium project in which I want to get string data from the UI and parse it into a variable in my .feature scenario file.
I will use this scenario to explain, in this case, I only have the customerID:
**Given** I search for customer A <customerID>
**When** I get the customer info: customer name <customer name> and last orderID <lastOrderID> in customer general info page
**Then** I compare it with the actual last orderID in the order list to see if the customer name <customername> and the last orderID <lastOrderID> are matching.
Example:
|customerID|customername|lastOrderID|
|3434423432| ??? | ?? |
I would like to get String data of the <customername> and <lastOrderID> from the UI and parse these values into the cucumber variables. What should I do here?
Btw, this is the first time I've posted a question on Stack Overflow, so I apologize if my info is explained vaguely. I'd much appreciate any help!
I have looked around on the Internet for help, but I'm kinda stuck here. |
That happens because when you use `builtInTripleCamera` as the device type, your app will use the ultra-wide camera as the active primary device. But the iPhone 12 Pro doesn't support macro photography, so this device doesn't work well.
You should set `device.videoZoomFactor = device.virtualDeviceSwitchOverVideoZoomFactors.first` so that the wide-angle camera can be the active primary device, and let the iPhone choose the camera automatically.
```
guard let virtualDeviceSwitchOverVideoZoomFactor = input.device.virtualDeviceSwitchOverVideoZoomFactors.first as? CGFloat else {
    return
}

try? input.device.lockForConfiguration()
input.device.videoZoomFactor = virtualDeviceSwitchOverVideoZoomFactor
input.device.unlockForConfiguration()
```
|
In very simple terms.
1. **A Virtual Machine (VM)** virtualizes both the **Kernel Layer** and the **Application Layer**, effectively simulating a full hardware stack. This means the VM includes not only the application and its dependencies but also the entire operating system on top of a virtualized hardware layer. This setup is more isolated and can run different operating systems on the same host machine but tends to be heavier in terms of resource usage.
2. **Docker (or containers, more broadly)** virtualizes only the **Application Layer**. Containers share the same kernel of the host's operating system but package the application and its dependencies into a single self-contained unit. This approach is more lightweight and efficient than VMs because it doesn't need to duplicate the OS for each application, leading to better resource utilization and faster start times.
|
That happened because the controllers are defined inside the `build` method, so they get reinstantiated every time the widget rebuilds.
To fix it, move them out of the `build` method.
```dart
class Testhome extends StatelessWidget {
const Testhome({super.key});
final CarouselController homepagecontroller = CarouselController();
final SliderController slidercontrollerinstance = SliderController();
@override
Widget build(BuildContext context) {
// ...
}
```
This is as shown in the [example from the package's readme](https://pub.dev/packages/carousel_slider#carousel-controller). |
null |
null |
null |
null |
I have a React app; it is deployed, available, and was registered with Google long ago. Google sees the sitemap, and the pages in the sitemap are indexed, but all Google sees is the page before it loads. The URL inspection tool shows the same blank page.
```
...
<body>
<div id="root"></div>
</body>
```
So what am I missing? Shouldn't today's Google crawler be able to index a React app? |
Google doesn't index react pages |
|reactjs|google-analytics|google-search-console| |
On my local KinD cluster, I deployed the `kube-prometheus-stack` with the default values file. Prometheus is configured inside my `prometheus` namespace.
In another namespace `redis`, I installed [`redis-ha`](https://github.com/DandyDeveloper/charts/tree/master/charts/redis-ha) using the following values file:
```yaml
image:
repository: redis/redis-stack-server
tag: 7.2.0-v6
pullPolicy: IfNotPresent
replicas: 1
redis:
config:
protected-mode: "no"
min-replicas-to-write: 0
loadmodule: /opt/redis-stack/lib/redisbloom.so
disableCommands:
- FLUSHALL
exporter:
enabled: true
image: oliver006/redis_exporter
tag: v1.57.0
pullPolicy: IfNotPresent
# prometheus port & scrape path
port: 9121
portName: exporter-port
scrapePath: /metrics
# Address/Host for Redis instance. Default: localhost
# Exists to circumvent issues with IPv6 dns resolution that occurs on certain environments
##
address: localhost
## Set this to true if you want to connect to redis tls port
# sslEnabled: true
# cpu/memory resource limits/requests
resources: {}
# Additional args for redis exporter
extraArgs: {}
serviceMonitor:
# When set true then use a ServiceMonitor to configure scraping
enabled: true
# Set the namespace the ServiceMonitor should be deployed
namespace: "prometheus"
# Set how frequently Prometheus should scrape
interval: 15s
# Set path to redis-exporter telemtery-path
# telemetryPath: /metrics
# Set labels for the ServiceMonitor, use this to define your scrape label for Prometheus Operator
labels:
app: redis-ha
# Set timeout for scrape
# timeout: 10s
# Set additional properties for the ServiceMonitor endpoints such as relabeling, scrapeTimeout, tlsConfig, and more.
endpointAdditionalProperties: {}
# prometheus exporter SCANS redis db which can take some time
# allow different probe settings to not let container crashloop
livenessProbe:
initialDelaySeconds: 15
timeoutSeconds: 3
periodSeconds: 15
readinessProbe:
initialDelaySeconds: 15
timeoutSeconds: 3
periodSeconds: 15
successThreshold: 2
```
Then I created a `ServiceMonitor` manifest and applied it:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: redis-exporter
namespace: redis
spec:
endpoints:
- interval: 15s
port: exporter-port
selector:
matchLabels:
app: redis-ha
```
In Prometheus, I can see redis related metrics thanks to auto-complete when I type `redis`, but when I go to "Targets" or "Service Discovery" I don't see my redis exporter.
I checked the Prometheus exporter logs but didn't find any errors. The redis-exporter is up and running, the labels seem to match.
I don't understand why I would see the metrics but not the target. |
Reference Every Nth Cell
-
[![enter image description here][1]][1]
**Usage**
<!-- language: lang-vb -->
Sub Test()
Dim ws As Worksheet: Set ws = ActiveSheet ' improve!
Dim rg As Range:
Set rg = ws.Range("P2", ws.Cells(ws.Rows.Count, "P").End(xlUp))
Dim nameBank As Range: Set nameBank = RefNthCellsInColumn(rg, 3)
If nameBank Is Nothing Then Exit Sub
nameBank.Copy ws.Range("Q2")
MsgBox nameBank.Cells.Count & " cells in range """ _
& nameBank.Address(0, 0) & """.", vbInformation
End Sub
**The Function**
<!-- language: lang-vb -->
Function RefNthCellsInColumn( _
ByVal singleColumnRange As Range, _
ByVal Nth As Long, _
Optional ByVal First As Long = 1) _
As Range
Dim rg As Range, r As Long
For r = First To singleColumnRange.Cells.Count Step Nth
If rg Is Nothing Then
Set rg = singleColumnRange.Cells(r)
Else
Set rg = Union(rg, singleColumnRange.Cells(r))
End If
Next r
Set RefNthCellsInColumn = rg
End Function
[1]: https://i.stack.imgur.com/kq33d.jpg |
Without more information I'll have to guess: maybe during deployment your T instance ran out of CPU credits and stopped responding. Try using an "M" instance (you can save money by running on m6g, which is ARM64) and see if it improves your deployment experience. Part of the slow deployment could also be due to the low memory/CPU of your EC2 instance. Check CloudWatch metrics and make sure you did not run out of credits and that the machine's CPU is not at peak usage most of the time.
Three alternatives that you can consider:
1. Use [AWS AppRunner][1]: you pass the code and define your environment, but it is more expensive.
2. If you have more than one website or plan on running multiple instances, consider using containers (ECS/EKS) on a bigger EC2 instance; you would be able to run multiple websites isolated from one another on the same infrastructure (EC2 + EBS).
3. If you're willing to make code changes, it might be simpler to run on AWS Lambda with API Gateway; for small workloads it could be more cost-efficient.
Other than that, if you only plan to run a small instance, the alternative is a self-managed EC2 instance, but then you will need to manage the deployments as well.
[1]: https://aws.amazon.com/apprunner/ |
How do I get the document that is being deleted in query middleware?
`cartSchema.pre(/delete/i, async function (next) {` |
I don't know if this will work, but according to MDN there is a property on the [ErrorEvent][1], which you receive here:
```js
addEventListener("error",function(err){
console.log(err) //Here it is your error, ErrorEvent
})
```
Now, since you know the name of the JavaScript file, you can filter the errors by file name:
```js
err.filename
```
And if you are looking for errors from Chrome extensions, it is probably close to impossible, as Chrome extensions are part of the UI and therefore part of the `window` object.
[1]: https://developer.mozilla.org/en-US/docs/Web/API/ErrorEvent |
In my case **http**://localhost was working, but **https**://localhost gave a HTTP 503 Internal server error.
1. I am consuming an API using HTTP requests.
2. If an API response is received within 15 seconds, it works.
3. If it exceeds 15 seconds, I receive a 503 (Service Unavailable) error.
I have implemented the following solutions, but they do not work.
1. ```<httpRuntime executionTimeout="300" enable="true" maxQueryStringLength="32768" maxUrlLength="65536" maxRequestLength="4096" useFullyQualifiedRedirectUrl="false" targetFramework="4.5" /> <sessionState mode="InProc" stateNetworkTimeout="40" timeout="2400" /><sessionState mode="InProc" stateNetworkTimeout="30" />```
2. [![enter image description here][1]][1]
3. [![enter image description here][2]][2]
[1]: https://i.stack.imgur.com/2Ul6a.png
[2]: https://i.stack.imgur.com/cF8pS.png
If there is any other way to resolve this issue, please let me know. |
Can't see Redis exporter in Prometheus |
|kubernetes|prometheus|kubernetes-helm| |
I am dealing with a library database today. The structure is kind of odd, and I am having trouble pulling the data how I want it to appear.
I have this query:
SELECT
lc.catalogID, hb_g.intro AS 'Genre/Subject', gk.kidsAge AS 'Ages',
pb_g.intro AS 'Genre/Subject', pb.ageRange AS 'Ages'
FROM
library.libraryCatalog lc
INNER JOIN
library.hardbacks hb ON lc.catalogID = hb.catalogId
INNER JOIN
library.paperbacks pb ON lc.catalogID = pb.catalogId
LEFT JOIN
library.genres hb_g ON hb.genreId = hb_g.genreId
LEFT JOIN
library.genres pb_g ON pb.genreId = pb_g.genreId
LEFT JOIN
library.bookSeries bs ON hb.id = bs.logId
LEFT JOIN
library.genreKids gk ON bs.kidsId = gk.kidsId
WHERE
lc.libraryID = 87
It produces results shown below. The issue I have is that I need the `Fairy Tales` and `12+` result to appear in the same columns as the other genres.
catalogID Genre/Subject Age up to Genre/Subject Ages
--------------------------------------------------------------
2021 Mystery 8+ Fairy Tales 12+
2021 Sci-Fi/Fantasy 12+ Fairy Tales 12+
2021 Fiction 10+ Fairy Tales 12+
2021 Non-Fiction 12+ Fairy Tales 12+
2021 Biography 16+ Fairy Tales 12+
2021 Historical 10+ Fairy Tales 12+
I am hoping for something like this:
catalogID Genre/Subject Age up to
------------------------------------------
2021 Mystery 8+
2021 Sci-Fi/Fantasy 12+
2021 Fiction 10+
2021 Non-Fiction 12+
2021 Biography 16+
2021 Historical 12+
2021 Fairy Tales 12+ <---- moved here
I tried using `ISNULL` and `COALESCE` but neither of those worked.
Is something like this possible?
Thanks! |
I'm using the extension "Custom right-click menu" to add a custom site search
When I use the context menu, it adds spaces to the search term. I need it to have a +.
```
var search = crmAPI.getSelection() || prompt('Please enter a search query');
var url = 'https://www.subetalodge.us/list_all/search/%s/mode/name';
var toOpen = url.replace(/%s/g,search);
window.open(toOpen, '_blank');
```
|
null |
Is there a built-in feature in IIS so that the default (root) page shows a list of all configured web pages? |
Replace default page in IIS with app summary |
|iis| |
You can use the `post_page_move` signal, [distinguish between a move and a reorder](https://docs.wagtail.org/en/stable/reference/signals.html#distinguishing-between-a-move-and-a-reorder), then retrieve the page index from its position among the parent's children (just like [Wagtail's `set_page_position`](https://github.com/wagtail/wagtail/blob/e30c25c3b1b5064ad95fbb68922baf0c14f0e252/wagtail/admin/views/pages/ordering.py#L23)):
```py
from wagtail.signals import post_page_move
def clear_page_url_from_cache_on_move(sender, **kwargs):
if kwargs['url_path_before'] == kwargs['url_path_after']:
page = kwargs['instance']
order = list(kwargs['parent_page_after'].get_children()).index(page)
print('moved page %s - new order: %d' % (page.title, order))
post_page_move.connect(clear_page_url_from_cache_on_move)
``` |
I do not know why my bundling command:
```
eas build --platform android
```
keeps returning this error:
>CombinedError: [GraphQL] Entity not authorized: AccountEntity[bf4d40b0-5121-4c66-96ed-6be99586c703] (viewer = RegularUserViewerContext[2200e4b1-4483-428d-87f3-0643e995c424], action = READ,
ruleIndex = -1)
|
Step 1: I made changes in standalone.xml
```xml
<interfaces>
    <interface name="management">
        <inet-address value="${jboss.bind.address.management:[::1}]"/>
    </interface>
    <interface name="public">
        <inet-address value="${jboss.bind.address.management:[::1]}"/>
    </interface>
</interfaces>
<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset6:0}">
    <socket-binding name="ajp" port="${jboss.ajp.port:8009}"/>
    <socket-binding name="http" port="${jboss.http.port:8080}"/>
    <socket-binding name="https" port="${jboss.https.port:8443}"/>
    <socket-binding name="management-native" interface="management" port="${jboss.management.native.port:8080}"/>
    <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/>
    <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9993}"/>
    <socket-binding name="txn-recovery-environment" port="4712"/>
    <socket-binding name="txn-status-manager" port="4713"/>
    <outbound-socket-binding name="mail-smtp">
        <remote-destination host="${jboss.mail.server.host:localhost}" port="${jboss.mail.server.port:25}"/>
    </outbound-socket-binding>
</socket-binding-group>
```
Step 2: Made changes in standalone.conf
```
if [ "x$JAVA_OPTS" = "x" ]; then
    JAVA_OPTS="-Xms1303m -Xmx1303m -XX:MetaspaceSize=96M -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=false"
    JAVA_OPTS="$JAVA_OPTS -Djboss.modules.system.pkgs=$JBOSS_MODULES_SYSTEM_PKGS -Djava.awt.headless=true"
else
    echo "JAVA_OPTS already set in environment; overriding default settings with values: $JAVA_OPTS"
fi
```
Step 3: Made changes in standalone.conf.bat
```
rem # Prefer IPv6
set "JAVA_OPTS=%JAVA_OPTS% -Djava.net.preferIPv4Stack=false -Djava.net.preferIPv6Stack=true"
```
When I start JBoss with the admin console listening on http://[::1]:9990, JBoss runs fine.
When I try to run JBoss with the admin console listening on http://[::1]:8080, it shows "unable to redirect page".
With IPv4, port 8080 works and redirects to port 9990, so can anyone help me out with IPv6?
What I expect is that the admin console listens on http://[::1]:8080, i.e. on an IPv6 machine JBoss should run with an IPv6 address. Port 8080 is shown as listening, but JBoss is not able to serve the console there. I need the admin console at http://[::1]:8080 to work for JBoss on an IPv6 machine.
The error I am getting (version jboss-eap-7.4) at http://[::1]:8080/noredirect.html:
> Unable to redirect.
> An automatic redirect to the Administration Console is not currently available, this is most likely due to the administration console being exposed over a network interface different to the one you are connected to.
> To access the Administration console you should contact the administrator responsible for this JBoss EAP installation and ask them to provide you with the correct address. |
MVC application returns 503 (Service Unavailable) error, deployed in IIS with HTTPS protocol |
|c#|asp.net-mvc|ssl|iis|iis-8.5| |
Heroku does not support legacy and outdated Ruby versions. See the list of Ruby versions that are supported on Heroku: https://devcenter.heroku.com/articles/ruby-support#ruby-versions
You will need to upgrade to at least Ruby 3.0 to run your application on Heroku. I suggest upgrading to the latest 3.3 version. |
I have written some integration tests for my OpenLiberty with MicroProfile application. In order for the tests to work I must first execute the `libertyDev` command, so I thought it was a good idea to use Testcontainers: I create an OpenLiberty server container and load it with the proper configuration files. To that end my tests are as follows.
```groovy
@Testcontainers
class TrackingInventoryIntegrationSpec extends Specification {
private static final String LIBERTY_CONFIG_BASE_PATH = 'src/main/liberty/config'
private static final String CONTAINER_CONFIG_BASE_PATH = 'config'
@Shared
GenericContainer openLiberty = new GenericContainer(DockerImageName.parse("icr.io/appcafe/open-liberty:full-java17-openj9-ubi"))
.withExposedPorts(9080, 9443)
.withCopyFileToContainer(MountableFile.forHostPath("build/libs/tracking-inventory-0.0.1-SNAPSHOT.war"), "${CONTAINER_CONFIG_BASE_PATH}/apps/tracking-inventory-0.0.1-SNAPSHOT.war")
.withCopyFileToContainer(MountableFile.forHostPath("${LIBERTY_CONFIG_BASE_PATH}/server.xml"), "${CONTAINER_CONFIG_BASE_PATH}/server.xml")
.withCopyFileToContainer(MountableFile.forHostPath("${LIBERTY_CONFIG_BASE_PATH}/bootstrap.properties"), "${CONTAINER_CONFIG_BASE_PATH}/bootstrap.properties")
.withCopyFileToContainer(MountableFile.forHostPath("${LIBERTY_CONFIG_BASE_PATH}/GeneratedSSLInclude.xml"), "${CONTAINER_CONFIG_BASE_PATH}/GeneratedSSLInclude.xml")
.withCopyFileToContainer(MountableFile.forHostPath("${LIBERTY_CONFIG_BASE_PATH}/users.xml"), "${CONTAINER_CONFIG_BASE_PATH}/users.xml")
.waitingFor(Wait.forLogMessage(".*CWWKZ0001I: Application .* started in .* seconds.*", 1)).withStartupTimeout(Duration.ofMinutes(2))
.withLogConsumer(new Slf4jLogConsumer(LoggerFactory.getLogger("openLiberty")))
@Shared
Jsonb jsonb
def requestBody
@Shared
HttpClient client
@Shared
String appUrl
def setup() {
openLiberty.start()
appUrl = "http://${openLiberty.getHost()}:${openLiberty.getMappedPort(9080)}/inventory"
client = HttpClient.newHttpClient()
jsonb = JsonbBuilder.create()
requestBody = TestItemProvider.generateRandomItem()
}
def 'Successful create item and persist it into JSON, HTML and CSV'() {
when: 'the call is succeeded'
def response = doPost(jsonb.toJson(requestBody))
then: 'empty response body means successful request'
response.status == Response.Status.CREATED.statusCode
}
def 'Failed to create the item'() {
given: 'an item with null name'
def requestBody = TestItemProvider.createItemWithNullName()
when: 'calling the application with this item'
def response = doPost(jsonb.toJson(requestBody))
then: 'a response with error message is returned'
response.status != Response.Status.CREATED.statusCode
response.readEntity(ErrorResponse.class).errors.size() > 0
}
def 'Successfully get an item'() {
given: 'an item is already created'
doPost(jsonb.toJson(requestBody))
when: 'the call is made to the get api'
def response = doGet()
then: 'the response contains an OK status'
response.status == Response.Status.OK.statusCode
and: 'the response body contains the correct information'
def returnedItemMap = response.readEntity(List.class)[0]
returnedItemMap.name == requestBody.name
returnedItemMap.serialNumber == requestBody.serialNumber
returnedItemMap.value == requestBody.value
}
def 'Successfully delete an item'() {
given: 'an item is already created'
doPost(jsonb.toJson(requestBody))
when: 'the call is made to the delete api'
def response = doDelete(requestBody.serialNumber)
then: 'the response contains an OK status'
response.status == Response.Status.NO_CONTENT.statusCode
}
def doPost(Object requestPayload) {
Client client = ClientBuilder.newClient()
String targetUrl = "$appUrl/tracking-inventory/inventory"
return client.target(targetUrl)
.request(MediaType.APPLICATION_JSON)
.post(Entity.json(requestPayload))
}
def doGet() {
Client client = ClientBuilder.newClient()
String targetUrl = "$appUrl/tracking-inventory/inventory"
return client.target(targetUrl)
.request(MediaType.APPLICATION_JSON)
.get()
}
}
```
Here I am mounting the WAR and my server.xml file as well as various configuration files. The WAR file gets placed into the build/libs folder, and I have created a task in my build.gradle file for this procedure, `tests.dependsOn war`, to generate it properly before test execution.
My problem is that upon running the first test, I get the following error:
```
org.testcontainers.containers.ContainerLaunchException: Container startup failed for image open-liberty:23.0.0.12-full-java17-openj9
at app//org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:359)
at app//org.testcontainers.containers.GenericContainer.start(GenericContainer.java:330)
at org.testcontainers.spock.TestcontainersMethodInterceptor.startContainers_closure3(TestcontainersMethodInterceptor.groovy:83)
at app//groovy.lang.Closure.call(Closure.java:433)
at app//groovy.lang.Closure.call(Closure.java:422)
at app//org.testcontainers.spock.TestcontainersMethodInterceptor.startContainers(TestcontainersMethodInterceptor.groovy:80)
at app//org.testcontainers.spock.TestcontainersMethodInterceptor.interceptSetupSpecMethod(TestcontainersMethodInterceptor.groovy:25)
at app//org.spockframework.runtime.extension.AbstractMethodInterceptor.intercept(AbstractMethodInterceptor.java:36)
at app//org.spockframework.runtime.extension.MethodInvocation.proceed(MethodInvocation.java:101)
at app//org.spockframework.runtime.model.MethodInfo.invoke(MethodInfo.java:156)
at java.base@17.0.9/java.util.ArrayList.forEach(ArrayList.java:1511)
Caused by: org.rnorth.ducttape.RetryCountExceededException: Retry limit hit with exception
at app//org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:88)
at app//org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:344)
... 10 more
Caused by: org.testcontainers.containers.ContainerLaunchException: Could not create/start container
at app//org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:563)
at app//org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:354)
at app//org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
... 11 more
Caused by: org.testcontainers.containers.ContainerLaunchException: Timed out waiting for URL to be accessible (http://localhost:55240/ should return HTTP [200])
at app//org.testcontainers.containers.wait.strategy.HttpWaitStrategy.waitUntilReady(HttpWaitStrategy.java:320)
at app//org.testcontainers.containers.wait.strategy.AbstractWaitStrategy.waitUntilReady(AbstractWaitStrategy.java:52)
at app//org.testcontainers.containers.GenericContainer.waitUntilContainerStarted(GenericContainer.java:909)
at app//org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:500)
... 13 more
Caused by: org.rnorth.ducttape.TimeoutException: Timeout waiting for result with exception
at app//org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:54)
at app//org.testcontainers.containers.wait.strategy.HttpWaitStrategy.waitUntilReady(HttpWaitStrategy.java:252)
... 16 more
Caused by: java.lang.RuntimeException: java.net.SocketException: Unexpected end of file from server
at org.testcontainers.containers.wait.strategy.HttpWaitStrategy.lambda$null$6(HttpWaitStrategy.java:312)
at org.rnorth.ducttape.ratelimits.RateLimiter.doWhenReady(RateLimiter.java:27)
at org.testcontainers.containers.wait.strategy.HttpWaitStrategy.lambda$waitUntilReady$7(HttpWaitStrategy.java:257)
at org.rnorth.ducttape.unreliables.Unreliables.lambda$retryUntilSuccess$0(Unreliables.java:43)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:857)
Caused by: java.net.SocketException: Unexpected end of file from server
at java.base/sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:954)
at java.base/sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:761)
at java.base/sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:951)
at java.base/sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:761)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1688)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1589)
at java.base/java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:529)
at org.testcontainers.containers.wait.strategy.HttpWaitStrategy.lambda$null$6(HttpWaitStrategy.java:276)
... 7 more
```
The container logs are the following:
```
WARNING: Unknown module: jdk.management.agent specified to --add-exports
WARNING: Unknown module: jdk.attach specified to --add-exports
Launching defaultServer (Open Liberty 23.0.0.12/wlp-1.0.84.cl231220231127-1901) on Eclipse OpenJ9 VM, version 17.0.10+7 (en_US)
[AUDIT ] CWWKE0001I: The server defaultServer has been launched.
[AUDIT ] CWWKG0093A: Processing configuration drop-ins resource: /opt/ol/wlp/usr/servers/defaultServer/configDropins/defaults/keystore.xml
[AUDIT ] CWWKG0093A: Processing configuration drop-ins resource: /opt/ol/wlp/usr/servers/defaultServer/configDropins/defaults/open-default-port.xml
[AUDIT ] CWWKG0028A: Processing included configuration resource: /opt/ol/wlp/usr/servers/defaultServer/users.xml
[AUDIT ] CWWKG0028A: Processing included configuration resource: /opt/ol/wlp/usr/servers/defaultServer/GeneratedSSLInclude.xml
[AUDIT ] CWWKG0102I: Found conflicting settings for defaultKeyStore instance of keyStore configuration.
Property password has conflicting values:
Secure value is set in file:/opt/ol/wlp/usr/servers/defaultServer/configDropins/defaults/keystore.xml.
Secure value is set in file:/opt/ol/wlp/usr/servers/defaultServer/GeneratedSSLInclude.xml.
Property password will be set to the value defined in file:/opt/ol/wlp/usr/servers/defaultServer/GeneratedSSLInclude.xml.
[AUDIT ] CWWKG0102I: Found conflicting settings for defaultHttpEndpoint instance of httpEndpoint configuration.
Property host has conflicting values:
Value * is set in file:/opt/ol/wlp/usr/servers/defaultServer/configDropins/defaults/open-default-port.xml.
Value localhost is set in file:/opt/ol/wlp/usr/servers/defaultServer/server.xml.
Property host will be set to localhost.
[AUDIT ] CWWKZ0058I: Monitoring dropins for applications.
[AUDIT ] CWWKS4104A: LTPA keys created in 0.206 seconds. LTPA key file: /opt/ol/wlp/output/defaultServer/resources/security/ltpa.keys
[AUDIT ] CWWKT0016I: Web application available (default_host): http://localhost:9080/jwt/
[AUDIT ] CWWKT0016I: Web application available (default_host): http://localhost:9080/openapi/platform/
[AUDIT ] CWWKT0016I: Web application available (default_host): http://localhost:9080/openapi/
[AUDIT ] CWWKT0016I: Web application available (default_host): http://localhost:9080/IBMJMXConnectorREST/
[AUDIT ] CWWKT0016I: Web application available (default_host): http://localhost:9080/metrics/
[AUDIT ] CWWKT0016I: Web application available (default_host): http://localhost:9080/health/
[AUDIT ] CWWKT0016I: Web application available (default_host): http://localhost:9080/openapi/ui/
[AUDIT ] CWWKT0016I: Web application available (default_host): http://localhost:9080/ibm/api/
[AUDIT ] CWPKI0803A: SSL certificate created in 1.513 seconds. SSL key file: /opt/ol/wlp/output/defaultServer/resources/security/key.p12
[AUDIT ] CWWKI0001I: The CORBA name server is now available at corbaloc:iiop:localhost:2809/NameService.
[AUDIT ] CWWKT0016I: Web application available (default_host): http://localhost:9080/
[AUDIT ] CWWKZ0001I: Application tracking-inventory-0.0.1-SNAPSHOT started in 2.275 seconds.
[AUDIT ] CWWKF1037I: The server added the [appAuthentication-2.0, appAuthorization-2.0, appClientSupport-2.0, appSecurity-4.0, batch-2.0, beanValidation-3.0, cdi-3.0, concurrent-2.0, connectors-2.0, connectorsInboundSecurity-2.0, enterpriseBeans-4.0, enterpriseBeansHome-4.0, enterpriseBeansLite-4.0, enterpriseBeansPersistentTimer-4.0, enterpriseBeansRemote-4.0, expressionLanguage-4.0, faces-3.0, jakartaee-9.1, json-1.0, jsonb-2.0, jsonp-2.0, jwt-1.0, localConnector-1.0, mail-2.0, managedBeans-2.0, mdb-4.0, messaging-3.0, messagingClient-3.0, messagingSecurity-3.0, messagingServer-3.0, microProfile-5.0, monitor-1.0, mpConfig-3.0, mpFaultTolerance-4.0, mpHealth-4.0, mpJwt-2.0, mpMetrics-4.0, mpOpenAPI-3.0, mpOpenTracing-3.0, mpRestClient-3.0, pages-3.0, persistence-3.0, persistenceContainer-3.0, restConnector-2.0, restfulWS-3.0, restfulWSClient-3.0, servlet-5.0, transportSecurity-1.0, webProfile-9.1, websocket-2.0, xmlBinding-3.0, xmlWS-3.0] features to the existing feature set.
[AUDIT ] CWWKF0012I: The server installed the following features: [appAuthentication-2.0, appAuthorization-2.0, appClientSupport-2.0, appSecurity-4.0, batch-2.0, beanValidation-3.0, cdi-3.0, concurrent-2.0, connectors-2.0, connectorsInboundSecurity-2.0, distributedMap-1.0, enterpriseBeans-4.0, enterpriseBeansHome-4.0, enterpriseBeansLite-4.0, enterpriseBeansPersistentTimer-4.0, enterpriseBeansRemote-4.0, expressionLanguage-4.0, faces-3.0, jakartaee-9.1, jdbc-4.2, jndi-1.0, json-1.0, jsonb-2.0, jsonp-2.0, jwt-1.0, localConnector-1.0, mail-2.0, managedBeans-2.0, mdb-4.0, messaging-3.0, messagingClient-3.0, messagingSecurity-3.0, messagingServer-3.0, microProfile-5.0, monitor-1.0, mpConfig-3.0, mpFaultTolerance-4.0, mpHealth-4.0, mpJwt-2.0, mpMetrics-4.0, mpOpenAPI-3.0, mpOpenTracing-3.0, mpRestClient-3.0, pages-3.0, persistence-3.0, persistenceContainer-3.0, restConnector-2.0, restfulWS-3.0, restfulWSClient-3.0, servlet-5.0, ssl-1.0, transportSecurity-1.0, webProfile-9.1, websocket-2.0, xmlBinding-3.0, xmlWS-3.0].
[AUDIT ] CWWKF0013I: The server removed the following features: [appClientSupport-1.0, appSecurity-2.0, appSecurity-3.0, batch-1.0, beanValidation-2.0, cdi-2.0, concurrent-1.0, ejb-3.2, ejbHome-3.2, ejbLite-3.2, ejbPersistentTimer-3.2, ejbRemote-3.2, el-3.0, j2eeManagement-1.1, jacc-1.5, jaspic-1.1, javaMail-1.6, javaee-8.0, jaxb-2.2, jaxrs-2.1, jaxrsClient-2.1, jaxws-2.2, jca-1.7, jcaInboundSecurity-1.0, jms-2.0, jpa-2.2, jpaContainer-2.2, jsf-2.3, jsonb-1.0, jsonp-1.1, jsp-2.3, managedBeans-1.0, mdb-3.2, servlet-4.0, wasJmsClient-2.0, wasJmsSecurity-1.0, wasJmsServer-1.0, webProfile-8.0, websocket-1.1].
[AUDIT ] CWWKF0011I: The defaultServer server is ready to run a smarter planet. The defaultServer server started in 6.601 seconds.
```
I tried to run the image outside my tests using the `docker run --rm -p 9080:9080 open-liberty:23.0.0.12-full-java17-openj9` command and then checked `http://localhost:9080`, which worked as expected. Why am I losing the connection to the server? What am I missing?
ContainerLaunchException: Cannot start Testcontainer OpenLiberty Server in Integration Tests using Spock |
I always use this code to "re-initialize" `window.find()` (documented signature: `window.find(aString, aCaseSensitive, aBackwards, ...)`):

    while (window.find(' ', false, true) !== false) {}
I'm getting an inexplicable error in my Eclipse IDE:
The type java.util.Collection cannot be resolved. It is indirectly referenced from required type picocli.CommandLine
It is not allowing me to iterate a List object like the following:
```
List<ServerRecord> data = execute(hostnames);
for (ServerRecord record : data) {
hostInfoList.add(new String[] { record.hostname(), record.ip(), record.mac(), record.os(),
record.release(), record.version(), record.cpu(), record.memory(), record.name(),
record.vmware(), record.bios() });
}
```
"data" is underlined in red and it says: Can only iterate over an array or an instance of java.lang.Iterable
Tried cleaning the project, rebuilding, checking the Java version (21), and updating the Maven project; pom.xml specifies target and source as 21. I also deleted the project and recreated it. Same error.
Here is my pom.xml:
```
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>gov.uscourts.bnc.app</groupId>
<artifactId>server-query</artifactId>
<version>1.0.0</version>
<name>server-query</name>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<maven.compiler.target>21</maven.compiler.target>
<maven.compiler.source>21</maven.compiler.source>
</properties>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<version>3.2.0</version>
<configuration>
<archive>
<manifest>
<addDefaultImplementationEntries>true</addDefaultImplementationEntries>
<addDefaultSpecificationEntries>true</addDefaultSpecificationEntries>
<addClasspath>true</addClasspath>
<classpathPrefix>libs/</classpathPrefix>
<mainClass>
gov.uscourts.bnc.app.CollectServerData
</mainClass>
</manifest>
</archive>
</configuration>
</plugin>
</plugins>
</build>
<dependencies>
<dependency>
<groupId>gov.uscourts.bnc</groupId>
<artifactId>bnc</artifactId>
<version>1.0.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/junit/junit -->
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>4.13.2</version>
<scope>test</scope>
</dependency>
<!-- https://mvnrepository.com/artifact/info.picocli/picocli -->
<dependency>
<groupId>info.picocli</groupId>
<artifactId>picocli</artifactId>
<version>4.7.5</version>
</dependency>
</dependencies>
</project>
```
|
JBoss EAP 7.4 configuration via IPv6
|configuration|jboss|ipv6| |
Some skills are not supported in the latest version of `@azure/search-documents`.
Use the command below to install the beta version:
`npm i @azure/search-documents@12.1.0-beta.1`
Then, execute your code.

And in the skillset:

You can read more about this in the [documentation](https://learn.microsoft.com/en-us/javascript/api/overview/azure/search-documents-readme?view=azure-node-preview).

Above, you can see the skills added in the preview version, but they are not present in the latest version. |
Our application is a web application which is registered in Entra Id. The applications uses OIDC for authentication. We have added Databricks scope (user_impersonation) in API permissions in Entra. The Databricks scope (`<databrics_app_id>/.default`) is added as scope to OpenIdConnect client.
When the user logs in, the access token returned by Entra doesn't contain the `user_impersonation` scope. When the application calls Databricks with that access token, we get an HTTP 401 Unauthorized error.
I am using OidcClient to login using OIDC:
```
var options = new OidcClientOptions()
{
Scope = "<databricks_id>/.default openid",
};
var _oidcClient = new OidcClient(options);
```
I also tried
```
.AddOpenIdConnect("oidc", options =>
{
options.Scope.Add("<databricks_id>/.default");
}
``` |
Call Databricks API from an ASP.NET Core web application |
When you use `builtInTripleCamera` as the device type, the app uses the ultra-wide camera as the active primary device. But the iPhone 12 Pro doesn't support macro photography, so that camera doesn't work well here.

You should set `device.videoZoomFactor = device.virtualDeviceSwitchOverVideoZoomFactors.first` so that the wide-angle camera can be the active primary device, letting the iPhone choose the camera automatically:

    guard let virtualDeviceSwitchOverVideoZoomFactor = input.device.virtualDeviceSwitchOverVideoZoomFactors.first as? CGFloat else {
        return
    }
    try? input.device.lockForConfiguration()
    input.device.videoZoomFactor = virtualDeviceSwitchOverVideoZoomFactor
    input.device.unlockForConfiguration()
When using Playwright (version 1.42.1) to test services hosted on the local machine using localhost or 127.0.0.1 with Chrome 108 and Chromium 100 browsers on Windows 7 Professional, the testing process is fast. However, when testing services hosted on the local machine's IP address (e.g., 10.132.xxx.xxx) or other IP addresses within the same intranet, the testing process becomes significantly slower.
Environment:

- Playwright version: 1.42.1
- Operating system: Windows 7 Professional
- Browsers: Chrome 108, Chromium 100
- Network condition: intranet environment
Playwright should be able to test services within the same intranet environment, including those hosted on the local machine's IP address, with similar performance as testing services using localhost or 127.0.0.1. |
I'm creating a WPF C# .NET 8 app using the MTConnect.NET-SHDR 6.0.9-beta package. I'm publishing some data items on the local network, and they are correctly displayed only once the sequence number exceeds a certain amount (otherwise the value is UNAVAILABLE). I think the problem is with the .NET version not being supported, but I'm not sure, because with a previous version of TrakHound every data item value is displayed correctly (I tried locally).
```
//code for updating adapter
myAdapter = new ShdrAdapter("Dreno");
myAdapter.Start();
myAdapter.AddDataItem("StopRequest", UtilityTools.StopRequest);
//[...]
myAdapter.AddDataItem("CalibrationCompleted", UtilityTools.CalibrationCompleted);
myAdapter.SendChanged();
```
[Output](https://i.stack.imgur.com/IbMBv.png)
As I said, I tried using the TrakHound agent 5.4.3.0, and the data was displayed in a table format with no UNAVAILABLE items.
Trakhound agent displaying UNAVAILABLE items |
|c#|.net|mtconnect| |
|c| |
If none of the other answers worked for you and you are using "Git Bash" (mintty) from a local Git installation:

I ended up creating an alias from mvn.cmd to mvn, and this solved it for me:

    alias mvn=mvn.cmd

I put that in my `.bashrc` so it executes on console startup.
The code below works well on Chrome:

    var options = { mimeType: 'video/webm; codecs=av1,opus', audioBitsPerSecond: 8000, videoBitsPerSecond: 32000 };

Try a higher number for `videoBitsPerSecond` (e.g., 512000) and you will see that the video quality is better.
Q: What is my goal?
A: Need a way to search best match columns across many rows.
Q: Can you explain with more details?
A: Suppose I have the following data:
| ID | Key A | Key B | Key C | Val A | Val B | Val C |
| -- | ----- | ----- | ----- | ----- | ----- | ----- |
|1| abc | b* | c* | va0 | NULL | vc0 |
|2| a* | bcd | c* | NULL | vb1 | vc1 |
In my case, I have a score function that calculates the best-match values. For the given condition keys `Key A = abc`, `Key B = bcd` and `Key C = cde`, I need to query the best-match values:

`Val A = va0`, `Val B = vb1` and `Val C = vc1` (Val C is a little special, as explained below).

To summarize, suppose our score function is:
- When same: `2*i` (`i` is the position index starts from `1`)
- When pattern match: `1+i`
- When value is NULL, not match
With `Key A = abc`, `Key B = bcd` and `Key C = cde`, for row 1 (`ID=1`):

`Key A` is an exact match, `Key B` is a pattern match, and `Key C` is a pattern match too (a match at a larger position index carries more weight), so the weight is `weight1 = 2*1 + (1+2) + (1+3) = 9`.

With the same algorithm, for row 2 (`ID=2`), the weight is `weight2 = (1+1) + 2*2 + (1+3) = 10`.

For `Val A`, row 2 is NULL, so we take `va0` from row 1; for `Val B`, row 1 is NULL, so we take `vb1` from row 2. For `Val C`, since `weight2 > weight1`, we take `vc1`.

That's why we get `Val A = va0`, `Val B = vb1` and `Val C = vc1`.

Furthermore, finding the single highest-scoring row is straightforward (for the given data and query conditions, row 2 is expected), but a column-level highest-score match is the problem.
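The scoring rule can be sketched in Python (a hypothetical helper that uses glob-style `fnmatch` for the wildcard patterns; this only illustrates the weights, it is not an Elasticsearch query). Applying the stated rule literally (exact = `2*i`, pattern match = `1+i`, NULL never matches) gives weights 9 and 10 for the two rows:

```python
from fnmatch import fnmatch

def score_row(patterns, query):
    """Score one row: an exact match at 1-based position i scores 2*i,
    a wildcard (glob) match scores 1+i, and NULL columns never match."""
    total = 0
    for i, (pattern, key) in enumerate(zip(patterns, query), start=1):
        if pattern is None:
            continue  # NULL: no contribution
        if pattern == key:
            total += 2 * i
        elif fnmatch(key, pattern):
            total += 1 + i
    return total

query = ["abc", "bcd", "cde"]
weights = {row_id: score_row(keys, query)
           for row_id, keys in {1: ["abc", "b*", "c*"],
                                2: ["a*", "bcd", "c*"]}.items()}
print(weights)  # {1: 9, 2: 10} -> row 2 wins overall
```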
---
Maybe we could search the columns one by one, but with more than 100 columns that solution falls over, which is not acceptable. BTW, any version of Elasticsearch is fine; we are using the latest, `8.12.2`, for testing. We are in the POC phase, so any solution is appreciated, and we are glad to run any tests.
1. We tried indexing entire rows and querying the row with the highest score via a custom score function; it works and the solution is straightforward.
2. We read the manual and thought the bulk query API might work, but we could have ~5k columns, so it cannot work as expected.
[Task :react-native-google-signin_google-signin:compileDebugJavaWithJavac FAILED](https://i.stack.imgur.com/2iywv.png)
```
Task :react-native-google-signin_google-signin:compileDebugJavaWithJavac FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':react-native-google-signin_google-signin:compileDebugJavaWithJavac'.
> Could not resolve all files for configuration ':react-native-google-signin_google-signin:androidJdkImage'.
   > Failed to transform core-for-system-modules.jar to match attributes {artifactType=_internal_android_jdk_image, org.gradle.libraryelements=jar, org.gradle.usage=java-runtime}.
      > Execution failed for JdkImageTransform: C:\Users\user\AppData\Local\Android\Sdk\platforms\android-34\core-for-system-modules.jar.
         > Error while executing process C:\Program Files\Java\jdk-21\bin\jlink.exe with arguments {--module-path C:\Users\user\.gradle\caches\transforms-3\4b1308f73665eeac924c132c45e496df\transformed\output\temp\jmod --add-modules java.base --output C:\Users\user\.gradle\caches\transforms-3\4b1308f73665eeac924c132c45e496df\transformed\output\jdkImage --disable-plugin system-modules}

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
> Get more help at https://help.gradle.org.

Deprecated Gradle features were used in this build, making it incompatible with Gradle 9.0.
You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.
For more on this, please refer to https://docs.gradle.org/8.3/userguide/command_line_interface.html#sec:command_line_warnings in the Gradle documentation
```
Task :react-native-google-signin_google-signin:compileDebugJavaWithJavac FAILED when I run npx expo run:android
|expo| |
If you want exact matches for the commands to be ignored:
`$export HISTIGNORE="history:exit"`
Put the above shell variable export in your `.bashrc` file so that the `HISTIGNORE` setting continues to be effective the next time you log in to bash.

If you don't want to save any lines beginning with the `history` command, such as `history -a`, then put this into your `.bashrc` file:
`$export HISTIGNORE="history*:exit"`
If you don't want any commands to be saved into the history, put:
`$export HISTIGNORE="*"`
Check your HISTIGNORE setting:
`$echo "$HISTIGNORE"`
From `man bash`:

> Each colon-separated list of patterns is anchored at the beginning of the line and must match the complete line (no implicit '*' is appended). Each pattern is tested against the line after the checks specified by `HISTCONTROL` are applied.
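The anchored, complete-line matching can be modeled with Python's glob-style `fnmatch` (a toy illustration of the semantics, not bash itself, and it ignores bash extensions such as extglob):

```python
from fnmatch import fnmatchcase

def ignored(cmd, histignore="history*:exit"):
    """True if a HISTIGNORE value like "history*:exit" would skip cmd:
    each colon-separated glob must match the complete line."""
    return any(fnmatchcase(cmd, pat) for pat in histignore.split(":"))

print(ignored("history -a"))   # True: "history*" matches the whole line
print(ignored("exit"))         # True: exact pattern match
print(ignored("git history"))  # False: patterns are anchored at the start
```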
|
I had a similar issue, but resolved it in two steps.

First, run `php artisan key:generate`.

Second, run `php artisan optimize`.

Issue resolved.
I am using sox to create a 100 ms synth; this is my command:
```
/usr/bin/sox -V -r 44100 -n -b 64 -c 1 file.wav synth 0.1 sine 200 vol -2.0dB
```
Now, when I create 3 sine wave files and combine them all with
```
/usr/bin/sox file1.wav file2.wav file3.wav final.wav
```
then I get gaps between the files, and I don't know why. Even when I open, for example, file1.wav on its own, I see a short gap at the start and at the end of the file.

How can I create a sine of exactly 100 ms without gaps at the start and end?

And my second question: is there a way to create e.g. 10 sine wave synths with one sox command? Something like sox f1 200 0.1, f2 210 0.1, f3 220 0.1, ... (first 200 Hz, then 210 Hz, then 220 Hz).

Thank you so much.
I have tried various options in sox, but each single sine file always looks like this:
[enter image description here][1]
[1]: https://i.stack.imgur.com/12EUp.jpg |
I'm trying to build a Flutter app that fetches from and posts to an API; here is the code. I can't get it to render the posts in the UI; it comes back black with a single white box. I have been debugging: the posts and fetches work fine and the console logs the JSON being retrieved, but somehow it isn't rendered in the UI.
```
import 'dart:convert';
import 'package:dawgs/Discussion.dart';
import 'package:dawgs/components/WallPost.dart';
import 'package:flutter/gestures.dart';
import 'package:http/http.dart' as http;
import 'package:flutter/material.dart';
import 'package:dawgs/components/text_field.dart';
import 'dart:developer' as developer;
void main() {
runApp(const MyApp());
}
class MyApp extends StatelessWidget {
const MyApp({super.key});
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'The Buzz',
theme: ThemeData(
colorScheme:
ColorScheme.fromSeed(seedColor: Color.fromARGB(255, 32, 7, 80)),
useMaterial3: true,
),
home: const MyHomePage(title: 'Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
const MyHomePage({super.key, required this.title});
// This widget is the home page of your application. It is stateful, meaning
// that it has a State object (defined below) that contains fields that affect
// how it looks.
// This class is the configuration for the state. It holds the values (in this
// case the title) provided by the parent (in this case the App widget) and
// used by the build method of the State. Fields in a Widget subclass are
// always marked "final".
final String title;
@override
State<MyHomePage> createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
final titleController = TextEditingController();
final messageController = TextEditingController();
late Future<List<Discussion>> _future_list_discussions;
final backendUrl = "https://team-dawgs.dokku.cse.lehigh.edu";
@override
void initState() {
super.initState();
_future_list_discussions = fetchDiscussions();
}
@override
Widget build(BuildContext context) {
return build_v3(context);
}
Widget build_v3(BuildContext context) {
var fb = FutureBuilder<List<Discussion>>(
future: _future_list_discussions,
builder:
(BuildContext context, AsyncSnapshot<List<Discussion>> snapshot) {
Widget child;
if (snapshot.hasData) {
// developer.log('`using` ${snapshot.data}', name: 'my.app.category');
// create listview to show one row per array element of json response
child = ListView.builder(
//shrinkWrap: true, //expensive! consider refactoring. https://api.flutter.dev/flutter/widgets/ScrollView/shrinkWrap.html
padding: const EdgeInsets.all(16.0),
itemCount: snapshot.data!.length,
itemBuilder: /*1*/ (context, i) {
return Card(
child: Padding(
padding: const EdgeInsets.only(
top: 32.0, bottom: 32.0, left: 16.0, right: 16.0),
child: Column(
crossAxisAlignment: CrossAxisAlignment.start,
children: <Widget>[
Text(
snapshot.data![i].title,
style: TextStyle(
fontSize: 22, fontWeight: FontWeight.bold),
),
Text(
snapshot.data![i].message,
style: TextStyle(
fontSize: 22, fontWeight: FontWeight.bold),
),
],
)),
);
});
} else if (snapshot.hasError) {
// newly added
child = Text('${snapshot.error}');
} else {
// awaiting snapshot data, return simple text widget
// child = Text('Calculating answer...');
child = const CircularProgressIndicator(); //show a loading spinner.
}
return child;
},
);
return fb;
}
}
/* // Post Discussion
Padding(
padding: const EdgeInsets.all(25.0),
child: Row(
children: [
Expanded(
child: MyTextField(
controller: titleController,
hintText: 'Title now!',
obscureText: false,
),
),
SizedBox(width: 10),
Expanded(
child: MyTextField(
controller: messageController,
hintText: 'Content please',
obscureText: false,
),
),
IconButton(
onPressed: () {
setState(() {
_future_list_discussions = postDiscussion(
titleController.text,
messageController.text,
);
});
},
icon: const Icon(Icons.arrow_circle_up),
)
],
),
)
],
),
),
);
} */
Future<List<Discussion>> postDiscussion(String title, String message) async {
final response = await http.post(
Uri.parse('https://team-dawgs.dokku.cse.lehigh.edu/discussions'),
headers: <String, String>{
'Content-Type': 'application/json; charset=UTF-8',
},
body: jsonEncode(<String, String>{
'mTitle': title,
'mMessage': message,
}),
);
if (response.statusCode == 200) {
final responseBody = jsonDecode(response.body);
// Check if the response body is in the expected format
if (responseBody is List<dynamic>) {
// Map each item in the list to a Discussion object
return responseBody.map((item) => Discussion.fromJson(item)).toList();
}
}
// Throw a generic error if any unexpected scenario occurs
throw Exception(
'Failed to post a discussion. Status code: ${response.statusCode}, Reason: ${response.reasonPhrase}');
}
Future<List<Discussion>> fetchDiscussions() async {
final response = await http
.get(Uri.parse('https://team-dawgs.dokku.cse.lehigh.edu/discussions'));
if (response.statusCode == 200) {
// If the server did return a 200 OK response, then parse the JSON.
final List<Discussion> returnData;
var res = jsonDecode(response.body);
print('json decode: $res');
if (res is List) {
returnData =
(res as List<dynamic>).map((x) => Discussion.fromJson(x)).toList();
} else if (res is Map) {
returnData = <Discussion>[
Discussion.fromJson(res as Map<String, dynamic>)
];
} else {
developer
.log('ERROR: Unexpected json response type (was not a List or Map).');
returnData = List.empty();
}
return returnData;
} else {
// If the server did not return a 200 OK response,
// then throw an exception.
throw Exception('Did not receive success status code from request.');
}
}
```
I tried different approaches from several YouTube videos to render it differently, but nothing worked. Please help!
|
Flutter: posts fetched from an API are not rendered in the UI
|flutter|dart| |
I need to write a code that prints the address of the last byte of the second pointer.
I tried to do this via GPT, but I can't explain why the pointer type conversions are needed here. Please help me understand.
```
#include <iostream>
int main() {
int q = 10;
int *pq = &q,
**ppq = &pq;
char* lastByte = (char*)(ppq) + sizeof(ppq) - 1;
std::cout << ppq << std::endl;
std::cout << (void*)lastByte << std::endl;
return 0;
}
```
    0x7ff7bfeff150
    0x7ff7bfeff157
I tried this, but the answer is not correct.
```
#include <iostream>
int main() {
int q = 10;
int *pq = &q,
**ppq = &pq;
std::cout << ppq << std::endl;
std::cout << ppq + sizeof(ppq) - 1 << std::endl;
return 0;
}
```
    0x7ff7bfeff150
    0x7ff7bfeff188
The address of the last byte of the second pointer |
|c++|c| |
In my case, the result was not the same.

I couldn't reduce prompt tokens (on the contrary, prompt tokens and response time increased a little), but the embedded prompt returned a better answer.
## Case1
__Non embedded prompt__
"prompt_tokens": 3295,
"completion_tokens": 347,
"openai_process_time": 4.253575,
__Embedded prompt__
(answered better)
"prompt_tokens": 3602,
"completion_tokens": 686,
"openai_process_time": 8.553565,
## Case2
__Non embedded prompt__
"prompt_tokens": 3355,
"completion_tokens": 347,
"openai_process_time": 4.67733,
__Embedded prompt__
(answered better)
"prompt_tokens": 3669,
"completion_tokens": 583,
"openai_process_time": 7.52354,
---
Those results were with GPT-3.5, but GPT-4 didn't reduce prompt tokens either.
I managed to solve my issue by configuring the `<httpEndpoint>` element in my server.xml file, specifically setting the `host` parameter to `host="*"`. Previously I had it set to `host="localhost"`, which limited communication to within the container. I am providing the server.xml for future reference:
```
<?xml version="1.0" encoding="UTF-8"?>
<server description="new server">
<!-- Enable features -->
<featureManager>
<feature>jakartaee-9.1</feature>
<feature>microProfile-5.0</feature>
<feature>localConnector-1.0</feature>
</featureManager>
<include location="users.xml" optional="true"/>
<!-- SSL Config -->
<ssl id="defaultSSLConfig" trustDefaultCerts="true"/>
<include location="GeneratedSSLInclude.xml"/>
<!-- App Config -->
<httpEndpoint
host="*"
httpPort="${default.http.port}"
httpsPort="${default.https.port}"
id="defaultHttpEndpoint"/>
<applicationManager autoExpand="true"/>
<webApplication
location="tracking-inventory-0.0.1-SNAPSHOT.war"
context-root="/">
</webApplication>
</server>
``` |
You do this much as you would for a scalar minimum: pass once through the list to find the minimum at each position, then pass through a second time to see whether any row matches all of those minima.
    def minimumList( S ):
        # Column-wise minima, seeded from the first row
        minima = S[0].copy()
        for row in S:
            for i, v in enumerate( row ):
                if v < minima[i]: minima[i] = v
        # A qualifying row must equal the minima in every position
        for row in S:
            if row == minima: return True, row
        return False, []
print( minimumList( [[4, 5, 6], [3, 6, 9], [1, 4, 6], [2, 5, 8]]) )
print( minimumList( [[4, 5], [3, 6]] ) )
Output:
(True, [1, 4, 6])
(False, [])
|
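Equivalently, the two passes can be collapsed using built-ins, with `zip(*S)` transposing the rows into columns:

```python
def minimum_list(S):
    # Column-wise minima via transposition
    minima = [min(col) for col in zip(*S)]
    # A winning row must equal the minima everywhere, i.e. be the minima list itself
    return (True, minima) if minima in S else (False, [])

print(minimum_list([[4, 5, 6], [3, 6, 9], [1, 4, 6], [2, 5, 8]]))  # (True, [1, 4, 6])
print(minimum_list([[4, 5], [3, 6]]))                              # (False, [])
```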
Cannot query multi best match columns across rows in ElasticSearch 8.12.2 |
|elasticsearch| |
null |
In Tabular you need to solve this with modeling. Either flatten the parent-child hierarchy into per-level columns (see https://www.daxpatterns.com/parent-child-hierarchies/), or, which I think is what you're after here, materialize all the indirect "reports-to" relationship pairs in a table.
|
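If you go the materialization route, the usual starting point is a calculated `Path` column plus per-level columns; the table and column names below are hypothetical, adapt them to your model:

```
-- Hypothetical columns: Employee[EmployeeId], Employee[ManagerId], Employee[Name]
Path = PATH ( Employee[EmployeeId], Employee[ManagerId] )

Level1 = LOOKUPVALUE (
    Employee[Name],
    Employee[EmployeeId],
    PATHITEM ( Employee[Path], 1, INTEGER )
)
```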
I recently migrated our app to .NET 8 and I'm trying to get our previously working antiforgery tokens working again. Most of it works fine.
However, whenever users open multiple browser tabs we start getting validation errors saying the antiforgery token is not valid. I found this blurb in the Microsoft documentation stating that the synchronizer token pattern invalidates the antiforgery token whenever a new tab is opened. It then suggests considering alternative CSRF protection patterns if this poses an issue.
However, it does not demonstrate any other patterns in the documentation!
It seems to me that users working in multiple browser tabs is commonplace now, so what other patterns are there? In every case the docs show, you essentially create a token, send it down to the JavaScript, and send it back up with requests; what other pattern is even possible with an antiforgery token? I don't get it.
https://learn.microsoft.com/en-us/aspnet/core/security/anti-request-forgery?view=aspnetcore-8.0#antiforgery-in-aspnet-core
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/tVdo6.png |
ASP.NET Core antiforgery token that works over multiple tabs? |
|angular|asp.net-core|asp.net-core-mvc|.net-8.0|antiforgerytoken| |
I would like to set dependent environment variables in PyCharm. Setting normal environment variables works fine by going to Run/Debug Configurations and adding them under Environment Variables.
However, if I add environment variables that reference each other, the literal string is returned instead.
For example:
domain: google.com
url: https://$domain
What I would like ```os.environ.get('url')``` to return:
Desired result: https://google.com
What it actually returns:
Actual result: https://$domain
How can this dependency be resolved in PyCharm so that the desired result is obtained?
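To reproduce the behaviour outside PyCharm, here is a minimal sketch that simulates what the Run/Debug Configuration does (the values are passed verbatim, with no interpolation); `os.path.expandvars` is shown only as a runtime workaround I would rather avoid:

```python
import os

# Simulate the Run/Debug Configuration: values are set literally, no interpolation
os.environ['domain'] = 'google.com'
os.environ['url'] = 'https://$domain'

print(os.environ.get('url'))                  # https://$domain  (actual)
print(os.path.expandvars(os.environ['url']))  # https://google.com  (desired)
```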
Pycharm: dependent user environment variables |
|python|python-3.x|pycharm|environment-variables| |
Please share your opinion on the best-suited App Service plan for a Function App.
I am running a .NET WebJob on Azure App Service in a production environment. The WebJob reads .json files from Azure Blob Storage and stores their contents in a SQL database. Some of the files include up to 2 GB of data.
I am getting out-of-memory exceptions in Azure App Service, although the same code works absolutely fine on my development machine.
I would like to try a couple of options on the Azure side.
1. Converting the WebJob to an Azure Function, which will autoscale as required.
2. Staying on my current plan, Premium v2 P3V2. Below is a picture of the App Service plan:
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/mVW5B.png
I have already done some code optimization, as shown below, but it doesn't help:
using (var zipBlobFileStream = new MemoryStream())
{
await blockBlob.DownloadToStreamAsync(zipBlobFileStream);
await zipBlobFileStream.FlushAsync();
zipBlobFileStream.Position = 0;
using (var zip = new ZipArchive(zipBlobFileStream))
{
var serializer = new JsonSerializer();
foreach (var entry in zip.Entries)
{
if (entry.Length == 0) continue; // Skip if entry has no content
using (var stream = entry.Open())
using (var streamReader = new StreamReader(stream))
using (var jsonReader = new JsonTextReader(streamReader))
{
// Move to the start of the array
while (jsonReader.Read() && jsonReader.TokenType != JsonToken.StartArray) ;
// Read each object in the array
while (jsonReader.Read() && jsonReader.TokenType != JsonToken.EndArray)
{
// Deserialize JSON object directly from the stream
                        var messageItem = serializer.Deserialize<MessageItem>(jsonReader);
                        var dataItems = new DataItems
                        {
                            items = new List<DataItem> { messageItem }
                        };
                        result.Add(dataItems);
                    }
                }
            }
        }
    }
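One likely cause of the out-of-memory exceptions is buffering the entire 2 GB blob into a `MemoryStream` before unzipping. A sketch of a streaming alternative I am considering; `OpenReadAsync` is from the classic WindowsAzure.Storage SDK and returns a seekable read stream that `ZipArchive` can consume directly (adapt if you are on the newer Azure.Storage.Blobs package):

    // Stream the blob instead of buffering it all in memory
    using (var blobStream = await blockBlob.OpenReadAsync())
    using (var zip = new ZipArchive(blobStream, ZipArchiveMode.Read))
    {
        foreach (var entry in zip.Entries)
        {
            if (entry.Length == 0) continue; // Skip if entry has no content
            using (var stream = entry.Open())
            using (var streamReader = new StreamReader(stream))
            using (var jsonReader = new JsonTextReader(streamReader))
            {
                // ...same incremental deserialization as above...
            }
        }
    }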
Slow performance when testing non-local IP services with Playwright |
|testing|web|playwright|e2e| |