Run `rake -T` in production mode throws errors |
|ruby-on-rails|ruby-on-rails-7| |
null |
Not sure exactly what the problem is.
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-css -->
table {
border-collapse: collapse;
}
th, td {
border: 1px solid white;
padding: 8px;
vertical-align: top;
}
th {
text-align: center;
}
tr:not(:last-child)>:first-child, thead th:first-child {
border-bottom: 1px solid #ccc;
}
.mon, .thu {
background-color: #006400;
color: white;
}
.tue, .fri {
background-color: #ffbec8;
}
.wed {
background-color: #808080;
color: white;
}
<!-- language: lang-html -->
<table>
<thead>
<tr>
<th>Time</th>
<th class="mon">Monday</th>
<th class="tue">Tuesday</th>
<th class="wed">Wednesday</th>
<th class="thu">Thursday</th>
<th class="fri">Friday</th>
</tr>
</thead>
<tbody>
<tr>
<td>7:00</td>
<td class="mon"></td>
<td class="tue"></td>
<td class="wed">AAA</td>
<td class="thu" rowspan="2">BBB</td>
<td class="fri">CCC</td>
</tr>
<tr>
<td>8:00</td>
<td class="mon"></td>
<td class="tue"></td>
<td class="wed">DDD</td>
<td class="fri">EEE</td>
</tr>
<tr>
<td>9:00</td>
<td class="mon">CMSC 132</td>
<td class="tue"></td>
<td class="wed"></td>
<td class="thu"></td>
<td class="fri"></td>
</tr>
</tbody>
</table>
<!-- end snippet -->
|
I want to run a container which:
1) Has a non-root user running it by default (USER Dockerfile instruction)
2) Runs a system service (CMD Dockerfile instruction) as crontab
The simplest thing one can think of is to use the following Dockerfile:
FROM openjdk:8
RUN apt-get update && apt-get -y install nano
RUN apt-get update && apt-get -y install cron
RUN useradd -u 8877 dockeras
RUN mkdir /home/dockeras
RUN chown -R dockeras /home/dockeras && chmod -R u+rwx /home/dockeras
USER dockeras
CMD ["cron", "-f"]
Obviously, the CMD instruction will fail because the cron service needs to be run by root. How can I solve this? |
How to start a service in CMD Dockerfile instruction after a USER instruction |
|docker|cron|dockerfile|containers| |
It turns out my problem was that somewhere in my code, there was code like this:
document.body.onpaste = (e) => {
e.preventDefault();
}
`preventDefault()` prevents normal paste operations.
I rewrote this line to only prevent the default on the specific element that needed it (instead of the entire body), and then I was able to paste into my textboxes again. |
I will supplement Cypher_CS's answer.
I managed to fix sqlalchemy_mptt. I am now on SQLAlchemy version 2.0.23.
In addition, I had to fix every `table.update().values()` call (in all places).
Previously: `connection.execute(table.update().values(...))`
Change to:
from sqlalchemy import update
connection.execute(update(table).where(...).values(...))
You also need to change `table.delete()` in one place. It turns into `delete(table).where(...)`.
Do not forget to remove the square brackets from the `case()` calls (in all places).
Previously:
case([(table.c.rgt >= parent_pos_right,
table.c.rgt + 2
)],
else_=table.c.rgt)
Change to:
case((table.c.rgt >= parent_pos_right,
      table.c.rgt + 2
     ), else_=table.c.rgt)
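For reference, here is a minimal runnable sketch of the 2.0-style calls described above. The table, columns, and `parent_pos_right` value are hypothetical placeholders for illustration, not taken from sqlalchemy_mptt:
```python
# Hypothetical sketch of the SQLAlchemy 2.0-style statements mentioned above.
from sqlalchemy import (Column, Integer, MetaData, Table, case, create_engine,
                        delete, update)

engine = create_engine("sqlite://")
metadata = MetaData()
# Placeholder table roughly mimicking an MPTT-style "rgt" column.
table = Table("nodes", metadata,
              Column("id", Integer, primary_key=True),
              Column("rgt", Integer))
metadata.create_all(engine)

parent_pos_right = 10  # placeholder value
with engine.begin() as connection:
    # 1.x: connection.execute(table.update().values(...))
    # 2.0: build the statement with update(table), and pass when-tuples to case()
    connection.execute(
        update(table).values(
            rgt=case((table.c.rgt >= parent_pos_right, table.c.rgt + 2),
                     else_=table.c.rgt)
        )
    )
    # 1.x: table.delete()  ->  2.0: delete(table)
    connection.execute(delete(table).where(table.c.id == 1))
```
|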
I just finished writing an app using the Uno Platform framework.
It works perfectly well in Debug mode, but when I run the Release version (for Windows), my third-party libraries don't work well.
I have added ```<PublishTrimmed>false</PublishTrimmed>``` to the Windows head .csproj file, but I'm still having this issue, so I don't think it's the IL Linker that's trimming those libraries out.
How do I fix this? |
Uno Platform app runs perfectly well in Debug but not so well in Release |
|uno-platform| |
An `awk` solution (removes the last `n` characters *from every line* in the input):
# Drop last 3 chars. from the (every) line.
$ echo '1461624333' | awk -v n=3 '{ print substr($0, 1, length($0)-n) }'
1461624
**Caveat:** Not all `awk` implementations are locale-aware and therefore not all support **UTF-8**; for instance, BWK `awk` as found on macOS 14 (Sonoma) is *not*, while GNU `awk` (`gawk`) *is*:
$ echo 'usä' | awk -v n=1 '{ print substr($0, 1, length($0)-n) }'
us? # !! on macOS, mistakenly returns the first byte of multi-byte UTF8 char. 'ä'
|
After adding the following dependency
compile "com.google.auth:google-auth-library-oauth2-http:'1.23.0'"
to build.gradle, all integration tests related to RabbitMQ failed with a connection error:
org.springframework.amqp.AmqpConnectException: java.net.ConnectException: Connection refused (Connection refused)
at org.springframework.amqp.rabbit.support.RabbitExceptionTranslator.convertRabbitAccessException(RabbitExceptionTranslator.java:61)
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.createBareConnection(AbstractConnectionFactory.java:510)
at org.springframework.amqp.rabbit.connection.CachingConnectionFactory.createConnection(CachingConnectionFactory.java:751)
at org.springframework.amqp.rabbit.connection.ConnectionFactoryUtils.createConnection(ConnectionFactoryUtils.java:214)
at org.springframework.amqp.rabbit.core.RabbitTemplate.doExecute(RabbitTemplate.java:2089)
at org.springframework.amqp.rabbit.core.RabbitTemplate.lambda$execute$13(RabbitTemplate.java:2051)
at org.springframework.retry.support.RetryTemplate.doExecute(RetryTemplate.java:287)
at org.springframework.retry.support.RetryTemplate.execute(RetryTemplate.java:180)
at org.springframework.amqp.rabbit.core.RabbitTemplate.execute(RabbitTemplate.java:2050)
at org.springframework.amqp.rabbit.core.RabbitTemplate.send(RabbitTemplate.java:1009)
at org.springframework.amqp.rabbit.core.RabbitTemplate.convertAndSend(RabbitTemplate.java:1075)
at org.springframework.amqp.rabbit.core.RabbitTemplate.convertAndSend(RabbitTemplate.java:1068)
at com.mttnow.message.delivery.common.tenant.TenantEventIntegrationTest.Tenant event with delivery config should notify listeners(TenantEventIntegrationTest.groovy:37)
Caused by:
java.net.ConnectException: Connection refused (Connection refused)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:607)
at com.rabbitmq.client.impl.SocketFrameHandlerFactory.create(SocketFrameHandlerFactory.java:60)
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:1137)
at com.rabbitmq.client.ConnectionFactory.newConnection(ConnectionFactory.java:1087)
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.connect(AbstractConnectionFactory.java:526)
at org.springframework.amqp.rabbit.connection.AbstractConnectionFactory.createBareConnection(AbstractConnectionFactory.java:473)
... 11 more
I suppose this dependency adds some authorization filters over all connections/requests, but I don't know how to resolve this issue.
I need this dependency to migrate from legacy FCM APIs to HTTP v1. |
|python|list|dictionary|yaml| |
The problem is that you are setting `head.next.next = head` and then calling `__repr__` with `print("3 HEAD", head)` or `print("4 NEW_HEAD", new_head)`. In `__repr__` the loop iteratively follows the `.next` attribute, but this causes an infinite loop because after two iterations the node being processed is `head.next.next`, which you have set to be `head` itself! The value of `next` in the loop goes:
`head` -> `head.next` -> `head.next.next` (=`head`) -> `head.next` -> `head.next.next` (=`head`)
As you can see, this causes a cycle, so removing the print statements fixes the problem. Given this implementation of the reversal, which will always form such cycles, the best you can do is avoid trying to traverse them.
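If you do want `__repr__` to survive a cycle instead of relying on not printing, a minimal sketch (assuming the same `ListNode` class with `val` and `next` attributes) could track the nodes it has already visited and stop:
```python
# Hypothetical cycle-safe __repr__: stops as soon as a node repeats.
def __repr__(self):
    seen = set()
    parts = []
    node = self
    while node is not None and id(node) not in seen:
        seen.add(id(node))
        parts.append(str(node.val))
        node = node.next
    if node is not None:          # we stopped because of a cycle
        parts.append("...cycle...")
    return " -> ".join(parts)
```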
Also, you can simplify your `__repr__` function by making it recursive:
```
def __repr__(self):
return f"ListNode({self.val}, {self.next})"
``` |
To integrate Monnify payments into your PHP app, follow these steps:
Assumptions:
- API Key: MK_TEST_ABCDEFGHIJ
- Contract Code: 1234567890
- Secret Key: ABCDEFGHIJ1234567890ABCDEF12345
Visit your [Monnify Dashboard][1] to get your actual keys.
In your front end, initiate the request by placing the following script right above the closing body tag `</body>`:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
<script type="text/javascript" src="https://sdk.monnify.com/plugin/monnify.js"></script>
<script>
function pay(id) {
MonnifySDK.initialize({
amount: id,
currency: "NGN",
reference: new String((new Date()).getTime()),
customerFullName: "Customer Name",
customerEmail: "customer@email.com",
apiKey: "MK_TEST_ABCDEFGHIJ",
contractCode: "1234567890",
paymentDescription: "Monnify Payment",
metadata: {
"name": "Engr. Ukairo",
"age": 450,
},
onLoadStart: () => {
console.log("loading has started");
},
onLoadComplete: () => {
console.log("SDK is UP");
},
onComplete: function(response) {
//Implement what happens when the transaction is completed.
window.location.href = home+"paymentverify.php?ref="+response.transactionReference;
console.log(response);
},
onClose: function(data) {
//Implement what should happen when the modal is closed here
console.log(data);
}
});
}
</script>
<!-- end snippet -->
Make sure the `home` in the onComplete function is defined. In the paymentverify.php file, you first need to generate an access token, then use the access token and other details to query the result of the payment. Put the following code inside your paymentverify.php file:
<!-- begin snippet: js hide: false console: false babel: false -->
<!-- language: lang-php -->
<?php
function index($ref)
{
$ref = urlencode($ref);
$accessToken = getAccessToken();
$curl = curl_init();
curl_setopt_array($curl, array(CURLOPT_URL => 'https://sandbox.monnify.com/api/v2/transactions/' . $ref . '',
CURLOPT_RETURNTRANSFER => true,
CURLOPT_ENCODING => '',
CURLOPT_MAXREDIRS => 10,
CURLOPT_TIMEOUT => 0,
CURLOPT_FOLLOWLOCATION => true,
CURLOPT_HTTP_VERSION =>
CURL_HTTP_VERSION_1_1,
CURLOPT_CUSTOMREQUEST => 'GET'));
curl_setopt(
$curl,
CURLOPT_HTTPHEADER,
array("Authorization: Bearer $accessToken")
);
$response = curl_exec($curl);
curl_close($curl);
$res = json_decode($response, true);
$responseBody = $res['responseBody'];
if ($responseBody["paymentStatus"] == "PAID") {
//payment was successful
//credit user
//amount paid = $responseBody["amountPaid"];
//paid on = $responseBody["paidOn"];
//or substr($responseBody["paidOn"], 0, 16);
return true;
} else {
return false;
//payment not verified
}
}
function getAccessToken()
{
$auth = base64_encode("MK_TEST_ABCDEFGHIJ:ABCDEFGHIJ1234567890ABCDEF12345");
$curl = curl_init();
curl_setopt_array($curl, array(
CURLOPT_URL => 'https://sandbox.monnify.com/api/v1/auth/login/',
CURLOPT_RETURNTRANSFER => true,
CURLOPT_ENCODING => '',
CURLOPT_MAXREDIRS => 10,
CURLOPT_TIMEOUT => 0,
CURLOPT_FOLLOWLOCATION => true,
CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
CURLOPT_CUSTOMREQUEST => 'POST',
));
curl_setopt($curl, CURLOPT_HTTPHEADER, array("Authorization: Basic $auth"));
$response = json_decode(curl_exec($curl), true);
curl_close($curl);
//Assign accessToken to var
$access_token = $response['responseBody']['accessToken'];
return $access_token;
}
<!-- end snippet -->
Remember to use your actual keys and contract codes. Visit your [Monnify Dashboard][1] to get your keys. [This GitHub repository][2] contains some sample code too. Happy coding!
[1]: https://app.monnify.com/dashboard
[2]: https://github.com/jimiejosh/monnify-php-sample-codes/ |
Have you encountered a frustrating issue with your Jetpack Compose ModalBottomSheet? It works perfectly the first time you open it, but after hiding it, it becomes unresponsive and blocks all other UI interactions.
You can see in the screenshot and code below how we call and create the ModalBottomSheet.
```
Button(
onClick = {
showBottomSheet = true
}
) {
Text(text = "Done")
}
if(showBottomSheet) StyleBottomSheet()
```
[enter image description here](https://i.stack.imgur.com/Yv5RB.png)
```
@OptIn(ExperimentalMaterial3Api::class)
@Composable
fun StyleBottomSheet() {
val scope = rememberCoroutineScope()
val sheetState = rememberModalBottomSheetState()
var showBottomSheet by remember { mutableStateOf(false) }
ModalBottomSheet(
onDismissRequest = {
showBottomSheet = false
},
sheetState = sheetState
) {
// Sheet content
Button(onClick = {
scope.launch { sheetState.hide() }.invokeOnCompletion {
if (!sheetState.isVisible) {
showBottomSheet = false
}
}
}) {
Text("Hide bottom sheet")
}
}
}
```
|
Stuck Bottom Sheet in Jetpack Compose? |
|android|android-jetpack-compose|android-developer-api|android-jetpack-compose-material3|material3| |
null |
This worked for me: I enabled pointer events
and listened to them according to my needs.
Rive(
artboard: _levelArtboard!,
useArtboardSize: true,
fit: BoxFit.fitHeight,
enablePointerEvents: true,
) |
|php|wordpress|woocommerce|discount|shipping-method| |
null |
Create test data:
CREATE UNLOGGED TABLE foo( id INT NOT NULL, created_at INT NOT NULL, data INT NOT NULL );
INSERT INTO foo SELECT n, random()*10000000, n FROM generate_series(1,40000000) n;
CREATE INDEX ON foo(id);
CREATE INDEX ON foo(created_at);
VACUUM ANALYZE foo;
Due to the large number of ids in the query, I'll use Python:
ids = [ n*100 for n in range(100000) ]
cursor.execute( """
EXPLAIN ANALYZE SELECT * FROM foo
WHERE id =ANY(%s) AND created_at BETWEEN 1000000 AND 3000000
""", (ids,) )
for row in cursor:
print(row[0][:200])
Index Scan using foo_id_idx on foo (cost=0.56..334046.00 rows=20121 width=12) (actual time=8.092..331.779 rows=19845 loops=1)
Index Cond: (id = ANY ('{0,100,200,300,400,500,600,700,800,900,1000,1100,1200,1300,1400,1500,1600,1700,1800,1900,2000,2100,2200,2300,2400,2500,2600,2700,2800,2900,3000,3100,3200,3300,3400,3500,3600,
Filter: ((created_at >= 1000000) AND (created_at <= 3000000))
Rows Removed by Filter: 80154
Planning Time: 39.578 ms
Execution Time: 358.758 ms
Planning is slow, due to the large array.
It is using the index on id to fetch rows, then filters them based on created_at. Thus rows not satisfying the condition on created_at still require heap fetches. Including created_at in the index would be useful.
An index on (created_at,id) would allow scanning the requested range of created_at, but it cannot index on ids. So the ids would have to be pulled out of the index and filtered. This would only be useful if the condition on created_at is very narrow and the most selective one in the query. Looking at the row counts in your EXPLAIN, I don't feel this is the case.
An index with id as the first column allows fetching rows for each id directly. Then created_at has to be compared with the requested range. I feel this is more useful.
CREATE INDEX ON foo( id ) INCLUDE ( created_at );
Index Scan using foo_id_created_at_idx on foo (cost=0.56..334046.00 rows=20121 width=12) (actual time=3.955..278.250 rows=19845 loops=1)
Index Cond: (id = ANY ('{0,100,200,300,400,500,600,700,800,900,1000,1100,1200,1300,1400,1500,1600,1700,1800,1900,2000,2100,2200,2300,2400,2500,2600,2700,2800,2900,3000,3100,3200,3300,3400,3500,3600,
Filter: ((created_at >= 1000000) AND (created_at <= 3000000))
Rows Removed by Filter: 80154
Planning Time: 37.395 ms
Execution Time: 299.370 ms
This pulls created_at from the index, avoiding heap fetches for rows that will be rejected, so it is slightly faster.
CREATE INDEX ON foo( id, created_at );
This would be useful if there were many rows for each id, each having a different created_at value, which is not the case here.
This query may cause lots of random IOs, so if the table is on spinning disk and not SSD, it will take a lot longer.
Using IN() instead of =ANY() does not change anything.
Besides including created_at in the index to avoid extra IO, there's not much opportunity to make it faster. This will need one index scan per id, and there are 100k of them, so it comes down to about 3µs per id, which is pretty fast. Transferring that many rows to the client will also take time.
If you really need it faster, I'd recommend splitting the batch of ids into smaller chunks and executing them in parallel over several connections. This has the advantage of parallelizing data encoding and decoding, and also processing on the client.
The following parallel python code runs in 100ms, which is quite a bit faster.
import time
import psycopg2
from multiprocessing import Pool
db = None
def query( ids ):
if not ids: return
global db
if not db:
db = psycopg2.connect("user= password= dbname=test")
db.cursor().execute( "PREPARE myplan AS SELECT * FROM unnest($1::INTEGER[]) get_id JOIN foo ON (foo.id=get_id AND foo.created_at BETWEEN $2 AND $3)")
cursor = db.cursor()
cursor.execute( "EXECUTE myplan(%s,1000000,3000000)", (ids,) )
if __name__ == "__main__":
ids = [ n*100 for n in range(100000) ]
chunks = [ids[offset:(offset+1000)] for offset in range( 0, len(ids)+1, 1000 )]
st = time.time()
with Pool(10) as p:
p.map(query, chunks)
print( time.time()-st )
|
I'm trying to display a ChatView if it is selected from the following code. But I only get to see the ProgressView, although I know for sure that this should not happen. Here is my code:
```
struct ChatListView: View {
@State private var shouldShowChatCreationScreen = false
@State private var chatUser: User?
@State private var shouldNavigateToChatView = false
@State private var chatForView: ChatModel?
@ObservedObject private var viewModel = ChatListViewModel()
var body: some View {
NavigationView {
ScrollView {
ForEach(viewModel.chats) { chat in
ChatListCell(chat: chat)
.onTapGesture {
shouldNavigateToChatView = true
chatForView = chat
}
}
}
.navigationTitle("Chats")
.toolbar {
ToolbarItem(placement: .navigationBarTrailing) {
Button(action: {
shouldShowChatCreationScreen.toggle()
}) {
Image(systemName: "square.and.pencil")
}
}
}
.sheet(isPresented: $shouldShowChatCreationScreen) {
ChatCreationView(didSelectUser: { user in
print("did select user \(user)")
chatUser = user
shouldNavigateToChatView.toggle()
})
}
.fullScreenCover(isPresented: $shouldNavigateToChatView) {
if let user = chatUser {
ChatView(oppUser: user)
} else if let chat = chatForView {
ChatView(chat: chat)
} else {
ProgressView()
}
}
}
}
}
```
And here is the print statement that makes me sure that the first case in the fullScreenCover should activate:
did select user User(id: "XYhspARpFdh8vb7tNYYssvvQBBG2", fullname: "Bowser", email: "bowser@bowser.de", username: "bowser", profileImageUrl: Optional("https://firebasestorage.googleapis.com:443/--------blurred for security reasons ---------"), bio: Optional(""), conversations: Optional([]))
I have tried using it without the `if let`, but since the user is optional, this did not help here. |
The issue is that `npm` isn't just `npm`; there are actually multiple scripts:
- `npm`
- `npm.cmd`
- `npm.ps1`
When you execute `npm` in your terminal, then your terminal is smart and automatically picks the one it recognizes. In the same way that if you (on Windows) execute `foo` it will automatically run `foo.exe`.
You can resolve your issue by changing `"npm"` to `"npm.cmd"`. To have it be a bit more flexible for other operating systems, then you can use a function like this:
```rust
pub fn npm() -> Command {
#[cfg(windows)]
const NPM: &str = "npm.cmd";
#[cfg(not(windows))]
const NPM: &str = "npm";
Command::new(NPM)
}
```
Then you simply replace your `Command::new("npm")` with `npm()`.
**Example:**
```rust
use std::process::Command;
pub fn npm() -> Command {
#[cfg(windows)]
const NPM: &str = "npm.cmd";
#[cfg(not(windows))]
const NPM: &str = "npm";
Command::new(NPM)
}
fn main() -> std::io::Result<()> {
npm().arg("-v").spawn()?.wait()?;
Ok(())
}
```
|
`read` by default trims leading and trailing whitespace and consumes backslashes. You can disable those by setting `IFS` to empty string and invoking `read` with the `-r` flag respectively.
```
$ echo ' \" ' | { read i; echo ${#i}; }
1
$ echo ' \" ' | { IFS= read -r i; echo ${#i}; }
4
``` |
It seems like the easiest solution was already mentioned in the exception message. `No Firebase App '[DEFAULT]' has been created - call Firebase.initializeApp()`
My issue got resolved after I added the Firebase.initializeApp() inside the background handler. Here is my final code.
```dart
static Future<void> backgroundHandler(RemoteMessage message) async {
await Firebase.initializeApp(options: DefaultFirebaseOptions.currentPlatform); // add this line here even if you have already added this inside main()
String? topicChannel = message.data['topic'];
debugPrint("Background notification with $topicChannel");
if (topicChannel != null && topicChannel == PredefinedNotificationTopic.topicCreated.name) {
String newTopic = message.data['newTopicToSubscribe'];
FCMHelper.addToUserTopicList(newTopic);
}
}
``` |
We are currently trying to catch an error when a user with a company Azure AD account tries to access the application but does not have access to the tenant where the application resides (multi-tenant authorization has been set up on the app).
The user enters their credentials, and gets the below error when trying to login (which is the desired and expected behavior and which we are unable to capture):
User Login Error
The user has to cancel the authentication flow (as an automatic redirect back to the app does not occur on this error), which in turn does not allow us to capture that specific error (or any errors during the login process).
Is there a method we can utilize from either the MSAL Service or the Broadcast Service that will allow us to capture errors from the Azure Login Page (pictured above).
We've cloned the following repository from Microsoft to test error capturing during the login process (using clean, working code from a trusted source, which we confirmed works as intended):
https://github.com/AzureAD/microsoft-authentication-library-for-js/blob/dev/lib/msal-angular/docs/v2-docs/redirects.md
We've found that the logging used in the MSAL configuration (however verbose) does not actually capture the errors from the redirect flow. The closest we've come to capturing the error was adding the code below to catch the BrowserAuthError (user_cancelled the flow) and logging it to the console. |
RabbitMQ Connection Failure after adding google auth dependency |
|amqp|google-auth-library| |
null |
That will not work. `With` is just an abbreviation so that you don't have to repeat an object name over and over again - it cannot be used for more than one object.
As you just want to set the color, it doesn't make much sense to use a `With` statement; you can simply use
BldOvImgDiv1.Line.ForeColor.RGB = RGB(0, 0, 0)
BldOvImgDiv2.Line.ForeColor.RGB = RGB(0, 0, 0)
If you want to set more than one property and you don't want to repeat the code, consider creating a Sub for this.
FormatConnector BldOvImgDiv1
FormatConnector BldOvImgDiv2
(...)
Sub FormatConnector(Sh As Shape)
With Sh
.Line.ForeColor.RGB = vbBlack
.Line.BeginArrowheadStyle = msoArrowheadTriangle
.Line.Weight = 3
End With
End Sub
|
I have some custom post types and I've made some custom WordPress user roles and want them to be able to create and update WordPress block patterns.
What capabilities do I need to add to make creating patterns available to the user?
As an administrator the option is available; as a user with permissions only for the custom post type, the option is hidden. I've googled for the answer with no result.
I'm looking for the specific capabilities to add to the role
like
```
$role->add_cap( 'create_patterns' );
``` |
If you prefer not to partition your topic and assign each partition to a different consumer, you can still process Kafka records in parallel. However, it is essential to ensure that you handle the consumer offset correctly.
### Handling the offset ###
Kafka maintains a numerical offset for each record in a partition. This offset acts as a unique identifier of a record within that partition, and also denotes the position of the consumer in the partition.
By default, the .NET Consumer will commit offsets automatically. This is done periodically by a background thread at an interval specified by the AutoCommitIntervalMs config property. An offset becomes eligible to be committed immediately prior to being delivered to the application via the Consume method.
`However this strategy introduces the potential for messages to be missed in the case of application failure!`
The C# client also allows you to commit offsets explicitly via the Commit method (from your code segment it seems you are following this approach).
Like in the following example:
```
var config = new ConsumerConfig
{
...
// Disable auto-committing of offsets.
EnableAutoCommit = false
}
...
while (!cancelled)
{
var consumeResult = consumer.Consume(cancellationToken);
// process message here
Process(consumeResult);
// Commit the offset
consumer.Commit(consumeResult);
}
```
If your process fails (the offset commit could fail too), your application should be programmed to implement retry logic, potentially with increasing backoff intervals.
It's crucial for your application to be designed with idempotence in mind, enabling it to handle the same record multiple times while eventually disregarding duplicates. This ensures compliance with the `at least once` delivery principle.
If an exception bubbles up, make sure to dispose of your consumer properly. Then, when you recreate it after a little break, it should pick up from where it left off, processing from the last committed offset.
### Process messages in parallel within the consumer ###
When processing records in parallel within your consumer you have two options:
1. Ensure that you commit the highest processed offset without leaving any 'holes' behind. `Otherwise, in case of failure, your consumer will ignore some records without processing them`
2. Write the failed records in a dead letter topic and keep processing the others
Coming back to your example, you would likely do something like this:
```
while (!cancelled)
{
var records = new List<ConsumeResult<TKey, TValue>>();
var errorQueue = new ConcurrentQueue<Exception>();
var processed = new ConcurrentQueue<ConsumeResult<TKey, TValue>>();
//Consume timeout
int consumeTimeoutInMs = 1000;
// Suppose you process 5 messages at a time
for(int i = 0; i < 5; i++)
{
var r = consumer.Consume(consumeTimeoutInMs);
if(r != null && !r.IsPartitionEOF)
records.Add(r);
}
if(!records.Any()) {
// It would make sense to wait a bit here before polling again.
// For example you may add: await Task.Delay(TimeSpan.FromSeconds(1));
continue;
}
// Process the messages
Parallel.ForEach(records, record =>
{
try {
//Suppose your service is designed to process messages one at a time
Process(record);
processed.Enqueue(record);
} catch(Exception e) {
errorQueue.Enqueue(e);
}
});
//Retrieve the latest offset
ConsumeResult<TKey, TValue> latestProcessedRecord = null;
foreach (var record in records)
{
if (processed.Contains(record))
latestProcessedRecord = record;
else
break;
}
// Commit the offset
if(latestProcessedRecord != null)
consumer.Commit(latestProcessedRecord);
// Halt the execution in case of errors
if(errorQueue.Any())
throw new AggregateException(errorQueue);
}
```
`It's also a good practice to design your process to handle records in batches.`
### Examples ###
- ITERATION 1
```
READ(MESSAGE_AT_1)
READ(MESSAGE_AT_2)
READ(MESSAGE_AT_3)
READ(MESSAGE_AT_4)
READ(MESSAGE_AT_5)
// In Parallel
PROCESS(MESSAGE_AT_1) - SUCCEEDS
PROCESS(MESSAGE_AT_2) - SUCCEEDS
PROCESS(MESSAGE_AT_3) - FAILS
PROCESS(MESSAGE_AT_4) - SUCCEEDS
PROCESS(MESSAGE_AT_5) - SUCCEEDS
COMMIT(INDEX_2)
THROWS EXCEPTION
```
- ITERATION 2
```
READ(MESSAGE_AT_3)
READ(MESSAGE_AT_4)
READ(MESSAGE_AT_5)
READ(MESSAGE_AT_6)
READ(MESSAGE_AT_7)
// In Parallel
PROCESS(MESSAGE_AT_3) - SUCCEEDS
PROCESS(MESSAGE_AT_4) - SUCCEEDS
PROCESS(MESSAGE_AT_5) - SUCCEEDS
PROCESS(MESSAGE_AT_6) - FAILS
PROCESS(MESSAGE_AT_7) - SUCCEEDS
COMMIT(INDEX_5)
THROWS EXCEPTION
```
...
If you need more advanced features, I suggest you take a look at some examples on GitHub:
- Examples: https://github.com/confluentinc/confluent-kafka-dotnet/blob/master/examples/Consumer/Program.cs
Or alternatively, try this wrapper (KafkaFlow):
- https://github.com/Farfetch/kafkaflow |
I am trying to create a React module, developed and maintained separately from the main React app I am working on.
```javascript
import React, { useState } from 'react';
function MyComponent(props) {
const [count, setCount] = useState(0);
return (
<div>
<h1>Hello, React!</h1>
<p>This is a basic React component.</p>
<p>You clicked {count} times</p>
<button onClick={() => setCount(count + 1)}>
Click me
</button>
</div>
);
}
export default MyComponent;
```
The build configurations look pretty much like this: https://github.com/iamsmkr/react-module-starter and the module is being used as per [README](https://github.com/iamsmkr/react-module-starter/blob/main/README.md).
Usage in the `App.js`:
```javascript
import MyComponent from 'react-module-starter';
function App() {
return (
<MyComponent></MyComponent>
);
}
export default App;
```
Trying to use `useState` hook is throwing error:
```bash
Uncaught Error: Invalid hook call. Hooks can only be called inside of the body of a function component. This could happen for one of the following reasons:
1. You might have mismatching versions of React and the renderer (such as React DOM)
2. You might be breaking the Rules of Hooks
3. You might have more than one copy of React in the same app
See https://reactjs.org/link/invalid-hook-call for tips about how to debug and fix this problem.
at resolveDispatcher (react.development.js:1476:1)
at useState (react.development.js:1507:1)
at l (index.js:4:1)
at renderWithHooks (react-dom.development.js:14985:1)
at mountIndeterminateComponent (react-dom.development.js:17811:1)
at beginWork (react-dom.development.js:19049:1)
at HTMLUnknownElement.callCallback (react-dom.development.js:3945:1)
at Object.invokeGuardedCallbackDev (react-dom.development.js:3994:1)
at invokeGuardedCallback (react-dom.development.js:4056:1)
at beginWork$1 (react-dom.development.js:23964:1)
```
I'm not sure if there is something wrong with my build configuration. This is the first time I am trying to develop a React module and use it in apps. Please suggest.
|
Empty property that should not be empty |
I am creating a hamburger menu in Wix Studio. In the menu container I have placed images and text which open a lightbox as a menu, and I want a back button on the lightbox which closes the lightbox and opens the hamburger menu again.
I have tried adding a hamburger menu element which opens the menu container, but it appears below the lightbox. |
clicking element closes lightbox and opens a hamburger menu in wix studios |
|wix|velo| |
null |
I want to test a simple Verilog module that outputs `1` if the minterms `2`, `3`, `5` or `7` are found in a 3-variable input.
The module looks like this:
```
module modified_prime_detector(
input [2:0] in,
input clk,
output reg out
);
always @ (posedge clk)
begin
out <= (~in[2] && in[1] && ~in[0]) || (~in[2] && in[1] && in[0]) || (in[2] && ~in[1] && in[0]) || (in[2] && in[1] && in[0]);
end
endmodule
```
I have also developed a testbench in order to run my simulation using **EDA Playground**:
```
module modified_prime_detector_tb();
reg [2:0] in;
reg clk;
wire out;
modified_prime_detector uut (
.clk(clk),
.in(in),
.out(out)
);
always #5 clk = ~clk;
always @(posedge clk) begin
if (in == 3'b111)
in <= 3'b000;
else
in <= in + 1'b1;
end
always @ (posedge clk) begin
$display("Time: %t, in: %b, out: %b", $time, in, out);
end
initial begin
$dumpfile("dump.vcd");
$dumpvars(1, modified_prime_detector_tb);
clk = 0;
in = 0;
#5;
#100;
$finish;
end
endmodule
```
Basically, I'm increasing the 3-bit value of `in` on each clock cycle. I then expect to see the value of `out` set to `1` for the values `010`, `011`, `101` and `111`.
However, I see that the module outputs `1` for these values instead: `011`, `100`, `110` and `000`. I'm not sure why it works like that. I think it might have to do with the simulation, because this is how the waveforms look:
[![enter image description here][1]][1]
It seems that the output is "right-shifted" by one, and it also seems like my module gives no output for the input `000`. Unfortunately, I'm not sure why this happens or how to further investigate this issue.
What might be the problem?
[1]: https://i.stack.imgur.com/BCQAk.png |
Output comes 1 clock cycle later than expected |
The Dependency Inversion Principle says high-level modules should not import anything from low-level modules. So the domain, as the system's center, should not depend on any library.
So, you can create your own interface for these value objects and use them via dependency injection.
This is in theory.
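As a minimal sketch (the `Money` value object and `CurrencyConverter` abstraction here are hypothetical examples, not from any particular library), the domain can own the interface and receive the concrete implementation from the outside:
```python
# Hypothetical sketch: the domain owns the abstraction; a library-backed adapter
# implements it and is injected from the outer layers.
from dataclasses import dataclass
from typing import Protocol


class CurrencyConverter(Protocol):
    """Domain-owned interface; infrastructure code wraps the real library."""
    def convert(self, amount: float, from_currency: str, to_currency: str) -> float: ...


@dataclass(frozen=True)
class Money:
    amount: float
    currency: str

    def to(self, target: str, converter: CurrencyConverter) -> "Money":
        # The converter is injected, so the domain never imports the concrete library.
        return Money(converter.convert(self.amount, self.currency, target), target)
```
This way, swapping or upgrading the underlying library only touches the adapter, not the domain model.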
But if we go a bit deeper, to understand why this principle applies, we need to consider future change probabilities.
The domain changes less frequently than any other part of the application, for example the UI. So the domain should not depend on the UI, to avoid the situation where we need to fix the domain model each time we change a button color. That is why we need to use dependency inversion and depend on abstractions.
In other words, you need to weigh the risk of the library changing. If you expect that the library interface will change, or that you'll need to replace the library, you should not use it directly. Create an abstraction.
If it is a common library that has not changed its interface for years and is the only possible library to use, then there is no need to add complexity. |
Handling errors in MSAL Redirect - reactjs login with microsoft sso |
|javascript|error-handling|azure-active-directory|single-sign-on|msal-react| |
I am attempting to perform Device Authorization Flow on a CLI in Go. I have followed the steps in https://auth0.com/blog/securing-a-python-cli-application-with-auth0/ to set up my application in Auth0. After successfully requesting a device code, I attempt to get a request token.
```
// Gets a request token.
func (loginJob *LoginJob) GetRequestToken(deviceCodeData loginResponse.LResponse) error {
//Setup an http request to retrieve a request token.
url := loginJob.Domain + "/oauth/token"
method := "POST"
payload := strings.NewReader("grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Adevice_code&device_code=" +
deviceCodeData.DeviceCode + "&client_id=" + loginJob.ClientID)
client := &http.Client{}
req, reqErr := http.NewRequest(method, url, payload)
if reqErr != nil {
fmt.Println(reqErr)
return reqErr
}
req.Header.Add("Content-Type", "application/x-www-form-urlencoded")
authenticate := false
for !authenticate {
authenticate = PollRequestTokenStatus(req, client)
time.Sleep(time.Duration(deviceCodeData.Interval) * time.Second)
}
return nil
}
func PollRequestTokenStatus(req *http.Request, client *http.Client) bool {
res, resErr := client.Do(req)
if resErr != nil {
log.Panic(resErr)
return true
}
defer res.Body.Close()
body, ReadAllErr := io.ReadAll(res.Body)
if ReadAllErr != nil {
fmt.Println(ReadAllErr)
return true
}
fmt.Println("res.Body: ")
fmt.Println(string(body))
if res.StatusCode == 200 {
fmt.Println("Authenticated!")
fmt.Println("- Id Token: ")
return true
} else if res.StatusCode == 400 {
fmt.Println("res.StatusCode: ")
fmt.Print(res.StatusCode)
return false
} else {
fmt.Println("res.StatusCode: ")
fmt.Print(res.StatusCode)
}
return false
}
```
The idea is that I poll Auth0 for a request token at specific intervals. The first time I poll, I get a 403 Forbidden:
```
{"error":"authorization_pending","error_description":"User has yet to authorize device code."}
```
However, on subsequent polls, I get 400 Bad Request:
```
<html>
<head><title>400 Bad Request</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<hr><center>cloudflare</center>
</body>
</html>
```
I am unsure why this is happening. I have tried manually polling Auth0 with Postman, and I have always managed to avoid error 400 using Postman.
[![Manually polling Auth0 with Postman][1]][1]
How do I fix this problem?
Update: It turns out for some reason Go will reset all the request headers whenever it calls *http.Client.Do(), so I tried moving the request construction inside the for loop. Now my code looks like this:
```
// Gets a request token.
func (loginJob *LoginJob) GetRequestToken(deviceCodeData loginResponse.LResponse) error {
// Setup an http request to retrieve a request token.
url := loginJob.Domain + "/oauth/token"
method := "POST"
payload := strings.NewReader("grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Adevice_code&device_code=" +
deviceCodeData.DeviceCode + "&client_id=" + loginJob.ClientID)
authenticate := false
var pollErr error
for !authenticate {
authenticate, pollErr = loginJob.PollRequestTokenStatus(url, method, payload, deviceCodeData)
if pollErr != nil {
log.Panic(pollErr)
}
time.Sleep(time.Duration(deviceCodeData.Interval) * time.Second)
}
return nil
}
func (loginJob *LoginJob) PollRequestTokenStatus(url string, method string, payload *strings.Reader, deviceCodeData loginResponse.LResponse) (bool, error) {
// Setup an http request to retrieve a request token.
client := &http.Client{
Timeout: time.Second * 10,
}
req, reqErr := http.NewRequest(method, url, payload)
if reqErr != nil {
fmt.Println(reqErr)
return false, reqErr
}
req.Header.Add("Content-Type", "application/x-www-form-urlencoded")
res, resErr := client.Do(req)
if resErr != nil {
fmt.Println(resErr)
return false, resErr
}
defer res.Body.Close()
fmt.Println("res.StatusCode:")
fmt.Println(res.StatusCode)
if res.StatusCode == 200 {
fmt.Println("Authenticated!")
fmt.Println("- Id Token: ")
return true, nil
} else if res.StatusCode == 400 {
return true, nil
}
return false, nil
}
```
I have a different issue now. Instead of returning error 400 on subsequent polls, I am getting 401 instead:
```
{"error":"access_denied","error_description":"Unauthorized"}
```
[1]: https://i.stack.imgur.com/tlAz0.png |
For those who could not understand the notification icon requirements, let me break them down.
Let's consider the **Facebook** notification icon, which is a white 'f' with a blue circle as the background. For the notification icon, keep only the 'f' (make the background transparent) and generate the notification icon using the ```asset manager``` in Android Studio. You will notice the blue background is now transparent and the 'f' is white. The rest is the same: place the generated drawable folders in their appropriate folders.
<meta-data
android:name="com.google.firebase.messaging.default_notification_icon"
android:resource="@drawable/app_icon" />
<meta-data
android:name="com.google.firebase.messaging.default_notification_color"
android:resource="@color/facebook_blue_color" />
that's it.
|
Where could the mistake be?
// Plot der dynamisch angepassten Fibonacci-Niveaus
for i = 0 to NumAdditionalLevels - 1
plot(additional_levels[i], color=color.new(#ffffff, 0), linewidth=1, trackprice=true, show_last=1, title=f"F_level_{i+1}")
Hope someone can help.
Thank you in advance!
I hope someone will read my issue. |
how can I plot calculated dynamic levels? |
|pine-script-v5|tradingview-api| |
null |
I have this line of code, which is outside of the `script` tag: `<i18n locale="en" src="../../local/en/index/first.json"></i18n>`
This one uses a static source. Is there a way for me to import a file by setting the `src` dynamically, or better, set this as a prop where possible? |
How to make source dynamic for vue-i18n |
|vue.js|vuejs3|vue-i18n| |
struct MyStruct {
static var x = myX()
lazy var y = myY()
}
func myX() -> String {
print("myX is running")
return ""
}
func myY() -> String {
print("myY is running")
return ""
}
MyStruct.x = "X"
var myStruct = MyStruct()
myStruct.y = "Y"
print("Done")
prints:
> myX is running
> Done
In other words, the static variable gets initialized just before being assigned, while the lazy instance property is just assigned without ever being initialized. |
Inconsistency in lazy variable initialization between static and instance properties in Swift |
|swift|properties|lazy-initialization| |
Playwright is a framework for web testing and automation. It allows testing Chromium, Firefox and WebKit, Chrome, and Microsoft Edge with a single API. Playwright's API is similar to Puppeteer, but with cross-browser support and Python, Java, and .NET bindings, in addition to Node.js.
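For context, a minimal sketch of the Python binding (using the sync API; the target URL is just an example) looks like this:
```python
# Minimal Playwright sketch using the synchronous Python API.
# Requires: pip install playwright && playwright install
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()      # the same API works with p.firefox / p.webkit
    page = browser.new_page()
    page.goto("https://example.com")
    print(page.title())
    browser.close()
```
|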
Using useState reacthook in a reactjs module is throwing error: Invalid hook call |
|reactjs|rollup| |
I am getting the following error in my ASP.NET Core 6 application frequently:
> Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Unexpected end of request content
The stack trace is unfortunately not helpful, it does not point to any useful place in my own code.
```
Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException: Unexpected end of request content.
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.Http1ContentLengthMessageBody.ReadAsyncInternal(CancellationToken cancellationToken)
at System.Runtime.CompilerServices.PoolingAsyncValueTaskMethodBuilder`1.StateMachineBox`1.System.Threading.Tasks.Sources.IValueTaskSource<TResult>.GetResult(Int16 token)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.Http.HttpRequestStream.ReadAsyncInternal(Memory`1 destination, CancellationToken cancellationToken)
at System.Text.Json.JsonSerializer.ReadFromStreamAsync(Stream utf8Json, ReadBufferState bufferState, CancellationToken cancellationToken)
at System.Text.Json.JsonSerializer.ReadAllAsync[TValue](Stream utf8Json, JsonTypeInfo jsonTypeInfo, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Mvc.Formatters.SystemTextJsonInputFormatter.ReadRequestBodyAsync(InputFormatterContext context, Encoding encoding)
at Microsoft.AspNetCore.Mvc.Formatters.SystemTextJsonInputFormatter.ReadRequestBodyAsync(InputFormatterContext context, Encoding encoding)
at Microsoft.AspNetCore.Mvc.ModelBinding.Binders.BodyModelBinder.BindModelAsync(ModelBindingContext bindingContext)
at Microsoft.AspNetCore.Mvc.ModelBinding.ParameterBinder.BindModelAsync(ActionContext actionContext, IModelBinder modelBinder, IValueProvider valueProvider, ParameterDescriptor parameter, ModelMetadata metadata, Object value, Object container)
at Microsoft.AspNetCore.Mvc.Controllers.ControllerBinderDelegateProvider.<>c__DisplayClass0_0.<<CreateBinderDelegate>g__Bind|0>d.MoveNext()
--- End of stack trace from previous location ---
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeInnerFilterAsync>g__Awaited|13_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeNextResourceFilter>g__Awaited|25_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Rethrow(ResourceExecutedContextSealed context)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.InvokeFilterPipelineAsync()
--- End of stack trace from previous location ---
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)
at Microsoft.AspNetCore.Routing.EndpointMiddleware.<Invoke>g__AwaitRequestTask|6_0(Endpoint endpoint, Task requestTask, ILogger logger)
at Microsoft.AspNetCore.Authorization.AuthorizationMiddleware.Invoke(HttpContext context)
at Microsoft.AspNetCore.Authentication.AuthenticationMiddleware.Invoke(HttpContext context)
at LOGS.Middleware.MultiTenantMiddleware.InvokeAsync(HttpContext httpContext) in /app/Middleware/MyMiddleware.cs:line 68
at DevModeExceptionHandlingMiddleware.InvokeAsync(HttpContext httpContext) in /app/Middleware/MyOtherMiddleware.cs:line 45
```
My main question is whether this error indicates a problem with my application or whether this is caused by invalid/aborted requests performed by a client. The error message suggests the latter to me, in which case I'd just like to hide this specific message as I can't do much about it.
So, does this error indicate a purely client-side problem? And if yes, how do I specifically hide this error thrown by the framework without hiding other exceptions from it? |
"Unexpected end of request content" exception in ASP.NET Core |
|c#|asp.net-core|.net-core| |
The input is line-buffered. When you enter characters and hit Enter, the buffer contains all the characters you have entered plus `'\n'` (newline).
The `getchar` function takes one character from this buffer and returns it. It does not discard the rest of this buffer. Other characters entered will be available for the next `getchar` call.
Example:
1. you enter `123456789`<kbd><samp>ENTER</samp></kbd>.
2. the buffer contains `123456789\n`
3. first call to getchar will return `'1'`
4. second call to getchar will return `'2'`
`...`
5. tenth call will return `'\n'`
Your code:
- Your first program reads one character from this buffer, prints it, and then terminates.
- Your second program loops, taking the characters from this buffer one by one, including the newline `'\n'` character. |
Your question is probably opinion-based, as no one will be able to give you a definitive answer or a guarantee that the code you are using is going to be stable, even when requiring `@ExperimentalApi`. <br> I can only share my thoughts and experience.
-----------------
At the current point, basically everything in Jetpack Compose [`material3`][1] libraries needs to be marked as [`@ExperimentalApi`][2]. However, while these APIs might change in the future, in my experience they are stable in a sense of having no bugs when you go with a stable release. <br> Check out which dependency version is considered as the latest stable in the respective dependency page.
[![release table][3]][3]
Also, it seems like the Jetpack Compose team is rather quick to use this annotation as a sort of safety precaution. Even components like [`BottomAppBar`][4], whose API has been stable for a long time, still require an `@ExperimentalApi` annotation.
So I would say, you can use it in production, but be aware that when you update the dependencies once in a while, you might have to rewrite some code. <br> I recently updated a project that I wrote two years ago, and it took two hours to refactor all the changed APIs. For me, this is acceptable.
If you have doubts about using a specific experimental Composable, I'd suggest checking the [Google Issue Tracker][5] and searching for that Composable. This will give you a quick hint whether there are many or severe bugs occurring with that Composable.
[1]: https://developer.android.com/reference/kotlin/androidx/compose/material3/package-summary
[2]: https://developer.android.com/jetpack/compose/designsystems/material3#experimental-apis
[3]: https://i.stack.imgur.com/hznNf.png
[4]: https://cs.android.com/androidx/platform/frameworks/support/+/androidx-main:compose/material3/material3/src/commonMain/kotlin/androidx/compose/material3/AppBar.kt;drc=0cc02be72c56759955caf8f29d2e6ee6312d7931;l=395
[5]: https://issuetracker.google.com/issues |
|python|machine-learning|scikit-learn| |
I think the log is clear. It says __There is not enough space on the disk__. So delete some stuff to get more space and retry; if you still get errors, delete the __caches__ directory from _C:\Users\sahir\.gradle\caches_ and try again. |
Recently I also faced the same problem on macOS while opening RStudio; it produces a pop-up error saying
> File Listing Error, Error navigating to ~/Desktop: Operation not permitted.
So what you have to do is:
- Open RStudio
- You may find RStudio next to the Apple icon; click over there
- Then click on Services Settings
- Then click on Accessibility
- And then click on Restore Defaults
Now when you open RStudio again, it will ask for permission to access folders etc.
- Click Allow
You can also check the system preferences by clicking on the Apple icon:
> Apple icon > System Settings > Security & Privacy > Accessibility > ..... |
I was trying to search for a URL, `https://rewards.bing.com/pointsbreakdown`, with the help of Selenium in Python, so I wrote the following block of code:
```
search_in_first = driver.find_element(By.ID, 'sb_form_q')
search_in_first.send_keys(search_random_word())
search_in_first.submit()
```
but it redirects to a new Google/Bing search webpage [Wrong](https://i.stack.imgur.com/WpIpU.png)
instead of [Right](https://i.stack.imgur.com/x4iRg.png). So I want to know how to get hold of the built-in search box in my Python script.
I tried the following code block
```
search_in_first = driver.find_element(By.ID, 'sb_form_q')
search_in_first.send_keys(search_random_word())
search_in_first.submit()
```
to get hold of the search box, but it doesn't work well and doesn't search in this box.
Inner search box:

instead of this:
 |
|python|json|themoviedb-api| |
You'll need to add the `multiple` parameter to the upload widget.
You can read about it in the documentation:
https://cloudinary.com/documentation/upload_widget_reference#:~:text=opened.%0ADefault%3A%20local-,multiple,-Boolean |
I'm currently working on a school project where I need to apply multiple border images to an element using border-image-source. However, I'm encountering an issue where only one of the border images is displayed, while the other is not applied. It shows up as a grey border instead when both borders are applied, as shown below (left). When I alternate between the two borders, it appears as shown in the middle and right images. However, when I try to enable both in the inspect element, it doesn't work (right). [helpme](https://i.stack.imgur.com/rYcE7.png)
Here's the full class code
```CSS
.calendar {
height: 300px;
width: 400px;
margin: 0 auto;
margin-top: 50px;
background-image: url(calendarbg.png);
background-size: auto;
background-position: center;
/* ^^ not relevant, i guess ^^ */
border-style: solid;
border-image-source: url(border1.png);
/* border-image-source: url(border.png); */
border-image-slice: 30 fill;
border-width: 20px;
}
```
I tried simplifying the code and tried different images, but the issue still persists. I can't seem to have both image sources applied at the same time for some reason... |
null |
I have a NestJS application where the endpoint has this code: `response.cookie("refreshToken", refreshToken, {httpOnly: true, signed: true, maxAge: 1000 * 60 * 60 * 24})`, and on the Next.js client, in the middleware, I send a request to this endpoint:
```
import { cookies } from 'next/headers'
import { NextRequest, NextResponse } from 'next/server'
import { AuthConstantsEnum } from './constants/auth.constant'
import { authService } from './services/auth/auth.service'
export async function middleware(request: NextRequest) {
const refreshToken = cookies().get(AuthConstantsEnum.REFRESH_TOKEN)?.value
if (refreshToken) {
const data = await authService.getSession()
}
}
export const config = {
matcher: ['/((?!api|_next/static|_next/image|favicon.ico).*)'],
}
```
As a result, I see a Set-Cookie entry in the headers. How do I save the cookies in the browser?
There is an alternative, but it's bad:
```
//middleware.ts
const response = NextResponse.next()
...
response.headers.set("Set-Cookie", data.headers.get("Set-Cookie"))
```
|
null |
class GCNModel(nn.Module):
def __init__(self, in_channels, hidden_dim, out_channels, edge_dim): # 5, 64, 6, 3
super(GCNModel, self).__init__()
self.conv1 = Edge_GCNConv(in_channels=in_channels, out_channels=hidden_dim, edge_dim=edge_dim)
self.conv2 = Edge_GCNConv(in_channels=hidden_dim, out_channels=out_channels, edge_dim=edge_dim)
self.batch_norm1 = nn.BatchNorm1d(hidden_dim)
self.batch_norm2 = nn.BatchNorm1d(out_channels)
self.linear = nn.Linear(out_channels, out_channels)
def forward(self, x, edge_index, edge_attr):
x, edge_index, edge_attr = x, edge_index, edge_attr
x1 = self.conv1(x, edge_index, edge_attr)
x1 = self.batch_norm1(x1)
x1 = F.relu(x1)
x1 = F.dropout(x1, p=0.1, training=self.training)
# print("GCNModel x1:", x1)
# print("GCNModel x1.shape:", x1.shape) # (24, 64)
x2 = self.conv2(x1, edge_index, edge_attr)
x2 = self.batch_norm2(x2)
x2 = F.relu(x2)
# print("GCNModel x2:", x2)
# print("GCNModel x2.shape:", x2.shape) # (24, 6)
x2 = F.dropout(x2, p=0.1, training=self.training)
out = self.linear(x2)
print("GCNModel out:", out)
print("GCNModel out.shape:", out.shape) # (24, 6)
return out
def train_model(train_loader, val_loader, model, optimizer, output_dim, threshold_value, num_epochs=100, early_stopping_rounds=50, batch_size=4):
"""
Train the GNN model using k-fold cross-validation
Args:
train_loader: training data loader
val_loader: validation data loader
model: GNN model
optimizer: optimizer
num_epochs: number of training epochs (default: 100)
early_stopping_rounds: number of early stopping rounds (default: 50)
"""
best_val_loss = float('inf')
best_accuracy = 0 # Track best validation accuracy
rounds_without_improvement = 0
# Create the loss function
criterion = nn.CrossEntropyLoss()
# criterion = nn.BCEWithLogitsLoss() # binary classification
for epoch in range(num_epochs):
model.train()
total_loss = 0
correct = 0
total = 0
for data in train_loader:
optimizer.zero_grad()
#################### error #################
out = model(data.x, data.edge_index, data.edge_attr)
# Convert data.y to multi-hot encoding
one_hot_labels = convert_to_one_hot(data.y, output_dim)
print("train_model out.shape:", out.shape) # (24, 6)
print("train_model data.y.shape:", data.y.shape) # (18, 2)
print("train_model data.edge_attr.shape:", data.edge_attr.shape) # (18, 3)
print("train_model data.edge_attr:", data.edge_attr)
print("train_model one_hot_labels.shape:", one_hot_labels.shape) # (18, 6)
loss = criterion(out, one_hot_labels)
#################################################
# print("train_model loss:", loss)
total_loss += loss.item()
# print("torch.sigmoid(out):", torch.sigmoid(out))
predicted = (torch.sigmoid(out) >= threshold_value).long()
# print("predicted:", predicted)
correct += (predicted == one_hot_labels).all(dim=1).sum().item()
# print("correct:", correct)
total += len(data.y)
# print("total:", total )
loss.backward()
optimizer.step()
avg_train_loss = total_loss / len(train_loader)
train_accuracy = correct / total
The shape of `out` comes from `data.x` (one output row per node), while `data.y`, and therefore `one_hot_labels`, comes from `data.edge_attr` (one row per edge).
What can I do to fix the mismatch between the number of rows in `out` and `one_hot_labels`?
Should I modify the model, or should I modify the dimensions of the output??? |
Suppose you are writing a python web client to access an API of an online supermarket. Given below are the API details.
Base URL= http://host1.open.uom.lk:8080
The following product has been entered into the API server by you in the previous question.
However, it has been noted that the entered product's brand should be changed to "Araliya" instead of "CIC".
Write a python program to update the entry on the API server as required. Print the JSON response of the request.
{
"productName":"Araliya Basmathi Rice",
"description":"White Basmathi Rice imported from Pakistan. High-quality rice with extra fragrance. Organically grown.",
"category":"Rice",
"brand":"CIC",
"expiredDate":"2023.05.04",
"manufacturedDate":"2022.02.20",
"batchNumber":324567,
"unitPrice":1020,
"quantity":200,
"createdDate":"2022.02.24"
}
[enter image description here](https://i.stack.imgur.com/hBze3.png)
Please give me the answer; I can't understand how to solve this one. |
api one json python |
|python|json|api| |
null |
I am a Django and Python beginner, and I encountered a problem while using Django. I hope to use a filter in the template to get the string I need, but the result below is not what I expected.
```python
# filter definition
from django import template
register = template.Library()
@register.filter
def to_class_name(obj):
return obj.__class__.__name__
```
```
# HTML template for UpdateView (This template will be used by multiple models.)
# object = Order()
{{ object|to_class_name }} #result: Order
{{ 'wms'|add:object|to_class_name }} #result: str, expect: object
```
I roughly understand that the issue lies with the order of the filters, but it seems I can't add parentheses to change it.
```
{{ 'wms'|add:(object|to_class_name) }} #cause SyntaxError
```
Is there any way to solve this problem? Or is there a better way to determine the page data I need to output based on the model class when multiple models share one template? Thank you all. |
django add filters in template is not working expectedly |
null |
I'm developing a Fortran parser using ANTLR4, adhering to the ISO Fortran Standard 2018 specifications. While implementing lexer rules, I encountered a conflict between the `NAME` and `LETTERSPEC` rules. Specifically, when the input consists of just a letter, it is always tokenized as `NAME` and never as `LETTERSPEC`. Here's a partial, simplified version of the grammar:
```
lexer grammar FortranTestLex;
// Lexer rules
WS: [ \t\r\n]+ -> skip;
// R603 name -> letter [alphanumeric-character]...
NAME: LETTER (ALPHANUMERICCHARACTER)*;
// R865 letter-spec -> letter [- letter]
LETTERSPEC: LETTER (MINUS LETTER)?;
MINUS: '-';
// R601 alphanumeric-character -> letter | digit | underscore
ALPHANUMERICCHARACTER: LETTER | DIGIT | UNDERSCORE;
// R0002 Letter ->
// A | B | C | D | E | F | G | H | I | J | K | L | M |
// N | O | P | Q | R | S | T | U | V | W | X | Y | Z
LETTER: 'A'..'Z' | 'a'..'z';
// R0001 Digit -> 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
DIGIT: '0'..'9';
// R602 UNDERSCORE -> _
UNDERSCORE: '_';
```
```
grammar FortranTest;
import FortranTestLex;
// Parser rules
programName: NAME;
// R1402 program-stmt -> PROGRAM program-name
programStmt: PROGRAM programName;
letterSpecList: LETTERSPEC (COMMA LETTERSPEC)*;
// R864 implicit-spec -> declaration-type-spec ( letter-spec-list )
implicitSpec: declarationTypeSpec LPAREN letterSpecList RPAREN;
implicitSpecList: implicitSpec (COMMA implicitSpec)*;
// R863 implicit-stmt -> IMPLICIT implicit-spec-list | IMPLICIT NONE [( [implicit-name-spec-list] )]
implicitStmt:
IMPLICIT implicitSpecList
| IMPLICIT NONE ( LPAREN implicitNameSpecList? RPAREN )?;
// R1403 end-program-stmt -> END [PROGRAM [program-name]]
endProgramStmt: END (PROGRAM programName?)?;
// R1401 main-program ->
// [program-stmt] [specification-part] [execution-part]
// [internal-subprogram-part] end-program-stmt
mainProgram: programStmt? endProgramStmt;
//R502 program-unit -> main-program | external-subprogram | module | submodule | block-data
programUnit: mainProgram;
//R501 program -> program-unit [program-unit]...
program: programUnit (programUnit)*;
```
In this case, the tokenization always results in `NAME`, even though the input could also be a valid `LETTERSPEC`. How can I resolve this conflict in my lexer rules to ensure correct tokenization? Can I use ANTLR4's lexer-modes feature to resolve this issue?
I've tried adjusting the order of the lexer rules and refining the patterns, but I haven't been able to achieve the desired behavior. Any insights or suggestions on how to properly handle this conflict would be greatly appreciated. Thank you! |
I had the same problem.
I tried checking again and replacing it with the `getInfo` function.
Link [pypi/youtube-search-python][1]
[1]: https://pypi.org/project/youtube-search-python/ |
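If it helps, this is roughly how I used it – written from memory, so double-check the call against the linked docs:

```python
from youtubesearchpython import Video

# Fetches the video's metadata (title, duration, etc.) as a dict.
info = Video.getInfo("https://www.youtube.com/watch?v=dQw4w9WgXcQ")
print(info["title"])
```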
I ran `chmod -R 755 node_modules` and it works.
I'm working on a React project utilizing Material-UI for styling components. I've heard about Normalize.css for consistent browser rendering, and I'm wondering if it's possible to integrate Normalize.css with React MUI.
Specifically, I'm interested in:
- Whether using Normalize.css alongside Material-UI is recommended.
- Any potential conflicts or compatibility issues between Normalize.css and Material-UI.
- Best practices or considerations when combining the two.
I've already searched through the documentation and online resources but couldn't find a clear answer. If anyone has experience or insights on this topic, I'd greatly appreciate any guidance or advice. Thanks in advance!
Attempted Integration: I attempted to include Normalize.css alongside Material-UI by importing it into my project's main stylesheet, which is then included in my root component.
Expected Outcome: I expected that Normalize.css would normalize browser styles across different platforms, ensuring consistent rendering, while Material-UI would handle the custom components and styling according to its design guidelines. |
Can we use the normalize.css package with React MUI?
|reactjs| |
null |
I can't see any obvious problem with your code, but I suspect there is something you are not showing us. Maybe just set up the task logger using `celery.utils.log.get_task_logger`, so you can see if it actually starts the function. Find a link with info below.
https://www.programcreek.com/python/example/88906/celery.utils.log.get_task_logger |
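Something along these lines should do it – a minimal sketch, where the app name and broker URL are placeholders:

```python
from celery import Celery
from celery.utils.log import get_task_logger

app = Celery("myapp", broker="redis://localhost:6379/0")  # placeholders
logger = get_task_logger(__name__)

@app.task
def my_task(x, y):
    # If this line never appears in the worker log, the task isn't being started at all.
    logger.info("my_task started with x=%r and y=%r", x, y)
    return x + y
```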
I have added a test case to an existing (mixed Objective-C / Swift) project. Building and running the project on the simulator or on a device works fine. However, when trying to run the test, I get the error message `Expected a type` on the line `+ (UIUserInterfaceStyle)currentUIStyle;`
[![enter image description here][1]][1]
Any ideas what could cause this problem and how to solve it?
Xcode 15.2 on macOS 14.0
[1]: https://i.stack.imgur.com/ME2fS.png |
Xcode compile error "Expected a type" shows only when running tests |
|ios|objective-c|xcode|xctest| |
My main activity is this:
class MainActivity extends AppCompatActivity {
ActivityMainBinding uiBinding;
@Override
protected void onCreate(Bundle savedInstance){
super.onCreate(savedInstance);
uiBinding = ActivityMainBinding.inflate(getLayoutInflater());
ActionBar actionBar = getSupportActionBar();
if (actionBar != null) {
actionBar.setDisplayHomeAsUpEnabled(true);
}
}
}
styles.xml
<resources>
<style name="AppTheme" parent="Theme.AppCompat.Light.DarkActionBar">
<item name="colorPrimary">@color/colorPrimary</item>
<item name="colorPrimaryDark">@color/colorPrimaryDark</item>
<item name="colorAccent">@color/colorAccent</item>
<item name="android:windowAnimationStyle">@null</item>
<item name="android:spinnerItemStyle">@style/mySpinnerItemStyle</item>
</style>
<style name="mySpinnerItemStyle" parent="@android:style/Widget.Holo.DropDownItem.Spinner">
<item name="android:textColor">@android:color/black</item>
</style>
</resources>
And AndroidManifest.xml
<application
android:name=".MyApp"
android:allowBackup="false"
android:fullBackupContent="true"
android:icon="@mipmap/ic_launcher"
android:label="@string/app_name"
tools:replace="android:allowBackup"
android:theme="@style/AppTheme">
The activity runs fine with no crash, except for one thing: the back (up) button in the title bar is missing. What's wrong here?
I am trying to use a widget that I have created in React, loading it via script tags as shown below.
**React Implementation**
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
import React from "react";
import { Helmet } from "react-helmet";
const Dust = () => {
return (
<div>
<Helmet>
<link
href="https://master--testasdfasdfadsf.netlify.app/widget/index.css"
rel="stylesheet"
/>
<script
async
src="https://master--testasdfasdfadsf.netlify.app/widget/index.js"
></script>
</Helmet>
</div>
);
};
export default Dust;
<!-- end snippet -->
For react js the above is working fine and I can see the content on my screen.
**NextJs Implementation**
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
import Image from "next/image";
import { Inter } from "next/font/google";
import Script from "next/script";
import Head from "next/head";
const inter = Inter({ subsets: ["latin"] });
export default function Home() {
return (
<main
className={`flex min-h-screen flex-col items-center justify-between p-24 ${inter.className}`}
>
<Head>
<link
href="https://master--testasdfasdfadsf.netlify.app/widget/index.css"
rel="stylesheet"
/>
</Head>
<Script
src="https://master--testasdfasdfadsf.netlify.app/widget/index.js"
strategy="lazyOnload"
/>
</main>
);
}
<!-- end snippet -->
The above is not working in Next.js; however, I can see the JS and CSS loading in the network tab. What might be the solution in Next.js?
Script tag not loading content in nextjs but working fine in react |
|javascript|css|reactjs|next.js|react-widgets| |