```
List<Integer> b = a.size() > 10 ? new ArrayList<>(a.subList(0, 10)) : a;
``` |
I was also facing the same error when I was trying to upload numpy, tensorflow, and other libraries for my training task. I wasn't able to solve the problem, so what I did was just run the pip install commands as a task in my DAG.
Here is the code:
import sys
import subprocess

@task
def installing_dependencies_using_subprocess():
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "numpy"])
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "pandas"])
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "seaborn"])
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "scikit-learn"])
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "tensorflow"])
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "joblib"])
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "matplotlib"])
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "s3fs"])
    return {"message": "Dependencies installed successfully"}
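As a side note, the same idea can be written more compactly; this is only a sketch of what I mean (it assumes the Airflow 2.x TaskFlow `@task` import), passing all packages to a single pip invocation:
```python
import sys
import subprocess

from airflow.decorators import task  # assumption: Airflow 2.x TaskFlow API is available

@task
def installing_dependencies_using_subprocess():
    # A single pip call installs everything and lets pip resolve compatible versions together.
    packages = ["numpy", "pandas", "seaborn", "scikit-learn",
                "tensorflow", "joblib", "matplotlib", "s3fs"]
    subprocess.check_call([sys.executable, "-m", "pip", "install", *packages])
    return {"message": "Dependencies installed successfully"}
```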
|
It seems like the error is related to the fact that `addOption[i]` is undefined at the point where you are trying to use the `includes` method. To avoid this error, you should ensure that `addOption[i]` is defined before calling `includes` on it.
You can modify the relevant part of your code like this:
```jsx
<MenuItem value={2}>
<Checkbox checked={addOption[i]?.includes(2)} />
<ListItemText>{addOptionNames[1]}</ListItemText>
</MenuItem>
<MenuItem value={3}>
<Checkbox checked={addOption[i]?.includes(3)} />
<ListItemText>{addOptionNames[2]}</ListItemText>
</MenuItem>
<MenuItem value={4}>
<Checkbox checked={addOption[i]?.includes(4)} />
<ListItemText>{addOptionNames[3]}</ListItemText>
</MenuItem>
```
You can employ optional chaining to ensure that if `addOption[i]` exists, you can safely invoke the `includes` method on it. For further details on optional chaining, please consult this documentation: [Optional Chaining on MDN](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Optional_chaining). |
I would like to configure my GitHub Actions workflow to run based on one of two conditions:
- A source file has changed
- A configuration file has changed
Conditions should be evaluated on `OR`.
To solve the first issue I configured the action by using the `paths` parameter:
```
on:
push:
branches:
- main
paths:
- 'src/**'
```
This works and runs the action on every push where at least one file in the `src` folder or its subfolders has changed. I then added the following (inspired by [this](https://stackoverflow.com/a/76975099/23137927)):
```
on:
push:
branches:
- main
paths:
- 'src/**'
- 'myconfigurationfile.json'
```
But it does not trigger at all.
Please consider that:
- Changes to the source folder are done by developers with a merge pull request
- Changes to the configuration file are done by another [github action][1] which helps to sync files to multiple repositories
[1]: https://github.com/marketplace/actions/file-sync |
Keeping Track of Coin Flips Even When They Are Not Flipped |
null |
|javascript|ecmascript-6|scope|strict-mode| |
{"OriginalQuestionIds":[16310423],"Voters":[{"Id":182668,"DisplayName":"Pointy"},{"Id":2627243,"DisplayName":"Peter Seliger"},{"Id":328193,"DisplayName":"David","BindingReason":{"GoldTagBadge":"javascript"}}]} |
I am trying to build a simple QML project on Raspberry Pi but I keep running into this
```
[6/18 29.9/sec] Running rcc for resource qmake_untitled2
FAILED: .rcc/qrc_qmake_untitled2.cpp /home/khangnguyen2003/build-untitled2-Desktop-Debug/.rcc/qrc_qmake_untitled2.cpp
cd /home/khangnguyen2003/build-untitled2-Desktop-Debug && /opt/Qt/6.6/./libexec/rcc --output /home/khangnguyen2003/build-untitled2-Desktop-Debug/.rcc/qrc_qmake_untitled2.cpp --name qmake_untitled2 /home/khangnguyen2003/build-untitled2-Desktop-Debug/.rcc/qmake_untitled2.qrc
RCC Parse Error: '/home/khangnguyen2003/build-untitled2-Desktop-Debug/.rcc/qmake_untitled2.qrc' Line: 1 Column: 0 [Premature end of document.]
[7/18 34.8/sec] Running rcc for resource appuntitled2_raw_qml_0
FAILED: .rcc/qrc_appuntitled2_raw_qml_0.cpp /home/khangnguyen2003/build-untitled2-Desktop-Debug/.rcc/qrc_appuntitled2_raw_qml_0.cpp
cd /home/khangnguyen2003/build-untitled2-Desktop-Debug && /opt/Qt/6.6/./libexec/rcc --output /home/khangnguyen2003/build-untitled2-Desktop-Debug/.rcc/qrc_appuntitled2_raw_qml_0.cpp --name appuntitled2_raw_qml_0 /home/khangnguyen2003/build-untitled2-Desktop-Debug/.rcc/appuntitled2_raw_qml_0.qrc
RCC Parse Error: '/home/khangnguyen2003/build-untitled2-Desktop-Debug/.rcc/appuntitled2_raw_qml_0.qrc' Line: 1 Column: 0 [Premature end of document.]
[8/18 33.5/sec] Running moc --collect-json for target appuntitled2
ninja: build stopped: subcommand failed.
```
problem. This problem is due to the generated `appuntitled2_raw_qml_0.qrc` and `qmake_untitled2.qrc` files being empty after they are generated. During the installation of Qt 6 and its submodules, for which I followed [this](https://www.tal.org/tutorials/building-qt-66-raspberry-pi-raspberry-pi-os) and [this](https://www.tal.org/tutorials/building-qt-66-sub-modules) guides, I encountered the same error: running `qt-configure-modules` in the submodule folders generates empty qrc files. I managed to work around that issue by doing the same procedure on my Linux desktop, which generates populated qrc files, then modifying the contents of those qrc files and copying them over to my Raspberry Pi.
I would like to know what the potential cause of this issue might be. Thank you very much |
null |
|rust| |
> I’m getting the same error in the colab notebook for the NLP Course: Lesson 3 - Fine-tuning model with the Trainer API. In order to get around it you can:
> 1. Run pip install accelerate -U in a cell
> 2. In the top menu click Runtime → Restart Runtime
> 3. Do not rerun any cells with !pip install in them
> 4. Rerun all the other code cells and you should be good to go!
> On a side note, be sure to turn on a GPU for this notebook by clicking Edit → Notebook Settings → GPU type from the top menu. This was the next thing that got me : )
Source: https://discuss.huggingface.co/t/trainingargument-does-not-work-on-colab/43372/6 |
I cannot connect to the database in any way. I installed XAMPP on my computer and started the Apache and MySQL services, but I cannot insert anything or run queries.
I wanted to save the values entered in the text boxes to the MySQL database with the help of a button, but I could not succeed. Then I thought I would at least insert a fixed record into the database, but I couldn't do that either. I wish I could save the values entered in the inputs to the database. I guess it's not connecting to the database.
```
<input type="button" value="Home" class="homebutton" id="btnHome" onClick="Javascript:window.location.href = '127.0.0.1\xampp\htdocs\tester\conn.php';" />
```
```
<?php
$servername = "localhost:3306";
$username = "deneme";
$password = "deneme";
$dbname = "deneme";
// Create connection
$conn = new mysqli($servername, $username, $password, $dbname);
// Check connection
if ($conn->connect_error) {
die("Connection failed: " . $conn->connect_error);
}
$sql = "INSERT INTO please (firstname, lastname, email)
VALUES ('John', 'Doe', 'john@example.com')";
if ($conn->query($sql) === TRUE) {
echo "New record created successfully";
} else {
echo "Error: " . $sql . "<br>" . $conn->error;
}
$conn->close();
?>
```
|
Error adding data to mysql with javascript and php |
|javascript|mysql|insert|mysql-insert-id| |
null |
I have created a child component that uses Angular Material. I am showing dynamic data which is passed from the parent component. I am unable to wire up the action icon and name so that I can pass the event from the child to the parent component; this is also a standalone project (Angular 17).
[](https://i.stack.imgur.com/qOf0j.png)
Here is the code I am trying:
Parent component:
```
ngOnInit() {
this.columns = [
{
title: "Country",
name: "countryName"
},
{
title: "State",
name: "state"
},
{
title: "City",
name: "cityName"
},
{
title: "Created By",
name: "createdBy"
},
{
title: "Created Date",
name: "createdAt"
},
{
title: "Action",
name: 'actions',
buttons: [
{
type: "",
icon: "edit",
class: "tbl-fav-edit",
title: ActionButtonType.Edit,
click: this.btnEditClick,
},
{
type: "",
icon: "trash-2",
class: "tbl-fav-delete",
title: ActionButtonType.Delete,
click: 'this.btnDeleteClick',
}
]
}
]
this.getAllCity();
}
@if (flg) {
<app-dynamic-mat-table [workList]="workList" [columns]="columns"></app-dynamic-mat-table>
}
```
Child component:
```
import { Component, Input, OnInit,ViewChild } from '@angular/core';
import { MatTableDataSource, MatTableModule } from '@angular/material/table';
import { MatPaginator, MatPaginatorModule } from '@angular/material/paginator';
import { MatIconModule } from '@angular/material/icon';
import { DatePipe } from '@angular/common';
import { FeatherIconsComponent } from '../feather-icons/feather-icons.component';
@Component({
selector: 'app-dynamic-mat-table',
standalone: true,
imports: [MatTableModule,MatPaginatorModule,MatPaginator,MatIconModule,DatePipe,FeatherIconsComponent, DatePipe,],
templateUrl: './dynamic-mat-table.component.html',
styleUrl: './dynamic-mat-table.component.scss'
})
export class DynamicMatTableComponent implements OnInit{
hide = true;
@ViewChild(MatPaginator, { static: true }) paginator!: MatPaginator;
@Input() workList:any=[];
@Input() columns:any=[];
dataSource:any;
displayedColumns=[];
displayedColumnss=[];
constructor(){
}
ngOnInit(): void {
console.log(this.columns);
this.dataSource = new MatTableDataSource<any>(this.workList);
this.displayedColumns = this.columns.map((column: any) => column.name);
this.displayedColumnss = this.columns.map((column: any) => column);
this.dataSource.paginator = this.paginator;
console.log(this.displayedColumns);
console.log(this.displayedColumnss);
}
getColumnTitle(columnName: []) {
console.log(columnName);
console.log(this.columns);
const columnTitle = this.columns.find((column: any) => column.name === columnName);
return columnTitle ? columnTitle.title : '';
}
<div class="responsive_table">
<table mat-table [dataSource]="dataSource" matSort class="mat-cell advance-table">
@for (column of displayedColumns; track column) {
<ng-container [matColumnDef]="column">
<th mat-header-cell *matHeaderCellDef mat-sort-header> {{ getColumnTitle(column) }} </th>
<td mat-cell *matCellDef="let element">
@if(column==='createdAt'){
{{ element[column] | date: 'dd-MM-yyyy' }}
}@else{
{{ element[column] }}
}
@if(column==='actions'){
{{element[column]}}
@for(actionButton of element.buttons; track actionButton){
<span>{{actionButton}}sww</span>
}
<td><app-feather-icons [icon]="'edit'" [class]="'tbl-fav-edit'" ></app-feather-icons></td>
<td> <app-feather-icons [icon]="'trash-2'" [class]="'tbl-fav-delete'"></app-feather-icons></td>
}
</td>
</ng-container>
}
<tr mat-header-row *matHeaderRowDef="displayedColumns"></tr>
<tr mat-row *matRowDef="let row; columns: displayedColumns;"></tr>
</table>
<mat-paginator [pageSizeOptions]="[5, 10, 20]" showFirstLastButtons></mat-paginator>
</div>
```
Here the edit and delete icons are shown statically, but I need these icons to come from the parent so that they can be used dynamically, and when the edit icon is clicked I also need to emit the record to the parent component.
|
How to reuse Angular material table in Angular 17? |
I am new to web scraping. I am trying to scrape the 2022 Forbes table from this website: https://en.wikipedia.org/wiki/List_of_largest_companies_in_India.
However, the Rank column and the Forbes Rank column both have colspan = 2,
so there are now 9 table headers while each data row has 11 cells, and when I try to insert the row data under the corresponding headers I get an error (cannot set a row with mismatched columns).
So how do I account for the colspan of Rank and Forbes Rank?
Here is my code -
```
from bs4 import BeautifulSoup
import requests
url = 'https://en.wikipedia.org/wiki/List_of_largest_companies_in_India'
page = requests.get(url)
soup = BeautifulSoup(page.text,'html')
soup.find('table')
table = soup.find('table')
titles = table.find_all('th')
Table_Title = [title.text.strip() for title in titles]
import pandas as pd
df = pd.DataFrame(columns = Table_Title)
df
column_data = table.find_all('tr')
column_data
for row in column_data[1:]:
row_data = row.find_all('td')
individual_row_data = [data.text.strip() for data in row_data]
length = len(df)
df.loc[length] = individual_row_data
print(individual_row_data)
```
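The rough direction I was considering (just a sketch, I have not verified it) is to expand each `th` by its `colspan` attribute so that the header count matches the 11 data cells:
```python
# Sketch: repeat each header according to its colspan (e.g. Rank -> 2 columns),
# reusing the `table` and `pd` objects from the code above.
Table_Title = []
for th in table.find_all('th'):
    span = int(th.get('colspan', 1))
    Table_Title.extend([th.text.strip()] * span)

df = pd.DataFrame(columns=Table_Title)
```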
I want to know how to handle the colspan properly. |
|java|jackson|flowable| |
Any time I invoke npm on Debian (literally any command, legitimate or not), it gives me the
```
/usr/bin/node: Syntax error: "(" unexpected
```
error. The npm version is 9.2.0. The same thing happens with nodejs:
```
/usr/bin/nodejs: cannot execute binary file: Exec format error
```
The nodejs version is 18.19.0.
I tried removing and reinstalling them. I even reinstalled my Debian WSL, but the problem persists. I used the command
```
sudo apt install nodejs npm
``` |
Having a syntax error any time I use an npm/nodejs command |
|node.js|npm|debian| |
null |
I am very new to ReactJS. When I test GET requests for the 'all posts' page after login (which creates the token in the first place), it works fine, but when I test POST requests that extract the token and use it in the 'add post' logic, it returns 'undefined' and 'HTTP/1.1 401 Unauthorized'.
I used HTTPie to run/test requests. I have been trying to fix this error for two days now. Please help.
```
export const login = (req,res)=>{
const q = "SELECT * FROM users WHERE username=?"
db.query(q, [req.body.username], (err,data) => {
if(err) return res.status(500).json(err);
if(data.length === 0) return res.status(404).json("User not found");
const checkPassword = bcrypt.compareSync(req.body.password, data[0].password)
if(!checkPassword) return res.status(400).json("wrong password or username")
const token = jwt.sign({id:data[0].id}, "secretKey")
const {password, ...others} = data[0]
res.cookie("accessToken", token, {
httpOnly: true,
sameSite: "strict"
}).status(200).json(others);
})
}
export const addPost = (req,res) => {
const token = req.cookies.accessToken;
if(!token) return res.status(401).json("Not Logged In !!!");
//if(!token) return res.status(401).json("Not logged in!");
jwt.verify(token, "secretKey", (err, userInfo) => {
if(err) return res.status(403).json("Token is not valid!")
const q = "INSERT INTO posts (`desc`,`img`,`userId`,`createdAt`) VALUES (?)";
const values = [
req.body.desc,
req.body.img,
userInfo.id,
moment(Date.now()).format("YYYY-MM-DD HH:mm:ss")
];
db.query(q, [values], (err,data)=>{
if(err) return res.status(500).json(err);
return res.status(200).json("Post has been created");
})
});
}
import express from "express";
const app = express();
import userRoutes from "./routes/users.js";
import authRoutes from "./routes/auth.js";
import postRoutes from "./routes/posts.js";
import commentRoutes from "./routes/comments.js";
import likeRoutes from "./routes/likes.js";
import cors from "cors";
import cookieParser from "cookie-parser";
//middleware
app.use((req,res,next)=>{
res.header("Access-Control-Allow-Credentials", true);
next();
});
app.use(express.json())
app.use(
cors({
origin:"http://localhost:3000",
})
);
app.use(cookieParser())
``` |
Removing `@RunWith(AndroidJUnit4.class)` annotations from the test classes fixed the issue, although I can't say why or how it fixed it.
Edit: All right I did some more testing. I migrated my app to Kotlin, and suddenly I noticed the tests began to work with the `@RunWith` annotation, too. Here's what I found out:
import org.junit.BeforeClass;
import org.junit.Test;
import org.junit.runner.RunWith;
import androidx.test.ext.junit.runners.AndroidJUnit4;

@RunWith(AndroidJUnit4.class) // <-- @RunWith + @BeforeClass = Error
public class AndroidXJunitTestJava {

    @BeforeClass
    public static void setup() {
        // Setting up once before all tests
    }

    @Test
    public void testing() {
        // Testing....
    }
}
This Java test fails with the `Delegate runner for AndroidJunit4 could not be loaded` error. But If I remove the `@RunWith` annotation, it works. Also, if I replace the `@BeforeClass` setup with just a `@Before`, like this:
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import androidx.test.ext.junit.runners.AndroidJUnit4;

@RunWith(AndroidJUnit4.class) // <-- @RunWith + @Before = works?
public class AndroidXJunitTestJava {

    @Before
    public void setup() {
        // Setting up before every test
    }

    @Test
    public void testing() {
        // Testing....
    }
}
The tests will run without errors. I needed to use the `@BeforeClass` annotation, so I just removed `@RunWith`.
But now that I am using Kotlin, the following (which should be equal to the first Java example) works:
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.BeforeClass
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class AndroidXJunitTest {

    companion object {
        @BeforeClass fun setup() {
            // Setting up
        }
    }

    @Test
    fun testing() {
        // Testing...
    }
}
Also, as [Alessandro Biessek][1] said in an answer and @Ioane Sharvadze in the comments, the same error can happen with the `@Rule` annotation. If I add a line
@Rule val instantTaskExecutorRule = InstantTaskExecutorRule()
To the Kotlin example, the same delegate runner error happens. This must be replaced with
@get:Rule val instantTaskExecutorRule = InstantTaskExecutorRule()
Explanation [here][2].
[1]: https://stackoverflow.com/a/52934007/10518087
[2]: https://stackoverflow.com/questions/29945087/kotlin-and-new-activitytestrule-the-rule-must-be-public#32827600 |
Empty generated qrc files on Raspberry Pi |
|linux|qt|raspberry-pi|ninja| |
# Historic question.
Note that Apple now have **UICollectionViewCompositionalLayout**, which is trivial to use.
(Essentially just set fractionalWidth to 1.)
All previous apple systems for sizing collection views were garbage, and they finally fixed it with UICollectionViewCompositionalLayout.
---
In a vertical `UICollectionView` ,
Is it possible to have **full-width cells**, but, allow the **dynamic height** to be controlled by **autolayout**?
This strikes me as perhaps the "most important question in iOS with no really good answer."
|
null |
How to reverse the value of column in a dataframe? |
**WARNINGS**:
?: (staticfiles.W004) The directory 'C:\Users\19828037\Desktop\APP DE CONTROL DE REPORTES\BackEnd\GestionReportes\static' in the STATICFILES_DIRS setting does not exist.
It is impossible to add a non-nullable field 'nivel' to reporte without specifying a default. This is because the database needs something to populate existing rows.
Please select a fix:
1) Provide a one-off default now (will be set on all existing rows with a null value for this column)
2) Quit and manually define a default value in models.py.
Select an option: 2
PS C:\Users\19828037\Desktop\APP DE CONTROL DE REPORTES\BackEnd\GestionReportes>
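From what I understand, option 2 would mean giving the new field a default in models.py, roughly like this (just my guess; the real field type of `nivel` may be different):
```python
from django.db import models

class Reporte(models.Model):
    # ...existing fields stay as they are...
    nivel = models.IntegerField(default=0)  # hypothetical type and default value
```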
I need help! The database connection I have is the default SQLite3; this is just a test of the functionality, and I will soon migrate to MySQL. |
How to solve this problem? I am a junior programmer |
|sqlite|django-models|django-admin| |
null |
|javascript|ecmascript-6|arrow-functions| |
Change the text cursor/caret in Visual Studio |
null |
I found the line that confused me and that leads to this result:
`self.image_label.setScaledContents(True)`
I just needed to remove it and everything works fine |
I'm trying to work with an SQLite database in React Native, using the Expo library expo-sqlite.
- The package is properly installed with **npx expo install expo-sqlite**
- I have already removed node_modules, package-lock.json, Pods, and Builds (from the iOS folder)
- Reinstalled with **npm i**
- Reinstalled pods with **pod install**
- Followed the tutorial at https://docs.expo.dev/versions/latest/sdk/sqlite/
But I get the error *"Error: Cannot find native module 'ExpoSQLite', js engine: hermes"*
So when the app runs, the result is an ERROR screen on the emulator:
[](https://i.stack.imgur.com/rIbjh.png)
I found the line where the module is called, but as far as I understand, Expo does not compile the required files.
`const ExpoSQLite = requireNativeModule('ExpoSQLite');` from **SQLite.ts** of node_modules/expo-sqlite
When I comment out the line that calls the database, the error is gone and the screen is shown, but there is no DB attached.
Strangely, there is no such error when compiling the Android version.
Any help will be appreciated.
Thanks a lot,
Andrew
|
How to include colspan to a table header while web scrapping |
|pandas|web-scraping|beautifulsoup|jupyter-notebook| |
null |
I have created a chain filter like below:
Filters.Filter filter =
    FILTERS
        .condition(
            FILTERS
                .chain()
                .filter(FILTERS.family().exactMatch(columnFamily))
                .filter(FILTERS.qualifier().exactMatch("col_a"))
                .filter(FILTERS.value().exactMatch("value_a"))
        )
        .then(FILTERS.pass())
        .otherwise(FILTERS.block());
But this is just for one condition like 'col_a=value_a'. How to use multiple filters to fetch data from Bigtable?
Chain Filters behave differently. They output only the matching cell. So, I had to apply "pass" and "block" filters in "then" and "otherwise" functions.
How can I add one more condition for another column qualifier and value? |
How to apply multiple filter conditions like "col_a='some_value' AND col_b='some_another_value'" in Google Cloud Bigtable |
|google-cloud-platform|google-cloud-bigtable|bigtable| |
null |
{"Voters":[{"Id":14732669,"DisplayName":"ray"},{"Id":62576,"DisplayName":"Ken White"},{"Id":839601,"DisplayName":"gnat"}],"SiteSpecificCloseReasonIds":[18]} |
Here's a solution derived from obscure's answer to subtract the head of the arrow from the line so it does not stick out:
```dart
// Imports added for completeness: math functions plus Flutter's painting types
// (Offset, Canvas, Paint, Path, Color).
import 'dart:math';
import 'package:flutter/material.dart';

void drawArrow(Offset a, Offset b, Canvas canvas, Color color) {
final paint = Paint()
..color = color
..strokeWidth = 2
..strokeCap = StrokeCap.round;
const arrowSize = 10;
const arrowAngle = pi / 6;
final dX = b.dx - a.dx;
final dY = b.dy - a.dy;
final angle = atan2(dY, dX);
// Recalculate b such that it's the end of the line minus the arrow.
final Offset subtractedB = Offset(
b.dx - (arrowSize - 2) * cos(angle),
b.dy - (arrowSize - 2) * sin(angle),
);
canvas.drawLine(a, subtractedB, paint);
final path = Path();
path.moveTo(b.dx - arrowSize * cos(angle - arrowAngle),
b.dy - arrowSize * sin(angle - arrowAngle));
path.lineTo(b.dx, b.dy);
path.lineTo(b.dx - arrowSize * cos(angle + arrowAngle),
b.dy - arrowSize * sin(angle + arrowAngle));
path.close();
canvas.drawPath(path, paint);
}
``` |
null |
|arrays|go|intersection| |
POST requests test on JWT token returns "undefined" |
I'm in the process of migrating my Windows servers which contain SQL Server from Windows 2012 to Windows 2016.
At the moment my Always On failover cluster contains three 2012 servers, and I want to replace two of them with new 2016 servers which I have already created.
The issue is that as soon as I try to add a 2016 node, the cluster goes offline right away and the "current host server" changes to the new server, which is not configured yet at that point.
When I remove and re-add an existing 2012 server, this doesn't happen (no host change or offline).
The logs didn't provide any special information, and the only thing I could find is this (10.47.2.99 is the cluster IP of one of the 2012 nodes):
```
Cluster resource 'IP Address 10.47.2.99' of type 'IP Address' in clustered role 'Cluster Group' failed.
Based on the failure policies for the resource and role, the cluster service may try to bring the resource online on this node or move the group to another node of the cluster and then restart it. Check the resource and group state using Failover Cluster Manager or the Get-ClusterResource Windows PowerShell cmdlet.
```
```
The Cluster service failed to bring clustered role 'Cluster Group' completely online or offline. One or more resources may be in a failed state. This may impact the availability of the clustered role.
```
```
Clustered role 'Cluster Group' has exceeded its failover threshold. It has exhausted the configured number of failover attempts within the failover period of time allotted to it and will be left in a failed state. No additional attempts will be made to bring the role online or fail it over to another node in the cluster. Please check the events associated with the failure. After the issues causing the failure are resolved the role can be brought online manually or the cluster may attempt to bring it online again after the restart delay period.
```
I have tried many different ways to attack this issue, including removing the existing config from the same subnet to make sure it doesn't try to drift to that IP and alert that the IP is already in use.
Another thought was the quorum settings, but that didn't work either.
From the looks of it, I think the issue is some unknown 2016 server setting or service version which causes the host role to be taken over and then fail due to missing config; otherwise I would expect the same behavior with my 2012 servers.
My end goal is to add the 2016 servers to the cluster and then configure their cluster IP as I would normally do.
Thanks for reading and any ideas you can provide. |
Microsoft SQL Always On failover cluster goes offline when adding a new node |
|sql|sql-server|alwayson|failovercluster| |
null |
I'm encountering an issue with Django's built-in password reset functionality within my project. Specifically, I have an app named 'user' and I've defined the app_name variable in my urls.py as app_name = 'user'. However, this seems to be causing problems with Django's password reset views.
Initially, I encountered the "Reverse for 'url name' not found" error when using auth_views.PasswordResetView. To address this, I tried setting the success_url parameter to 'user:password_reset_done' and providing a custom email_template_name.
However, now I'm facing a new issue where the debugger is raising a DisallowedRedirect exception with the message "Unsafe redirect to URL with protocol 'user'" when attempting to access the password reset functionality. This occurs during the execution of django.contrib.auth.views.PasswordResetView.
the error is:
```
Unsafe redirect to URL with protocol 'user'
Request Method: POST
Request URL: http://127.0.0.1:8000/accounts/reset_password/
Django Version: 5.0.2
Exception Type: DisallowedRedirect
Exception Value:
Unsafe redirect to URL with protocol 'user'
Raised during: django.contrib.auth.views.PasswordResetView
```
Here's a summary of my setup:
urls.py (within the 'user' app):
```
from django.urls import path
from django.contrib.auth import views as auth_views
app_name = 'user'
urlpatterns = [
path('reset_password/', auth_views.PasswordResetView.as_view(template_name='registration/reset_password.html',
success_url='user:password_reset_done',
email_template_name='registration/password_reset_email.html'),
name="reset_password"),
path('reset_password_sent/', auth_views.PasswordResetDoneView.as_view(template_name='registration/password_reset_done.html'),
name='password_reset_done'),
path('reset/<uidb64>/<token>/', auth_views.PasswordResetConfirmView.as_view(),
name='password_reset_confirm'),
path('reset_password_complete/', auth_views.PasswordResetCompleteView.as_view(),
name='password_reset_complete')
]
```
registration/reset_password.html:
```
{% block content %}
<h1>Password reset</h1>
<p>Forgotten your password? Enter your email address below, and we’ll email instructions for setting a new one.</p>
<form method="post">
{% csrf_token %}
{{ form.as_div }}
<input type="submit" id="reset_password" value="Reset my password">
</form>
{% endblock content %}
```
registration/password_reset_done.html:
```
{% block content %}
<h1>Password reset sent</h1>
<p>We've emailed you instructions for setting your password.</p>
{% endblock content %}
```
registration/password_reset_email.html:
```
Someone asked for a password reset for email {{ email }}. Follow the link below:
{{ protocol }}://{{ domain }}{% url 'user:password_reset_confirm' uidb64=uid token=token %}
```
I've tried various approaches but haven't been able to resolve this issue. I scoured Stack Overflow for potential solutions, but unfortunately none of them yielded the desired outcome. Any insights or suggestions on how to address this would be greatly appreciated. Thank you! |
I'm not sure what your actual restrictions are, but I'd override the container's size and apply it to the child elements. Then you can override that width just for the one paragraph class. Note also the adjustment to the auto margin configuration for centering.
This just leaves the issue of the orange background color covering the wider area. I opted to apply it to a pseudo-element, but there are probably other ways, such as a linear gradient.
<!-- begin snippet: js hide: false console: false babel: false -->
<!-- language: lang-css -->
body {
margin: 0;
padding: 0;
font-size: 100%;
padding-left: 5em;
background-color: #ecf0f1;
}
#sb {
position: fixed;
left: 0;
top: 0;
width: 5em;
height: 100%;
overflow-y: auto;
z-index: 1000;
background-color: #3498db;
}
#contents {
position: relative;
}
#contents::before {
position: absolute;
content: '';
width: 20rem;
height: 100%;
left: 50%;
transform: translateX(-50%);
background: #e67e22;
z-index: -1;
}
#contents>* {
margin-left: auto;
margin-right: auto;
max-width: 20rem;
background-color: #e67e22;
}
#contents>h1,
#contents>p {
background-color: #f1c40f;
}
#contents>p.large {
max-width: 100%;
background-color: #2ecc71;
}
<!-- language: lang-html -->
<body>
<div id="sb"></div>
<div id="contents">
<h1>hello!</h1>
<p>
I'm baby slow-carb fam synth swag. Adaptogen farm-to-table air plant kickstarter put a bird on it chillwave authentic 3 wolf.
</p>
<p class="large">
Four dollar toast post-ironic intelligentsia, aesthetic taiyaki small batch succulents readymade shabby chic portland.
</p>
<p>
Ascot lyft grailed 8-bit mlkshk. Fam cornhole woke tattooed offal hot chicken post-ironic hammock hell of chartreuse pok pok gluten-free leggings marxism.
</p>
</div>
</body>
<!-- end snippet -->
|
Sometimes the server needs to be restarted after installing and setting up Tailwind CSS for the changes to be reflected. Try cancelling the current process and starting it again.
[DotNetZip](https://www.nuget.org/packages/DotNetZip/) worked great in a clean way.
DotNetZip is a FAST, FREE class library and toolset for manipulating zip files.
**Code**
static void Main(string[] args)
{
using (ZipFile zip = new ZipFile())
{
zip.Password = "mypassword";
zip.Encryption = EncryptionAlgorithm.WinZipAes256;
zip.AddDirectory(@"C:\Test\Report_CCLF5\");
zip.Save(@"C:\Test\Report_CCLF5_PartB.zip");
}
} |
null |
Neither the `Count` nor the `Any()`. The correct API to use is the [`IsEmpty`][1] property:
// Gets a value that indicates whether the ConcurrentDictionary is empty.
public bool IsEmpty { get; }
Compared to the `Any()` it doesn't cause the allocation of an enumerator, and gives a more reliable answer. The behavior of the `ConcurrentDictionary<K,V>` enumerator is [pretty loosely documented][2]. For example [it's not guaranteed][3] that a single enumeration of the dictionary will return unique keys. It could be said that even if the enumerator never returned any elements, it wouldn't violate any specs.
Compared to the `Count` it doesn't acquire the internal locks, unless the dictionary is empty. So the only scenario that the `IsEmpty` could cause contention, is if you have a bunch of worker threads that are doing nothing else than invoking the `IsEmpty` in a tight loop on an empty dictionary.
Here is the [source code][4] of the `IsEmpty` property (.NET 8), in case you want to see it:
public bool IsEmpty
{
get
{
// Check if any buckets are non-empty, without acquiring any locks.
// This fast path should generally suffice as collections are usually not empty.
if (!AreAllBucketsEmpty())
{
return false;
}
// We didn't see any buckets containing items, however we can't be sure
// the collection was actually empty at any point in time as items may have been
// added and removed while iterating over the buckets such that we never saw an
// empty bucket, but there was always an item present in at least one bucket.
int locksAcquired = 0;
try
{
AcquireAllLocks(ref locksAcquired);
return AreAllBucketsEmpty();
}
finally
{
ReleaseLocks(locksAcquired);
}
}
}
[1]: https://learn.microsoft.com/en-us/dotnet/api/system.collections.concurrent.concurrentdictionary-2.isempty
[2]: https://learn.microsoft.com/en-us/dotnet/api/system.collections.concurrent.concurrentdictionary-2.getenumerator
[3]: https://github.com/dotnet/runtime/issues/46374 "Guarantee that ConcurrentDictionary enumerations don't emit duplicate keys"
[4]: https://github.com/dotnet/runtime/blob/v8.0.0/src/libraries/System.Collections.Concurrent/src/System/Collections/Concurrent/ConcurrentDictionary.cs#L1479 |
{"Voters":[{"Id":421705,"DisplayName":"Holger Just"},{"Id":13302,"DisplayName":"marc_s"},{"Id":839601,"DisplayName":"gnat"}],"SiteSpecificCloseReasonIds":[13]} |
I am currently in the process of migrating data from Elasticsearch 7.17.7 to OpenSearch AWS.
While following the steps outlined in the migration guide provided by AWS (https://docs.aws.amazon.com/opensearch-service/latest/developerguide/migration.html), I have successfully created a snapshot in Elasticsearch version 7.17.7 and uploaded it to an S3 bucket. Additionally, I have successfully registered the repository as shown in the attached image.
However, when attempting to restore the snapshot on AWS, I encountered the following error:
```
{
"error": {
"root_cause": [
{
"type": "parsing_exception",
"reason": "Failed to parse object: unknown field [uuid] found",
"line": 1,
"col": 25
}
],
"type": "repository_exception",
"reason": "[snapshot-temp] Unexpected exception when loading repository data",
"caused_by": {
"type": "parsing_exception",
"reason": "Failed to parse object: unknown field [uuid] found",
"line": 1,
"col": 25
}
},
"status": 500
}
```
I tried to list the snapshot and I get the same error.
Please let me know if you have any clue about this error and how to resolve it.
Thanks |
How about trying `asp-action`, like
<form id="FormActive" asp-action="Index">
</form>
Then in controller:
[HttpPost]
public IActionResult Index(int id)
{
return View();
}
result:
[![enter image description here][1]][1]
**Update**:
Try to use `<a>` tag without With Tag Form
<a asp-controller="Home" asp-action="Index2" asp-route-id="1"><button>1</button></a>
Then in controller;
[HttpGet]
public IActionResult Index2(int id)
{
return View();
}
result:
[![enter image description here][2]][2]
**Update2**
We can add a hidden `<input>` in the form to get the id like:
<button type="submit" Form="FormActive" name="id" value="1">a</button>
<form id="FormActive" method="post" asp-controller="Home" asp-action="Index" data-ajax="true" data-ajax-method="post" data-ajax-complete="completed">
<input id="buttonid" type="hidden" name="id" />
</form>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.6.1/jquery.min.js"></script>
<script>
$("button").click(function (e) {
var fired_button = $(this).val();
alert(fired_button);
$('#buttonid').val(fired_button);
});
</script>
result:
[![enter image description here][3]][3]
[1]: https://i.stack.imgur.com/WNnI8.png
[2]: https://i.stack.imgur.com/LjdY6.png
[3]: https://i.stack.imgur.com/t5VQ2.png |
{"Voters":[{"Id":735926,"DisplayName":"bfontaine"},{"Id":328193,"DisplayName":"David"},{"Id":839601,"DisplayName":"gnat"}]} |
The command sometimes successfully launches the batch file; however, sometimes it doesn't work and immediately closes.
When I launch the batch file directly from either cmd or by double clicking the file, it always works flawlessly.
I was wondering what I could do to fix this issue.
#a couple of the commands I have tried
start-process -wait -filepath "C:\Windows\System32\cmd.exe" -argumentlist "/c 'C:\path\stream3.bat' .\refresh.bat"
start-process -wait -filepath "C:\Windows\System32\cmd.exe" -argumentlist "/c 'C:\path\stream3.bat'"
#The contents of the batch file
ffmpeg -loglevel error -i "rtmp://mystream" -vf "scale=426:240,boxblur=luma_radius=1:chroma_radius=1" -vcodec libx264 -an -r 25 -b:v 550000 -crf 31 -sc_threshold 0 -f hls -hls_time 5 -segment_time 5 -hls_list_size 5 "C:\nginx\srv\hls\stream3\output.m3u8"
|
Django PasswordResetView not working properly with app_name causing DisallowedRedirect |
|python|django|django-views|django-templates| |
null |
I was also facing the same error when I was trying to upload numpy, tensorflow, and other libraries for my training task. I wasn't able to solve the problem, so what I did was just run the pip install commands as a task in my DAG. Also, the version compatibility is handled by pip.
Here is the code:
import sys
import subprocess

@task
def installing_dependencies_using_subprocess():
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "numpy"])
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "pandas"])
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "seaborn"])
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "scikit-learn"])
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "tensorflow"])
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "joblib"])
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "matplotlib"])
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "s3fs"])
    return {"message": "Dependencies installed successfully"}
|
It is possible to dynamically add another `div` as shown in the example below. I have removed the grid as it is not necessary to demonstrate this. I have also reduced your HTML and JavaScript code to a minimum.
The important work is being done in the `addButton` event listener.
There are several ways to calculate the next id for the `data-id` attribute. I chose to get the number of children of the `div` with an id of "phrase" using `childElementCount` and then add `1` to it.
You should inspect the newly created blue box in dev tools to compare it with the existing blue boxes to make sure that all of the classes and data-ids have been added properly.
With a little adjustment, it should be possible to use the code in the event listener to create all of the blue boxes dynamically. Probably, this code should be extracted into a separate function.
Notice that I am calling the JQuery function `draggable()` again inside the event listener with:
```
$(".words").draggable();
```
to add all of the correct classes to the newly created blue box.
This method should give you an idea how to approach the rest of your problem. I am sure that you should be able to get the text of the blue box from an appropriate `input` element.
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
$(".words").draggable();
const addButton = document.querySelector("#add-blue-box");
addButton.addEventListener(
"click", () => {
const phraseDiv = document.querySelector("#phrase");
let nextId = phraseDiv.childElementCount + 1;
const divElement = document.createElement("div");
const spanElement = document.createElement("span");
spanElement.textContent = "F6 Text";
spanElement.setAttribute('data-id', nextId);
spanElement.classList.add('words');
divElement.appendChild(spanElement);
divElement.setAttribute('data-id', nextId);
phraseDiv.appendChild(divElement);
$(".words").draggable();
});
<!-- language: lang-css -->
h1 {
text-align: center;
}
.wrap {
display: flex;
gap: 2rem;
position: relative;
padding-left: 220px;
height: 100vh;
}
#phrase {
position: absolute;
left: 0;
top: 0;
bottom: 0;
color: #fff;
width: 150px;
overflow: auto;
z-index: 2;
display: flex;
flex-direction: column;
margin: 1rem 0 .5rem;
}
#phrase>div {
margin: 0 0 10px;
width: 150px;
padding: 5px 10px;
background: #007bff;
border: 2px solid #007bff;
border-radius: 6px;
color: #fff;
}
<!-- language: lang-html -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.12.1/jquery-ui.min.js"></script>
<h1>Drag and drop example </h1>
<button id="add-blue-box" type="button">Add Blue Box</button>
<div class="wrap">
<div id="phrase">
<div data-id="1"><span data-id="1" class="words">H1 Text</span></div>
<div data-id="2"><span data-id="2" class="words">H2 Text</span></div>
<div data-id="3"><span data-id="3" class="words">H3 Text</span></div>
<div data-id="4"><span data-id="4" class="words">H4 Text</span></div>
<div data-id="5"><span data-id="5" class="words">H5 Text</span></div>
<div data-id="6"><span data-id="6" class="words">H6 Text</span></div>
</div>
</div>
<!-- end snippet -->
Alternatively, if you really wanted to do this with JQuery (which is only a JavaScript library) then the following code snippet would be one solution:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
$(".words").draggable();
const addButton = document.querySelector("#add-blue-box");
addButton.addEventListener(
"click", () => {
let nextId = $("#phrase").children().length + 1;
let text1 = "F6";
let text2 = "Text";
$("<div>", {
"data-id": nextId
})
.append($("<span>", {
"data-id": nextId,
class: "words"
})
.text("Ln#" + text1 + " " + text2)
).appendTo($("#phrase"));
$(".words").draggable();
});
<!-- language: lang-css -->
h1 {
text-align: center;
}
.wrap {
display: flex;
gap: 2rem;
position: relative;
padding-left: 220px;
height: 100vh;
}
#phrase {
position: absolute;
left: 0;
top: 0;
bottom: 0;
color: #fff;
width: 150px;
overflow: auto;
z-index: 2;
display: flex;
flex-direction: column;
margin: 1rem 0 .5rem;
}
#phrase>div {
margin: 0 0 10px;
width: 150px;
padding: 5px 10px;
background: #007bff;
border: 2px solid #007bff;
border-radius: 6px;
color: #fff;
}
<!-- language: lang-html -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.12.1/jquery-ui.min.js"></script>
<h1>Drag and drop example </h1>
<button id="add-blue-box" type="button">Add Blue Box</button>
<div class="wrap">
<div id="phrase">
<div data-id="1"><span data-id="1" class="words">H1 Text</span></div>
<div data-id="2"><span data-id="2" class="words">H2 Text</span></div>
<div data-id="3"><span data-id="3" class="words">H3 Text</span></div>
<div data-id="4"><span data-id="4" class="words">H4 Text</span></div>
<div data-id="5"><span data-id="5" class="words">H5 Text</span></div>
<div data-id="6"><span data-id="6" class="words">H6 Text</span></div>
</div>
</div>
<!-- end snippet -->
This answer about [Creating a div element in jQuery][1] may be worth reading.
[1]: https://stackoverflow.com/a/4158203/18595321 |
Please help me to resolve this issue.
snowflake.connector cannot be imported even though it is successfully installed in the Azure Pipeline.
**Error::**
Installing collected packages: sortedcontainers, pytz, asn1crypto, urllib3, typing-extensions, tomlkit, pyjwt, pycparser, platformdirs, packaging, idna, filelock, charset-normalizer, certifi, requests, cffi, cryptography, pyOpenSSL, snowflake-connector-python
Successfully installed asn1crypto-1.5.1 certifi-2024.2.2 cffi-1.16.0 charset-normalizer-3.3.2 cryptography-42.0.5 filelock-3.13.1 idna-3.6 packaging-24.0 platformdirs-3.11.0 pyOpenSSL-24.1.0 pycparser-2.21 pyjwt-2.8.0 pytz-2024.1 requests-2.31.0 snowflake-connector-python-3.7.1 sortedcontainers-2.4.0 tomlkit-0.12.4 typing-extensions-4.10.0 urllib3-1.26.18
Python 3.8.10
**Traceback (most recent call last):
File "snowflakepoc.py", line 1, in <module>
import snowflake as sf
ModuleNotFoundError: No module named 'snowflake'**
**##[error]Bash exited with code '1'.**
**Devops YAML Script::**
```
trigger:
branches:
include:
- snowpipe
variables:
vmImageName: 'windows-latest'
# Working Directory
workingDirectory: '$(System.DefaultWorkingDirectory)'
pool:
vmImage: $(vmImageName)
steps:
- task: UsePythonVersion@0
displayName: 'Define Python version'
inputs:
versionSpec: '3.8.x'
addToPath: true
- task: bash@3
inputs:
targetType: inline
workingDirectory: $(workingDirectory)
script: |
python -m venv worker_venv
source worker_venv/bin/activate
pip install --upgrade pip
pip install --target="./.python_packages/lib/site-packages" snowflake-connector-python
python --version
python snowflake.py
displayName: 'Install Snowflake Python Connector '
```
**I also tried to execute inline Python code, but no luck:**
```
- task: PythonScript@0
displayName: 'Run Python Script to Connect to Snowflake'
inputs:
scriptSource: 'inline'
workingDirectory: $(workingDirectory)
script: |
import snowflake.connector
print("done")
```
I tried installing the library through VS Code with a Python environment and then importing snowflake.connector, and that executed successfully. |
As mentioned in the comments: yes, this documentation is imprecise at best. I think it is referring to the behavior between scalars of the same type:
```python3
import numpy
a = numpy.uint32(4294967295)
print(a.dtype) # uint32
a += np.uint32(1) # WILL wrap to 0 with warning
print(a) # 0
print(a.dtype) # uint32
```
The behavior of your example, however, will change due to [NEP 50][1] in NumPy 2.0. So, as frustrating as the old behavior is, there's not much to be done but wait, unless you want to file an issue about backporting a documentation change. The change is documented in the [Migration Guide][2]:
> The largest backwards compatibility change of this is that it means that the precision of scalars is now preserved consistently...
> `np.float32(3) + 3.` now returns a `float32` when it previously returned a `float64`.
I've confirmed that in your example, the type is preserved as expected.
```python3
import numpy
a = numpy.uint32(4294967295)
print(a.dtype) # uint32
a += 1 # will wrap to 0
print(a) # 0
print(a.dtype) # uint32
numpy.__version__ # '2.1.0.dev0+git20240318.6059db1'
```
The second NumPy 2.0 release candidate is out, in case you'd like to try it:
https://mail.python.org/archives/list/numpy-discussion@python.org/thread/EGXPH26NYW3YSOFHKPIW2WUH5IK2DC6J/
[1]: https://numpy.org/neps/nep-0050-scalar-promotion.html#nep50
[2]: https://numpy.org/devdocs/numpy_2_0_migration_guide.html |
As mentioned by Hans Passant in a comment, the zip file format makes use of a [MS-DOS Date & Time](https://learn.microsoft.com/en-us/windows/desktop/api/winbase/nf-winbase-dosdatetimetofiletime) structure.
This structure is defined as two separate `unsigned short` values like so:
wFatDate
The MS-DOS date. The date is a packed value with the following format.
Bits Description
0-4 Day of the month (1–31)
5-8 Month (1 = January, 2 = February, and so on)
9-15 Year offset from 1980 (add 1980 to get actual year)
wFatTime
The MS-DOS time. The time is a packed value with the following format.
Bits Description
0-4 Second divided by 2
5-10 Minute (0–59)
11-15 Hour (0–23 on a 24-hour clock)
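To make the packing concrete, here is a small sketch (mine, not part of the zip specification itself) that decodes the two 16-bit values described above in Python:
```python
from datetime import datetime

def dos_datetime(w_fat_date: int, w_fat_time: int) -> datetime:
    day    =  w_fat_date        & 0x1F          # bits 0-4
    month  = (w_fat_date >> 5)  & 0x0F          # bits 5-8
    year   = ((w_fat_date >> 9) & 0x7F) + 1980  # bits 9-15, offset from 1980
    second = (w_fat_time        & 0x1F) * 2     # bits 0-4, stored as seconds / 2
    minute = (w_fat_time >> 5)  & 0x3F          # bits 5-10
    hour   = (w_fat_time >> 11) & 0x1F          # bits 11-15
    return datetime(year, month, day, hour, minute, second)

print(dos_datetime(0x4D7B, 0x7C05))  # 2018-11-27 15:32:10, interpreted as local time
```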
At the time MS-DOS was created, timezones were not being used on those computers (Unix already had the concept, though, since 1970). People who used MS-DOS were often in their office or at home and did not communicate with people in other states, let alone other countries, via the computer. Internet access was pretty expensive at the time, too.
The company that created the zip file format made the mistake of using the FAT file system date format and it stuck. So zip files are created using local time (it doesn't have to, but it's the expected behavior, at least.)
The zip format offers ways to add [extensions](https://opensource.apple.com/source/zip/zip-6/unzip/unzip/proginfo/extra.fld.auto.html) (link from @user3342816 who posted a comment), though, including various timestamps.
0x000a NTFS (Win9x/WinNT FileTimes)
0x000d Unix
The NTFS block is described like so:
PKWARE Win95/WinNT Extra Field:
==============================
The following description covers PKWARE's "NTFS" attributes
"extra" block, introduced with the release of PKZIP 2.50 for
Windows. (Last Revision 20001118)
(Note: At this time the Mtime, Atime and Ctime values may
be used on any WIN32 system.)
[Info-ZIP note: In the current implementations, this field has
a fixed total data size of 32 bytes and is only stored as local
extra field.]
Value Size Description
----- ---- -----------
0x000a Short Tag (NTFS) for this "extra" block type
TSize Short Total Data Size for this block
Reserved Long for future use
Tag1 Short NTFS attribute tag value #1
Size1 Short Size of attribute #1, in bytes
(var.) SubSize1 Attribute #1 data
.
.
.
TagN Short NTFS attribute tag value #N
SizeN Short Size of attribute #N, in bytes
(var.) SubSize1 Attribute #N data
For NTFS, values for Tag1 through TagN are as follows:
(currently only one set of attributes is defined for NTFS)
Tag Size Description
----- ---- -----------
0x0001 2 bytes Tag for attribute #1
Size1 2 bytes Size of attribute #1, in bytes (24)
Mtime 8 bytes 64-bit NTFS file last modification time
Atime 8 bytes 64-bit NTFS file last access time
Ctime 8 bytes 64-bit NTFS file creation time
The total length for this block is 28 bytes, resulting in a
fixed size value of 32 for the TSize field of the NTFS block.
The NTFS filetimes are 64-bit unsigned integers, stored in Intel
(least significant byte first) byte order. They determine the
number of 1.0E-07 seconds (1/10th microseconds!) past WinNT "epoch",
which is "01-Jan-1601 00:00:00 UTC".
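For comparison, a short sketch (again mine, not from the specification) of converting such a 64-bit NTFS value, i.e. 100 ns ticks since 1601-01-01 UTC, into a timestamp; the example value is arbitrary:
```python
from datetime import datetime, timedelta, timezone

def ntfs_filetime_to_datetime(ticks: int) -> datetime:
    # 100-nanosecond intervals since the Windows NT epoch, which is defined in UTC.
    nt_epoch = datetime(1601, 1, 1, tzinfo=timezone.utc)
    return nt_epoch + timedelta(microseconds=ticks // 10)

print(ntfs_filetime_to_datetime(133545600000000000))
```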
The Unix block includes two timestamps as well:
PKWARE Unix Extra Field:
========================
The following is the layout of PKWARE's Unix "extra" block.
It was introduced with the release of PKZIP for Unix 2.50.
Note: all fields are stored in Intel low-byte/high-byte order.
(Last Revision 19980901)
This field has a minimum data size of 12 bytes and is only stored
as local extra field.
Value Size Description
----- ---- -----------
0x000d Short Tag (Unix0) for this "extra" block type
TSize Short Total Data Size for this block
AcTime Long time of last access (UTC/GMT)
ModTime Long time of last modification (UTC/GMT)
UID Short Unix user ID
GID Short Unix group ID
(var) variable Variable length data field
The variable length data field will contain file type
specific data. Currently the only values allowed are
the original "linked to" file names for hard or symbolic
links, and the major and minor device node numbers for
character and block device nodes. Since device nodes
cannot be either symbolic or hard links, only one set of
variable length data is stored. Link files will have the
name of the original file stored. This name is NOT NULL
terminated. Its size can be determined by checking TSize -
12. Device entries will have eight bytes stored as two 4
byte entries (in little-endian format). The first entry
will be the major device number, and the second the minor
device number.
[Info-ZIP note: The fixed part of this field has the same layout as
Info-ZIP's abandoned "Unix1 timestamps & owner ID info" extra field;
only the two tag bytes are different.]
As we can see, the NTFS and Unix blocks clearly define their timestamp as using UTC. The NTFS date has more precision (100ns) than the Unix timestamps (1s), it will also survive much longer since it uses 64 bits (see [Year 2038 Problem](https://en.wikipedia.org/wiki/Year_2038_problem) for further details on the 32 bit timestamps). |
null |
I have data with this structure (real data is more complicated):
```
df1 <- read.table(text = "DT odczyt.1 odczyt.2
'2023-08-14 00:00:00' 362 1.5
'2023-08-14 23:00:00' 633 4.3
'2023-08-15 05:00:00' 224 1.6
'2023-08-15 23:00:00' 445 5.6
'2023-08-16 00:00:00' 182 1.5
'2023-08-16 23:00:00' 493 4.3
'2023-08-17 05:00:00' 434 1.6
'2023-08-17 23:00:00' 485 5.6
'2023-08-18 00:00:00' 686 1.5
'2023-08-18 23:00:00' 487 6.8
'2023-08-19 00:00:00' 566 1.5
'2023-08-19 05:00:00' 278 7.9
'2023-08-19 17:00:00' 561 11.5
'2023-08-19 18:00:00' 365 8.5
'2023-08-19 22:00:00' 170 1.8
'2023-08-19 23:00:00' 456 6.6
'2023-08-20 00:00:00' 498 1.5
'2023-08-20 03:00:00' 961 1.54
'2023-08-20 05:00:00' 397 1.6
'2023-08-20 19:00:00' 532 6.6
'2023-08-20 23:00:00' 493 3.8
'2023-08-21 01:00:00' 441 9.2
'2023-08-21 07:00:00' 793 8.5
'2023-08-21 13:00:00' 395 5.5", header = TRUE) %>%
mutate (DT = as.POSIXct(DT))
```
I want to select the 3 hours for which "odczyt.1" has the largest values (I sort "odczyt.1" in descending order and take the 3 largest values), keep the days on which these hours occurred, and delete the rest of the rows.
For the given data these are:
2023-08-20 03:00:00 (961)
2023-08-21 07:00:00 (793)
2023-08-18 00:00:00 (686)
therefore the expected result is:
```
> df1
DT odczyt.1 odczyt.2
1 2023-08-18 00:00:00 686 1.50
2 2023-08-18 23:00:00 487 6.80
3 2023-08-20 00:00:00 498 1.50
4 2023-08-20 03:00:00 961 1.54
5 2023-08-20 05:00:00 397 1.60
6 2023-08-20 19:00:00 532 6.60
7 2023-08-20 23:00:00 493 3.80
8 2023-08-21 01:00:00 441 9.20
9 2023-08-21 07:00:00 793 8.50
10 2023-08-21 13:00:00 395 5.50
``` |
try this
```
<div id="map" class="map-wrapper"></div>
<style>
.map-wrapper {
width: 100%;
height: 400px;
}
</style>
```
Adding the CSS style to the .map-wrapper class with a width of 100% and a height of 400px ensures that the Leaflet map is displayed correctly within the specified dimensions. The leaflet.css file contains the necessary styles for Leaflet maps, but these styles need to be applied to a container element to properly render the map. By setting the width and height of the .map-wrapper class, you provide the Leaflet map with the dimensions it needs to display correctly.
|
In a Nuxt 3 project I tried to add some app-specific settings to my app.config.ts at the root directory of my project.
When I call useAppConfig() to use my config data, VS Code raises the error "appConfig.as is unknown type".
I'm stuck with that :(
The app.config.ts :
```typescript
interface asAppConfig {
layout: {
nav: {
isOpen: boolean
}
}
}
export default defineAppConfig({
ui: {
notifications: {
position: 'top-0 right-0 bottom-auto',
},
},
as: {
layout: {
nav: {
isOpen: false,
},
},
} satisfies asAppConfig,
})
```
Below, after calling useAppConfig(), I get "appConfig.nav is unknown type":
```typescript
<script lang="ts" setup>
const appConfig = useAppConfig()
const isOpen = computed({
get: () => appConfig.nav.isOpen,
set: (value: boolean) => (appConfig.nav.isOpen = value)
})
``` |
Nuxt.js 3 - useAppConfig() returning unknown |
|typescript|nuxt.js| |
{"Voters":[{"Id":14732669,"DisplayName":"ray"},{"Id":2395282,"DisplayName":"vimuth"},{"Id":2530121,"DisplayName":"L Tyrone"}],"SiteSpecificCloseReasonIds":[13]} |
I have an Excel VBA project. As soon as I open the Excel file, a UserForm opens without the workbook being shown. I can show the workbook with a button click, but I want it to be read-only and not editable.
I have now activated write protection under File -> Info -> write protection. When I open the Excel file, this message appears:
> "The author would prefer you to open 'file.xlsm' read-only - unless you need to make changes. Open read-only?"
That's not what I want. Is there a way to ignore the message or to automatically confirm it with "yes", so that when someone opens the Excel file the message does not appear?
**Note:**
I don't want to use the code
```
Set wb = Workbooks.Open(Filename:=MyFile, ReadOnly:=True)
```
because I open the UserForm directly when I open Excel and the workbook is not displayed, so I can't use this code.
If you want to know how I open the file, here is my code for opening the UserForm without showing the workbook.
Code in my ThisWorkBook:
```
Private Sub Workbook_Open()
showLoginForm
End Sub
```
Code for showLoginForm:
```
Sub showLoginForm()
If isSheetVisible Then
' Only Hide this workbook and keep the other workbooks visible
ThisWorkbook.Windows(1).Visible = False
Else
' There is no other workbook visible, hide Excel
Application.Visible = False
ThisWorkbook.Windows(1).Visible = False
End If
UserForm5.show vbModeless
End Sub
```
|
Open read-only without getting message |
```
#include <stdio.h>
void swap(int* x, int* y){
int temp;
temp = *x;
*x = *y;
*y = temp;
printf("The value of x and y inside the function is x = %d and y = %d", x, y);
}
int main()
{
int x = 2, y =3;
swap(&x, &y);
return 0;
}
```
Output:
The value of x and y inside the function is x = 6422300 and y = 6422296
Why is the value of x 6422300 and of y 6422296 inside the function?
```
#include <stdio.h>
void swap(int* x, int* y){
int temp;
temp = *x;
*x = *y;
*y = temp;
printf("The value of x and y inside the function is x = %d and y = %d", x, y);
}
int main()
{
int x = 2, y =3;
swap(&x,&y);
printf("\nThe value of x and y in the main is x = %d and y = %d", x, y);
return 0;
}
```
Output:
The value of x and y inside the function is x = 6422300 and y = 6422296
The value of x and y in the main is x = 3 and y = 2
I tried writing the print statement for x and y in main separately, and then it worked properly. But why wasn't it working inside the function? |
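For comparison, here is a minimal sketch that dereferences the pointers inside printf (illustrative only, not the original code):
```
#include <stdio.h>

/* Illustrative sketch: pass the pointed-to int values (*x, *y) to printf
   instead of the pointer arguments themselves. */
void swap(int* x, int* y){
    int temp;
    temp = *x;
    *x = *y;
    *y = temp;
    printf("The value of x and y inside the function is x = %d and y = %d", *x, *y);
}

int main()
{
    int x = 2, y = 3;
    swap(&x, &y);
    return 0;
}
```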
Why is my print statement inside the swap function giving such output? |
|c|function|pointers|pass-by-reference|swap| |
null |
|php|wordpress|intersection|adsense|user-roles| |
null |
I am trying to make a CountDownTimer for my Android app, but every time I try to start it using just `timer.start()`, it starts multiple timers. I don't know why.
My code for this is:
```
fun DissapearSkip() {
skipTimeButton.visibility = View.VISIBLE
exoSkip.visibility = View.GONE
skipTimeText.text = new.skipType.getType()
skipTimeButton.setOnClickListener {
exoPlayer.seekTo((new.interval.endTime * 1000).toLong())
}
val timer = object : CountDownTimer(5000, 1000) {
override fun onTick(millisUntilFinished: Long) {
println(millisUntilFinished)
if (new == null) {
skipTimeButton.visibility = View.GONE
exoSkip.visibility = View.VISIBLE
dissappeared = false
return
}
}
override fun onFinish() {
val skip = currentTimeStamp
skipTimeButton.visibility = View.GONE
exoSkip.visibility = View.VISIBLE
dissappeared = true
}
}
timer.start()
}
```
Also, the `new` variable is for getting the timestamp type and info. |
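One common pattern to avoid stacking timers (a sketch only; property and function names are illustrative and assume the timer lives in the same class as the player code) is to keep a single timer reference and cancel it before starting a new one:
```
import android.os.CountDownTimer

// Illustrative sketch: keep one timer reference so repeated calls don't stack timers
private var skipTimer: CountDownTimer? = null

fun startSkipTimer() {
    skipTimer?.cancel() // stop any previous timer that is still counting
    skipTimer = object : CountDownTimer(5000, 1000) {
        override fun onTick(millisUntilFinished: Long) {
            println(millisUntilFinished)
        }

        override fun onFinish() {
            // hide the skip button here
        }
    }
    skipTimer?.start()
}
```
Cancelling first means at most one timer is running, no matter how many times the function is called.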
Change connectivity_plus to 1.2.0; it worked for me.
In pubspec.yaml: `connectivity_plus: ^1.2.0`
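For context, a minimal pubspec.yaml sketch (only the connectivity_plus line comes from this answer; the rest is illustrative):
```
dependencies:
  flutter:
    sdk: flutter
  connectivity_plus: ^1.2.0
```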
|
null |
In my OpenLiberty with MicroProfile application, I am using Testcontainers for my integration tests. These tests need to communicate with an external service by making a POST request to it, getting a response back, and continuing their execution.
When reaching this point, the communication fails with the following exception:
```
INFO Slf4jLogConsumer.accept:66 STDOUT: 2024-03-10 09:54:36 6e82989c-b1b4-42d3-b121-a970d2958051 ERROR
GenericHttpClient.callService:165 Exception occurred RESTEASY004655: Unable to invoke request:
org.apache.http.conn.ConnectTimeoutException: Connect to external-service-url [external-service-url/10.2.38.235]
failed: Read timed out for url=external-service-url, reqObject={key=someKey, timeToLive=aTimeToLiveValue}
INFO Slf4jLogConsumer.accept:66 STDOUT: jakarta.ws.rs.ProcessingException: RESTEASY004655: Unable to invoke request:
org.apache.http.conn.ConnectTimeoutException: Connect to external-service-url [external-service-url/10.2.38.235]
failed: Read timed out
```
I tried to send a request to the external service via Postman and it works as expected, but it seems there is a network issue in my Testcontainers setup that I can't identify. Any suggestions?
I am providing my Testcontainers configuration class for better clarity.
```
class TestcontainersConfig extends Specification {
private static final String LIBERTY_CONFIG_BASE_PATH = 'src/main/liberty/config'
private static final String CONTAINER_CONFIG_BASE_PATH = 'config'
static Network network = Network.newNetwork()
@Shared
Db2Container db2
@Shared
GenericContainer openLiberty
@Shared
Sql sql
def setupSpec() {
db2 = new Db2Container("icr.io/db2_community/db2:latest")
.withExposedPorts(50000)
.withNetwork(network)
.withNetworkAliases("db2alias")
.acceptLicense()
.withReuse(true)
.waitingFor(Wait.forLogMessage(".*All databases are now active.*\\n", 1))
.withStartupTimeout(Duration.ofMinutes(3))
.withLogConsumer(new Slf4jLogConsumer(LoggerFactory.getLogger("db2")))
db2.start()
sql = Sql.newInstance(db2.getJdbcUrl(), db2.getUsername(), db2.getPassword(), db2.getDriverClassName())
openLiberty = new GenericContainer(DockerImageName.parse("icr.io/appcafe/open-liberty:full-java17-openj9-ubi"))
.withExposedPorts(9080, 9443)
.withNetwork(network)
.withEnv('db.usr', db2.getUsername())
.withEnv('db.pwd', db2.getPassword())
.withEnv('db.dbname', db2.getDatabaseName())
.withEnv('db.host', 'db2alias')
.withEnv('db.port', '50000')
.withEnv('jdbc.driver.dir', "/jdbc")
.withCopyFileToContainer(MountableFile.forHostPath("build/libs/file-name.war"), "${CONTAINER_CONFIG_BASE_PATH}/apps/file-name.war")
.withCopyFileToContainer(MountableFile.forHostPath("${LIBERTY_CONFIG_BASE_PATH}/server.xml"), "${CONTAINER_CONFIG_BASE_PATH}/server.xml")
.withCopyFileToContainer(MountableFile.forHostPath("${LIBERTY_CONFIG_BASE_PATH}/cacerts"), "${CONTAINER_CONFIG_BASE_PATH}/cacerts")
.withCopyFileToContainer(MountableFile.forHostPath("${LIBERTY_CONFIG_BASE_PATH}/db2jcc4.jar"), "/jdbc/db2jcc4.jar")
.withCopyFileToContainer(MountableFile.forHostPath("${LIBERTY_CONFIG_BASE_PATH}/server.env"), "${CONTAINER_CONFIG_BASE_PATH}/server.env")
.withCopyFileToContainer(MountableFile.forHostPath("${LIBERTY_CONFIG_BASE_PATH}/bootstrap.properties"), "${CONTAINER_CONFIG_BASE_PATH}/bootstrap.properties")
.withCopyFileToContainer(MountableFile.forHostPath("${LIBERTY_CONFIG_BASE_PATH}/GeneratedSSLInclude.xml"), "${CONTAINER_CONFIG_BASE_PATH}/GeneratedSSLInclude.xml")
.withCopyFileToContainer(MountableFile.forHostPath("${LIBERTY_CONFIG_BASE_PATH}/users.xml"), "${CONTAINER_CONFIG_BASE_PATH}/users.xml")
.waitingFor(Wait.forLogMessage(".*CWWKZ0001I: Application .* started in .* seconds.*", 1)).withStartupTimeout(Duration.ofMinutes(2))
.withLogConsumer(new Slf4jLogConsumer(LoggerFactory.getLogger("openLiberty")))
openLiberty.start()
}
}
``` |
Using Testcontainers, is it possible to connect to an external REST API service? |
|java|docker|networking|testcontainers|open-liberty| |
The error in your code lies in the way you are passing data to the template. The dictionary you're passing to the render function is not formatted correctly. You should use a key-value pair to pass data to the template, where the key is the variable name you want to use in the template and the value is the actual data. |
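A minimal sketch of what that looks like (assuming a Django view; the view name, template path, and data are illustrative, not taken from the original code):
```
from django.shortcuts import render

def student_list(request):
    students = ["Alice", "Bob", "Charlie"]
    # Pass data as key-value pairs: the key ("students") becomes the variable
    # name available inside the template, and the value is the actual data.
    return render(request, "students/list.html", {"students": students})
```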
So I'm working in VS Code, not the IDE, and when I build the project I get this error:
> error CS0246: The type or namespace name 'StudentDbContext' could not be found
I'm using .NET 8 and the latest version of dotnet ef.
Here is my Program.cs file where the error is:
```
using Microsoft.EntityFrameworkCore;
var builder = WebApplication.CreateBuilder(args);
// Add services to the container.
builder.Services.AddControllersWithViews();
builder.Services.AddDbContext<StudentDbContext>(options =>
options.UseSqlServer(Configuration.GetConnectionString("DefaultConnection")));
var app = builder.Build();
// Configure the HTTP request pipeline.
if (!app.Environment.IsDevelopment())
{
app.UseExceptionHandler("/Home/Error");
// The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
app.UseHsts();
}
app.UseHttpsRedirection();
app.UseStaticFiles();
app.UseRouting();
app.UseAuthorization();
app.MapControllerRoute(
name: "default",
pattern: "{controller=Home}/{action=Index}/{id?}");
app.Run();
```
It's at the top of the file, where I'm configuring my database connection.
The other similar questions either did not apply or were out of date.
I've tried updating and reinstalling everything, but I'm still getting the same output. |
error CS0246: The type or namespace name 'StudentDbContext' could not be found |
|.net|entity-framework|visual-studio-code| |