I've created a custom theme, and this is my function:
```
function add_main_nav() {
    if ( ! is_nav_menu( 'scp-main-menu' ) ) {
        register_nav_menu( 'scp-main-menu', __( 'Main Menu' ) );
        $menu_id = wp_create_nav_menu( 'scp-main-menu' );
        wp_update_nav_menu_item( $menu_id, 0, array(
            'menu-item-title'  => 'Sobre mí',
            'menu-item-url'    => '#scp-abaout-me',
            'menu-item-status' => 'publish',
            'menu-item-type'   => 'custom',
        ) );
        wp_update_nav_menu_item( $menu_id, 0, array(
            'menu-item-title'  => 'Experiencia',
            'menu-item-url'    => '#scp-experience-block',
            'menu-item-status' => 'publish',
            'menu-item-type'   => 'custom',
        ) );
        wp_update_nav_menu_item( $menu_id, 0, array(
            'menu-item-title'  => 'Projects',
            'menu-item-url'    => '#scp-projects-block',
            'menu-item-status' => 'publish',
            'menu-item-type'   => 'custom',
        ) );
    }
    if ( ! is_nav_menu( 'scp-footer-menu' ) ) {
        register_nav_menu( 'scp-footer-menu', __( 'Footer Menu' ) );
        $footer_menu_id = wp_create_nav_menu( 'scp-footer-menu' );
        wp_update_nav_menu_item( $footer_menu_id, 0, array(
            'menu-item-title'  => 'Sobre mí',
            'menu-item-url'    => '#scp-abaout-me',
            'menu-item-status' => 'publish',
            'menu-item-type'   => 'custom',
        ) );
    }
}
// Hook to the init action hook, run our navigation menu function
add_action( 'init', 'add_main_nav' );
```
header.php:
```
<?php wp_nav_menu( array( 'header-menu' => 'header-menu' ) ); ?>
```
footer.php:
```
<?php wp_footer(); ?>
<footer>
    <?php wp_nav_menu( array( 'footer-menu' => 'footer-menu' ) ); ?>
    <p>Copyright © 2024 taniarroyo.com</p>
</footer>
```
Thanks!!!
I want to show two different menus, one in the header and one in the footer. Both are generated correctly, but the menu I generated for the footer is always also displayed in the header. |
Is there any way to make OCR work on smaller fonts (other than teaching it the font)? |
|python| |
null |
I have a WooCommerce website built with Elementor. On the single product page, I would like the "add to cart" button to change its label to "pre-order" only when certain sizes (variants) are selected. It's a shoe website, and when a user selects, for example, size 38, I would like the button label, previously "add to cart", to change to "pre-order". I'm not sure how to go about this, as I'm not very experienced with coding. I have already tried downloading plugins, but they don't give me the flexibility to modify the text based on user actions. Can anyone help me?
Below is the code that I tried to use on my website, without success:
```
<script>
document.addEventListener('DOMContentLoaded', function() {
    document.querySelector('form.variations_form').addEventListener('change', function() {
        var variation_id = document.querySelector('input[name=variation_id]').value;
        var variation_is_sold_out = document.querySelector('.single_variation_wrap .woocommerce-variation-availability .woocommerce-variation-availability').classList.contains('out-of-stock');
        if (variation_id && variation_is_sold_out) {
            document.querySelector('.single_add_to_cart_button').innerText = 'Sold Out';
        } else {
            document.querySelector('.single_add_to_cart_button').innerText = 'Aggiungi al carrello';
        }
    });
});
</script>
```
If it can be useful, below is the link to a product page where I would like the button to change its wording from "Add to Cart" to "Sold Out" or "Pre-order" depending on the selected variant. If necessary, it's also fine to manually insert the IDs of individual variants into the code; the important thing is that it works.
Additionally, I would very much like all unavailable product variants to change the button label from "add to cart" to "sold out." Perhaps this procedure is simpler, but I haven't been able to do that either.
I've tried various codes found online, but none of them worked.
This is the product page: https://www.alessandramilano.com/product/mirto-black/ |
How can I change the label 'add to cart' to 'pre-order' when a product variant (size) is selected? |
|woocommerce|addition|cart|elementor|product-variations| |
null |
I am using SoX to create a 100 ms synth; this is my command:
```
/usr/bin/sox -V -r 44100 -n -b 64 -c 1 file.wav synth 0.1 sine 200 vol -2.0dB
```
now when i create 3 sine wave files and i combine all with
```
/usr/bin/sox file1.wav file2.wav file3.wav final.wav
```
then I get gaps between the files, and I don't know why. But when I open, for example, file1.wav, I also see a short gap at the start and at the end of the file.
How can I create a sine of exactly 10 ms without gaps at the start and end?
And my second question: is there also a way to create e.g. 10 sine-wave synths with one SoX command? Like sox f1 200 0.1, f2 210 0.1, f3 220 0.1, ... first 200 Hz for 10 ms, then 210 Hz for 10 ms, then 220 Hz for 10 ms.
Thank you so much. Many greetings!
I have tried some different options in SoX, but each single sine file always looks like this:
|
I am trying to simulate website access via C# code. The flow is:
1) HTTP GET the login page. This succeeds.
2) HTTP POST to the login page. This returns status 302; I disabled auto-redirect in HttpClientHandler. I validated the cookie returned; it has the login cookie.
3) HTTP GET the actual content page. This returns success code 200, but the content is always trimmed. This is the same page to which step 2 redirects.
I have also tried leaving auto-redirect enabled in the HttpClientHandler. Even then the response is trimmed.
In Postman, when I directly do step 2 allowing the redirect, the content comes through properly.
This used to work some time back. It's a PHP-based website protected by Cloudflare. I am not sure if Cloudflare is a recent addition.
I checked the headers sent by the browser and replicated them in the code, but it still doesn't seem to work.
[Chrome Browser Request & Response Headers for step 1](https://i.stack.imgur.com/eAA48.png)
[Chrome Browser Request & Response Headers for step 2](https://i.stack.imgur.com/g2MTU.png)
[Chrome Browser Request & Response Headers for step 3](https://i.stack.imgur.com/FNgbT.png)
[From the code I set the below headers for step 1](https://i.stack.imgur.com/oM0Fw.png)
[From the code I set the below headers for step 2](https://i.stack.imgur.com/7Xxr9.png)
[From the code I set the below headers for step 3](https://i.stack.imgur.com/basAU.png)
The response header in code via HttpClient is as below:
[enter image description here](https://i.stack.imgur.com/cZ4zN.png)
But this response is truncated. I have enabled automatic decompression of the data.
Any idea what might be missing?
Interestingly, when posting to the login page via Postman without explicitly adding any other header, the login process works and retrieves the redirected page.
|
HttpClient / RestSharp, unable to retrieve complete web content |
|web|httpclient|restsharp| |
null |
I've been trying to make an autocompleting textbox, but I get an Access Violation error and the app closes (after being stuck for a second beforehand).
This is my code:
```
private async void DashBoard_Load(object sender, EventArgs e)
{
textBox1.AutoCompleteSource = AutoCompleteSource.None;
DatabaseHelpers.DatabaseHelper1.LoadUsers();
await Task.Delay(1000);
AutoCompletion();
}
private void AutoCompletion()
{
string[] CNPS= new string[DatabaseHelpers.DatabaseHelper1.utilizatori.Count];
foreach (var item in DatabaseHelpers.DatabaseHelper1.utilizatori)
{
CNPS.Append(item.CNP.ToString());
}
textBox1.AutoCompleteCustomSource.Clear();
source.AddRange(CNPS);
AutoCompleteStringCollection collection = new AutoCompleteStringCollection();
collection.AddRange(CNPS);
textBox1.AutoCompleteMode = AutoCompleteMode.SuggestAppend;
textBox1.AutoCompleteSource = AutoCompleteSource.CustomSource;
textBox1.AutoCompleteCustomSource = collection;
}
``` |
AutoComplete TextBox freezes the app and gives an Access Violation error |
|c#|firebase|winforms| |
Why does the second menu override the first? |
|themes| |
null |
I had the same issue with both the Windows curl and the MinGW (Git) one; both were version 8.4.0.
I downloaded curl 8.6.0 from https://curl.se/windows/ and it solved the issue |
|android|firebase|android-intent| |
null |
null |
Earlier, when I used the command `$DebugPreference="Continue"` within the PowerShell script, it used to print the output with all the debugging lines. Now I am not able to see debug results in the test pane, and I need them to check where exactly the PowerShell code is failing and to find the exact error.
```
Import-Module Az.Accounts
Import-Module Az.Resources
Import-Module Az.Compute
Import-Module Az.Automation
Import-Module Az.Storage
Import-Module Az.KeyVault
Import-Module Az.RecoveryServices
$DebugPreference = 'continue'
Connect-AzAccount -Identity
$context = Get-AzContext -ListAvailable
write-output "context" $context
Set-AzContext -Subscription "<<subscriptionID>>"
$vaultName = ""
$vaultResourcegroup = ""
$containerCtx = Get-AzRecoveryServicesVault -Name $vaultName -ResourceGroupName $vaultResourcegroup
write-output "containerCtx" $containerCtx
```
Reference:
https://learn.microsoft.com/en-us/troubleshoot/azure/general/capture-debug-stream-automation-runbook
[tag:azure] [tag:azure-automation] [tag:powershell] [tag:powershell-cmdlet]
|
null |
I'm not 100% sure I understand what exactly you are looking for but here is a suggestion for splitting your data based on `Marital_Status` and performing an ANCOVA for each subgroup:
```
library(tidyverse)

# Creating a sample dataset
n <- 100      # number of observations
set.seed(0)   # seed for reproducibility
data <- tibble(
  six_PHQ = sample(0:30, n, replace = T),
  Study.Arm = sample(c("Control", "Intervention"), n, replace = T),
  base_PHQ = sample(0:30, n, replace = T),
  Marital_Status = sample(c("Married", "Not_Married"), n, replace = T)
)

# Grouping the data by "Marital_Status" and creating nested data frames
data_subgroups <- data %>%
  group_by(Marital_Status) %>%
  nest()

# Custom function to perform ANCOVA analysis
perform_ANCOVA <- function(df) {
  model <- lm(six_PHQ ~ Study.Arm + base_PHQ, data = df)
  anova_result <- anova(model)
  return(anova_result)
}

# Applying ANCOVA analysis to each subgroup
result <- data_subgroups %>%
  mutate(ANCOVA_result = map(data, perform_ANCOVA))
```
Is this what you are looking for?
Even though I don't know the study you are analyzing, you might consider putting everything into one model and adding an interaction with `Marital_Status` instead of subgrouping. |
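That interaction approach could be sketched like this, reusing the simulated `data` from above (`*` expands to both main effects plus their interaction; this is only an illustration, not tailored to the actual study):

```
# One model for all observations; the Study.Arm:Marital_Status row of the
# ANOVA table asks whether the arm effect differs by marital status.
full_model <- lm(six_PHQ ~ Study.Arm * Marital_Status + base_PHQ, data = data)
anova(full_model)
```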
According to [Google Calculator][1] `(-13) % 64` is `51`.
According to JavaScript, it is `-13`.
```lang-js
console.log(-13 % 64);
```
How do I fix this?
[1]: http://www.google.com/search?q=-13+%25+64 |
I had the same error, which I solved by following these steps:
1. Download the latest version of protobuf: `pip install --upgrade protobuf`
Now you will get another error similar to this:
[![enter image description here][1]][1]
This is because [these installation steps](https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/install.html) install an older version of TensorFlow that can only work with protobuf below 3.20.
2. I am assuming you are using an anaconda environment as well. Check the path where your environment is saved using `conda info --envs`.
3. Inside your environment folder, copy `builder.py` from `.../Lib/site-packages/google/protobuf/internal` to any other folder on your computer temporarily.
4. Downgrade protobuf to a lower version compatible with TensorFlow. For me that's below 3.20; the last version before it is **3.19.6**.
5. Finally, copy the `builder.py` file back to the `.../Lib/site-packages/google/protobuf/internal` folder.
[1]: https://i.stack.imgur.com/vLP1H.png |
I have a react native app with an android native java module that accesses my local Google Fit healthstore using the Java Google Fit API:
```
DataReadRequest readRequest = new DataReadRequest.Builder()
    .enableServerQueries()
    .aggregate(DataType.AGGREGATE_STEP_COUNT_DELTA)
    .bucketByTime(interval, TimeUnit.SECONDS)
    .setTimeRange(start, end, TimeUnit.MILLISECONDS)
    .build();

Fitness.getHistoryClient(getReactContext(), getGoogleAccount())
    .readData(readRequest)
    .addOnSuccessListener(response -> {
        for (Bucket bucket : response.getBuckets()) {
            for (DataSet dataSet : bucket.getDataSets()) {
                readDataSet(dataSet);
            }
        }
        try {
            getCallback().onComplete(getReport().toMap());
        } catch (JSONException e) {
            getCallback().onFailure(e);
        }
    })
    .addOnFailureListener(e -> getCallback().onFailure(e));
```
My problem is that for some `start` and `end` intervals for a particular user, the code gets stuck in the `HistoryClient`'s `.readData(readRequest)`, never resolving to the `onSuccessListener` or `onFailureListener` callbacks. In one particular case, to correct this, I can vary the `start` or `end` date to reduce the range, and suddenly the history client returns a data response. There doesn't seem to be any pattern of this bug relative to the `start` and `end` of the `readRequest`. In this case, the range was only over a week or so. Note that there are only about 100 steps in the requested range.
I initially thought that some data samples in Google Fit may be corrupt, thus reducing the range of the request would miss these samples, hence explaining why it may suddenly work by tinkering with `start` and `end`. However, by repositioning the `start` and `end` to explicitly cover these suspected samples, Google Fit works normally and a response is returned. I can timeout the async call using a `CompletableFuture`, therefore I know there is a `.readData` thread spinning in there somewhere! No exception is thrown.
I have set up all relevant read permissions in my google account's oAuth credentials - I can verify in my user account settings that the connected app indeed has these health data read permissions. The scope I request in the native code is
DataType.AGGREGATE_STEP_COUNT_DELTA, FitnessOptions.ACCESS_READ
and I am using
'com.google.android.gms:play-services-fitness:21.1.0'
'com.google.android.gms:play-services-auth:21.0.0'
in my android build file. I have noticed the problem for both `react-native 0.65.3` (android `targetSdkVersion 31`, `compileSdkVersion 31`) and `react-native 0.73.2` (android `targetSdkVersion 34`, `compileSdkVersion 34`).
Are there any further steps I can take to diagnose the bug? When viewing the date range in Google Fit app, I see no problem and the step counts are there. |
I'm currently developing my first project, a simple paint application using Java Swing and AWT. While implementing the painting functionality, I encountered an issue with accurately capturing mouse movements, especially when moving the mouse quickly.
I've designed the application to update the drawing coordinates in response to mouse events (mouseDragged and mouseMoved methods in the PaintPanel class), triggering repaints to render the drawings. However, despite my efforts, I've noticed that fast mouse movements sometimes result in skipped points, leading to gaps in the drawn lines.
Here's my PaintPanel class, which manages the painting functionality:
```
public class PaintPanel extends JPanel implements MouseMotionListener{
public Point mouseCoordinates;
boolean painting = false;
public PaintPanel() {
this.setPreferredSize(new Dimension(1000,550));
this.setBackground(Color.white);
this.addMouseMotionListener(this);
}
public void paintComponent(Graphics g) {
Graphics2D g2D = (Graphics2D) g;
g2D.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);
if(painting == false) {
super.paintComponent(g2D);
}
if(mouseCoordinates != null) {
g2D.setColor(UtilePanel.col);
g2D.fillOval((int)mouseCoordinates.getX(),(int)mouseCoordinates.getY(),UtilePanel.brushSize, UtilePanel.brushSize);
this.setCursor( this.getToolkit().createCustomCursor(
new BufferedImage( 1, 1, BufferedImage.TYPE_INT_ARGB ),
new Point(),
null ) );
}
}
@Override
public void mouseDragged(MouseEvent e) {
mouseCoordinates = e.getPoint();
painting = true;
repaint();
}
@Override
public void mouseMoved(MouseEvent e) {
mouseCoordinates = e.getPoint();
repaint();
}
}
```
Additionally, I attempted to incorporate a game loop to continuously poll for mouse input, hoping it would improve the accuracy of mouse movement capturing. However, even with the game loop in place, the problem persists.
I'm unsure whether my approach of omitting super.paintComponent(g) in paintComponent is correct, or whether there's a better way to do it.
Could someone provide insights or suggestions on how to improve the mouse event capturing to guarantee precise rendering, especially during rapid mouse movements?
Your assistance would be greatly appreciated. Thank you!
[1]: https://i.stack.imgur.com/3A1M7.png |
Indeed, because your window has no partition column, all data is brought to one partition. I think you can convert your query from a window operation to a groupby operation (and then cross-join the result back, for example), which may solve your problem. |
Why do I get a ValidationError from my Kubernetes configuration file, even though it's valid YAML checked with a linter? |
Add a JsonConverterAttribute to the CurrentLocation property
```
public class ObjectOfInterest
{
    [JsonPropertyName("Name")]
    public string Name { get; set; } = string.Empty;

    [JsonPropertyName("CurrentLocation")]
    [JsonConverter(typeof(LocationConverter))]
    public Location CurrentLocation { get; set; } = new();
}
```
The converter should store the locations indexed by their name:
```
public class LocationConverter : JsonConverter<Location>
{
    private readonly Dictionary<string, Location> _locationDictionary = new();

    public LocationConverter(IEnumerable<Location> locations)
    {
        foreach (var location in locations)
        {
            _locationDictionary[location.Name] = location;
        }
    }

    public override Location Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
    {
        string locationName = reader.GetString();
        if (_locationDictionary.TryGetValue(locationName, out Location location))
        {
            return location;
        }
        throw new KeyNotFoundException($"Location '{locationName}' not found.");
    }

    public override void Write(Utf8JsonWriter writer, Location value, JsonSerializerOptions options)
    {
        writer.WriteStringValue(value.Name);
    }
}
```
Finally, you can use the LocationConverter like this:
```
List<Location> locations = JsonSerializer.Deserialize<List<Location>>(locationsJson);
var options = new JsonSerializerOptions();
options.Converters.Add(new LocationConverter(locations));
var objects = JsonSerializer.Deserialize<ObjectOfInterest[]>(objectsJson, options);
```
|
`SameSite=None` has specific requirements on security. Specifically, your server URL must be served over `https://` and the cookie must be marked as `Secure`.
For development environments where both client and server operate on the same hostname (ie `localhost`), you can use "Lax" which is also the default value.
```lang-js
res.cookie('jwt', token, {
path: '/',
httpOnly: true, // client side JS should not have access
secure: false,
sameSite: 'Lax',
maxAge: 5 * 24 * 60 * 60 * 1000,
});
```
For production mode, you'll need an HTTPS enabled server and `SameSite=None; Secure`
```lang-js
res.cookie('jwt', token, {
path: '/',
httpOnly: true,
secure: true,
sameSite: 'None',
maxAge: 5 * 24 * 60 * 60 * 1000,
});
```
You can combine these with a check on `process.env.NODE_ENV` to make your code easier to use in either environment.
```lang-js
const isProd = process.env.NODE_ENV === "production";
res.cookie('jwt', token, {
path: '/',
httpOnly: true, // client side JS should not have access
secure: isProd,
sameSite: isProd ? 'None' : 'Lax',
maxAge: 5 * 24 * 60 * 60 * 1000,
});
```
You'll also want to configure your [CORS middleware][1] to allow _credential_ requests
```lang-js
app.use(cors({
origin: ['https://client'], // cannot be '*'
credentials: true,
}));
```
---
FYI the [Request][2] object passed to `fetch()` does **not** support a `withCredentials` property. Don't confuse it with `XMLHttpRequest`.
The property you want for cross-origin `fetch()` requests with cookies is
```lang-js
fetch(url, {
method,
credentials: 'include', //
// ...
});
```
For `XMLHttpRequest` it is
```lang-js
const xhr = new XMLHttpRequest();
xhr.open(method, url);
xhr.withCredentials = true; //
// ...
```
And for Axios it is
```lang-js
axios({
url,
method,
withCredentials: true, //
// ...
});
// or
axios.get(url, { withCredentials: true });
axios.post(url, data, { withCredentials: true });
// etc...
```
It's **very important** that you use the relevant _credentials_ options in **ALL** requests. One common pattern I see on Stack Overflow is forgetting to include it on the initial _login_ request.
[1]: https://github.com/expressjs/cors
[2]: https://developer.mozilla.org/en-US/docs/Web/API/Request |
I can't use the i+1 value for the last value of the RangeSet, so I want to ignore it in the constraint. However, I couldn't do that, because the i value always comes in as the default value, so the code never enters the if statement. How can I solve this problem?
```
import pyomo.environ as pyo
from pyomo.opt import SolverFactory

model = pyo.AbstractModel()

number_of_lane = 2
number_of_vehicle = 2

model.i = pyo.Param(within=pyo.NonNegativeIntegers, default=number_of_lane)
model.j = pyo.Param(within=pyo.NonNegativeIntegers, default=number_of_vehicle)
model.I = pyo.RangeSet(1, model.i)
model.J = pyo.RangeSet(1, model.j)

model.R = pyo.Param(default=0.5)   # CAV's reaction (s)
model.D = pyo.Param(default=1.5)   # Safety Distance (m)
model.lv = pyo.Param(default=4)    # Length of vehicle (m)

model.xr = pyo.Param(model.I, model.J, within=pyo.NonNegativeIntegers, initialize=xr_cons)
model.x = pyo.Var(model.I, model.J, domain=pyo.NonNegativeReals, initialize=(0))

def lane_crossing_constraint_rule(m, i, j):  # lane should be 2, vehicles will be the first one which is close to the intersection
    m.i.pprint()  # There is a problem about taking the i and j values
    if (m.i.value < number_of_vehicle):  # Always comes equal!!!!
        return (m.x[i, j] - m.xr[i, j])**2 + (m.x[i+1, j] - m.xr[i+1, j])**2 >= (m.lv + m.D)
    else:
        return pyo.Constraint.Skip

# the next line creates one constraint for each member of the set model.I
model.l1Constraint = pyo.Constraint(model.I, model.J, rule=lane_crossing_constraint_rule)
```
|
Why always come default value from RangeSet to the constraint in Pyomo? |
|python|pyomo|nonlinear-optimization| |
When I run my backend file using `nodemon app.js`, this error occurs:
```
Error: querySrv ENOTFOUND _mongodb._tcp.cluster0.brbkwbs.mongodb.net
at QueryReqWrap.onresolve [as oncomplete] (node:internal/dns/promises:275:17) {
errno: undefined,
code: 'ENOTFOUND',
syscall: 'querySrv',
hostname: '_mongodb._tcp.cluster0.brbkwbs.mongodb.net'
}
```
This error is not resolving even though I have upgraded my Node version to the latest. All packages are updated and the cluster configuration is accurate, but I don't know why it's not working. Am I missing something? Does anybody know about this? |
Unable to connect to MongoDB |
|node.js|mongodb|mongoose| |
null |
I am creating a method which can send out emails with an ics/calendar entry attached.
If I test the code by really sending out an email, it works with no problem, but if I try to get the attachment in the test with the nDumbster package, it does not find any attachments. I am now confused whether it should be in the Attachments collection or not, and what the problem might be!
I imagine, that the problem lies at this following line:
```
var av = AlternateView.CreateAlternateViewFromString(str.ToString(), new ContentType("text/calendar"));
message.AlternateViews.Add(av);
```
The variable "str" is a StringBuilder with the code for the ics.
This is a small snippet of the test I run. The first 3 lines work fine, but the last 2 fail. (I did not include the test logic here, just the asserts.)
```
var message = _simpleSmtpServer.ReceivedEmail.ToList()[0];
Assert.Equal("GREAT IMPORTANT EMAIL", message.Subject);
Assert.Equal(new List<MailAddress> { new("wow@wow.ch"), new("great@great.ch") }, message.To);
var attachment = message.Attachments.First();
Assert.Equal("text/calendar", attachment.ContentType.MediaType);
``` |
I am trying to deploy a website with a PHP form on AWS. Each time I submit the form I get an error 500, even though the instance shows as running. I also checked the logs and did not see anything wrong.
```
<?php
$name = $_POST["name"];
$email = $_POST["email"];
$message = $_POST["message"];
require "vendor/autoload.php";
use PHPMailer\PHPMailer\PHPMailer;
use PHPMailer\PHPMailer\SMTP;
$mail = new PHPMailer(true);
$mail->SMTPDebug = SMTP::DEBUG_SERVER;
$mail->isSMTP();
$mail->SMTPAuth = true;
$mail->Host = 'smtp.gmail.com';
$mail->SMTPSecure = PHPMailer::ENCRYPTION_STARTTLS;
$mail->Port = 587;
$mail->Username = "USERNME";
$mail->Password = "PASSWORD";
$mail->setFrom($email, $name);
$mail->addAddress("****@ellejs.com", "NAME LASTNAME");
$mail->Subject = "{$name} Cleaning Request";
$mail->Body = "Name: $name\nEmail: $email\nMessage: $message";
$mail->send();
header("Location: sentmail.html");
?>
```
*everything in capital letters is just a replacement
I tried rebooting the environment and adding the correct IAM roles to the instance. |
Error 500 when submitting form after AWS Elastic Beanstalk deployment |
|php|amazon-web-services|amazon-elastic-beanstalk|phpmailer| |
null |
The Accounts package does not support providing DKIM keys to node mailer - but - the package uses a common function (**Accounts.generateOptionsForEmail**) to create the object to be mailed, so it is possible to wedge the package to provide this functionality.
```
import { Accounts } from 'meteor/accounts-base'

// Reconfigure accounts-base to allow DKIM insertion
const dkim = {
  domainName: Meteor.settings.mail.dkim_domain,
  keySelector: Meteor.settings.mail.dkim_key,
  privateKey: Meteor.settings.mail.dkim_private
}

// Keep a reference to the original "generate" function
Accounts.generateOptionsForEmailPackage = Accounts.generateOptionsForEmail

// Replace the package function with one that adds the DKIM options
Accounts.generateOptionsForEmail = (email, user, url, reason, extra = {}) => {
  const emailParams = Accounts.generateOptionsForEmailPackage(email, user, url, reason, extra)
  // If DKIM options are provided in settings, add them to the object
  if (dkim.domainName && dkim.keySelector && dkim.privateKey) {
    emailParams.dkim = dkim
  }
  // Pass the new options back to the accounts function to be sent.
  return emailParams
}
```
Implement this during Meteor.startup() on the server side only. Be careful: this approach patches the package at runtime, and should the package change, it can (easily) break.
|
I enabled only the 3.5 chat model, so I need to pass its endpoint to the LLM:
```
chat = ChatSparkLLM(
    spark_api_url="wss://spark-api.xf-yun.com/v3.5/chat",
    spark_app_id="****", spark_api_key="***", spark_api_secret="**"
)
```
Then it worked fine. Thanks! |
I like piping to a simple awk script for this. That way, you can customize with whatever colors/patterns suit you. You want to colorize an output unique to your project? Go for it.
Simply save this awk script to `./bin/colorize` in your project, `chmod 755 ./bin/colorize`, and customize to your needs:
```awk
#!/usr/bin/awk -f
# colorize - add color to go test output
# usage:
# go test ./... | ./bin/colorize
#
BEGIN {
RED="\033[31m"
GREEN="\033[32m"
CYAN="\033[36m"
BRRED="\033[91m"
BRGREEN="\033[92m"
BRCYAN="\033[96m"
NORMAL="\033[0m"
}
{ color=NORMAL }
/^ok / { color=BRGREEN }
/^FAIL/ { color=BRRED }
/^SKIP/ { color=BRCYAN }
/PASS:/ { color=GREEN }
/FAIL:/ { color=RED }
/SKIP:/ { color=CYAN }
{ print color $0 NORMAL }
# vi: ft=awk
```
And then call your tests and pipe to `colorize` with:
`go test ./... | ./bin/colorize`
I find this method much easier to read and customize than the `sed` answers, and much more lightweight and simple than some external tool like `grc`.
Note: For the list of color codes, [see here](https://en.wikipedia.org/wiki/ANSI_escape_code#3-bit_and_4-bit). |
In case you cannot (or do not want to) move the HAL function's source code into the library, extend the library so that it can consume a [pointer to a function][1] (either during some initialization step, or directly as a parameter of the HAL function). This way, the code for the HAL function stays in the app, and the lib can call the function without having to include anything from the app.
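A minimal sketch of that idea in C (all names here are invented for illustration, not taken from any actual library):

```c
#include <stddef.h>

/* --- library side: knows nothing about the app --- */
typedef int (*hal_read_fn)(int channel);

static hal_read_fn g_hal_read = NULL;

/* The app registers its HAL implementation once, e.g. at startup. */
void lib_init(hal_read_fn fn) { g_hal_read = fn; }

/* Library code calls through the pointer instead of linking the app's symbol. */
int lib_read_twice(int channel) {
    return g_hal_read(channel) + g_hal_read(channel);
}

/* --- app side: provides the real HAL function --- */
static int my_hal_read(int channel) { return channel * 10; }
```

The app would then call `lib_init(my_hal_read)` once before using the library, after which every `lib_read_twice` call goes through the app-provided function.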
[1]: https://stackoverflow.com/questions/840501/how-do-function-pointers-in-c-work |
I've been trying to learn some assembly and was testing out arrays. I found that when I tried to print out the value at the indexed point, nothing happened. After experimenting further, it appears that even though I am using the arrays as shown in many examples across the internet, it simply isn't working.
Here's the code:
```
section .text
global _start
_start:
mov eax, num ; eax now contains 5
mov ebx, [array+8] ; ebx now contains 8
cmp eax, ebx ; compares eax to ebx
jge skip ; should not happen because eax is smaller than ebx
call printdigit
skip:
call printn
call _exit
printdigit:
mov eax, 0x30
add [num], eax
mov ecx, num
mov edx, 1 ;length
mov ebx, 1 ;write to stdout
mov eax, 4 ;write call number
int 0x80
ret
printn:
mov eax, 0x0A
push eax
mov eax, 4
mov ebx, 1
mov ecx, esp
mov edx, 1
int 0x80
add esp, 4
ret
_exit:
mov eax, 1
mov ebx, 0
int 0x80
section .data
num dw 5
array dw 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
```
The commands I'm using to compile the code
```
nasm -f elf Bubblesort.asm
ld -m elf_i386 -s -o Bubblesort Bubblesort.o
./Bubblesort
```
What I'm running:
Ubuntu 22.04.3 desktop amd64 (on a virtual machine, but that shouldn't matter, I think)
The output I want should be
```
5
```
The actual output
```
```
I want printdigit to be called.
I am almost certain it's not a computer issue but a code issue, but I'm unsure where.
Problems with getting Attachments with NDumbster |
|c#|.net|icalendar| |
null |
This can be done by the following:
1. Create a dimension table for your currency conversions
[![enter image description here][1]][1]
2. Create a relationship with your fact table like so connecting Currency - Currency
[![enter image description here][2]][2]
3. Lastly, you can create a measure to calculate your conversion:

        _RateMeasure = SUMX(myDF, myDF[Amount] * RELATED('Currency Conversion'[RateCol1]))

    `SUMX` is an iterator function and goes row by row. `RELATED` references the related column `RateCol1`.
[![enter image description here][3]][3]
[1]: https://i.stack.imgur.com/FtRmQ.png
[2]: https://i.stack.imgur.com/A3VgU.png
[3]: https://i.stack.imgur.com/0gqqJ.png |
I am trying to migrate a Rails server that runs directly on an EC2 instance to an EKS pod.
The server runs with unicorn_rails with 20+ workers; it also has nginx as a frontend that receives requests and communicates with unicorn_rails via a Unix domain socket.
When it ran directly on EC2 instances behind an ALB with a target group of those instances, latency was always < 500 ms and the fluctuation range was only ±10 ms.
But when it runs as a pod on EKS, latency increases by nearly a hundred ms on average and the fluctuation range becomes ±100 ms, even though it receives only a few requests per second per pod. (For the EKS node pool, I am using the same instance type as in the EC2 version.)
Is it common for the latency of a request sent from an ALB to an EKS pod to be this large or unstable compared to a request sent to EC2 instances, even if the request flow is nearly the same?
Below are the details of the migration.
To migrate this server to an EKS pod, I use the following 2 containers:
1. unicorn_rails with a single worker
2. nginx as a sidecar container of 1, which receives requests from the ALB
Containers 1 and 2 communicate via a Unix domain socket, like the EC2 version of the server does.
I believe the request flow is nearly the same between the EC2 version and the EKS version, so I cannot figure out why the latency behavior is so different.
```
old: alb =(network)=> EC2 (nginx) =(unix domain socket)=> EC2 (unicorn rails worker)
new: alb =(network)=> EKS pod (nginx) =(unix domain socket)=> EKS pod (unicorn rails worker)
```
Is reducing the number of unicorn_rails workers critical? Or does kube-proxy add overhead?
any thoughts? |
I'm trying to connect my MySQL RDS instance to my Heroku environment securely, but I can't whitelist my environment's IP address because (as I understand it) Heroku changes the IP addresses at random.
Connecting them with an IP of 0.0.0.0 in the security group works, but isn't a best practice security-wise. What alternatives are available to me?
I've toyed with the idea of adding an SSL cert to my codebase, but this also doesn't seem like a very good idea.
Securely connect my Mysql RDS instance to my Heroku env |
|amazon-web-services|authentication|security|heroku|amazon-rds| |
null |
How can I pull the banner, id, etc. of the logged-in user? next@12 next-auth@4.24.6
When I log the profile part at the bottom, all the information comes out, but when I try to do it in the frontend, I can't retrieve it, and I don't understand why. I just started Next.js. I couldn't use next-auth with Next.js version 14. :D
```js
import NextAuth from "next-auth";
import DiscordProvider from "next-auth/providers/discord";
export const authOptions = {
providers: [
DiscordProvider({
clientId: process.env.CLIENT_ID,
clientSecret: process.env.CLIENT_SECRET,
authorization: { params: { scope: "identify guilds" } },
profile(profile) {
if (profile.avatar === null) {
const defaultAvatarNumber = parseInt(profile.discriminator) % 5;
profile.image_url = `https://cdn.discordapp.com/embed/avatars/${defaultAvatarNumber}.png`;
} else {
const format = profile.avatar.startsWith("a_") ? "gif" : "png";
profile.image_url = `https://cdn.discordapp.com/avatars/${profile.id}/${profile.avatar}.${format}`;
}
return {
id: profile.id,
name: profile.username,
discriminator: profile.discriminator,
image: profile.image_url,
banner: profile.banner,
accentColor: profile.accentColor,
};
},
}),
],
callbacks: {
async session({ session, user, token }) {
if (session) {
session.accessToken = token.accessToken;
session.tokenType = token.tokenType;
session.discordUser = token.profile;
}
return session;
},
async signIn({ user, account, profile, email, credentials }) {
if (account.provider != "discord" && !user.id) {
return;
}
return {
user: {
name: user.name,
email: user.email,
image: user.image,
banner: profile.banner,
accentColor: profile.accentColor,
},
accessToken: user.accessToken,
expires: user.expires,
};
},
async jwt({ token, account, profile }) {
if (account) {
token.accessToken = account.access_token;
}
return token;
},
},
secret: process.env.NEXTAUTH_SECRET,
};
export default NextAuth(authOptions);
``` |
{"Voters":[{"Id":4712734,"DisplayName":"DuncG"},{"Id":573032,"DisplayName":"Roman C"},{"Id":466862,"DisplayName":"Mark Rotteveel"}],"SiteSpecificCloseReasonIds":[11]} |
    SELECT * FROM db_hr.employee
    WHERE salary > (
        SELECT AVG(salary)
        FROM db_hr.employee
    )
    GROUP BY salary;
What is wrong with the query? It gives me an error:
Error Code: 1055. Expression #1 of SELECT list is not in GROUP BY clause and contains nonaggregated column 'db_hr.employee.id' which is not functionally dependent on columns in GROUP BY clause; this is incompatible with sql_mode=only_full_group_by
I'm trying to display employee data with a salary greater than the average salary, and group it by salary.
Error Code: 1055. Expression #1 of SELECT list is not in GROUP BY clause |
|sql|mysql|database|mysql-workbench|sql-query-store| |
null |
The arrows can be disabled if the minimum and maximum dates fall within the current month and the [prevent-min-max-navigation](https://vue3datepicker.com/props/calendar-configuration/#prevent-min-max-navigation) prop is used.
```html
<VueDatePicker
v-model="date"
:min-date="minDate"
:max-date="maxDate"
prevent-min-max-navigation
hide-offset-dates
/>
```
```js
<script setup>
import { ref, computed } from 'vue'
import { startOfMonth, endOfMonth } from 'date-fns'
const date = ref(new Date())
const minDate = computed(() => startOfMonth(new Date()))
const maxDate = computed(() => endOfMonth(new Date()))
</script>
```
There's also the [disable-month-year-select](https://vue3datepicker.com/props/calendar-configuration/#disable-month-year-select) prop that will hide the header of the calendar picker which includes the arrows. |
I'd be tempted to avoid a wrapper class unless necessary. Seems like you could have it be a lot simpler with just a boolean
    import contextlib

    class Foo:
def __init__(self, bar):
self._bar = bar
self._mutable = False
@property
def bar(self):
return self._bar
@bar.setter
def bar(self, value):
if not self._mutable:
raise RuntimeError('No touchy')
self._bar = value
@contextlib.contextmanager
def mutable(self):
self._mutable = True
try:
yield
finally:
self._mutable = False
    foo = Foo('sup')
foo.bar = 'bro' # RuntimeError
with foo.mutable():
foo.bar = 'bro'
Since everything about this is in place modification of the Foo object, no special effort is necessary to get changes to be reflected in your FooRepo. |
I've got a horizontal range of numbers in cells, and a vertical range of numbers in some other cells. I want the product of those ranges. For example, cells A1:D1 contain 1, 2, 3, 4 and cells A2:A5 contain values 5, 6, 7, 8. I expect to get an answer of 70.
I run `=SUMPRODUCT(A1:D1, A2:A5)` and it gives me #VALUE. Same problem for other simple examples like this. I have no idea why. According to all my sources I've read (including ChatGPT) this formula should work?
Tried various cell range lengths and values, in different positions, but same problem. |
The main problem with your current button behavior is caused by overlapping elements.
Currently, the svg icons are reducing the "hit-box" as they are overlapping the desired drop zone. You can apply CSS property `pointer-events:none` to let pointer events pass through these top elements and trigger the invisible `<input>` element.
Besides, you have 2 click events attached to the file input field.
<!-- begin snippet: js hide: false console: false babel: false -->
<!-- language: lang-js -->
// Variables
const dropzone = document.querySelector('.dropzone');
const filenameDisplay = document.querySelector('.filename');
const input = document.querySelector('.input');
const uploadBtn = document.querySelector('.upload-btn');
const syncing = document.querySelector('.syncing');
const done = document.querySelector('.done');
const upload = document.querySelector('.upload');
const line = document.querySelector('.line');
dropzone.addEventListener("dragover", (e) => {
//e.preventDefault();
//e.stopPropagation();
dropzone.classList.add("dragover");
});
dropzone.addEventListener("dragleave", () => {
dropzone.classList.remove("dragover");
});
dropzone.addEventListener("drop", (e) => {
//e.preventDefault();
//e.stopPropagation();
dropzone.classList.remove("dragover");
//handleFiles(e.dataTransfer.files);
});
dropzone.addEventListener("dragenter", (e) => {
//e.preventDefault();
//e.stopPropagation();
dropzone.classList.add("dragover");
});
input.addEventListener('change', (event) => {
const files = event.target.files;
if (files.length > 0) {
const fileName = files[0].name;
document.querySelector('.filename').textContent = fileName;
upload.style.display = 'none';
}
});
uploadBtn.addEventListener('click', () => {
const files = document.querySelector('.input').files;
if (files.length > 0) {
dropzone.style.transition = 'opacity 0.5s ease';
syncing.style.transition = 'opacity 1s ease';
dropzone.style.opacity = '0';
syncing.style.opacity = '0.3';
line.classList.add('active');
uploadBtn.textContent = 'Uploading...';
setTimeout(() => {
done.style.transition = 'opacity 1s ease';
syncing.style.transition = 'opacity 0.5s ease';
syncing.style.opacity = '0';
done.style.opacity = '0.3';
uploadBtn.textContent = 'Done!';
input.value = '';
}, 5000);
}
});
<!-- language: lang-css -->
@import url(https://fonts.googleapis.com/css?family=Open+Sans:400);
.frame {
position: absolute;
display: flex;
justify-content: center;
align-items: center;
top: 50%;
left: 50%;
width: 400px;
height: 400px;
margin-top: -200px;
margin-left: -200px;
border-radius: 2px;
box-shadow: 4px 8px 16px 0 rgba(0, 0, 0, 0.1);
background: linear-gradient(to top right, #3A92AF 0%, #5CA05A 100%);
color: #676767;
font-family: "Open Sans", Helvetica, sans-serif;
-webkit-font-smoothing: antialiased;
-moz-osx-font-smoothing: grayscale;
}
.container {
position: absolute;
width: 300px;
height: 260px;
background: #fff;
border-radius: 3px;
box-shadow: 8px 10px 15px 0 rgba(0, 0, 0, 0.2);
}
.title {
position: absolute;
top: 0;
width: 100%;
font-size: 16px;
text-align: center;
border-bottom: 1px solid #676767;
line-height: 50px;
}
.line {
position: relative;
width: 0px;
height: 3px;
top: 49px;
left: 0;
background: #6ECE3B;
}
.line.active {
animation: progressFill 5s ease-out forwards;
}
@keyframes progressFill {
from {
width: 0;
}
to {
width: 100%;
}
}
.dropzone {
visibility: visible;
position: absolute;
display: flex;
justify-content: center;
align-items: center;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
width: 100px;
height: 80px;
border: 1px dashed #676767;
border-radius: 3px;
}
.dropzone.dragover {
background-color: rgba(0, 0, 0, 0.1);
}
.upload {
position: absolute;
width: 60px;
opacity: 0.3;
}
.input {
position: absolute;
inset: 0;
opacity: 0;
}
.filename {
overflow: hidden;
}
.syncing {
opacity: 0;
position: absolute;
top: calc(50% - 25px);
left: calc(50% - 25px);
width: 50px;
height: 50px;
animation: rotate 2s linear infinite;
}
@keyframes rotate {
from {
transform: rotate(0deg);
}
to {
transform: rotate(360deg);
}
}
.done {
opacity: 0;
position: absolute;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
width: 50px;
height: 50px;
}
.upload-btn {
position: relative;
top: 180px;
left: 80px;
width: 140px;
height: 40px;
line-height: 40px;
text-align: center;
background: #6ECE3B;
border-radius: 3px;
cursor: pointer;
color: #fff;
font-size: 14px;
box-shadow: 0 2px 0 0 #498C25;
transition: all 0.2s ease-in-out;
}
.upload-btn:hover {
box-shadow: 0 2px 0 0 #498C25, 0 2px 10px 0 #6ECE3B;
}
/* make sure the input el is on top */
.input {
position: absolute;
inset: 0;
opacity: 0;
z-index: 999;
cursor: pointer;
}
input::-webkit-file-upload-button {
cursor: pointer;
}
/* disable pointer events for overlaying svgs */
svg {
pointer-events: none
}
<!-- language: lang-html -->
<div class="frame">
<div class="container">
<div class="title">Drop file to upload</div>
<div class="line"></div>
<div class="dropzone">
<svg class="upload" viewBox="0 0 640 512" width="100" title="cloud-upload-alt">
<path d="M537.6 226.6c4.1-10.7 6.4-22.4 6.4-34.6 0-53-43-96-96-96-19.7 0-38.1 6-53.3 16.2C367 64.2 315.3 32 256 32c-88.4 0-160 71.6-160 160 0 2.7.1 5.4.2 8.1C40.2 219.8 0 273.2 0 336c0 79.5 64.5 144 144 144h368c70.7 0 128-57.3 128-128 0-61.9-44-113.6-102.4-125.4zM393.4 288H328v112c0 8.8-7.2 16-16 16h-48c-8.8 0-16-7.2-16-16V288h-65.4c-14.3 0-21.4-17.2-11.3-27.3l105.4-105.4c6.2-6.2 16.4-6.2 22.6 0l105.4 105.4c10.1 10.1 2.9 27.3-11.3 27.3z" />
</svg>
<span class="filename"></span>
<input type="file" class="input">
</div>
<div class="upload-btn">Upload file</div>
<svg class="syncing" viewBox="0 0 512 512" width="100" title="circle-notch">
<path d="M288 39.056v16.659c0 10.804 7.281 20.159 17.686 23.066C383.204 100.434 440 171.518 440 256c0 101.689-82.295 184-184 184-101.689 0-184-82.295-184-184 0-84.47 56.786-155.564 134.312-177.219C216.719 75.874 224 66.517 224 55.712V39.064c0-15.709-14.834-27.153-30.046-23.234C86.603 43.482 7.394 141.206 8.003 257.332c.72 137.052 111.477 246.956 248.531 246.667C393.255 503.711 504 392.788 504 256c0-115.633-79.14-212.779-186.211-240.236C302.678 11.889 288 23.456 288 39.056z" />
</svg>
<svg class="done" viewBox="0 0 512 512" width="100" title="check-circle">
<path d="M504 256c0 136.967-111.033 248-248 248S8 392.967 8 256 119.033 8 256 8s248 111.033 248 248zM227.314 387.314l184-184c6.248-6.248 6.248-16.379 0-22.627l-22.627-22.627c-6.248-6.249-16.379-6.249-22.628 0L216 308.118l-70.059-70.059c-6.248-6.248-16.379-6.248-22.628 0l-22.627 22.627c-6.248 6.248-6.248 16.379 0 22.627l104 104c6.249 6.249 16.379 6.249 22.628.001z" />
</svg>
</div>
</div>
<!-- end snippet -->
The above snippet removes the dropzone click events and instead puts the invisible input field on top via `z-index`.
Also check dev tools for invalid property values such as `visibility: visible`
|
For the following script
```
#!/usr/bin/env bash
more <<_EOF_
PLEASE READ THE TERMS OF THIS AGREEMENT AND ANY PROVIDED TEXT. PLEASE READ THE TERMS OF THIS AGREEMENT AND ANY PROVIDED TEXT. PLEASE READ THE TERMS OF THIS AGREEMENT AND ANY PROVIDED TEXT. PLEASE READ THE TERMS OF THIS AGREEMENT AND ANY PROVIDED TEXT. PLEASE READ THE TERMS OF THIS AGREEMENT AND ANY PROVIDED TEXT. PLEASE READ THE TERMS OF THIS AGREEMENT AND ANY PROVIDED TEXT. PLEASE READ THE TERMS OF THIS AGREEMENT AND ANY PROVIDED TEXT. PLEASE READ THE TERMS OF THIS AGREEMENT AND ANY PROVIDED TEXT. PLEASE READ THE TERMS OF THIS AGREEMENT AND ANY PROVIDED TEXT. PLEASE READ THE TERMS OF THIS AGREEMENT AND ANY PROVIDED TEXT. PLEASE READ THE TERMS OF THIS AGREEMENT AND ANY PROVIDED TEXT
_EOF_
agreed=
while [ x$agreed = x ]; do
echo
echo "Do you accept all the terms?"
echo "Do you agree to the above license terms? [y/n] "
read reply leftover
case $reply in
y* | Y*)
agreed=1;;
n* | n*)
echo "If you don't agree to the license you can't install this sofware";
exit 1;;
esac
done
```
How do you automatically skip the `more` command?
`printf "y\n" | bash ./file.sh` doesn't seem to work. |
get data of the user logged in to discord with next-auth |
|javascript|next.js| |
null |
Little late to answer, but I want to help anyone else trying to figure out how to do this. This is basically a reducer to combine all of your state variables so you can update them with a single dispatch call. You can choose to update a single attribute, or all of them at the same time.
```jsx
type State = {
lastAction: string
count: number
}
const reducer = (
state: State,
action: Partial<State>,
) => ({
...state,
...action,
})
const INITIAL_STATE: State = {
  lastAction: '',
count: 0
}
const ComponentWithState = () => {
const [state, dispatchState] = useReducer(reducer, INITIAL_STATE)
// ... other logic
const increment = () => {
dispatchState({lastAction: 'increment', count: state.count + 1})
}
const decrement = () => {
dispatchState({lastAction: 'decrement', count: state.count - 1})
}
return (
// UI
)
}
``` |
You can do this, and I have managed to do it successfully. But it takes some effort. Here are some things I would suggest keeping in mind.
1) Figure out the Image format that you are extracting from. If you do a binwalk on `bzImage`, you'll see compressed images at various locations. Also shameless plug for https://github.com/Caesurus/extract-vmlinux-v2, which might be of some use...
2) manually extract the kernel `vmlinux` from the `bzImage`. You'll need to figure out exactly what compression was used so you can recompress it later
3) while you can use `vmlinux-to-elf` to create an ELF file that contains symbols, you should only use that as a reference for debugging/analysis with Binary Ninja, Ghidra, IDA etc... Don't try to modify the resulting ELF file and pack it back up.
4) modify the extracted vmlinux in the tool of your choice, patch asm etc...
5) when you're done, save the binary, compress it again with exactly the same options as the original, and replace the data in the original `bzImage` file.
For step `2`, I would recommend looking at the kernel sources to determine specific flags for compression, and make sure that you can uncompress and recompress without making any changes to the file. The recompressed file should be identical to the original compressed version.
Probably not the answer you were looking for, but hopefully helpful to someone in the future. I may at some point do a writeup with examples since it's pretty fun and interesting. |
My data is instrument reads and instrument baselines. The baseline data is punctual and typically does not extend to the "ends" of the dataset (i.e. first and last rows). Therefore I want to make a function that looks at the baseline column and copies the values of the earliest and latest baseline points to the very first/last rows in the dataset, so that I can interpolate between them with approx().
I have so far done this manually, as exemplified below, but I need to do this task over and over again, so I'd like to make it a function.
I checked other threads around here, and from what I read, I think it must have to do with the different ways to address columns and cells, especially when using self-made functions on data.frames.
Here is an example
```
#Make Two data frames: one holds instrument data, and one holds some
#baseline calibration we need to entend to the ends of the dataset
time<-seq(1,100,1)
data1<-rnorm(n = 100,mean = 7.5, sd = 1.1)
table1<-data.frame(cbind(time, data1))
time<-data.frame("time"=seq(2,96,4))
data2<-(0.32*rnorm(n = 24, mean = 1, sd = 1))
table2<-cbind(time,data2)
rm(time)
#now merge the two tables
newtable<-merge(table1, table2, by="time", all=T)
#remove junk
rm(data1, data2,table1,table2)
#copy 3rd column for later testing
newtable$data3<-newtable$data2
#the old manual way to fill the first row
newtable$data2[1]<-newtable$data2[min(which(!is.na(newtable$data2)))]
#the old manual way to fill the last row
newtable$data2[nrow(newtable)]<-newtable$data2[max(which(!is.na(newtable$data2)))]
#Now i try with a function
endfill<-function(df, col){
#fill the first row
df[1,col] <- df[min(which(!is.na(df[[col]]))), col] # using = instead of <- has no effect
df[nrow(df),col]<-df[max(which(!is.na(df[[col]]))),col]
#
}
#I want to try my function in column 4:
endfill(df= newtable,col = 4)
#Does not work...
```

Another try:

```
endfill<-function(df, col){
#fill the first row
df$col[1] <- df[[col]] [min(which(!is.na(df[[col]])))] # using $names
#df[nrow(df),col]<-df[max(which(!is.na(df[[col]]))),col]
#
}
endfill(df= newtable,col = 4)
# :-(
```
In the function I have tried different approaches to address cells, first using df$col[1], then also df[[col]][1], and mixed versions, but I seem to miss a point here.
When I execute my above function in pieces, e.g. only the single parts before and after the "<-", they all make sense, i.e. deliver NA values for empty cells or the target value. But it seems impossible to do real assignments?!
Thanks a lot for any efforts! |
I need to get from my Pytorch AutoEncoder the importance it gives to each input variable. I am working with a tabular data set, no images.
My AutoEncoder is as follows:
class AE(torch.nn.Module):
def __init__(self, input_size, hidden_layer, latent_layer):
super().__init__()
self.encoder = torch.nn.Sequential(
torch.nn.Linear(input_size, hidden_layer),
torch.nn.ReLU(),
torch.nn.Linear(hidden_layer, latent_layer)
)
self.decoder = torch.nn.Sequential(
torch.nn.Linear(latent_layer, hidden_layer),
torch.nn.ReLU(),
torch.nn.Linear(hidden_layer, input_size)
)
def forward(self, x):
encoded = self.encoder(x)
decoded = self.decoder(encoded)
return decoded
To save unnecessary information, I simply call the following function to get my model:
average_loss, model, train_losses, test_losses = fullAE(batch_size=128, input_size=genes_tensor.shape[1],
learning_rate=0.0001, weight_decay=0,
epochs=50, verbose=False, dataset=genes_tensor, betas_value=(0.9, 0.999), train_dataset=genes_tensor_train, test_dataset=genes_tensor_test)
Where "model" is a trained instance of the previous AutoEncoder:
model = AE(input_size=input_size, hidden_layer=int(input_size * 0.75), latent_layer=int(input_size * 0.5)).to(device)
Well now I need to get the importance given by that model to each input variable in my original "genes_tensor" dataset, but I don't know how. I have researched how to do it and found a way to do it with shap software:
e = shap.DeepExplainer(model, genes_tensor)
shap_values = e.shap_values(
genes_tensor
)
shap.summary_plot(shap_values,genes_tensor,feature_names=features)
The problem with this implementation is the following: 1) I don't know if what I am actually doing is correct. 2) It takes forever to finish, since the dataset contains 950 samples; I have tried to do it with only 1 sample and even that takes long enough. The result using a single sample is as follows:
I have seen that there are other options to obtain the importance of the input variables, like Captum, but Captum only allows knowing the importance in neural networks with a single output neuron; in my case there are many.
The options for AEs or VAEs that I have seen on github do not work for me since they use concrete cases, and especially images always, for example:
https://github.com/peterparity/PDE-VAE-pytorch
https://github.com/FengNiMa/VAE-TracIn-pytorch
Is my shap implementation correct? Should I try other methods?
Edit:
I have run the shap code with only 4 samples and get the following result:
[shap with 4 samples][1]
[1]: https://i.stack.imgur.com/wzcDJ.png
I don't understand why it's not the typical shap summary_plot plot that appears everywhere.
I have been looking at the shap documentation, and it is because my model is multi-output by having more than one neuron at the output. |
I have a problem.
When scrolling the page by 124 pixels, my Navigation Bar becomes sticky and is fixed on top of the screen.
```
const [scroll, setScroll] = useState(false);
useEffect(() => {
window.addEventListener("scroll", () => {
setScroll(window.scrollY > 124);
});
}, []);
return (
<nav
className={
scroll
? "sticky z-50 top-0 bg-blue-800 py-3 border-b-2 border-blue-950 animate-fade-up animate-duration-300 animate-ease-in-out shadow-2xl"
: "bg-blue-800"
}
>
</nav>
```
But I also set a Y padding of 3rem, and now, when I scroll back up above the 124-pixel position, the page bounces. I don't understand how I can fix this.
I also attached a video to make it easier for you to understand
>! https://www.youtube.com/watch?v=DRp8mlowc5Q |
How to automatically answer "more" command |
|bash| |
My app crashes in release mode; it works fine in debug mode. I suspect something should be kept in ProGuard, but I can't figure out what's wrong.
> Fatal Exception: java.lang.NullPointerException
> androidx.compose.ui.text.platform.AndroidParagraphHelper_androidKt.createCharSequence
> (AndroidParagraphHelper_android.kt:48)
> androidx.compose.ui.text.platform.AndroidParagraphIntrinsics.<init>
> (AndroidParagraphIntrinsics.android.kt:48)
> androidx.compose.ui.text.platform.AndroidParagraphIntrinsics_androidKt.ActualParagraphIntrinsics
> (AndroidParagraphIntrinsics_android.kt:38)
> androidx.compose.ui.text.ParagraphIntrinsicsKt.ParagraphIntrinsics
> (ParagraphIntrinsics.kt:38)
> androidx.compose.ui.text.ParagraphIntrinsicsKt.ParagraphIntrinsics$default
> (ParagraphIntrinsics.kt:38)
> androidx.compose.foundation.text.modifiers.ParagraphLayoutCache.setLayoutDirection
> (ParagraphLayoutCache.kt:38)
> androidx.compose.foundation.text.modifiers.ParagraphLayoutCache.b
> (ParagraphLayoutCache.kt:1)
> androidx.compose.foundation.text.modifiers.ParagraphLayoutCache.layoutWithConstraints-K40F9xA
> (ParagraphLayoutCache.kt:215)
> androidx.compose.foundation.text.modifiers.TextStringSimpleNode.measure-3p2s80s
> (TextStringSimpleNode.kt:215)
> androidx.compose.ui.node.LayoutModifierNodeCoordinator.measure-BRTryo0
> (LayoutModifierNodeCoordinator.kt:11)
> androidx.compose.ui.graphics.SimpleGraphicsLayerModifier.g
> (GraphicsLayerModifier.kt:1)
> androidx.compose.ui.node.LayoutModifierNodeCoordinator.measure-BRTryo0
> (LayoutModifierNodeCoordinator.kt:11)
buildTypes {
debug {
minifyEnabled false
kotlinOptions {
freeCompilerArgs += [
'-Xopt-in=kotlin.RequiresOptIn'
]
}
}
release {
shrinkResources true
minifyEnabled true
proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
kotlinOptions {
freeCompilerArgs += [
'-Xopt-in=kotlin.RequiresOptIn'
]
}
}
}
I have already added the below to my proguard-rules.pro file.
> # We supply these as stubs and are able to link to them at runtime
> # because they are hidden public classes in Android. We don't want
> # R8 to complain about them not being there during optimization.
> -dontwarn android.view.RenderNode
> -dontwarn android.view.DisplayListCanvas
>
> -keepclassmembers class androidx.compose.ui.platform.ViewLayerContainer {
> protected void dispatchGetDisplayList(); }
>
> -keepclassmembers class androidx.compose.ui.platform.AndroidComposeView {
> android.view.View findViewByAccessibilityIdTraversal(int); }
>
> # Users can create Modifier.Node instances that implement multiple Modifier.Node interfaces,
> # so we cannot tell whether two modifier.node instances are of the same type without using
> # reflection to determine the class type. See b/265188224 for more context.
> -keep,allowshrinking class * extends androidx.compose.ui.node.ModifierNodeElement
>
>
> # Keep generic signature of Call, Response (R8 full mode strips signatures from non-kept items).
> -keep,allowobfuscation,allowshrinking interface retrofit2.Call
> -keep,allowobfuscation,allowshrinking class retrofit2.Response
>
> # With R8 full mode generic signatures are stripped for classes that are not
> # kept. Suspend functions are wrapped in continuations where the type argument
> # is used.
> -keep,allowobfuscation,allowshrinking class kotlin.coroutines.Continuation
Did I miss something? |
The problem with scrolling and fixing the nav bar at a certain X |
|javascript|html|css|reactjs| |
null |
You will want to:
1. Create the BufferedImage first and *outside* of any painting method. The painting method should be lean and fast, and not hold code that might be expensive with regards to time or memory, since slowing it down unnecessarily has a great effect on the perceived responsiveness of your code.
2. Draw lines that connect the current point with the prior point, rather than drawing individual points.
3. *Always* call the `super.paintComponent(g)` method within your override. |
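A minimal Swing sketch of those three points (class and member names are illustrative): the `BufferedImage` is created once in the constructor rather than in the paint method, drags draw a line from the prior point to the current one, and the `paintComponent` override calls `super` first and only blits the buffer.

```java
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.event.MouseEvent;
import java.awt.event.MouseMotionAdapter;
import java.awt.image.BufferedImage;
import javax.swing.JPanel;

class DrawPanel extends JPanel {
    final BufferedImage canvas;   // created once, outside painting
    private int lastX = -1, lastY = -1;

    DrawPanel(int w, int h) {
        canvas = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        addMouseMotionListener(new MouseMotionAdapter() {
            @Override
            public void mouseDragged(MouseEvent e) {
                drawTo(e.getX(), e.getY());
            }
        });
    }

    /* connect the prior point to the current one with a line */
    void drawTo(int x, int y) {
        Graphics2D g2 = canvas.createGraphics();
        g2.setColor(Color.BLACK);
        if (lastX >= 0) {
            g2.drawLine(lastX, lastY, x, y);
        }
        g2.dispose();
        lastX = x;
        lastY = y;
        repaint();  // schedule a repaint; no heavy work happens in paintComponent
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);          // always call super first
        g.drawImage(canvas, 0, 0, null);  // lean and fast: just blit the buffer
    }
}
```

In a real app you would drop this panel into a `JFrame`; the expensive drawing happens in the drag handler, never inside `paintComponent`.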
|httpclient|restsharp| |
I'm currently working on a Dockerized web application using Docker Compose. I have two services defined in my docker-compose.yml file: db and api, where db is a PostgreSQL container and api is a Rails application. However, I'm encountering an error when trying to connect to the PostgreSQL server from my Rails application.
`.env`
```
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_HOST=127.0.0.1
POSTGRES_DB=postgres
```
`docker-compose.yml`
```yaml
version: "3.9"

x-app: &app
  env_file:
    - .env
  stdin_open: true
  tty: true

services:
  db:
    <<: *app
    image: postgres:16.2
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    environment:
      DATABASE_URL: "postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:5432/${POSTGRES_DB}"
    ports:
      - 5432:5432
  api:
    <<: *app
    build: .
    command: sh -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/app
      - bundle-cache:/usr/local/bundle
      - node_modules:/app/node_modules
    ports:
      - 3000:3000
    depends_on:
      - db

volumes:
  bundle-cache:
  pg-data:
  node_modules:
```
`Dockerfile`
```dockerfile
FROM ruby:3.2.3-alpine

RUN apk update --no-cache && apk add build-base tzdata nodejs libpq-dev postgresql-client --no-cache

WORKDIR /app
COPY . ./
RUN bundle install

EXPOSE 3000
CMD ["rails", "server", "-b", "0.0.0.0"]
```
I updated the `DATABASE_URL` environment variable in my Rails application to `"postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}"`. |
Query
- group by `$user_id` so each distinct user produces one output document
- take the `$first` value among the duplicates for the other fields

You can add a `$sort` or `$match` stage before the `$group` based on your needs.
[Playmongo](https://squery.org/playmongo/?q=62867823853f039c69b0f8e7)
```js
aggregate(
[{"$group":
{"_id": "$user_id",
"time": {"$first": "$time"},
"message": {"$first": "$message"}}}])
``` |
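For readers using PyMongo, the same pipeline can be expressed as a plain Python list of stage dicts (a sketch — `collection` is assumed to be a `pymongo` Collection handle for your messages collection):

```python
# Equivalent $group pipeline for PyMongo.
pipeline = [
    {"$group": {
        "_id": "$user_id",                 # one output document per distinct user
        "time": {"$first": "$time"},       # first value seen within the group
        "message": {"$first": "$message"},
    }}
]
# results = list(collection.aggregate(pipeline))  # requires a live connection
```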
I have a large 2d list where some sublists contain duplicates. I would like to keep only the sublists that have 2 or more duplicated values, and within each result return the duplicates first if possible. For instance,

```
Df = [[2,3,5,20],[5,10,20,10],[4,13,15,15,17,34,17],[33,34,15,21],[12,16,24,32,12,33,24]]
```

I would like my result like this:

```
Df2 = [[15,17,4,13,34],[12,24,16,32,33]]
```

I have tried the code below, but it keeps any sublist with at least one duplicate, whereas I only want the sublists that have 2 or more duplicated values.

```
res = [t for t in Df if len(t) > len(set(t))]
```
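For reference, here is a sketch of one way to do this with `collections.Counter`, assuming the intended output is: keep only sublists with at least 2 distinct duplicated values, and emit each duplicated value once (duplicates first, in first-seen order) followed by the remaining unique values:

```python
from collections import Counter

def dup_first(rows):
    out = []
    for row in rows:
        counts = Counter(row)
        # dict.fromkeys preserves first-seen order while deduplicating
        dups = [v for v in dict.fromkeys(row) if counts[v] > 1]
        if len(dups) >= 2:  # keep only sublists with 2+ distinct duplicated values
            uniques = [v for v in dict.fromkeys(row) if counts[v] == 1]
            out.append(dups + uniques)
    return out

Df = [[2,3,5,20],[5,10,20,10],[4,13,15,15,17,34,17],[33,34,15,21],[12,16,24,32,12,33,24]]
print(dup_first(Df))  # → [[15, 17, 4, 13, 34], [12, 24, 16, 32, 33]]
```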
Proguard for Android Jetpack compose UI |
|android|proguard|jetpack|compose| |
I'm trying to create a Homebrew formula for a Python project.
Here's the Homebrew formula:
```ruby
class Scanman < Formula
include Language::Python::Virtualenv
desc "Using LLMs to interact with man pages"
url "https://github.com/nikhilkmr300/scanman/archive/refs/tags/1.0.1.tar.gz"
sha256 "93658e02082e9045b8a49628e7eec2e9463cb72b0e0e9f5040ff5d69f0ba06c8"
depends_on "python@3.11"
def install
virtualenv_install_with_resources
bin.install "scanman"
end
test do
# Simply run the program
system "#{bin}/scanman"
end
end
```
Upon running the application with the installed scanman version, it fails to locate my custom modules housed within the src directory.
```bash
ModuleNotFoundError: No module named 'src'
```
Any insights into why this is happening?
Here's my directory structure if that helps:
```bash
.
βββ requirements.txt
βββ scanman
βββ scanman.rb
βββ setup.py
βββ src
βββ __init__.py
βββ cli.py
βββ commands.py
βββ manpage.py
βββ rag.py
βββ state.py
```
The main executable is `scanman`.
It's worth noting the following:
* When I run the local version of `scanman` from my repository, it works absolutely fine.
* Other 3rd party packages installed from PyPI don't throw any error. I can't find them in `/usr/local/Cellar/scanman/1.0.1/libexec/lib/python3.11/site-packages/`, however. |
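A common cause of this symptom is a `setup.py` that installs the console script without declaring the `src` package, so the virtualenv install never copies `src/` into site-packages (which would also explain why you can't find it there). Since the original `setup.py` isn't shown, the snippet below is a hypothetical sketch of the relevant keyword arguments, with assumed names:

```python
# Hypothetical setup.py arguments. In the real file you would start with:
#   from setuptools import setup
# The key line is packages=["src"] — without it, pip installs the `scanman`
# script but never copies the `src` package into site-packages.
setup_kwargs = dict(
    name="scanman",
    version="1.0.1",
    packages=["src"],
    entry_points={"console_scripts": ["scanman=src.cli:main"]},  # assumed entry point
)
# setup(**setup_kwargs)  # uncomment in the real setup.py
```

A cleaner long-term fix is often to rename `src/` to a proper package name such as `scanman/`, since a top-level package literally called `src` can collide with other installs.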
I followed the LangChain documentation below.
**Reference** - https://python.langchain.com/docs/integrations/text_embedding/sparkllm
**Error Details**:
Request error: 11202, {'header': {'code': 11202, 'message': 'licc failed', 'sid': 'ase000fd997@dx18e35059f4c738d882'}}
|
iFlyTek, Spark Embeddings Error Code - 11202 |
|python|langchain|embedding| |