I'm trying to connect my MySQL RDS instance to my Heroku environment securely, but I can't whitelist my environment's IP address because (as I understand it) Heroku changes the IP addresses at random.
Using the config vars with the DB credentials and connecting them with an IP of 0.0.0.0 in the security group works, but isn't a best practice security-wise. What alternatives are available to me?
I've toyed with the idea of adding an SSL cert to my codebase, but this also doesn't seem like a very good idea. |
I am using PDFBox to insert text into a PDF using the code below, but the text comes out inverted in the PDF:
try (PDDocument doc = PDDocument.load(new ByteArrayInputStream(decodedBytes))) {
    PDPage firstPage = doc.getPage(0);
    try (PDPageContentStream contentStream = new PDPageContentStream(doc, firstPage, PDPageContentStream.AppendMode.APPEND, true, true)) {
        contentStream.setFont(PDType1Font.HELVETICA, 48);
        contentStream.setStrokingColor(Color.BLACK);
        contentStream.beginText();
        contentStream.newLineAtOffset(3000, 1500);
        contentStream.showText(timestampText);
        contentStream.endText();
    }
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    doc.save(baos);
    return Base64.getEncoder().encodeToString(baos.toByteArray());
}
But I am getting this error:
[enter image description here](https://i.stack.imgur.com/0NQgs.png)
How can this be resolved? I am unable to move ahead. |
When you define a Python async function, what happens and how does it work? |
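The question above (what happens when you define an async function) can be sketched concretely. This is a minimal, self-contained illustration, not tied to any particular codebase:

```python
import asyncio
import inspect

# `async def` compiles the body into a coroutine function;
# nothing in the body runs at definition time.
async def greet():
    return "hello"

print(inspect.iscoroutinefunction(greet))  # True

# Calling it still does not run the body; it returns a coroutine object.
coro = greet()
print(inspect.iscoroutine(coro))  # True

# The body only executes when an event loop drives the coroutine.
print(asyncio.run(coro))  # hello
```

In short: defining the function creates an object whose call produces a coroutine, and the body runs only when that coroutine is awaited or driven by an event loop.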
I know this is old, but I thought I would share how I fixed my issue. The code I was modifying wasn't mine; I had added the cross-site scripting code to the app I was working on, among others. The one I had a problem with was changing the submit action to a different page. The submit (in my case) was unnecessary, so I changed it from form.submit() to window.location.href. Problem solved. |
ssh oracle@host "ps -ef|grep -i \_smon\_|grep -v grep|cut -d '_' -f3"
$ ps -ef|grep \_smon\_|grep -v grep|cut -d '_' -f3
prod
test
|
It seems that we cannot access a `polars.DataFrame` object from Polars expressions without passing them in as input params.
However, we can use the following query to
1. Replace the `n`th value with the average of the first `n` values
2. Set the 1st through `n-1`th values, left inclusive, to `Null`
3. Keep the original value otherwise
```python
def query(
    target_var: IntoExpr = pl.col("var"),
    index_col: IntoExpr | None = None,
    n: int = 1,
) -> pl.Expr:
    # Build a 0-based row index when the caller doesn't supply one
    if index_col is None:
        index_col = pl.int_range(0, pl.len())
    # Mean of the first n values of the target column
    mean_nth = target_var.head(n).mean()
    expr = (
        pl.when(index_col == (n - 1))
        .then(mean_nth)
        .when(index_col < (n - 1))
        .then(None)
        .otherwise(target_var)
    )
    return expr
``` |
I enabled undo using `.modelContainer(for:, isUndoEnabled: true)` and it worked fine. Cmd+Z etc on Mac, and shake to undo on iOS.
On adding the Mac Catalyst destination, undo no longer works. I thought this strange, so I created a new SwiftData project, set `isUndoEnabled` for the default app, and adding the Mac Catalyst destination really does disable undo there too.
Has anyone else experienced this? I can't find any info anywhere, but it seems odd that something this basic would break. Any workarounds?
|
Mac Catalyst disabled SwiftData automatic Undo |
|undo|swift-data| |
***Hi,***
You can use a simple trick, which is a negative `margin`. Important note: your second div should have a transparent background to allow the first div's background to show through.
----------
Solution
--------
I don't have your `wavy div`, so for this example I used a simple circular gradient background to show the idea. You should adjust the margin height to your own background.
<!-- begin snippet: js hide: true console: true babel: false -->
<!-- language: lang-css -->
p {
margin: 0;
}
.main-section {
height: 200px;
width: 400px;
background-color: tomato;
}
.about-us-section {
height: 200px;
width: 400px;
padding: 50px 0px;
background-image: radial-gradient(goldenrod 200px, transparent 200px);
margin-top: -20px;
}
<!-- language: lang-html -->
<div class="main-section" id="home">
<img
class="person-image"
src="/assets/images/person.png"
alt="Imagem da Home"
/>
<p>
Lorem ipsum dolor sit amet consectetur adipisicing elit. Omnis facere
velit repellat eligendi accusantium ratione doloribus, aliquid sapiente
iste fuga ad totam deserunt temporibus commodi, dicta voluptas, at
exercitationem necessitatibus?
</p>
</div>
<div class="about-us-section" id="about-us">
<p>
Lorem ipsum dolor sit amet consectetur adipisicing elit. Omnis facere
velit repellat eligendi accusantium ratione doloribus, aliquid sapiente
iste fuga ad totam deserunt temporibus commodi, dicta voluptas, at
exercitationem necessitatibus?
</p>
</div>
<!-- end snippet -->
----------
***Cheers*** |
If you are using Python 3, then you can use an f-string. Here is an example:
record_variable = 'records'
print(f"The element '{record_variable}' is found in the received data")
In this case, the output will be:
```
The element 'records' is found in the received data
``` |
In my MAUI XAML I have an Entry control which is initially enabled, and I want to disable it programmatically once the user has entered a correct value in the control.
But it is not being disabled.
My xaml code looks like this
<Entry
x:Name="InputAnswer"
Margin="0,0,20,0"
FontSize="24"
HorizontalOptions="Start"
IsEnabled="{Binding IsWrong}"
Keyboard="Numeric"
MaxLength="5"
Placeholder=""
Text="{Binding InputAnswer}" />
My model class
public class EquationWithInput: Equation, INotifyPropertyChanged
{
public int? InputAnswer { get; set; }
public int CorrectAnswer{ get; set; }
public bool IsWrong { get => InputAnswer == null ? true : InputAnswer != CorrectAnswer; }
...
}
But when I check the results, the control is never disabled. It appears the binding doesn't work, but it works everywhere else in this page for all the other properties. E.g. the `InputAnswer`.
So what do I need to do to get this to work? |
Unable to disable Entry via Binding |
|c#|xaml|mvvm|maui| |
I want to perform some analysis of the source files in a GitLab-managed project. The analysis is about computing frequencies, counting unique strings, etc. I came to the conclusion that a plain bash `find`|`grep`|`sed`|`sort`|`uniq`-based solution would hit its limits very soon. Hence I decided to use bash commands only to preprocess the data into an SQL table (created and filled by the generated `create_table.sql` script - the details of the table are not important) and do the subsequent analysis solely in SQL (`run_statistics.sql`) in its own one-shot Docker container.
I am trying to add this analysis as new job in Gitlab pipeline. It is sufficient for the result of the job to be some artifact with plain text output of SQL query dumps. My intention is to run this job in [its own postgres docker image](https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#what-is-an-image) which [sees the project tree](https://stackoverflow.com/a/54188996/653539).
```
statistics:
image:
name: postgres:16
stage: test
allow_failure: true
script:
- chmod +x ./create_statistics_sql.sh
- ./create_statistics_sql.sh > create_table.sql
- psql -U postgres < create_table.sql
- echo "<html><body><pre>" > statistics.html
- psql -U postgres < run_statistics.sql >> statistics.html
- echo "</pre></body></html>" >> statistics.html
artifacts:
when: always
expire_in: 7 days
paths:
- statistics.html
```
The problem is, the Postgres container requires setting the mandatory environment variable `POSTGRES_PASSWORD` (or `POSTGRES_HOST_AUTH_METHOD: trust`, since there is no security risk here; either way an environment variable is needed). Without it, the container for the job won't start and the error `psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory` appears in the job log.
In other words, I am looking for the equivalent of the `docker run -e` option for the container spawned from the job's `image:`. I tried setting it in the job's `variables:`, but without success. The `image:` section of a job has very limited syntax; I would expect something like `image:docker:variable`, though I am aware it is [not currently possible](https://docs.gitlab.com/ee/ci/yaml/#imagedocker).
Is there some workaround? (Actually I found one workaround in my self-answer but it is specific for my case and I am open to learn there is better solution.)
Note: I don't use (or intend to use) Postgres in the project itself, nor do I have it as a [service](https://stackoverflow.com/q/53837282/653539). (These two cases make my problem a little hard to google.) I want to use it just as a single-use tool.
|
|java|oracle-database|performance|ssl|handshake| |
I have the following process, which I thought would be easy to implement with Azure Service Bus and sessions:
I have a subscription that creates 5+ messages out of one message and sends all those messages via batch to another subscription that has sessions enabled. Therefore I set the same session id for those 5+ messages.
Sending the messages is not a problem. But the next step is: I thought I could create a processor that would get me all messages with the same session id. Each message calls a service that could possibly fail. If one call fails, all the other messages that were completed in that session before would have to be rolled back. But I guess that's not how sessions work in Azure Service Bus.
I started with following code (that's inside the method `StartAsync` in a processor class that implements the IHostedService):
```
[...]
sessionProcessor = serviceBusClient.CreateSessionProcessor(
processingOptions.CurrentValue.ServiceBusOptions.ServiceBusTopic,
processorOptions.Subscription,
options);
// Configure the message and error handler to use
sessionProcessor.ProcessMessageAsync += MessageHandler;
sessionProcessor.ProcessErrorAsync += ErrorHandler;
sessionProcessor.SessionInitializingAsync += SessionInitializingHandler;
sessionProcessor.SessionClosingAsync += SessionClosingHandler;
await sessionProcessor.StartProcessingAsync();
[...]
```
The problem is that the `MessageHandler` only processes one message and completes it right away. That's not what I need.
So I tried that here:
```
var receiver = await serviceBusClient.AcceptNextSessionAsync(processingOptions.CurrentValue.ServiceBusOptions.ServiceBusTopic, processorOptions.Subscription);
var messages = await receiver.ReceiveMessagesAsync(100);
using (var ts = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
foreach (var message in messages)
{
await ProcessSessionRequest(message);
await receiver.CompleteMessageAsync(message);
await receiver.SetSessionStateAsync(new BinaryData(SessionState.SessionInProcess));
}
ts.Complete();
}
```
But here I have no error handling like the session processor would have, which means: if one message fails and throws an exception, my processor completely shuts down instead of retrying or moving the message to the dead-letter queue and moving on to another session.
Does anyone have any idea how to implement that correctly? The only other idea I have is writing my own processor with the necessary events. But maybe I'm just missing something. |
AzureServiceBus: Process all messages of the same session and rollback if one fails |
|session|azureservicebus|azure-servicebus-subscriptions| |
According to the package's type definitions, `styleNodes` is:
export interface IdAndContentObject {
id?: string;
content?: string;
}
styleNodes: IdAndContentObject[]
So it is an array of that type. In this case, you can try passing objects with an `id` and a `content` (each CSS file converted to a string) and use them.
|
|sql|sql-server| |
I've been trying to install pandas using pip, but it always fails and gives me a giant wall of error text with this at the bottom:
[enter image description here][1]
I'm new to this, but I understand that a file is missing. I'm working in VS Code, and the error comes from Visual Studio.
[1]: https://i.stack.imgur.com/h1Qdv.png |
I have a react native app with an android native java module that accesses my local Google Fit healthstore using the Java Google Fit API:
DataReadRequest readRequest = new DataReadRequest.Builder()
.enableServerQueries()
.aggregate(DataType.AGGREGATE_STEP_COUNT_DELTA)
.bucketByTime(interval, TimeUnit.SECONDS)
.setTimeRange(start, end, TimeUnit.MILLISECONDS)
.build();
Fitness.getHistoryClient(getReactContext(), getGoogleAccount())
.readData(readRequest)
.addOnSuccessListener(response -> {
for (Bucket bucket : response.getBuckets()) {
for (DataSet dataSet : bucket.getDataSets()) {
readDataSet(dataSet);
}
}
try {
getCallback().onComplete(getReport().toMap());
} catch (JSONException e) {
getCallback().onFailure(e);
}
})
.addOnFailureListener(e -> getCallback().onFailure(e));
My problem is that for some `start` and `end` intervals for a particular user, the code gets stuck in the `HistoryClient`'s `.readData(readRequest)`, never resolving to the `onSuccessListener` or `onFailureListener` callbacks. In one particular case, to correct this, I can vary the `start` or `end` date to reduce the range, and suddenly the history client returns a data response. There doesn't seem to be any pattern of this bug relative to the `start` and `end` of the `readRequest`. In this case, the range was only over a week or so. Note that there are only about 100 steps in the requested range.
I initially thought that some data samples in Google Fit may be corrupt, thus reducing the range of the request would miss these samples, hence explaining why it may suddenly work by tinkering with `start` and `end`. However, by repositioning the `start` and `end` to explicitly cover these suspected samples, Google Fit works normally and a response is returned. I can timeout the async call using a `CompletableFuture`, therefore I know there is a `.readData` thread spinning in there somewhere! No exception is thrown.
I have set up all the relevant read permissions in my Google account's OAuth credentials - I can verify in my user account settings that the connected app indeed has these health-data read permissions. The scope I request in the native code is
DataType.AGGREGATE_STEP_COUNT_DELTA, FitnessOptions.ACCESS_READ
and I am using
'com.google.android.gms:play-services-fitness:21.1.0'
'com.google.android.gms:play-services-auth:21.0.0'
in my android build file. I have noticed the problem for both `react-native 0.65.3` (android `targetSdkVersion 31`, `compileSdkVersion 31`) and `react-native 0.73.2` (android `targetSdkVersion 34`, `compileSdkVersion 34`).
Are there any further steps I can take to diagnose the bug? When viewing the date range in Google Fit app, I see no problem and the step counts are there.
EDIT
I used logcat to scan for Google Fit log entries.
03-13 00:31:15.826 2696 24944 W Fitness : android.os.DeadObjectException: Transaction failed on small parcel; remote process probably died, but this could also be caused by running out of binder buffer space
Similar error seen in this android [question][1]
[1]: https://stackoverflow.com/questions/45432647/android-throw-deadobjectexception-with-log-transaction-failed-on-small-parcel |
Set environment variable for container of specific image of gitlab job |
|postgresql|docker|gitlab| |
Finally I found a solution glued together from hints in other SO answers.
* The Postgres container must be spawned as a service (in the `services:` section).
* Such a service [can have](https://stackoverflow.com/a/77518809/653539) environment `variables:`, but - unfortunately - the `psql` command of that service is [not accessible](https://docs.gitlab.com/ee/ci/services/index.html#using-software-provided-by-a-service-image) from the job's script.
* Hence the original Postgres container from the image specified in the `image:` section has to stay there too; it serves as the Postgres client.
* The client connects to the server using the `alias` hostname (the `psql -h` option).
Whole solution:
```
statistics:
image:
name: postgres:16
services:
- name: postgres:16
alias: postgresserver
variables:
POSTGRES_HOST_AUTH_METHOD: trust
stage: test
allow_failure: true
script:
- chmod +x ./create_statistics_sql.sh
- ./create_statistics_sql.sh > create_table.sql
- psql -h postgresserver -U postgres < create_table.sql
- echo "<html><body><pre>" > statistics.html
- psql -h postgresserver -U postgres < run_statistics.sql >> statistics.html
- echo "</pre></body></html>" >> statistics.html
artifacts:
when: always
expire_in: 7 days
paths:
- statistics.html
```
This is enough for me at this moment. Anyway, I am curious whether a simpler/more universal solution exists for passing environment variables directly into the first container.
|
Store Values From a Multi-Area Range in an Array
-
**2D Array**
<!-- language: lang-vb -->
Sub Test2D()
Dim ws As Worksheet: Set ws = ThisWorkbook.Worksheets(1)
Dim rg As Range: Set rg = ws.Range("A3:C3,A5:C5,A7:C7")
Dim MyArray() As Variant: ReDim MyArray(1 To 3, 1 To 3) ' 2D
Dim arg As Range, HelpArray() As Variant, r As Long, c As Long
For Each arg In rg.Areas
r = r + 1
' Since all areas ('arg') are a single 3-cell row, 'HelpArray'
' will automatically be sized as '(1 To 1, 1 To 3)'.
HelpArray = arg.Value
For c = 1 To 3
MyArray(r, c) = HelpArray(1, c)
Next c
Next arg
For r = 1 To 3
For c = 1 To 3
Debug.Print MyArray(r, c)
Next c
Next r
End Sub
**(1D) Jagged Array aka Array of Arrays**
<!-- language: lang-vb -->
Sub TestJagged()
Dim ws As Worksheet: Set ws = ThisWorkbook.Worksheets(1)
Dim rg As Range: Set rg = ws.Range("A3:C3,A5:C5,A7:C7")
Dim MyArray() As Variant: ReDim MyArray(1 To 3) ' 1D
Dim arg As Range, r As Long, c As Long
For Each arg In rg.Areas
r = r + 1
' Since all areas ('arg') are a single 3-cell row, each element ('r')
' of 'MyArray' will hold a 2D one-based single-row array ('arg.Value')
' sized as '(1 To 1, 1 To 3)'.
MyArray(r) = arg.Value
Next arg
For r = 1 To 3
For c = 1 To 3
Debug.Print MyArray(r)(1, c) ' !!!
Next c
Next r
End Sub
[![enter image description here][1]][1]
Both Results
```
5
3
1
4
8
6
7
9
2
```
**Notes**
- Note that the expression
MyArray = rg.Value
only works if `rg` is a **single-area range** and has **at least two cells**.
- Also, note that in the same case, `rg.Value` (on the right side of the expression) is already a **2D one-based array** (containing the values of the range) with the same number of rows and columns as the rows and columns of the range which you can prove with:
<!-- language: lang-vb -->
Sub Test()
Dim ws As Worksheet: Set ws = ThisWorkbook.Worksheets(1)
Dim rg As Range: Set rg = ws.Range("A3:C3")
Debug.Print "Rows: " & UBound(rg.Value, 1) & vbLf _
& "Columns: " & UBound(rg.Value, 2)
Dim Item As Variant, c As Long
For Each Item In rg.Value
c = c + 1
Debug.Print c, Item
Next Item
End Sub
Result
```
Rows: 1
Columns: 3
1 5
2 3
3 1
```
[1]: https://i.stack.imgur.com/s43oH.jpg |
If you search for examples of cross-component communication via services, the examples almost exclusively use RxJS inside the service (it could be signals now too). However, this doesn't seem necessary. For example, even in the case of the service below, both components will be able to see changes to the counter without using RxJS or signals. Is there a downside to not using RxJS or signals in this case? Sample service below, and [here is a sample Stackblitz][1].
```
import { Injectable } from '@angular/core';
@Injectable({
providedIn: 'root',
})
export class MyService {
private counter: number = 0;
increment() {
this.counter += 1;
}
getValue() {
return this.counter;
}
}
```
[1]: https://stackblitz.com/edit/stackblitz-starters-7gvc44?file=src%2FMyService.ts |
Angular Reactivity and Communicating via Services |
|angular|angular-services| |
# Introduction
Recently, I was studying Go, and it seems to be an interesting language. I noticed that polymorphism can be applied as a concept using a Go `interface`; however, when it comes to applying polymorphism using a `struct`, it does not work the way we might want it to in OOP. There are a couple of ways that developers sometimes try to work around structs and interfaces to make polymorphism work, and here I'd like to share with you a straightforward way to implement the concept of polymorphism using a struct in Go.
# The Problem
Let Animal be a super class (Father/Mother) and Cow, Bird, and Snake children of Animal, and let each animal have a name and three behaviours: Eat, Move, and Speak. In Go, we'd gather the behaviours in one interface; let's call it IAnimalBehaviour:
```
type IAnimalBehaviour interface {
Eat()
Speak()
Move()
}
```
And then we'd lift the common properties - in our case, name - up to the super class level - the Animal struct - and of course we'd add a setter to set the value of the name property:
```
type Animal struct {
    name string
}

func (animal *Animal) SetName(v string) {
    animal.name = v
}
```
```
type Cow struct {
    Animal
}

func (cow *Cow) Eat() {
    fmt.Println("grass")
}

func (cow *Cow) Speak() {
    fmt.Println("moo")
}

func (cow *Cow) Move() {
    fmt.Println("walk")
}
```
```
type Bird struct {
    Animal
}

func (bird *Bird) Eat() {
    fmt.Println("worms")
}

func (bird *Bird) Speak() {
    fmt.Println("peep")
}

func (bird *Bird) Move() {
    fmt.Println("fly")
}
```
```
type Snake struct {
    Animal
}

func (snake *Snake) Eat() {
    fmt.Println("mice")
}

func (snake *Snake) Speak() {
    fmt.Println("hsss")
}

func (snake *Snake) Move() {
    fmt.Println("slither")
}
```
If we stop here and look at the limitations of this implementation, we'd see that the developer is able to apply polymorphism using the interface like this:
```
interfaceSlice := make([]IAnimalBehaviour, 0)
interfaceSlice = append(interfaceSlice, &Cow{})
interfaceSlice = append(interfaceSlice, &Bird{})
interfaceSlice = append(interfaceSlice, &Snake{})

for _, animalBehaviour := range interfaceSlice {
    animalBehaviour.Eat() // The implementation of polymorphism
}
```
Nevertheless, if we tried to apply the same technique using the Animal struct instead of the interface, we'd get a compiler error:
```
var animal Animal = Cow{}
// cannot use Cow{} (value of type Cow) as Animal value in variable declaration
```
At the same time, we'd be able to access the `SetName(string)` function of the Animal struct from a Cow instance if we defined the variable as a Cow in this way:
```
var animal Cow = Cow{}
animal.SetName("Cow Name")
```
So notice that we can inherit the SetName function between structs and use it as we do in any other OOP language, but we cannot apply polymorphism by declaring a variable as an Animal - the super class/struct - initializing it as a child Cow, Bird, or Snake, and using its methods as in other OOP languages.
# A suggested solution
Notice that in OOP the difference between a struct and a class is that the class encapsulates **behaviours and data** into one unit, whereas the struct encapsulates data only. So what if we allowed our Go struct to have a special data attribute that holds the behaviours? Wouldn't that be a way to extend the struct into a Go class?
Let's try that in our example. We have the IAnimalBehaviour interface and the Animal struct, so to extend the Animal struct into a class we need to add behaviours to it.
```
type Animal struct {
    name       string
    behaviours IAnimalBehaviour
}
```
and now we can define a collection of Animal values, iterate over it, and call their behaviours:
```
var a Animal = Animal{behaviours: &Cow{}}
a.behaviours.Eat()
a.SetName("Cow Name")

var sliceAnimal = make([]Animal, 0)
sliceAnimal = append(sliceAnimal, Animal{behaviours: &Cow{}})
sliceAnimal = append(sliceAnimal, Animal{behaviours: &Bird{}})

for _, animal := range sliceAnimal {
    animal.behaviours.Eat()
}
```
# Note
I am still studying Go and find it an interesting language. I posted this as a question on Stack Overflow, which might be the wrong place to start a discussion or write an article, so I am posting it under Discussions to see your opinions about it.
I am introducing here a way to apply polymorphism using a struct, not just the interface, in Go. I hope you find it useful. Please let us know if you know other ways to approach the problem. |
Polymorphism using struct in Go Lang |
|go|inheritance|polymorphism|struct| |
I have a healthy, 2/2 Ready Pod that has been running without restarts for over half an hour, on Kubernetes 1.27 through EKS.
When I try to run `kubectl logs -n $MY_NS pod/$MY_POD -c $MY_CONTAINER` on my Apple M1 Max with a `kubectl version` of 1.25, I get no logs. I am able to get logs for other Pods as I would expect.
When I exec into a Pod mounting the Node FS, I can see logs (including rotations) in `/var/log/pods/$MY_NS-$MY_POD/$MY_CONTAINER`. I copied them with `kubectl cp` to my laptop, and the contents look roughly as I'd expect:
```
$ ls -la
total 16M
drwxr-xr-x 7 jrichards 224 Mar 12 15:45 ./
drwxr-xr-x 7 jrichards 224 Mar 12 15:45 ../
-rw-r--r-- 1 jrichards 0 Mar 12 15:45 0.log
-rw-r--r-- 1 jrichards 920K Mar 12 15:45 0.log.20240312-212825.gz
-rw-r--r-- 1 jrichards 922K Mar 12 15:45 0.log.20240312-212835.gz
-rw-r--r-- 1 jrichards 908K Mar 12 15:45 0.log.20240312-212845.gz
-rw-r--r-- 1 jrichards 13M Mar 12 15:45 0.log.20240312-212856
```
As you can probably tell from the filenames, the container in question was spamming a crazy number of log lines (the most recent gzip file was rotated after only ~10s, after recording >2k msg/s for ~10s, with an uncompressed file size of 76MiB).
Even so, this is very confusing to me. Why does `kubectl logs` not return any logged output from this container? I tried the command multiple times.
I was able to find [a kubernetes bugfix](https://github.com/kubernetes/kubernetes/pull/115702) that's targeted for 1.29 relating to `kubectl logs -f`, but I'm not using `-f`.
I also found [a 2015 kubernetes issue](https://github.com/kubernetes/kubernetes/issues/11046) that mentions that ['"kubectl logs" may give empty output'](https://github.com/kubernetes/kubernetes/issues/11046#issuecomment-120281807) and ['I think we should advise that `kubectl logs` is just a "cache" of some of the logs for a pod'](https://github.com/kubernetes/kubernetes/issues/11046#issuecomment-120505698). Is this still true?
I have read [the Kubernetes logging guide](https://kubernetes.io/docs/concepts/cluster-administration/logging/) as well as [the `kubectl logs` man page](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_logs/) and found nothing that mentions any limitations of `kubectl logs`.
My questions:
1. Where can I find details on the limitations of `kubectl logs`?
2. Where can I find details of how Kubernetes handles log rotation?
3. How I should expect files to be formatted and used in `/var/log/containers/$MY_NS-$MY_POD/$MY_CONTAINER`?
|
You could do type checking with `isinstance` in Python:
```py
# Assuming you already have an instance of commands.Bot named client
channel = client.get_channel(channelid)
if isinstance(channel, discord.StageChannel):
    ...  # do something
``` |
```c++
template <>
inline bool _Sp_counted_base<_S_atomic>::_M_add_ref_lock_nothrow() noexcept {
// Perform lock-free add-if-not-zero operation.
_Atomic_word __count = _M_get_use_count();
do {
if (__count == 0) return false;
// Replace the current counter value with the old value + 1, as
// long as it's not changed meanwhile.
} while (!__atomic_compare_exchange_n(&_M_use_count, &__count, __count + 1,
true, __ATOMIC_ACQ_REL,
__ATOMIC_RELAXED));
return true;
}
```
This is in `c++/11/bits/shared_ptr_base.h: _M_add_ref_lock_nothrow()`.
`__count` only gets a value at first (not in the loop). Is it possible for this loop function to enter an infinite loop? |
Reference Every Nth Cell
-
[![enter image description here][1]][1]
**Usage**
<!-- language: lang-vb -->
Sub Test()
Dim ws As Worksheet: Set ws = ActiveSheet ' improve!
Dim rg As Range:
Set rg = ws.Range("P2", ws.Cells(ws.Rows.Count, "P").End(xlUp))
Dim nameBank As Range: Set nameBank = RefNthCellsInColumn(rg, 3)
If nameBank Is Nothing Then Exit Sub
nameBank.Copy ws.Range("Q2")
MsgBox nameBank.Cells.Count & " cells in range """ _
& nameBank.Address(0, 0) & """.", vbInformation
End Sub
**The Function**
<!-- language: lang-vb -->
Function RefNthCellsInColumn( _
ByVal singleColumnRange As Range, _
ByVal Nth As Long) _
As Range
Dim rg As Range, r As Long
For r = 1 To singleColumnRange.Cells.Count Step Nth
If rg Is Nothing Then
Set rg = singleColumnRange.Cells(r)
Else
Set rg = Union(rg, singleColumnRange.Cells(r))
End If
Next r
Set RefNthCellsInColumn = rg
End Function
[1]: https://i.stack.imgur.com/kq33d.jpg |
While building the React app, I am currently facing this issue.
Creating an optimized production build...
Failed to compile.
static/css/main.4d62b683.css from Css Minimizer plugin
TypeError: Cannot read properties of undefined (reading 'type')
[enter image description here](https://i.stack.imgur.com/stz7k.png)
I have checked for any errors in the CSS but could not find anything. |
Failed to compile while building the react app - static/css/main.4d62b683.css from Css Minimizer plugin |
|javascript|css|reactjs|react-css-modules|npm-build| |
I am trying to derive some analytics for a set of data that I have in a CSV file. The CSV file has the following format: [snippet from csv](https://i.stack.imgur.com/xdAnd.png)
The first column contains an ID/schema; each other column represents time/generations. This repeats for N runs.
I want to be able to count the occurrence of each ID/schema at each point in time/generation, so that I can have a final count of each ID, a final average, and a final standard deviation from the N runs.
So far I am able to extract the data and remove the rows that contain 'R'. I don't really care about the runs because I want to add them together.
The first step is to at least sum all the runs per ID and generation, but I can't seem to get the hang of pandas.
```
import pandas as pd
def main():
    filename = 'out/outputs.csv'  # Replace with the actual filename
    # header=None keeps integer column labels, matching data[0] below
    data = pd.read_csv(filename, header=None)
    data = data[~data[0].str.contains('R')]
    # Group rows by schema/ID (column 0) and sum across runs;
    # axis=1 would group columns, which is not what we want here
    sums = data.groupby(data[0]).sum()

if __name__ == "__main__":
    main()
```
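Since I don't have the real `outputs.csv`, here is a self-contained sketch with made-up sample data (the schemas and counts are hypothetical) showing that a single `groupby` on the schema column yields the sum, average, and standard-deviation matrices across runs:

```python
import pandas as pd

# Hypothetical stand-in for the CSV after dropping the 'R' rows:
# column 0 is the schema/ID, columns 1..N are the generations,
# and each schema appears once per run (two runs here).
data = pd.DataFrame({
    0: ['11****', '***111', '11****', '***111'],
    1: [5, 0, 7, 0],       # gen1 counts, run 1 then run 2
    2: [10, 3, 14, 17],    # gen2 counts, run 1 then run 2
})

grouped = data.groupby(0)
sums = grouped.sum()        # total per schema per generation
averages = grouped.mean()   # average across runs
deviations = grouped.std()  # sample standard deviation across runs

print(sums)
```

The same three aggregations would apply unchanged to the full DataFrame read from the CSV.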
In the end, my goal is to have something like the following matrices
sums
| Schema | gen1 | gen2 | genN
|--------|------|------|-----
| 11**** | 5 | 10 | 100
| ***111 | 0 | 3 | 12
averages
|Schema | gen1 | gen2 | genN
|-------|------|------|------
|11**** | 6.0 | 12.1 | 120.2
|***111 | 0.0 | 10.0 | 30.0
standard deviations
|Schema | gen1 | gen2 | genN
|-------|------|------|-----
|11**** | 0.2 | 1.0| 0.1
|***111 | 0.0 | 0.3| 0.12 |
Doh, I followed along with the CALENDARAUTO example from the video (sqlbi.com), but no dates were produced.
No syntax errors. Using 07.40 - Moving average.pbix
Table of Dates = CALENDARAUTO()
works just fine.
Tried other pbix files as well. 'New table' button was used :}
The following is the code I created that produces no syntax errors and no dates
Dates (CALENDARAUTO) =
VAR _FirstDate_CustomerBirth = MIN ( 'Customer'[Birth Date] )
VAR _FirstDate_ProductAvailable = MIN ( 'Product'[Available Date] )
VAR _FirstDate_SalesDelivery = MIN ( 'Sales'[Delivery Date] )
VAR _FirstDate_SalesOrder = MIN ( 'Sales'[Order Date] )
VAR _FirstDate =
IF ( _FirstDate_SalesOrder < _FirstDate_SalesDelivery, _FirstDate_SalesOrder, _FirstDate_SalesDelivery )
VAR _DateTable =
FILTER (
CALENDARAUTO(),
YEAR ( [Date] ) >= _FirstDate -- 'CALENDARAUTO'.[Date]
)
RETURN
ADDCOLUMNS(
_DateTable,
"Year", YEAR ( [Date] ),
"Month", FORMAT ( [Date], "mmm" ),
"Month Number", MONTH ( [Date] ),
"Quarter", FORMAT ( [Date], "\QQ yyyy" )
)
[PBI Community post for cash and prizes :}][1]
[1]: https://community.fabric.microsoft.com/t5/DAX-Commands-and-Tips/calendarauto-dax-returns-empty/td-p/3759101 |
DAX CALENDARAUTO function returns empty table when used in Power BI Desktop |
|dax| |
I have a piece of MATLAB code that I want to translate to Python; here is the relevant part:
assuming `x = [1,2,3,4,5]`:
% The size of x
N=size(x);
N=N(2);
The issue is at the second line. I understand the index syntax if it was say `x(2)`, but `N(2)` I don't understand. I've tried looking at what this means in the documentation, but all I could find was for indexing lists.
Any help is appreciated, thanks.
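Since this is about translating to Python, here is a rough NumPy equivalent of that snippet, assuming `x` is kept two-dimensional the way MATLAB treats a row vector:

```python
import numpy as np

x = np.array([[1, 2, 3, 4, 5]])  # a 1x5 row vector, as in MATLAB

# MATLAB's size(x) returns [rows, cols]; NumPy's x.shape returns (rows, cols).
N = x.shape  # (1, 5)
# MATLAB's N(2) picks the second element (1-based); in 0-based Python that is N[1].
N = N[1]     # 5
print(N)
```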
|
I think I already found the fix for this, and it was pretty simple.
First, we want to put the set of navigation ids we pass to `AppBarConfiguration` in a variable, because we'll need it in `onOptionsItemSelected`. You need something like this:
private val navBaseNodeSet = setOf(
R.id.nav_monitoring_fragment, R.id.nav_dashboard_fragment, R.id.nav_logout, R.id.nav_login_fragment
)
and of course you need to pass it to `AppBarConfiguration` again.
appBarConfiguration = AppBarConfiguration(navBaseNodeSet, binding.drawerLayout)
And here's the important bit, this is what `onOptionsItemSelected` should look like:
override fun onOptionsItemSelected(item: MenuItem): Boolean {
if (item.itemId == android.R.id.home) {
return if( !navBaseNodeSet.contains(navController.currentDestination?.id) ){
onBackPressed()
true
}else{
false
}
}
return false
} |
in PHP 8.3 I'm using:
```php
class Helper
{
/**
* @template T
* @param callable(): T $callback
* @return callable(): T
*/
public static function lazy(callable $callback): callable
{
return function () use ($callback) {
static $run = true;
static $cache = null;
if ($run) {
$cache = $callback();
$run = false;
}
return $cache;
};
}
}
```
used like:
```php
$c = 0;
// pass any task that should only be performend once and/or only if you need it, like:
$x = Helper::lazy(fn() => $c);
$y = $x();
echo "0 $y\n"; // 0 0
$c = 1;
$y = $x();
echo "1 $y\n"; // 1 0
``` |
So I tried stretching the image, but it didn't work. I'm not sure if that's what I'm meant to do, and I can't find anything online that would help.
Images ImageOne = new();
public MainWindow()
{
InitializeComponent();
MyCanvas.Children.Add(ImageOne.Background);
}
class Images
{
public Image Background = new() { Source = new BitmapImage(new Uri("chessbackground2.jpg", UriKind.Relative)), Stretch = Stretch.Fill, StretchDirection = StretchDirection.Both };
} |
I've been trying to learn some assembly and was testing out arrays. I found that when I tried to print out the value at the indexed position, nothing happened. After experimenting further, it appears that even though I am using the arrays as shown in many examples across the internet, it simply isn't working.
Here's the code:
```
section .text
global _start
_start:
mov eax, num ; eax now contains 5
mov ebx, [array+8] ; ebx now contains 8
cmp eax, ebx ; compares eax to ebx
jge skip ; should not happen because eax is smaller than ebx
call printdigit
skip:
call printn
call _exit
printdigit:
mov eax, 0x30
add [num], eax
mov ecx, num
mov edx, 1 ;length
mov ebx, 1 ;write to stdout
mov eax, 4 ;write call number
int 0x80
ret
printn:
mov eax, 0x0A
push eax
mov eax, 4
mov ebx, 1
mov ecx, esp
mov edx, 1
int 0x80
add esp, 4
ret
_exit:
mov eax, 1
mov ebx, 0
int 0x80
section .data
num dw 5
array dw 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
```
The commands I'm using to compile the code
```
nasm -f elf Bubblesort.asm
ld -m elf_i386 -s -o Bubblesort Bubblesort.o
./Bubblesort
```
What I'm running:
ubuntu 22.04.3 desktop amd64, (on virtual machine but shouldn't matter I think)
The output I want should be
```
5
```
The actual output
```
```
I want `printdigit` to be called **only** when `num` is less than whatever is indexed at `array`.
I am almost certain it's not a computer issue but a code issue, but I'm unsure where. |
I have this server code:
```
require('dotenv').config();
const express = require('express');
const mysql = require('mysql');
const bcrypt = require('bcryptjs');
const jwt = require('jsonwebtoken');
const cors = require('cors');
const path = require("path");
const fs = require("fs");
const app = express();
app.use(express.json());
app.use(cors());
const corsOptions = {
origin: "*",
credentials: true,
optionSuccessStatus: 200
};
app.use(cors(corsOptions));
const sslOptions = {
ca: fs.readFileSync(
path.join(__dirname, "private/BaltimoreCyberTrustRoot.crt.pem")
),
rejectUnauthorized: false,
};
const db = mysql.createPool({
host: process.env.DB_HOST,
user: process.env.DB_USER,
password: process.env.DB_PASSWORD,
database: process.env.DB_NAME,
ssl: sslOptions,
});
db.getConnection(function(err, conn){
if(err) throw err;
console.log("Connected!");
});
const generateAccessToken = (user) => {
return jwt.sign({ id: user.id, email: user.email }, process.env.ACCESS_TOKEN_SECRET, { expiresIn: 30 });
};
const generateRefreshToken = (user) => {
return jwt.sign({ id: user.id, email: user.email }, process.env.REFRESH_TOKEN_SECRET, { expiresIn:60 });
};
app.post('/login', async (req, res) => {
const { email, password } = req.body;
db.query('SELECT * FROM users WHERE email = ?', [email], async (err, results) => {
if (err) throw err;
if (results.length === 0) return res.status(404).json({ message: 'Utente non trovato' });
const user = results[0];
const isMatch = await bcrypt.compare(password, user.password);
if (!isMatch) return res.status(401).json({ message: 'Password non valida' });
const accessToken = generateAccessToken(user);
const refreshToken = generateRefreshToken(user);
db.query('UPDATE tokens SET refresh_token = ? WHERE id_user = ?', [refreshToken, user.id], (err, result) => {
if (err) throw err;
res.status(200).json({result: true, accessToken, refreshToken });
});
});
});
app.post('/token', (req, res) => {
const { token } = req.body;
if (!token) return res.sendStatus(401);
console.log(token);
jwt.verify(token, process.env.REFRESH_TOKEN_SECRET, (err, user) => {
console.log(user);
if (err) return res.status(403).json({ message: err.message, user: user });
const accessToken = generateAccessToken({ id: user.id, email: user.email });
res.json({ accessToken });
});
});
const authenticateToken = (req, res, next) => {
const authHeader = req.headers['authorization'];
const token = authHeader && authHeader.split(' ')[1];
if (token == null) return res.sendStatus(401);
jwt.verify(token, process.env.ACCESS_TOKEN_SECRET, (err, user) => {
if (err) return res.status(403).json({massage: err, user: user});
req.user = user;
next();
});
};
app.get('/protected-route', authenticateToken, (req, res) => {
res.json({ message: 'Welcome to protected-area', user: req.user });
});
const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
console.log(`Server running on port ${PORT}`);
});
```
My problem is that when I give it the expired token, the service responds with "jwt expired".
The problem seems to come from `authenticateToken`, which replies with 403.
How can I fix this error? When I pass the expired refresh token to refresh it, I receive:
```
{
"message": "jwt expired"
}
```
How can I solve this? |
I may have fixed it. I changed the endpoint from
```
https://graph.microsoft.com/v1.0/me/sendMail
```
to
```
..../users/sendMail
```
That got rid of the 400 error. However, I now have a message telling me that my JSON is not correct!!
Thank you for looking |
I would like to place an H1 and a P tag in the modal header and have them occupy one row each.
I also want to center the text.
I have applied `d-block` to each, but they still end up in two columns.
I tried increasing the height of the modal header, but that didn't work either.
How can I solve this problem ?
I'm using Bootstrap@5.3.
<div class="modal fade" id="product-review-modal">
<div class="modal-dialog">
<div class="modal-content">
<div class="modal-header">
<h1 class="modal-title fs-5" id="exampleModalLabel">Review of "Product Name"</h1>
<p>Please review this product! Thank you for your cooperation!</p>
</div>
<div class="modal-body">
...
</div>
<div class="modal-footer">
<button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Close</button>
<button type="button" class="btn btn-primary">Save changes</button>
</div>
</div>
</div> |
How do I split a Modal Header column in two? |
|html|css|bootstrap-5| |
The formula in the OP is not working because, as per the `MSFT` documentation, the `SUMPRODUCT()` function returns the sum of the products of corresponding ranges or arrays, where the default operation is multiplication. However, one can use addition, subtraction or division as well. Read **[here][1]**.
----------
Now, the question is: even though the ranges have the same dimensions and sizes, it still doesn't work, because the elements do not correspond to each other in a way that returns the desired output `70`. There are two options; the first is to transpose one of the ranges so that it corresponds to the other, and then apply the function.
----------
• Using `SUMPRODUCT()` with `TRANSPOSE()` function:
[![enter image description here][2]][2]
----------
=SUMPRODUCT(TRANSPOSE(A1:D1),A2:A5)
----------
Or, Using `SUM()` with `TOCOL()`
=SUM(A2:A5*TOCOL(A1:D1))
----------
[1]: https://support.microsoft.com/en-gb/office/sumproduct-function-16753e75-9f68-4874-94ac-4d2145a2fd2e
[2]: https://i.stack.imgur.com/y68NF.png
|
Husky can prevent bad `git commit`, `git push`, and more. If you are getting this error, check your ***code syntax***. If you are getting this error even though your code is valid, use one of the solutions below.
# Solution 1:
Delete the `.git/hooks` folder and then run `npm install` to reinstall Husky. There may be conflicts with Husky-generated files in `.git/hooks/`.
# Solution 2:
This is a temporary/quick solution:
git commit -m "message" --no-verify
|
Given the registered variable *loop_results*, count how many items were skipped
```yaml
skip: "{{ loop_results.results |
json_query('[?skipped]') |
length }}"
```
Count the number of times a loop ran
```yaml
run: "{{ loop_results.results|length - skip|int }}"
```
<hr>
<sup>
Example of a complete playbook for testing
```yaml
- hosts: all
vars:
skip: "{{ loop_results.results |
json_query('[?skipped]') |
length }}"
run: "{{ loop_results.results|length - skip|int }}"
tasks:
- debug:
msg: "working on {{ item.name }}"
loop:
- {name: t1, skip: True}
- {name: t2, skip: False}
- {name: t3, skip: False}
- {name: t4, skip: False}
- {name: t5, skip: True}
register: loop_results
when: not item.skip
- debug:
var: loop_results
- debug:
var: skip
- debug:
var: run
```
gives (abridged)
```yaml
skip: '2'
run: '3'
```
</sup>
|
Suppose we have the following sequence:
```
[1,2,3,4,5,6,7,8,9,10]
```
And we want to train an LSTM model to predict the next number in the sequence based on the previous numbers. We'll choose a sequence length of 3 for this example.
So, we would organize our data into input-output pairs like this:
```
Input Output
[1, 2, 3] 4
[2, 3, 4] 5
[3, 4, 5] 6
[4, 5, 6] 7
[5, 6, 7] 8
[6, 7, 8] 9
[7, 8, 9] 10
```
Each input sequence contains three numbers, and the corresponding output is the number that follows the input sequence in the original sequence.
We would then feed these input-output pairs into the LSTM model during training. The LSTM model learns to capture the patterns in the sequences and predict the next number in the sequence based on the previous numbers.
During inference, if we provide the model with a sequence like
[8,9,10], it should predict the next number in the sequence, which is
11 in this case.
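The windowing described above can be sketched in plain Python (lists only; an actual LSTM pipeline would convert these pairs into arrays or tensors):

```python
def make_pairs(sequence, seq_len):
    """Slide a window of length seq_len over the sequence,
    pairing each window with the value that follows it."""
    return [
        (sequence[i:i + seq_len], sequence[i + seq_len])
        for i in range(len(sequence) - seq_len)
    ]

pairs = make_pairs(list(range(1, 11)), seq_len=3)
print(pairs[0])   # ([1, 2, 3], 4)
print(pairs[-1])  # ([7, 8, 9], 10)
```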
|
I know the question is about Poetry, but you can use the more general:
```
[project.scripts]
script_name = "your_package.a_file:a_method"
```
That way, users who do not have Poetry also get an easy-to-access CLI script. |
I'm using GitLab CI to run a Terraform pipeline. But as the Terraform CI/CD templates are deprecated since this month (**Feb 2024**) and will be removed, I want to switch to OpenTofu:
[![Terraform CI/CD templates deprecated since Feb 2024][1]][1]
Problem: I followed the [documentation][2] to make the conversion but end-up with errors.
In the most basic conversion try (see **B]**), I end up with this error:
> plan job: chosen stage does not exist; available stages are .pre, fmt,
> validate, plan, apply, .post
When I define the `fmt` stage as defined [here][3] (see **C]**), I get:
> fmt: unknown keys in `extends` (.opentofu:fmt)
**Does anyone have an idea on what to do ?**
**A]** original `.gitlab-ci.yml`:
include:
- template: Terraform/Base.latest.gitlab-ci.yml # https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Terraform/Base.latest.gitlab-ci.yml
- template: Jobs/SAST-IaC.latest.gitlab-ci.yml # https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Jobs/SAST-IaC.latest.gitlab-ci.yml
variables:
# If not using GitLab's HTTP backend, remove this line and specify TF_HTTP_* variables
TF_STATE_NAME: iam
TF_CACHE_KEY: iam
TF_ROOT: provisioning
stages:
- validate
- test
- build
- deploy
- cleanup
fmt:
extends: .terraform:fmt
needs: []
validate:
extends: .terraform:validate
needs: []
build:
extends: .terraform:build
environment:
name: $TF_STATE_NAME
action: prepare
deploy:
extends: .terraform:deploy
dependencies:
- build
environment:
name: $TF_STATE_NAME
action: start
**B]** `.gitlab-ci.yml` conversion try 1:
include:
- component: gitlab.com/components/opentofu/validate-plan-apply@0.17.0
inputs:
version: 0.17.0
opentofu_version: 1.6.1
root_dir: provisioning
state_name: iam
stages: [fmt, validate, plan, apply]
**C]** `.gitlab-ci.yml` conversion try 2:
include:
- component: gitlab.com/components/opentofu/validate-plan-apply@0.17.0
inputs:
version: 0.17.0
opentofu_version: 1.6.1
root_dir: provisioning/
state_name: iam
- template: Jobs/SAST-IaC.latest.gitlab-ci.yml # https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/ci/templates/Jobs/SAST-IaC.latest.gitlab-ci.yml
stages: [fmt, validate, plan, apply]
fmt:
extends: [.opentofu:fmt]
...:
extends: ...
**NB:** The lock file is correctly converted and `tofu plan` works perfectly.
[1]: https://i.stack.imgur.com/8KB6B.png
[2]: https://gitlab.com/explore/catalog/components/opentofu
[3]: https://gitlab.com/explore/catalog/components/opentofu#usage |
token has expired in node.js |
|node.js|express|jwt|refresh-token| |
I have a Symfony application using Doctrine ORM. When creating a new Task entity with relationships and trying to persist it, I get a 500 Internal Server Error.
`create:1 POST http://127.0.0.1:8000/users/create 500 (Internal Server Error)`
My `TaskController.php`:
```php
/**
* @Route("/tasks", name="task_list")
*/
public function listTasks(TaskRepository $taskRepository): Response
{
$tasks = $taskRepository->findAll();
dump($tasks);
return $this->render('task/list.html.twig', [
'tasks' => $tasks,
]);
}
/**
* @Route("/tasks/create", name="task_create")
*/
public function createTask(Request $request): Response
{
$task = new Task();
$task->setCreatedAt(new \DateTime());
$task->setUpdatedAt(NULL);
$form = $this->createForm(TaskType::class, $task);
$form->handleRequest($request);
if ($form->isSubmitted() && $form->isValid()) {
$this->entityManager->persist($task); // Use the EntityManager to persist the task
dd($task);
$this->entityManager->flush(); // Use the EntityManager to flush changes
return $this->redirectToRoute('task_list');
}
return $this->render('task/create.html.twig', [
'form' => $form->createView(),
]);
}
```
My `routes.yaml`
```yaml
task_list:
path: /tasks
controller: App\Controller\TaskController::listTasks
task_create:
path: /tasks/create
controller: App\Controller\TaskController::createTask
```
My entities:
`User` Entity:
```php
namespace App\Entity;
use ApiPlatform\Metadata\ApiResource;
use App\Repository\UserRepository;
use Doctrine\DBAL\Types\Types;
use Doctrine\ORM\Mapping as ORM;
#[ORM\Entity(repositoryClass: UserRepository::class)]
#[ApiResource]
class User
{
#[ORM\Id]
#[ORM\GeneratedValue]
#[ORM\Column]
private ?int $id = null;
#[ORM\Column(length: 255, nullable: true)]
private ?string $username = null;
#[ORM\Column(length: 255, nullable: true)]
private ?string $name = null;
#[ORM\Column(length: 255, nullable: true)]
private ?string $surname = null;
#[ORM\Column(length: 255, nullable: true)]
private ?string $email = null;
#[ORM\Column(length: 255, nullable: true)]
private ?string $password = null;
#[ORM\Column(length: 100, nullable: true)]
private ?string $role = null;
#[ORM\Column(type: Types::DATETIME_MUTABLE, nullable: true)]
private ?\DateTimeInterface $created_at = null;
public function getId(): ?int
{
return $this->id;
}
public function setId(int $id): static
{
$this->id = $id;
return $this;
}
public function getUsername(): ?string
{
return $this->username;
}
public function setUsername(?string $username): static
{
$this->username = $username;
return $this;
}
public function getName(): ?string
{
return $this->name;
}
public function setName(?string $name): static
{
$this->name = $name;
return $this;
}
public function getSurname(): ?string
{
return $this->surname;
}
public function setSurname(?string $surname): static
{
$this->surname = $surname;
return $this;
}
public function getEmail(): ?string
{
return $this->email;
}
public function setEmail(?string $email): static
{
$this->email = $email;
return $this;
}
public function getPassword(): ?string
{
return $this->password;
}
public function setPassword(?string $password): static
{
$this->password = $password;
return $this;
}
public function getRole(): ?string
{
return $this->role;
}
public function setRole(?string $role): static
{
$this->role = $role;
return $this;
}
public function getCreatedAt(): ?\DateTimeInterface
{
return $this->created_at;
}
public function setCreatedAt(?\DateTimeInterface $created_at): static
{
$this->created_at = $created_at;
return $this;
}
}
```
`Task` Entity:
```php
namespace App\Entity;
use ApiPlatform\Metadata\ApiResource;
use App\Repository\TaskRepository;
use Doctrine\DBAL\Types\Types;
use Doctrine\ORM\Mapping as ORM;
#[ORM\Entity(repositoryClass: TaskRepository::class)]
#[ApiResource]
class Task
{
#[ORM\Id]
#[ORM\GeneratedValue]
#[ORM\Column]
private ?int $id = null;
#[ORM\Column(length: 255)]
private ?string $title = null;
#[ORM\Column(type: Types::TEXT, nullable: true)]
private ?string $description = null;
#[ORM\Column(length: 100, nullable: true)]
private ?string $status = null;
#[ORM\Column(type: Types::DATETIME_MUTABLE, nullable: true)]
private ?\DateTimeInterface $createdAt = null;
#[ORM\Column(type: Types::DATETIME_MUTABLE, nullable: true)]
private ?\DateTimeInterface $updatedAt = null;
#[ORM\ManyToOne]
#[ORM\JoinColumn(nullable: false)]
private ?User $user = null;
public function getId(): ?int
{
return $this->id;
}
public function setId(int $id): static
{
$this->id = $id;
return $this;
}
public function getTitle(): ?string
{
return $this->title;
}
public function setTitle(string $title): static
{
$this->title = $title;
return $this;
}
public function getDescription(): ?string
{
return $this->description;
}
public function setDescription(?string $description): static
{
$this->description = $description;
return $this;
}
public function getStatus(): ?string
{
return $this->status;
}
public function setStatus(?string $status): static
{
$this->status = $status;
return $this;
}
public function getCreatedAt(): ?\DateTimeInterface
{
return $this->createdAt;
}
public function setCreatedAt(?\DateTimeInterface $createdAt): static
{
$this->createdAt = $createdAt;
return $this;
}
public function getUpdatedAt(): ?\DateTimeInterface
{
return $this->updatedAt;
}
public function setUpdatedAt(?\DateTimeInterface $updatedAt): static
{
$this->updatedAt = $updatedAt;
return $this;
}
public function getUser(): ?User
{
return $this->user;
}
public function setUser(?User $user): static
{
$this->user = $user;
return $this;
}
}
```
|
Doctrine EntityManager persist() fails on new entity with relationships |
|php|symfony|doctrine-orm|doctrine| |
Is there a way to automatically generate coordinate points for the contours of islands using an existing map?
Is there a way to generate these coordinates automatically, for instance using the points of an OpenStreetMap map (or another source) directly, say with a certain number of data points?
There are some websites like https://geojson.io/#map=10.01/15.3855/-61.289 where we can build a polygon by clicking manually, but it is clearly too time-consuming for the number of islands I have to do...
I have already tried searching for the data on the net, but what I find is usually of bad quality, so I want to use the data from the maps directly.
Thanks a lot!
|
null |
When I try to show that 3 does not divide 4 in LaTeX like this, `3 \nmid 4`, it does not display what I need. However, `\mid` works correctly.
[mid working correctly](https://i.stack.imgur.com/QNRz2.png)
[nmid not working](https://i.stack.imgur.com/ndpKt.png)
I did a Google search to see whether there is a configuration to enable or disable this, but didn't find anything.
My configurations:
```
var mathField = MQ.MathField(mathFieldSpan, {
spaceBehavesLikeTab: true, // configurable
// autoCommands: 'pi theta sqrt sum',
autoOperatorNames: 'sin cos',
substituteTextarea: function() {
return document.createElement('textarea');
},
handlers: {
edit: function () { // useful event handlers
latexSpan.textContent = mathField.latex(); // simple API
},
enter: function () {
// addMathField();
},
}
});
```
|
Why MathQuill does not support \nmid LaTeX? |
|javascript|web|latex|mathquill| |
null |
You cannot append an array in a single call, but you can use `forEach` or a loop to pass it:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
let params = new URLSearchParams();
[2, 3, 5, 7, 11, 13, 17, 19, 'etc'].forEach(item => params.append("primes", item));
console.log(params.toString());
<!-- end snippet -->
Or, you can even polyfill this:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
URLSearchParams.prototype.appendArray = function(array) {
  array.forEach(item => this.append("primes", item))
}
let params = new URLSearchParams();
params.appendArray([2, 3, 5, 7, 11, 13, 17, 19, 'etc']);
console.log(params.toString());
<!-- end snippet -->
You see, I added an `appendArray` `function` to the `URLSearchParams` prototype, so from now on any object instantiated as a `URLSearchParams` object will have an `appendArray` `function` that expects an array and appends each of its items. But, if you polyfill, make sure you do so before you use the method... :) |
Migrate .gitlab-ci.yml from Terraform to OpenTofu |
|terraform|gitlab-ci|opentofu| |
You could make use of `rowwise` if you prefer tidyverse syntax:
``` r
library(tidyverse)
df %>%
rename_with(~gsub('Pop', '', .x)) %>%
rowwise() %>%
mutate(Partyrel = sum(c_across(-(1:2))[match(PartyA, names(.)) - 2])) %>%
mutate(Partyrel = if(PartyA == PartyB) { Partyrel } else {
sum(c_across(-(1:2))[match(PartyB, names(.)) - 2]) + Partyrel}) %>%
ungroup()
#> # A tibble: 5 x 8
#> PartyA PartyB Christian Muslim Jewish Sikh Buddhist Partyrel
#> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
#> 1 Christian Jewish 12 71 9 0 1 21
#> 2 Muslim Muslim 1 93 2 0 0 93
#> 3 Muslim Christian 74 5 12 1 2 79
#> 4 Jewish Muslim 14 86 0 0 0 86
#> 5 Sikh Buddhist 17 13 4 10 45 55
``` |
null |
I am trying to resize iframes based on content changes for cross-origin content. I am also trying to open someone else's website in my iframe, so I don't have access to their site. Is there a way to do so?
```
<script type="application/javascript">
function resizeIFrameToFitContent( iFrame ) {
iFrame.width = iFrame.contentWindow.document.body.scrollWidth;
iFrame.height = iFrame.contentWindow.document.body.scrollHeight;
}
window.addEventListener('DOMContentLoaded', function(e) {
var iFrame = document.getElementById( 'iFrame1' );
resizeIFrameToFitContent( iFrame );
// or, to resize all iframes:
var iframes = document.querySelectorAll("iframe");
for( var i = 0; i < iframes.length; i++) {
resizeIFrameToFitContent( iframes[i] );
}
} );
</script>
<iframe src="usagelogs/default.aspx" id="iFrame1"></iframe>
``` |
Resizing iframes based on content changes for cross-origin content? |
|javascript|html|jquery|css| |
null |
Two things:
1. Process *all* events in the event queue every frame, not just one:
```c++
if( SDL_PollEvent( &e ) )
```
...should be:
```c++
while( SDL_PollEvent( &e ) )
```
2. Maintain a SDL_Keycode -> bool map of keydown states and only apply the movement logic once per frame:
```c++
std::map< SDL_Keycode, bool > keyMap;
...
while( SDL_PollEvent( &e ) )
{
if( e.type == SDL_QUIT )
{
quit = true;
}
else if( e.type == SDL_KEYDOWN )
{
keyMap[ e.key.keysym.sym ] = true;
}
else if( e.type == SDL_KEYUP )
{
keyMap[ e.key.keysym.sym ] = false;
}
}
if( keyMap[ SDLK_RIGHT ] )
squareX += 1;
if( keyMap[ SDLK_LEFT ] )
squareX -= 1;
if( keyMap[ SDLK_UP ] )
squareY -= 1;
if( keyMap[ SDLK_DOWN ] )
squareY += 1;
```
All together:
```c++
#include <SDL.h>
#include <iostream>
#include <map>
SDL_Window* mainWindow = NULL;
SDL_Surface* mainWindowSurf = NULL;
SDL_Renderer* mainRenderer = NULL; // created in mainGameInit()
int mainScreenWidth = 1280;
int mainScreenHeight = 720;
int squareX = 50;
int squareY = 50;
const int squareWidth = 50;
const int squareHeight = 100;
const int gravityAffect = 5;
bool mainGameInit();
void mainGameExit();
// Note to self - From here and to the next note comment, everything is functions.
bool mainGameInit()
{
bool mainGameInitSuccess = true;
if( SDL_Init( SDL_INIT_VIDEO ) < 0 )
{
std::cout << "SDL could not INIT. This time around, the window could not be created. For "
"more details, SDL_ERROR: "
<< SDL_GetError() << std::endl;
mainGameInitSuccess = false;
}
else
{
mainWindow = SDL_CreateWindow(
"Wednesday Night Dash",
SDL_WINDOWPOS_UNDEFINED,
SDL_WINDOWPOS_UNDEFINED,
mainScreenWidth,
mainScreenHeight,
SDL_WINDOW_SHOWN );
if( mainWindow == NULL )
{
std::cout << "SDL_Window could not be created! For more information, SDL_ERROR: "
<< SDL_GetError() << std::endl;
mainGameInitSuccess = false;
}
else
{
// Create renderer after window creation
mainRenderer = SDL_CreateRenderer( mainWindow, -1, SDL_RENDERER_PRESENTVSYNC );
if( mainRenderer == NULL )
{
std::cout << "SDL_Renderer could not be created! For more information, SDL_ERROR: "
<< SDL_GetError() << std::endl;
mainGameInitSuccess = false;
}
}
}
return mainGameInitSuccess;
}
void mainGameExit()
{
SDL_DestroyWindow( mainWindow );
mainWindow = NULL;
SDL_Quit();
}
int main( int argc, char* args[] )
{
if( !mainGameInit() )
{
std::cout << "Uh oh! SDL (The game code) Failed to init. For more information, SDL_ERROR: "
<< SDL_GetError() << std::endl;
}
else
{
std::map< SDL_Keycode, bool > keyMap;
SDL_Event e;
bool quit = false;
while( !quit )
{
while( SDL_PollEvent( &e ) )
{
if( e.type == SDL_QUIT )
{
quit = true;
}
else if( e.type == SDL_KEYDOWN )
{
keyMap[ e.key.keysym.sym ] = true;
}
else if( e.type == SDL_KEYUP )
{
keyMap[ e.key.keysym.sym ] = false;
}
}
if( keyMap[ SDLK_RIGHT ] )
squareX += 1;
if( keyMap[ SDLK_LEFT ] )
squareX -= 1;
if( keyMap[ SDLK_UP ] )
squareY -= 1;
if( keyMap[ SDLK_DOWN ] )
squareY += 1;
// Clear screen
SDL_SetRenderDrawColor( mainRenderer, 0, 0, 0, 255 );
SDL_RenderClear( mainRenderer );
SDL_Rect squareRect = { squareX, squareY, squareWidth, squareHeight };
SDL_SetRenderDrawColor( mainRenderer, 255, 0, 0, 255 );
SDL_RenderFillRect( mainRenderer, &squareRect );
// Update screen
SDL_RenderPresent( mainRenderer );
}
}
mainGameExit();
return 0;
}
``` |
I am about to publish a PowerBI dashboard onto PowerBI Service to be shared across the company. Currently, the dashboard consists of 6 tabs.
There are 3 groups of people who will have access to it, but RLS is not applied since the access will be at the page level rather than the filter level. For example, salespeople will only have access to the sales-related tabs (2/6 tabs), while management will have full access to all 6 tabs.
That being said, I may need to create 3 different dashboards based on this need, which causes redundancy.
However, when I add people to workspaces, do I also need to create 3 workspaces, each containing the specific dashboard for that group?
Thank you! |
PowerBI Service Assign Workspace for different people |
|powerbi|powerbi-desktop| |
I'm trying to add a JW Player v1.1.3 instance to my React v16.8.1 app.
I ran the command `yarn add @jwplayer/jwplayer-react` to install the player.
After trying to import the player with:
```
import JWPlayer from "@jwplayer/jwplayer-react";
```
I got the following error message:
```
./node_modules/@jwplayer/jwplayer-react/lib/jwplayer-react.js
Module parse failed: Unexpected token (2:9801)
You may need an appropriate loader to handle this file type.
```
I reformatted the code of the file (where the error occurs) `./node_modules/@jwplayer/jwplayer-react/lib/jwplayer-react.js` to get more details about where exactly the error is, then I got this:
```
Failed to compile.
./node_modules/@jwplayer/jwplayer-react/lib/jwplayer-react.js
Module parse failed: Unexpected token (641:16)
You may need an appropriate loader to handle this file type.
| i().has(r) && (t[r] = e[r]);
| }),
| { ...e.config, ...t, isReactComponent: !0 }
| );
| })(t)),
@ ./app/front/react/containers/admin/email_template/edit.jsx 4:0-48
@ ./app/front/packs/react/admin.js
@ multi (webpack)-dev-server/client?http://localhost:3035 ./app/front/packs/react/admin.js
```
So the problem (as I think) is exactly in this line (641):
```
| { ...e.config, ...t, isReactComponent: !0 }
```
NB: I can't update the version of `react` |
How can I group data from csv by ID and other specific columns and perform operations like sum using pandas |
|python|pandas|csv| |
null |
Are these the range operator and an indexer? I have never seen them used like this: `[..h]`.
```
public static void Main(string[] args)
{
int[] a ={1,2,3,4,5,6,7,8,9};
HashSet<int> h=new(a);
/// do somethings on HashSet
WriteArray([..h]);
}
static void WriteArray(int[] a)
{
foreach(var n in a)
{
Console.Write($"{n} ");
}
}
```
What operators are used in `[..h]`?
Can you recommend a reference to study these operators or the methods used? |
Get an Array from HashSet in c# with operator [..] |
|c#|range|operator-keyword|hashset|indexer| |
null |
|visual-studio-code|jupyter-notebook|color-thief| |
There is a trick you can use to get the shortcut working. Add a custom keyboard shortcut with the keybinding `Shift+Ctrl+Alt+Down`. This disables the GNOME shortcut that uses the same keybinding. After saving the keybinding, remove it again. The VSCode bindings will now work, since the GNOME binding stays disabled.
[![enter image description here][1]][1]
[1]: https://i.stack.imgur.com/FOSBZ.png |
> Regex Match a pattern that only contains one set of numerals, and not more
I would start by writing a _grammar_ for the "forgiving parser" you are coding. It is not clear from your examples, for instance, whether `<2112` is acceptable. Must the brackets be paired? Ditto for quotes, etc.
Assuming that brackets and quotes do not need to be paired, you might have the following grammar:
##### _sign_
`+` | `-`
##### _digit_
`0` | `1` | `2` | `3` | `4` | `5` | `6` | `7` | `8` | `9`
##### _integer_
[ _sign_ ] _digit_ { _digit_ }
##### _prefix_
_any-sequence-without-a-sign-or-digit_
[ _prefix_ ] { _sign_ } _any-sequence-without-a-sign-or-digit_
##### _suffix_
_any-sequence-without-a-digit_
##### _forgiving-integer_
[ _prefix_ ] _integer_ [ _suffix_ ]
Notes:
- Items within square brackets are optional. They may appear either 0 or 1 time.
- Items within curly braces are optional. They may appear 0 or more times.
- Items separated by `|` are alternatives from which 1 must be chosen
- Items on separate lines are alternatives from which 1 must be chosen
With a grammar in hand, it should be much easier to figure out an appropriate regular expression.
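For illustration (the solution below deliberately avoids regexes), the grammar translates to roughly this pattern, here in Python flavor: a lazy non-digit prefix, one signed integer, and a digit-free suffix:

```python
import re

# Optional non-digit prefix, exactly one signed integer, optional digit-free suffix.
FORGIVING_INT = re.compile(r'^[^0-9]*?([+-]?[0-9]+)[^0-9]*$')

def forgiving_parse(s):
    m = FORGIVING_INT.match(s)
    if m is None:
        raise ValueError("validation failed")
    return int(m.group(1))

print(forgiving_parse("<2112"))              # 2112
print(forgiving_parse("it is -40 degrees"))  # -40
```

A second group of digits anywhere in the string (e.g. `2112.0`) makes the match fail, mirroring the parser's behavior.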
### Program
My solution, however, is to avoid the inefficiencies of `std::regex` in favor of coding a simple "parser."
Function `validate_integer`, in the following program, implements the foregoing grammar. When `validate_integer` succeeds, it returns the integer it parsed. When it fails, it throws a `std::runtime_error` exception.
Because `validate_integer` uses `std::from_chars` to convert the integer sequence, it will not convert the test case `2112.0` from the OP. The trailing `.0` is treated as a second integer. All the other test cases work as expected.
The only tricky part is the initial loop that skips over non-numeric characters. When it encounters a sign (`+` or `-`), it has to check the following character to decide whether the sign should be interpreted as the start of a numeric sequence.
```lang-cpp
// main.cpp
#include <cctype>
#include <charconv>
#include <iomanip>
#include <iostream>
#include <stdexcept>
#include <string>
#include <string_view>
bool is_digit(unsigned const char c) {
    return std::isdigit(c);
}

bool is_sign(const char c) {
    return c == '+' || c == '-';
}

int validate_integer(std::string const& s)
{
    enum : std::string::size_type { one = 1u };
    std::string::size_type i{};

    // skip over prefix; a sign only starts the integer
    // when it is immediately followed by a digit
    while (i < s.length())
    {
        if (is_digit(s[i]) || (is_sign(s[i])
            && i + one < s.length()
            && is_digit(s[i + one])))
            break;
        ++i;
    }

    // throw if nothing remains
    if (i == s.length())
        throw std::runtime_error("validation failed");

    // parse integer
    // due to foregoing checks, this cannot fail
    auto const first{ &s[i] };
    auto const last{ &s[s.length() - one] + one };
    int n;
    auto [end, ec] { std::from_chars(first, last, n) };
    i += end - first;

    // skip over suffix
    while (i < s.length() && !is_digit(s[i]))
        ++i;

    // throw if anything remains
    if (i != s.length())
        throw std::runtime_error("validation failed");

    return n;
}

void test(std::ostream& log, bool const expect, std::string s)
{
    static_cast<void>(expect);  // documents the expected outcome; not otherwise used
    std::streamsize w{ 46 };
    try {
        auto n = validate_integer(s);
        log << std::setw(w) << s << " : " << n << '\n';
    }
    catch (std::exception const& e) {
        log << std::setw(w) << s << " : " << e.what() << '\n';
    }
}

int main()
{
    auto& log{ std::cout };
    log << std::left;

    test(log, true, "<2112>");
    test(log, true, "[(2112)]");
    test(log, true, "\"2112, \"");
    test(log, true, "-2112");
    test(log, true, ".2112");
    test(log, true, "<span style = \"numeral\">2112</span>");
    log.put('\n');

    test(log, false, "2112.0");
    test(log, false, "");
    test(log, false, "21,12");
    test(log, false, "\"21\",\"12, \"");
    test(log, false, "<span style = \"font - size:18.0pt\">2112</span>");
    test(log, false, "21TwentyOne12");
    log.put('\n');

    return 0;
}
```
### Output
```lang-none
<2112> : 2112
[(2112)] : 2112
"2112, " : 2112
-2112 : -2112
.2112 : 2112
<span style = "numeral">2112</span> : 2112
2112.0 : validation failed
: validation failed
21,12 : validation failed
"21","12, " : validation failed
<span style = "font - size:18.0pt">2112</span> : validation failed
21TwentyOne12 : validation failed
```
|