Find the largest number for array after operation |
|java|algorithm| |
```
static ArrayList<Integer> LinearSearch(int[] arr, int index, int target) {
    ArrayList<Integer> iList = new ArrayList<>();
    if (index == arr.length) {
        return iList;
    }
    if (arr[index] == target) {
        iList.add(index);
    }
    ArrayList<Integer> temp = LinearSearch(arr, index + 1, target);
    iList.addAll(temp);
    return iList;
}
```
The code above is my instructor's code. I don't understand the `temp` part. How are the indexes added to `temp`? For example, with `arr = 1,2,3,4,3` I expected it to return `4,2`, but it returns `2,4`.
How is this possible? I can't understand it. What is going on with the call stack? Please help me understand.
Below is my own code, which I do understand: here we collect the items. But I can't understand the code above.
```
static ArrayList<Integer> LinearSearch(int[] arr, int index, int target) {
    ArrayList<Integer> iList = new ArrayList<>();
    if (index == arr.length) {
        return iList;
    }
    iList = LinearSearch(arr, index + 1, target);
    if (arr[index] == target) {
        iList.addFirst(index);
    }
    return iList;
}
``` |
Advanced SQL will include Basic and **Transact** SQL
Part 1: Basic Selection Queries
Part 2: Aggregate Functions SQL
Part 3: String Functions SQL
Part 4: Conditional Queries
Part 5: Combining Data Queries
Part 6: Window Functions
Part 7: Ranking Functions
Part 8: Data Insertion Queries
Part 9: Data Update Queries
Part 10: Common Table Expression
Part 11: User Defined Functions
Part 12: Table Functions
Part 13: SQL SubQueries
Part 14: SQL Joins
Part 15: SQL Wildcards
Part 16: SQL Range Operators
Advanced SQL will also include Stored Procedures, Triggers, Views
Part 1: Variables
Part 2: Input Parameters
Part 3: Output Parameters
Part 4: Conditional Statements
Part 5: Table Valued Parameters
Part 6: Loop Statements
SQL with a Programming Language like Java
Part 1: Java Jdbc Basics
Part 2: Execute SQL Queries with Java
Part 3: Call Stored Procedures in Java
Part 4: Batch Execution and Bulk Copy
Taken from the [HelperCodes SQL Sheets][1] Blog
[1]: https://helpercodes.com/
|
I am trying to get the application context for showToast but I am not able to get it. Here is my code; kindly help.
```
package com.coding.APPNAVIGATION.MenuAndNavigationDrawer.Utils

import android.app.Application
import android.content.Context

open class MainApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        MainApplication.appContext = applicationContext
    }

    companion object {
        lateinit var appContext: Context
    }
}
```
```
package com.coding.APPNAVIGATION.MenuAndNavigationDrawer.Utils

import android.widget.Toast

object Utils {
    fun showToast(message: String) {
        Toast.makeText(MainApplication.appContext, message, Toast.LENGTH_LONG).show()
    }
}
```
I want to get the application context so that I can run showToast anywhere within the project. |
lateinit property appContext has not been initialized to get Application Context for ShowToast |
|android|kotlin|android-studio|kotlin-multiplatform|gradle-kotlin-dsl| |
I'm coding an app that opens a JSON file and, according to its content, fills the interface with a grid of squares. I made the base UI in Qt Designer and it looks like this.
[![enter image description here][1]][1]
Everything worked well. Then I wanted to make an improvement: adjust the window width according to the content. After that the UI broke. It doesn't adjust; it even shrinks in width.
[![enter image description here][2]][2]
While debugging I found out that this is because of the QScrollArea. If I place the grid of squares outside this widget, the window adjusts fine. But I can't find out/understand why the QScrollArea doesn't adjust the window size.
This is ui.py from Qt Designer:
from PyQt6 import QtCore, QtGui, QtWidgets


class Ui_RAL_widget(object):
    def setupUi(self, RAL_widget):
        RAL_widget.setObjectName("RAL_widget")
        RAL_widget.resize(800, 600)
        sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Policy.Expanding, QtWidgets.QSizePolicy.Policy.Expanding)
        sizePolicy.setHorizontalStretch(0)
        sizePolicy.setVerticalStretch(0)
        sizePolicy.setHeightForWidth(RAL_widget.sizePolicy().hasHeightForWidth())
        RAL_widget.setSizePolicy(sizePolicy)
        RAL_widget.setMinimumSize(QtCore.QSize(0, 0))
        RAL_widget.setMaximumSize(QtCore.QSize(800, 600))
        self.horizontalLayout = QtWidgets.QHBoxLayout(RAL_widget)
        self.horizontalLayout.setContentsMargins(-1, -1, 0, -1)
        self.horizontalLayout.setObjectName("horizontalLayout")
        self.RAL_tab = QtWidgets.QTabWidget(parent=RAL_widget)
        self.RAL_tab.setObjectName("RAL_tab")
        self.Classic_tab = QtWidgets.QWidget()
        sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Policy.Expanding, QtWidgets.QSizePolicy.Policy.Expanding)
        sizePolicy.setHorizontalStretch(0)
        sizePolicy.setVerticalStretch(0)
        sizePolicy.setHeightForWidth(self.Classic_tab.sizePolicy().hasHeightForWidth())
        self.Classic_tab.setSizePolicy(sizePolicy)
        self.Classic_tab.setObjectName("Classic_tab")
        self.verticalLayout = QtWidgets.QVBoxLayout(self.Classic_tab)
        self.verticalLayout.setSizeConstraint(QtWidgets.QLayout.SizeConstraint.SetNoConstraint)
        self.verticalLayout.setContentsMargins(0, 0, 0, 0)
        self.verticalLayout.setObjectName("verticalLayout")
        self.classic_tab_scroll_area = QtWidgets.QScrollArea(parent=self.Classic_tab)
        sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Policy.Expanding, QtWidgets.QSizePolicy.Policy.Expanding)
        sizePolicy.setHorizontalStretch(0)
        sizePolicy.setVerticalStretch(0)
        sizePolicy.setHeightForWidth(self.classic_tab_scroll_area.sizePolicy().hasHeightForWidth())
        self.classic_tab_scroll_area.setSizePolicy(sizePolicy)
        self.classic_tab_scroll_area.setAutoFillBackground(False)
        self.classic_tab_scroll_area.setFrameShape(QtWidgets.QFrame.Shape.NoFrame)
        self.classic_tab_scroll_area.setFrameShadow(QtWidgets.QFrame.Shadow.Sunken)
        self.classic_tab_scroll_area.setSizeAdjustPolicy(QtWidgets.QAbstractScrollArea.SizeAdjustPolicy.AdjustToContents)
        self.classic_tab_scroll_area.setWidgetResizable(True)
        self.classic_tab_scroll_area.setAlignment(QtCore.Qt.AlignmentFlag.AlignCenter)
        self.classic_tab_scroll_area.setObjectName("classic_tab_scroll_area")
        self.classic_scroll_widget = QtWidgets.QWidget()
        self.classic_scroll_widget.setGeometry(QtCore.QRect(0, 0, 785, 556))
        sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Policy.Expanding, QtWidgets.QSizePolicy.Policy.Expanding)
        sizePolicy.setHorizontalStretch(0)
        sizePolicy.setVerticalStretch(0)
        sizePolicy.setHeightForWidth(self.classic_scroll_widget.sizePolicy().hasHeightForWidth())
        self.classic_scroll_widget.setSizePolicy(sizePolicy)
        self.classic_scroll_widget.setObjectName("classic_scroll_widget")
        self.classic_grid = QtWidgets.QGridLayout(self.classic_scroll_widget)
        self.classic_grid.setSizeConstraint(QtWidgets.QLayout.SizeConstraint.SetNoConstraint)
        self.classic_grid.setObjectName("classic_grid")
        self.classic_tab_scroll_area.setWidget(self.classic_scroll_widget)
        self.verticalLayout.addWidget(self.classic_tab_scroll_area)
        self.RAL_tab.addTab(self.Classic_tab, "")
        self.horizontalLayout.addWidget(self.RAL_tab)
        self.retranslateUi(RAL_widget)
        self.RAL_tab.setCurrentIndex(0)
        QtCore.QMetaObject.connectSlotsByName(RAL_widget)

    def retranslateUi(self, RAL_widget):
        _translate = QtCore.QCoreApplication.translate
        RAL_widget.setWindowTitle(_translate("RAL_widget", "RAL Pallete"))
        self.RAL_tab.setTabText(self.RAL_tab.indexOf(self.Classic_tab), _translate("RAL_widget", "Classic"))
And this is my main.py:
import json
import sys
from PyQt6 import uic
from PyQt6.QtWidgets import QApplication, QWidget, QLabel
from PyQt6.QtCore import Qt


class Window(QWidget):
    def __init__(self):
        super().__init__()
        # Load UI
        uic.loadUi('form.ui', self)
        self.fill_classic_pallete()
        # Adjust window width
        self.setFixedSize(self.sizeHint().width(), 600)
        # Debugging
        print(f"Size Hint: {self.sizeHint()}")
        print(f"Actual Size: {self.size()}")

    # Fill grid with labels according json data
    def fill_classic_pallete(self):
        with open("Ral_classic.json", "r", encoding="utf-8-sig") as ral_file:
            ral_data = ral_file.read()
        ral_classic = json.loads(ral_data)
        # Columns number in grid
        NUM_COLUMNS = 6
        # Create labels
        for index, ral in enumerate(ral_classic):
            label = QLabel(self.classic_scroll_widget)
            label.setObjectName(ral["RAL"].replace(" ", ""))
            row = index // NUM_COLUMNS
            col = index % NUM_COLUMNS
            label.setText(f"{ral['RAL']}\n{ral['English']}")
            label.setStyleSheet(f"background-color: {ral['HEX']}")
            label.setFixedSize(120, 120)
            label.setAlignment(Qt.AlignmentFlag.AlignCenter)
            self.classic_grid.addWidget(label, row, col)


app = QApplication(sys.argv)
window = Window()
window.show()
sys.exit(app.exec())
[1]: https://i.stack.imgur.com/KaxMe.jpg
[2]: https://i.stack.imgur.com/Jnebo.jpg |
Adjust window size according content in QScrollArea in PyQt6 |
|python-3.x|qt-designer|pyqt6| |
My answer is not exactly the solution you are looking for, but it solved my problem.
I wanted to do the same thing you want, because when I retrieved the URL of a video in MinIO, the URL with the Docker service name did not work in the local browser.
I could not find a way to make it localhost, but I realized the service name was only for Docker, and of course the URL did not work locally.
And the solution is to make the endpoint of the minioClient as below:
`endPoint: 'host.docker.internal'`
And the URL now works both for Docker and for the local machine.
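For reference, here is a minimal sketch of what that client configuration can look like with the minio npm package (the port, credentials, bucket and object names below are placeholders, not values from the original setup):
```javascript
const Minio = require('minio');

// Client configured with host.docker.internal so the URLs it produces
// resolve both from inside other containers and from the host browser.
const minioClient = new Minio.Client({
  endPoint: 'host.docker.internal', // instead of the docker-compose service name
  port: 9000,                       // placeholder: default MinIO port
  useSSL: false,
  accessKey: 'YOUR_ACCESS_KEY',     // placeholder
  secretKey: 'YOUR_SECRET_KEY'      // placeholder
});

// Example: generate a presigned GET URL for a video object (names are placeholders).
minioClient.presignedGetObject('videos', 'my-video.mp4', 24 * 60 * 60, (err, url) => {
  if (err) return console.error(err);
  console.log(url); // works from Docker containers and from the local browser
});
```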
|
You can create a global variable `let app = {...}` and store all program data in there (this is how I usually organize programs). Then, store the state of your items under `app.data.checkboxState` or somewhere like that.
Below is an example program showing how to apply that:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-js -->
const app = { // variable to hold info about program
data : {
checkboxState : {} // store checkbox state here
}
}
app.getState = function() { // method to get state and save it in memory
// get all elements with the class 'checkbox'
// you can use any class you like instead, just
// make sure all your checkbox elements have that class
let checkboxes = document.getElementsByClassName("checkbox");
for (let i=0; i<checkboxes.length; i++) { // loop through them
let checkbox = checkboxes[i];
app.data.checkboxState[checkbox.id] = checkbox.checked; // save this checkbox's state
}
}
// helper function to get state of a
// certain element by its id
app.getElementState = function(elementId) {
return app.data.checkboxState[elementId]
}
// method to show user the state of memory.
// you can replace it with whatever you want,
// just remember to use app.getElementState(idOfElementYouWantToCheckStateOf)
// to get the state of an element
app.showState = function() {
// generate message with data
let msg = JSON.stringify(app.data.checkboxState);
// you don't have to understand this, it's just
// formatting to make the message easier to read
let formattedMsg = msg.replaceAll('{', '')
.replaceAll('}','')
.replaceAll('true', ' checked ')
.replaceAll('false', ' unchecked ')
.replaceAll(':', ' : ')
.replaceAll('"', '')
.replaceAll(',', '\n ');
// open window with message
alert("App state is: \n " + formattedMsg);
}
app.run = function() { // method to execute program (save & show message)
app.getState(); // save state
app.showState(); // show message
}
<!-- language: lang-css -->
html, body {
font-family:arial;
}
h2 {
color:turquoise;
}
button {
color:black;
background:turquoise;
border:none;
padding:5px 10px;
border-radius:2px;
width:100px;
cursor:pointer;
margin:5px 0px;
}
button:hover {
box-shadow: 1px 1px 4px 0px black;
}
<!-- language: lang-html -->
<!DOCTYPE html>
<html>
<head>
</head>
<body>
<h2> Check options below: </h2>
<p>
<input class="checkbox" id="op1" type="checkbox"></input><span>Option 1</span><br>
<input class="checkbox" id="op2" type="checkbox"></input><span>Option 2</span><br>
<input class="checkbox" id="op3" type="checkbox"></input><span>Option 3</span><br>
<input class="checkbox" id="op4" type="checkbox"></input><span>Option 4</span><br>
<input class="checkbox" id="op5" type="checkbox"></input><span>Option 5</span><br>
</p>
<button onclick="app.run()">OK</button><br>
<button onclick="app.getState()">Save state</button><br>
<button onclick="app.showState()">Show state</button><br>
</body>
</html>
<!-- end snippet -->
Click "Run code snippet" (above) to run the code. If you click "OK", the state will be saved and a message will be shown showing the saved data. Click "Save state" to save without showing a message, and "Show state" to show the message without saving. Notice how when you click "Show state" without clicking "Save state" first, the message is not updated (even if you check or uncheck a box)
Just a reminder-- for such a small program, it is probably not necessary to do this and might just make it more complicated, but when you create larger programs, organizing your code like this really makes it easier to maintain :).
|
The official Interactive Brokers API is only offered through their GitHub site and not the Python Package Index (PyPI) because it's distributed under a different license. You can, however, build a wheel from the provided source code and then install the wheel. These are the steps; follow them precisely for the most up-to-date TWS API version on a Windows machine, since I found the latest answer very confusing:
1) Download "API Latest" from http://interactivebrokers.github.io/
2) Unzip or install (if its a .msi file) the download.
3) Go to C:/tws-api/source/pythonclient/ in command prompt
4) Install wheel package with: python3 -m pip install wheel
5) Build a wheel with: python3 setup.py bdist_wheel
6) Install wheel with: python3 -m pip install --user --upgrade dist/ibapi-10.19.2-py3-none-any.whl
7) Check install with: python -m pip show ibapi |
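As a quick follow-up, here is a minimal smoke test (a sketch, assuming the default TWS paper-trading port 7497; adjust the host, port and clientId to your own setup) to confirm that the freshly installed `ibapi` wheel imports and can open a connection:
```python
# Minimal connectivity check for the ibapi wheel built above.
from ibapi.client import EClient
from ibapi.wrapper import EWrapper

class App(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)

app = App()
app.connect("127.0.0.1", 7497, clientId=0)  # 7497 = TWS paper-trading default (assumption)
print("connected:", app.isConnected())
app.disconnect()
```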
If you want a dark border followed by a lighter border, then
use `static let dark_blue_2 = Color.blue.opacity(0.3)` and
add `.padding(4)` after your `.stroke(ThemeColors.dark_blue_2, lineWidth: 8)`
to adjust the location of the inner lighter border as shown in the example code:
struct EventMatcherCardBorder<Content: View>: View {
let content: Content
let CORNER_RADIUS: CGFloat = 50
init(@ViewBuilder content: () -> Content) {
self.content = content()
}
var body: some View {
VStack{
VStack{
VStack {
VStack{
Spacer().frame(height:5)
// Image("auth_icon")
Image(systemName: "house") // <-- for testing
.resizable()
.aspectRatio(contentMode: .fit)
.frame(height: 50)
content
.background(Color.white)
.padding(10)
}
.frame(maxWidth: .infinity, maxHeight: .infinity)
}
.padding(2)
.overlay(
RoundedRectangle(cornerRadius: CORNER_RADIUS)
.stroke(ThemeColors.dark_blue_2, lineWidth: 8)
.padding(4) // <-- here, adjust as required
)
}
.padding(1)
.overlay(
RoundedRectangle(cornerRadius: CORNER_RADIUS)
.stroke(ThemeColors.dark_blue_3, lineWidth: 6)
)
}.padding(15)
}
}
struct ThemeColors {
static let dark_blue_2 = Color.blue.opacity(0.3) // <-- here, adjust as required
static let dark_blue_3 = Color.red
}
|
In Svelte I had the same issue and solved it by destroying the chart with onDestroy:
import { onDestroy } from 'svelte';

onDestroy(() => {
    chart.destroy();
}); |
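For context, a minimal sketch of how this can fit into a component, assuming a Chart.js chart created in `onMount` (the `canvas` and `chart` names are placeholders):
```svelte
<script>
  import { onMount, onDestroy } from 'svelte';
  import Chart from 'chart.js/auto';

  let canvas;  // bound to the <canvas> element below
  let chart;   // Chart.js instance we need to clean up

  onMount(() => {
    chart = new Chart(canvas, {
      type: 'line',
      data: { labels: ['a', 'b', 'c'], datasets: [{ data: [1, 2, 3] }] }
    });
  });

  onDestroy(() => {
    // Destroying the chart releases the canvas so re-mounting the
    // component doesn't fail with "Canvas is already in use".
    if (chart) chart.destroy();
  });
</script>

<canvas bind:this={canvas}></canvas>
```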
I am trying to both trim and apply fade-in and fade-out effects to a number of MP3 files using ffmpeg: specifically, trimming a 400 ms length off the start, followed by a 400 ms fade up to full volume, then a 400 ms fade down at the end of the recording. My obstacle is making the fade out work correctly.
I have used ffprobe to determine the total length of the file, which is used in the $END variable. The following command appears to work correctly for the trim and the afade in:
`ffmpeg -ss 00:00.4 -i "$FILENAME" -af "afade=t=in:st=0:d=0.4" -c:a libmp3lame "$NEWFILENAME"`
However, adding the afade out breaks the command and results in an error with "Conversion failed!":
`-af "afade=t=in:st=0:d=0.4,afade=t=out:st=$END:d=0.4"`
A typical run follows, with the error details:
```
type [mp3 @ 0x557572b829c0] Estimating duration from bitrate, this may be inaccurate
Input #0, mp3, from 'kef.reports-know.how.mp3':
Metadata:
artist : KEF Reports
title : Know how
encoded_by : Layer 3, Codec
track : 617
Duration: 00:03:48.62, start: 0.000000, bitrate: 128 kb/s
Stream #0:0: Audio: mp3, 44100 Hz, stereo, fltp, 128 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (mp3 (mp3float) -> mp3 (libmp3lame))
Press [q] to stop, [?] for help
[afade @ 0x557572ba0880] Value 3.000000 for parameter 'type' out of range [0 - 1]
Last message repeated 1 times
[afade @ 0x557572ba0880] Error setting option type to value 03.
[Parsed_afade_0 @ 0x557572ba0780] Error applying options to the filter.
[AVFilterGraph @ 0x557572b864c0] Error initializing filter 'afade' with args 't=out:st=00:03:48.620:d=0.3'
Error reinitializing filters!
Failed to inject frame into filter network: Numerical result out of range
Error while processing the decoded data for stream #0:0
Conversion failed!
```
I am bewildered by "Value 3.000000 for parameter 'type' out of range [0 - 1]".
Thanks in advance,
I have experimented with the afade out parameters without success.
|
I've been reading about Linux's various `SCHED_*` scheduling algorithms (which can be specified for a given thread/process via a call to [sched_setscheduler()][1]), and there is a good amount of information about how they work, but I haven't been able to find much about why each of them was created.
For example there is [SCHED_DEADLINE][2], but it's not very clear under what circumstances it would be appropriate/beneficial to use it rather than one of the other algorithms.
Therefore, my question is: for each of these algorithms, what is a "canonical" example of an application for which that algorithm would be the preferred/most-appropriate one to use? i.e. what specific motivating use-cases did the designers of the algorithm have in mind when they created the algorithm?
For reference, I've listed the algorithms below, along with my current (partial) understanding of what they should be used for.
- `SCHED_OTHER` -- for all "normal" (non-time-sensitive) applications that need to share the CPU as fairly as possible, with generally-good response time and no chance of thread-starvation, regardless of how the threads behave. For example, a web browser would use this scheduler.
- `SCHED_FIFO` -- for time-sensitive (soft-real-time) applications that need low latency at all times if at all possible, even if it means lower-priority threads might starve in certain scenarios. For example, an audio driver might use this scheduler.
- `SCHED_RR` -- ???
- `SCHED_BATCH` -- for applications that don't care about timing/latency at all, but just need as many spare cycles as the system can afford when it's not doing anything else. For example, a bitcoin mining program might use this scheduler.
- `SCHED_ISO` -- ???
- `SCHED_IDLE` -- ???
- `SCHED_DEADLINE` -- ???
[1]: https://man7.org/linux/man-pages/man2/sched_setscheduler.2.html
[2]: https://en.wikipedia.org/wiki/SCHED_DEADLINE |
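To make the API in question concrete, here is a minimal, illustrative C sketch of selecting one of these policies with `sched_setscheduler()` (the priority value is just an example; real-time policies normally require elevated privileges):
```c
#include <sched.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    struct sched_param sp;
    memset(&sp, 0, sizeof sp);
    sp.sched_priority = 10;  /* valid for SCHED_FIFO/SCHED_RR (1..99); must be 0 for SCHED_OTHER */

    /* Ask the kernel to schedule this process under the FIFO real-time policy. */
    if (sched_setscheduler(0 /* this process */, SCHED_FIFO, &sp) == -1) {
        perror("sched_setscheduler");
        return 1;
    }
    printf("now running under SCHED_FIFO\n");
    return 0;
}
```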
I have a GitHub action that does a conditional execution of a job where it builds and runs the tests for a project only if there was a change in one specific directory of the code. Seems like a common thing to do. I used this as a reference: https://github.com/marketplace/actions/paths-changes-filter
**Problem**: it always shows it as "changed" because it is checking whether my branch is different from main, which it is. I want it to determine whether any files were changed in this specific directory (`libs/storage/Tsavorite/**`) and, if so, run all the steps to build and test it.
Also, I don't want to set it in the "paths" part at the top because I will have a job that runs every time in this same CI. So I have the core part running every time and the libs/storage/Tsavorite part only running if that has changed.
Looking at the output of the step where it checks for changes, it says:
"Change detection refs/remotes/origin/main..refs/remotes/origin/darrenge/TsavIntoCI", so that tells me it is comparing main to my branch.
**CI yml file:**
name: My CI

on:
  workflow_dispatch:
  push:
    paths-ignore:
      - 'website/**'
      - '*.md'
    branches:
      - darrenge/TsavIntoCI
  pull_request:
    branches:
      - main

env:
  DOTNET_SKIP_FIRST_TIME_EXPERIENCE: 1
  DOTNET_NOLOGO: true

jobs:
  changes:
    runs-on: windows-latest
    outputs:
      tsavorite: ${{ steps.filter.outputs.tsavorite }}
    steps:
      - uses: actions/checkout@v4
      - name: Check if Tsavorite changed
        uses: dorny/paths-filter@v3
        id: filter
        with:
          filters: |
            tsavorite:
              - 'libs/storage/Tsavorite/**'

  verify-changes:
    needs: changes
    runs-on: windows-latest
    if: needs.changes.outputs.tsavorite == 'true'
    steps:
      - name: CHANGES MADE
        run: echo "CHANGES were made"

  verify-NO-changes:
    needs: changes
    runs-on: windows-latest
    if: needs.changes.outputs.tsavorite == 'false'
    steps:
      - name: NO CHANGES MADE
        run: echo "NO CHANGES were made"
I am expecting it to only run the "changes were made" part of the code if a file changed by the commit that triggered this run is under libs/storage/Tsavorite/**. |
One of the components (PartSlot) in my project is rendered active or inactive based on one of the props it receives. This influences the component's style, animations and behaviour, and while the style and onClick props do accept new values when the isActive prop changes to `true`, the animations don't start working.
Animations work perfectly fine when PartSlots are rendered with isActive = true from the beginning, but components which were originally created with isActive = false won't work as intended.
What can I do to address this issue? Thanks in advance
Component's code:
```
function PartSlot({ isActive, iconAddress, computer, partClassName, partName, setOnClick }: PartSlotProps) {
return (
<motion.div
onClick={ isActive ? () => { setOnClick(partClassName) } : undefined }
initial = 'tap'
whileHover='hover'
whileTap= {isMobile ? 'tapMobile' : 'tap'}
style={ inactivePartSlotStyle }
>
<div className='IconMountLower'>
<div className='IconMountUpper' style = { inactiveIconHolderStyle }>
<motion.img
src={iconAddress}
variants={ isActive ? partSlotAnimationVariants : undefined }
transition={{
duration: 0.08,
ease: easeOut
}}
/>
</div>
</div>
{/* some more stuff here */}
</motion.div>
)
}
```
This is how components are created:
```
export const predefinedPartSlotList = (isActive: boolean, computer: Computer, onClickPartSlot: (partClassName: PartClassName) => void): Array<ReactElement> => {
return [
(<PartSlot key={1} isActive={isActive} iconAddress={MotherboardIcon} partClassName={PartClassName.Motherboard} computer={computer} setOnClick={onClickPartSlot} partName={"Материнская плата"}/>),
(<PartSlot key={2} isActive={isActive} iconAddress={FanIcon} partClassName={PartClassName.Case} computer={computer} setOnClick={onClickPartSlot} partName={"Корпус"}/>)
// More PartSlots here
]
};
``` |
What are the motivating use-cases for each of Linux's SCHED_* scheduling algorithms? |
|c|linux|scheduling| |
I have the following problem:
I have the URL to a picture 'HTTP://WWW.ROLANDSCHWAIGER.AT/DURCHBLICK.JPG' saved in my database. I think you see the problem here: The URL is in uppercase. Now I want to display the picture in the SAP GUI, but for that, I have to convert it to lowercase.
I have the following code from a tutorial, but without the conversion:
*&---------------------------------------------------------------------*
*& Report ZDURCHBLICK_24
*&---------------------------------------------------------------------*
*&
*&---------------------------------------------------------------------*
REPORT zdurchblick_24.
TABLES: zproject_24.
PARAMETERS pa_proj TYPE zproject_24-projekt OBLIGATORY.
DATA gs_project TYPE zproject_24.
*Controls
DATA: go_container TYPE REF TO cl_gui_custom_container.
DATA: go_picture TYPE REF TO cl_gui_picture.
START-OF-SELECTION.
WRITE: / 'Durchblick 3.0'.
SELECT SINGLE * FROM zproject_24035 INTO @gs_project WHERE projekt = @pa_proj.
WRITE gs_project.
IF sy-subrc = 0.
WRITE 'Wert im System gefunden'.
ELSE.
WRITE 'Kein Wert gefunden'.
ENDIF.
WRITE : /'Es wurden', sy-dbcnt, 'Werte gefunden'.
AT LINE-SELECTION.
zproject_24 = gs_project.
CALL SCREEN 9100.
*&---------------------------------------------------------------------*
*& Module CREATE_CONROLS OUTPUT
*&---------------------------------------------------------------------*
*&
*&---------------------------------------------------------------------*
MODULE create_conrols OUTPUT.
* SET PF-STATUS 'xxxxxxxx'.
* SET TITLEBAR 'xxx'.
IF go_container IS NOT BOUND.
CREATE OBJECT go_container
EXPORTING
container_name = 'BILD'.
CREATE OBJECT go_picture
EXPORTING
parent = go_container.
CALL METHOD go_picture->load_picture_from_url
EXPORTING
url = gs_project-bild.
ENDIF.
ENDMODULE. |
While both other answers show valid solutions, I believe both the question being asked and the two solutions somehow miss the point of using `Layouts`.
Basically, `Layouts` are made to bring together `Items` that have an implicit size (`implicitHeight`/`implicitWidth`). `Layout.preferredWidth`/`Layout.preferredHeight` are used to override these things in some rare situations, see below. The [Qt Quick Layouts - Basic Example](https://doc.qt.io/qt-6/qtquick-layouts-example.html) coming with Qt does not use `Layout.preferredWidth`/`Layout.preferredHeight` at all (!) and makes a really nice look, without contaminating the whole qml file with either `anchors` or `Layout` properties. It takes some learning to be able to do this oneself, but once you got used to it, `Layouts` are a way to define user interfaces more directly with less code.
What confused me the most at the beginning were the following things:
* `RowLayout`/`ColumnLayout`/`GridLayout` come with `Layout.fillWidth`/`Layout.fillHeight` set to `true`, so when putting these near an `Item`/`Rectangle` then the `Items`/`Rectangles` suddenly disappear, because they don't have set these values (i.e. they have `Layout.fillWidth`/`Layout.fillHeight` set to `false`).
* `Item`s/`Rectangle`s come with an `implicitHeight`/`implicitWidth` of `0`, meaning they don't really play nice side-by-side with Layouts. The best thing to do is to derive `implicitWidth`/`implicitHeight` from contained subitems, like a `RowLayout`/`ColumnLayout` itself does by default for its subitems.
* `Layout.preferredWidth`/`Layout.preferredHeight` can be used to overcome implicit sizes where they are already defined and cannot be set. One such place is directly in a layout item, another is e.g. a `Text` item which also doesn't let you override implicit sizes.
Considering these points, I would write the example in the following way. I removed unnecessary items to better illustrate when `Layout.fillWidth`/`Layout.fillHeight` are needed, and when it is better to use `implicitWidth` in my opinion.
import QtQuick 2.9
import QtQuick.Controls 2.0
import QtQuick.Layouts 1.3
ApplicationWindow {
width: 250
height: 150
visible: true
ColumnLayout {
spacing: 0
anchors.fill: parent
Rectangle {
implicitHeight: 40
Layout.fillHeight: true
Layout.fillWidth: true
color: "red"
}
RowLayout {
spacing: 0
Layout.preferredHeight: 20
Rectangle {
implicitWidth: 20
Layout.fillHeight: true
Layout.fillWidth: true
color: "darkGreen"
}
Rectangle {
implicitWidth: 80
Layout.fillHeight: true
Layout.fillWidth: true
color: "lightGreen"
}
}
RowLayout {
spacing: 0
Layout.preferredHeight: 40
Rectangle {
implicitWidth: 40
Layout.fillHeight: true
Layout.fillWidth: true
color: "darkBlue"
}
Rectangle {
implicitWidth: 20
Layout.fillHeight: true
Layout.fillWidth: true
color: "blue"
}
Rectangle {
implicitWidth: 40
Layout.fillHeight: true
Layout.fillWidth: true
color: "lightBlue"
}
}
}
} |
Framer Motion animation not working after state change |
|reactjs|frontend|framer-motion| |
Found the answer – you have to set up Ngrok first https://developer.atlassian.com/platform/forge/tunneling/?_ga=2.206551187.129054678.1710258805-1599633662.1710258805#providing-credentials-for-ngrok |
I am attempting to create an incremental counter in an ID column based on several conditions being met in 2 other columns and then "resetting" those conditions being met to ascertain the next increment of ID. This is time series data so order does matter (I have not included the time stamp column).
I will provide a toy dataset. I have 3 columns: Location, Activity and ID. At the moment my ID column is empty but I have populated it here with values to illustrate my conditioning. I want to initialize ID from 1 and then I want to check whether D occurs. This is my first condition. I then need to check whether A occurs after D and at that instance, A should ALSO be at Location 2. Once this along with the D condition is met, I want to increment ID by 1 in the following row. Then in the next row, I want to "reset" the conditions which have occured and then again check by row whether D occurs and then at the first instance of A at location 2 occuring after D, I want to increment the next line by 1. This repeats itself to the very end of the dataset.
```
df <- data.frame(
Location = c(2, 3, 3, 2, 1, 2, 2, 2, 1, 3, 3, 1, 2, 3, 2, 2, 1, 2, 3, 2, 1),
Activity = c("A", "B", "C", "D", "D", "B", "A", "A", "B", "A", "C", "D", "A", "B", "B", "D", "A", "D", "D", "A", "C"),
ID = c(1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 4)
)
# Print the dataframe to view its structure
print(df)
Location Activity ID
1 2 A 1
2 3 B 1
3 3 C 1
4 2 D 1
5 1 D 1
6 2 B 1
7 2 A 1
8 2 A 2
9 1 B 2
10 3 A 2
11 3 C 2
12 1 D 2
13 2 A 2
14 3 B 3
15 2 B 3
16 2 D 3
17 1 A 3
18 2 D 3
19 3 D 3
20 2 A 3
21 1 C 4
...
```
[](https://i.stack.imgur.com/qhQY6.png)
I have tried many iterations of some sort of conditional logic, but it appears to fail. My best attempt follows, but it does not match my expectations for the ID column.
```
# Function to increment ID based on conditions
increment_id_based_on_conditions <- function(df) {
df$ID[1] <- 1 # Initialize the first ID
# Initialize control variables
waiting_for_a <- FALSE
last_id <- 1
for (i in 1:nrow(df)) {
if (waiting_for_a && df$Activity[i] == "A" && df$Location[i] == 2) {
last_id <- last_id + 1 # Increment ID after conditions are met
waiting_for_a <- FALSE # Reset condition
} else if (df$Activity[i] == "D") {
waiting_for_a <- TRUE # Set condition to start waiting for "A" at Location 2
}
df$ID[i] <- last_id # Update ID column
}
df$ID <- c(df$ID[-1], NA) # Shift ID down by one row and make last ID NA
return(df)
}
# Apply the function to dataset
df_with_ids <- increment_id_based_on_conditions(df)
# View the updated dataset
print(df_with_ids)
Location Activity ID
1 2 A 1
2 3 B 1
3 3 C 1
4 2 D 1
5 1 D 1
6 2 B 2
7 2 A 2
8 2 A 2
9 1 B 2
10 3 A 2
11 3 C 2
12 1 D 3
13 2 A 3
14 3 B 3
15 2 B 3
16 2 D 3
17 1 A 3
18 2 D 3
19 3 D 4
20 2 A 4
21 1 C NA
``` |
|omnet++|inet| |
My data consists of instrument reads and instrument baselines. The baseline data is made up of isolated points and typically does not extend to the "ends" of the dataset (i.e. the first and last rows). Therefore I want to make a function that looks at the baseline column and copies the values of the earliest and latest baseline points to the very first/last rows of the dataset, so that I can interpolate between them with approx().
I have so far done this manually, as exemplified below, but I need to do this task over and over again, so I'd like to make it a function.
I checked other threads around here, and from what I read, I think it must have to do with the different ways to address columns and cells, especially when using self-made functions on data.frames.
Here is an example
```
#Make Two data frames: one holds instrument data, and one holds some
#baseline calibration we need to extend to the ends of the dataset
time<-seq(1,100,1)
data1<-rnorm(n = 100,mean = 7.5, sd = 1.1)
table1<-data.frame(cbind(time, data1))
time<-data.frame("time"=seq(2,96,4))
data2<-(0.32*rnorm(n = 24, mean = 1, sd = 1))
table2<-cbind(time,data2)
rm(time)
#now merge the two tables
newtable<-merge(table1, table2, by="time", all=T)
#remove junk
rm(data1, data2,table1,table2)
#copy 3rd column for later testing
newtable$data3<-newtable$data2
#the old manual way to fill the first row
newtable$data2[1]<-newtable$data2[min(which(!is.na(newtable$data2)))]
#the old manual way to fill the last row
newtable$data2[nrow(newtable)]<-newtable$data2[max(which(!is.na(newtable$data2)))]
#Now i try with a function
endfill<-function(df, col){
#fill the first row
#df[1,col] <- df[min(which(!is.na(df[[col]]))), col] # using = instead of <- has no effect
df[nrow(df),col]<-df[max(which(!is.na(df[[col]]))),col]
#
}
#I want to try my function on column 4:
endfill(df= newtable,col = 4)
#Does not work...
Another try:
endfill<-function(df, col){
#fill the first row
df$col[1] <- df[[col]] [min(which(!is.na(df[[col]])))] # using $names
#df[nrow(df),col]<-df[max(which(!is.na(df[[col]]))),col]
#
}
endfill(df= newtable,col = 4)
# :-(
```
In the function I have tried different approaches to address cells, first using df$col[1], then also df[[col]][1], and mixed versions, but I seem to be missing a point here.
When I execute the above function in pieces, e.g. only the single parts before and after the "<-", they all make sense, i.e. they deliver NA values for empty cells or the target value, but it seems impossible to do real assignments?!
Thanks a lot for any efforts! |
How do i properly refer to data.frame cells in functions |
|r|dataframe|function|cell| |
Not sure if this is still relevant but for those that land here... I was facing the same issue.
Here's the solution: `https://graph.facebook.com/v19.0/{creative_id}?fields=object_story_spec&access_token={access_token}`
A good way to go about it is:
1) Get the list of your creative ids: `https://graph.facebook.com/v19.0/act_798134188164544/adcreatives?access_token={access_token}`
2) Take a specific id you want info about and use the above url: `https://graph.facebook.com/v19.0/{creative_id}?fields=object_story_spec&access_token={access_token}`
|
It can be done in a single animation starting at "`0` rotation", without stacking and without a negative delay, and you were pretty close to that. (Welcome to SO, by the way!)
You just had the easing functions set one frame later, but the progression (`ease-out` - `ease-in-out` - `ease-in`) was correct.
For the POC demo I've changed the "thing" to resemble a pendulum, because I think it is slightly more illustrative for this purpose:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-css -->
@keyframes swing {
/* Starting at the bottom. */
0% {
transform: rotate(0turn); color: red;
animation-timing-function: ease-out;
}
/* From the bottom to the right cusp:
start full speed, end slow (ease-out). */
25% {
transform: rotate(-0.2turn); color: blue;
animation-timing-function: ease-in-out;
}
/* From the right cusp to the left cusp:
start slow, end slow (ease-in-out).
It will effectively cross the bottom
`0turn` point at 50% in full speed.
*/
75% {
transform: rotate(0.2turn); color: green;
animation-timing-function: ease-in;
}
/* From the left cusp to the bottom:
start slow, end full speed (ease-in). */
100% {
transform: rotate(0turn); color: yellow;
animation-timing-function: step-end;
}
/* Back at the bottom.
Arrived here at the full speed.
Animation timing function has no effect here. */
}
div {
animation: swing;
animation-duration: 3s;
/* `animation-timing-function` is set explicitly
(overridden) in concrete keyframes. */
animation-iteration-count: infinite;
animation-direction: normal;
/* `reverse` still obeys "reversed" timing functions from *previous* frames. */
animation-play-state: running;
transform-origin: center top;
margin: auto;
width: 100px;
display: flex;
flex-direction: column;
align-items: center;
pointer-events: none;
&::before,
&::after {
content: '';
background-color: currentcolor;
}
&::before {
width: 1px;
height: 100px;
}
&::after{
width: 50px;
height: 50px;
}
}
#reset:checked ~ div {
animation: none;
}
#pause:checked ~ div {
animation-play-state: paused;
}
<!-- language: lang-html -->
<meta name="color-scheme" content="dark light">
<input type="checkbox" id="pause"><label for="pause">Pause animation</label>,
<input type="checkbox" id="reset"><label for="reset">Remove animation</label>.
<div></div>
<!-- end snippet -->
I must admit it never occurred to me that we can set a different timing function for each keyframe, so such a natural-looking multi-step animation with "bound" easing types is in fact achievable. Another big takeaway for me is that the easing function of the last (`to` / `100%`) keyframe logically doesn't have any effect.
---
Personally, I'd most probably go with a terser "back-and-forth" `animation-direction: alternate` between the two cusp points, `ease-in-out` timing and a negative half-duration delay shifting the initial state to the "bottom" mid-point (similar to what is proposed in the other answer here), but I definitely see the benefits of this more straightforward approach without a delay. |
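For comparison, a rough sketch of that terser alternative described in the last paragraph (values chosen to roughly match the demo above, not a drop-in replacement):
```css
/* Swing only between the two cusp points; the negative delay of half a
   period starts the element at the bottom mid-point of the swing. */
@keyframes swing-alt {
  from { transform: rotate(-0.2turn); }
  to   { transform: rotate(0.2turn); }
}

div {
  /* 1.5s is half of the original 3s full cycle; -0.75s shifts the start to the middle. */
  animation: swing-alt 1.5s ease-in-out -0.75s infinite alternate;
  transform-origin: center top;
}
```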
This is initially a JS project from free code camp, but to put it on my portfolio website, I want to make it interactive.
I have tried setting the variables as state and passing them down through props, but the variables are only mutated once initially, and the display will not update further than that. Here is some of the code that should give an idea of the whole project.
The state for button functions, button text, and text all update and display properly, but the button functions only set the state once and keep referring to the original state.
```
//JSX file
export default function DragonRepeller() {
const [health, setHealth] = useState({ health: 100 });
const [gold, setGold] = useState({ gold: 50 });
const [text, setText] = useState({ text: 'Welcome to Dragon Repeller. You must defeat the dragon that is preventing people from leaving the town. You are in the town square. Where do you want to go? Use the buttons above.'});
const [buttonText, setButtonText] = useState({ button1: "Go To Store", button2: 'Go to Cave', button3: 'Fight Dragon'});
const button1Text = buttonText.button1
const button2Text = buttonText.button2
const button3Text = buttonText.button3
const [buttonFunction, setButtonFunction] = useState({button1: goStore, button2: goCave, button3: fightDragon});
const button1Function = buttonFunction.button1
const button2Function = buttonFunction.button2
const button3Function = buttonFunction.button3
// various other initializers and some objects
const locations = [
{
name: "town square",
"button text": ["Go to store", "Go to cave", "Fight dragon"],
"button functions": [goStore, goCave, fightDragon],
text: "You are in the town square. You see a sign that says \"Store\"."
},
{
name: "store",
"button text": ["Buy 10 health (10 gold)", "Buy weapon (30 gold)", "Go to town square"],
"button functions": [buyHealth, buyWeapon, goTown],
text: "You enter the store."
},
{/*location[2] commented out for space*/},
{/*location[3]*/},
{/*location[4]*/},
{/*location[5]*/},
{/*location[6]*/},
{/*location[7]*/}
];
//location functions
function update(location) {
setButtonText(() => {
return {
button1: location['buttonText'[0]],
button2: location['buttonText'[1]],
button3: location['buttonText'[2]]
}
});
setButtonFunction(() => {
return {
button1: location['button functions'][0],
button2: location['button functions'][1],
button3: location['button functions'][2]
}
});
setText(() => {
return { text: location.text}
});
};
//button functions that use the update(location) function
function goTown() { // starting location
update(locations[0]);
};
function goStore() { // this is the location with the function in question
update(locations[1]);
};
function goCave() {
update(locations[2]);
};
//buy health button function, buy weapon works the same
function buyHealth() {
if (gold.gold >= 10) {
setGold((prevGold) => { return { gold: prevGold.gold - 10 }});
setHealth((prevHealth) => { return { health: prevHealth.health + 10 }});
} else {
setText(() => { return { text: 'You do not have enough gold to buy health' }});
}
}
//the rest of the code
}
return (
<div id="game">
<div id='stats'>
<span className='stat'>Health: <strong><span id='healthText'>{health.health}</span></strong></span>
<span className='stat'>Gold: <strong><span id='goldText'>{gold.gold}</span></strong></span>
</div>
<div id="controls">
<button id="button1" className='gButton' onClick={button1Function}>{button1Text}</button>
<button id="button2" className='gButton' onClick={button2Function}>{button2Text}]</button>
<button id="button3" className='gButton' onClick={button3Function}>{button3Text}</button>
</div>
<div id='text'>
{text.text}
</div>
</div>
)
```
I feel like I am going about this the wrong way and could use some help.
I have tried rewriting the code multiple times with varying ways of setting state. Every time I run the code, the text keeps updating and following along with what I have set it to change to, but the other variables only mutate once and then revert to either the initial state or the first value they were changed to.
I have tried plain JS instead of JSX and that has run into more issues with the functions not executing.
I have also been scouring Stack Overflow to find anything that might relate to this, but I have not found any solution that works, which makes me feel like I am missing a key point here. |
A compact solution: *array_walk()* processes every row of the main array; the callable receives the sub-array by reference, removes the last element, calculates the product of all remaining elements using *array_reduce()* and assigns it as the last element of the sub-array.
```php
<?php
$mainArray = [
[4, 3, 5, 7, 210],
[4, 9, 5, 7, 210],
[4, 9, 25, 7, 210],
];
array_walk($mainArray, function (&$subArr) {
    array_pop($subArr);
    $subArr[] = !$subArr ? 0 : array_reduce($subArr, fn ($carry, $item) => $carry * $item, 1);
});
print_r ($mainArray);
```
Result:
Array
(
[0] => Array
(
[0] => 4
[1] => 3
[2] => 5
[3] => 7
[4] => 420
)
[1] => Array
(
[0] => 4
[1] => 9
[2] => 5
[3] => 7
[4] => 1260
)
[2] => Array
(
[0] => 4
[1] => 9
[2] => 25
[3] => 7
[4] => 6300
)
)
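If the explicit fold is not needed, the same row transformation could also be written with PHP's built-in `array_product()`; a sketch of that equivalent variant (same behaviour for the empty-row edge case):
```php
array_walk($mainArray, function (&$subArr) {
    array_pop($subArr);                                   // drop the old last element
    $subArr[] = $subArr ? array_product($subArr) : 0;     // append the product of the rest
});
```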
|
### `Point` class
Let's start with a point class, in proper Java style:
```java
class Point {
private int[] components;
public Point(int... components) {
this.components = components;
}
public int getDimension() {
return components.length;
}
public int getComponent(int i) {
return components[i];
}
public void setComponent(int i, int v) {
components[i] = v;
}
}
```
### Sorting by axis
Indeed we need a comparator for this. In old school Java this is a bit of a ceremony - we have to write an `AxisComparator` class implementing `Comparator<Point>`, with an `axis` field, which then compares based on that component:
```java
class AxisComparator implements Comparator<Point> {
private int axis;
public AxisComparator(int axis) {
this.axis = axis;
}
public int compare(Point p, Point q) {
return Integer.compare(p.getComponent(axis), q.getComponent(axis));
}
}
```
You could then use this as `new AxisComparator(k)`.
In "modern" Java, you could get away with a one-liner:
```java
Comparator.comparingInt((Point p) -> p.getComponent(k));
```
[`Arrays.sort`](https://docs.oracle.com/javase/8/docs/api/java/util/Arrays.html#sort-T:A-java.util.Comparator-) accepts a comparator.
### `KdTree` class
We will want to distinguish between the tree - which stores a potentially `null` (in case of an empty tree) pointer to the root - and tree nodes. The tree nodes should get a recursive `private` constructor, taking the array and the axis we want to split on and producing a k-d-tree which splits on that axis. The tree should get a public constructor taking an array of points.
```java
class KdTree {
record Node(int axis, Node left, Node right) {
// Note: points is mutated (sorted by axis)
static Node build(Point[] points, int axis) {
if (points.length == 0)
return null; // empty node
Arrays.sort(points, Comparator.comparingInt((Point p) -> p.getComponent(axis)));
var leqPoints = Arrays.copyOf(points, points.length / 2);
var geqPoints = Arrays.copyOfRange(points, points.length / 2, points.length);
var nextAxis = (axis + 1) % points[0].getDimension();
return new Node(axis, build(leqPoints, nextAxis), build(geqPoints, nextAxis));
}
}
private Node root;
public KdTree(Point[] points) {
root = Node.build(points.clone(), 0);
}
// Implement operations on your k-d-tree, like finding the nearest neighbor to a point here
}
```
### Better algorithms
This "naive" algorithm of sorting by an axis for each split is not ideal in terms of performance; it incurs O(n log n) costs at each of the O(log n) levels of the trees, resulting in O(n (log n)²), which is not bad, but also not optimal.
One option to optimize this is to *pre-sort* the points by each of the k axes, incurring costs of O(k n log n) for the pre-sorting, and then filtering the k pre-sorted lists into left & right parts as you split, incurring kn costs for each split, for a tree of depth O(log n), resulting in O(k n log n). This can be better than O(n (log n)²) if n is big and k is small (say, 2, 3 or 4).
Finally, the asymptotically optimal option is to use a *linear time* median selection algorithm ([median of medians](https://en.wikipedia.org/wiki/Median_of_medians)). Using this, you get O(n) costs for O(log n) layers, for a total of O(n log n).
I have implemented all three approaches in Lua a while ago [here](https://github.com/TheAlgorithms/Lua/blob/61ec71e6e6458c5f773c7f36eafc95e6c63fce0d/src/data_structures/k_d_tree.lua). If you don't know Lua, read it as pseudocode.
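For illustration, here is a rough Java sketch of the selection-based split, using a simple expected-linear quickselect rather than the worst-case-linear median-of-medians mentioned above (the method name and signature are mine, not taken from the linked Lua code):
```java
// Partially orders points[lo..hi] (inclusive) so that points[k] holds the element
// that would be at index k if the range were fully sorted by the given axis.
static void quickselect(Point[] points, int lo, int hi, int k, int axis) {
    while (lo < hi) {
        Point pivot = points[lo + (hi - lo) / 2];
        int i = lo, j = hi;
        while (i <= j) {
            while (points[i].getComponent(axis) < pivot.getComponent(axis)) i++;
            while (points[j].getComponent(axis) > pivot.getComponent(axis)) j--;
            if (i <= j) {
                Point tmp = points[i]; points[i] = points[j]; points[j] = tmp;
                i++; j--;
            }
        }
        if (k <= j) hi = j;          // k lies in the left part
        else if (k >= i) lo = i;     // k lies in the right part
        else return;                 // points[k] is already in its final position
    }
}
```
`Node.build` would then call `quickselect(points, 0, points.length - 1, points.length / 2, axis)` instead of fully sorting the array, making each split linear in expectation instead of O(n log n).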
### Out-of-band representation (low-level optimizations)
Representing an array of points as, well, an array of points *is* definitely the most idiomatic, straightforward, simple way to implement this in an OOP language like Java. I'd stick to this for an initial implementation unless you have a good reason not to. I would not prematurely optimize this.
As you said, you want to keep the components of a point together, and that is cumbersome to do if you store separate arrays of components.
But if you do have to optimize this at a low-level (maybe you have determined that the points eat up too much memory, or the heap allocation, GC or indirection overhead is too large or the cache locality is too bad) by using multiple arrays, write yourself something like a `PointArray` class which manages an array of k arrays of components. Sorting could simply sort a permutation of indices into these k arrays (well, after the first split of these indices, it is only a permutation of the restricted set of indices, but you get the idea).
This could look like this:
```java
class PointArray {
private int[][] points; // points[i][j] = i-th coordinate of j-th point
private Integer[] indices; // into points (boxed so it can be sorted with a Comparator)
public PointArray(int[][] points) {
this.points = points;
int n = points[0].length;
assert n > 0;
for (int i = 1; i < points.length; i++)
assert points[i].length == n;
indices = new Integer[n];
for (int i = 0; i < n; i++)
indices[i] = i;
}
private PointArray(Integer[] indices, int[][] points) {
this.indices = indices;
this.points = points;
}
public int getDimension() {
return points.length;
}
public int getLength() {
return indices.length;
}
public int getComponent(int pointIdx, int axis) {
return points[axis][indices[pointIdx]];
}
public void sortByAxis(int axis) {
Arrays.sort(indices, Comparator.comparingInt((Integer i) -> points[axis][i]));
}
// To slice the array, it suffices to slice the indices.
// We do not have to slice the points.
public PointArray slice(int from, int to /*exclusive*/) {
return new PointArray(Arrays.copyOfRange(indices, from, to), points);
}
}
```
Really this approach isn't all that different from having an array of points, except instead of pointers to heap-allocated points, we have indices into our "array-allocated" matrix of point coordinates. This requires some small changes to our k-d-tree:
```java
class KdTree {
record Node(int axis, Node left, Node right) {
// Note: points is mutated (sorted by axis)
static Node build(PointArray points, int axis) {
if (points.getLength() == 0)
return null; // empty node
points.sortByAxis(axis);
var mid = points.getLength() / 2;
var leqPoints = points.slice(0, mid);
var geqPoints = points.slice(mid, points.getLength());
var nextAxis = (axis + 1) % points.getDimension();
return new Node(axis, build(leqPoints, nextAxis), build(geqPoints, nextAxis));
}
}
private Node root;
public KdTree(int[][] points) {
root = Node.build(new PointArray(points), 0);
}
// Implement operations on your k-d-tree, like finding the nearest neighbor to a point here
}
``` |
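A quick usage sketch of this out-of-band variant, assuming the axis-major layout described in the comment on `points` (row 0 = x components, row 1 = y components; the numbers are arbitrary):
```java
// Five 2-D points stored axis-major, as expected by the PointArray-backed KdTree above.
int[][] coords = {
    {3, 1, 4, 1, 5},   // x components
    {9, 2, 6, 5, 3}    // y components
};
KdTree tree = new KdTree(coords);  // builds the tree without allocating Point objects
```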
I'm trying to call .ToString() on the editor to get the friendly text for a rule. Either I'm not configured correctly or perhaps there's a bug, hence why I'm reaching out.
When calling .ToString(), the result looks like:
**Check if Products contain and Medical.NetworkType_BuiltInNetwork is "Y"**
However, it should read like (notice that 'medical' is in the text):
**Check if Products contain Medical and Medical.NetworkType_BuiltInNetwork is "Y"**
I'm only seeing the issue when on the server. The UI is rendering as expected.
Here's my rule Xml:
```
<?xml version="1.0" encoding="utf-8"?><codeeffects xmlns="https://codeeffects.com/schemas/rule/41" xmlns:ui="https://codeeffects.com/schemas/ui/4"><rule id="31be4f36-26c0-423b-a178-c2d04ab8ca4c" webrule="5.1.16.4" utc="2024-03-12T22:12:52.4714" type="Models.Root, Application, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" eval="true"><definition><and><condition type="contains"><property name="Products" /><value type="Models.Enums+ProductType, Application, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null">1</value></condition><condition type="equal" stringComparison="OrdinalIgnoreCase"><property name="Medical.NetworkType_BuiltInNetwork" /><value>Y</value></condition></and></definition><format><lines /></format></rule></codeeffects>
```
Here's a simplified version of my model:
```
public class Root
{
public List<Enums.ProductType>? Products { get; set; } = new();
}
```
and a simplified version of my enum:
```
public enum ProductType
{
[Description("Medical"), EnumItem("Medical")] Medical = 1,
[Description("RX"), EnumItem("RX")] Rx = 2,
[Description("Dental"), EnumItem("Dental")] Dental = 3,
[Description("Vision"), EnumItem("Vision")] Vision = 4,
[Description("COBRA"), EnumItem("COBRA")] Cobra = 5,
}
```
Am I doing something wrong? Any help is greatly appreciated. |
Calling .ToString() on editor not returning enum name |
|rule-engine|business-rules|codeeffects| |
As far as I know, and as I read in the official documentation, updates will only be triggered if the value is different.
> The set function that lets you update the state to a different value and trigger a re-render.
However, in my testing with version 18.2, there is a situation that triggers repeated rendering: [enter image description here](https://i.stack.imgur.com/VBhav.png)
I want to know what this behavior is and whether I need to be concerned about it |
I have 4 storage class variables that are set to "false" by default; I then set one of them to "true" in the inputs. I am looking for a way to validate that only one of the 4 storage class variables is set to "true".
variable "sc1_default" {
default = "false"
}
variable "sc1_default" {
default = "false"
}
variable "sc3_default" {
default = "false"
}
variable "sc4_default" {
default = "false"
}
In terraform.tfvars (inputs):
sc3_default = "true" |
Terraform valdiate that one of N variables is set to "true" |
|validation|terraform| |
```
x = c(1, 2, 2, 3, 3, 3, 4, 4, 5)
x.tab = table(x)
plot(x.tab, xlim = c(0, 10), xaxp=c(0, 10, 10))
```
(Unfortunately, I do not have enough reputation to post an image, but the resulting graph has only tick marks 1 to 5 instead of the intended 0 to 10.)
Why does R just ignore xaxp? I understand that I could factor x with levels 0:10 (i.e. add x = factor(x, levels=0:10)) and that would be a solution, but why doesn't the xaxp parameter work as intended? And by the way, how can I extract the frequencies as a vector from x.tab without complicated magic? |
The suggestion you've received above is a good starting point. It gives a high-level overview of the task at hand, the deep learning approach to take, and touches on the need for labeled data. However, there are additional insights and details that could help clarify the path forward and provide a more actionable guide:
- **Task Specificity**: Clarify that while road segmentation is common, segmenting out residential areas will require focusing on the distinctive features of such areas. This may include roofs, driveways, gardens, and pools, not just the negative space of roads.
- **Model Architecture and Transfer Learning**: Mention specific neural network architectures known for semantic segmentation success, such as U-Net, FCN (Fully Convolutional Networks), and DeepLab. Explain that using pre-trained models on similar tasks can reduce the requirement for a large labeled dataset, as these models have learned useful representations that can be transferred to your task. |
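As a concrete starting point, here is a minimal, hedged Python sketch of loading one of those pre-trained segmentation architectures with torchvision and swapping the classifier head for a two-class (residential vs. background) problem; the weights setting, class count and tile size are assumptions, and real use would require fine-tuning on your own labeled tiles:
```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Load DeepLabV3 with pre-trained weights as a transfer-learning starting point.
model = deeplabv3_resnet50(weights="DEFAULT")

# Replace the classifier head for 2 classes: residential area vs. background.
# (Assumption: the final layer of this torchvision head is a 1x1 conv with 256 input channels.)
model.classifier[4] = torch.nn.Conv2d(256, 2, kernel_size=1)

model.eval()
dummy = torch.randn(1, 3, 512, 512)   # one RGB tile (size is an assumption)
with torch.no_grad():
    out = model(dummy)["out"]         # per-pixel logits, shape (1, 2, 512, 512)
print(out.shape)
```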
Please help: I am getting a "Bad PCD format" error.
I would like to know what's wrong with the PCD file.
I have given the PCD file the following header:
header = "# .PCD v.7 - Point Cloud Data file format
VERSION .7
FIELDS x y z data
SIZE 4 4 4
TYPE F F F
COUNT 1 1 1
WIDTH 0
HEIGHT 1
VIEWPOINT 0 0 0 1 0 0 0
POINTS 0
DATA binary"
|
Analyzing data: PCD file. Problem: Bad PCD format |
|python| |
I assume that you have a table something like this:
CREATE TABLE OCCUPATIONS (
Name VARCHAR(255),
Occupation VARCHAR(255)
);
SELECT CONCAT(Name, '(', LEFT(Occupation, 1), ')')
FROM OCCUPATIONS
ORDER BY Name;
SELECT CONCAT('There are a total of ', COUNT(*), ' ', LOWER(Occupation), 's.')
FROM OCCUPATIONS
GROUP BY Occupation
ORDER BY COUNT(*), Occupation;
Here is a working example:
https://sqlfiddle.com/mysql/online-compiler?id=d3240cf0-f53f-463f-97af-6a7cce7c9ef5 |
Why does useState trigger rendering with the same value |
|javascript|reactjs|react-hooks| |
Make sure you have angular.json in the folder where you try to build, serve, etc.
Maybe it's one level up because of a misplaced npm install. |
I'm migrating from `or-tools` to Google's Cloud Fleet Routing API (Optimization AI API). So far, the client libraries are not the best, nor do they have good documentation. Looking through the REST documentation (https://cloud.google.com/optimization/docs/), it's very unclear to me how I add reload points to offload capacity. If I have 5 pickups, and I specify a Delivery (assuming that's what a reload point is), the delivery shipment is skipped UNLESS the `loadDemand` *exactly* matches capacity of the vehicle at the time - which is not possible to predict. If we set `loadDemand=capacity` of the vehicle, it will go to the reload point ONLY when full. Does anyone know how to use this API and how to handle loadDemand to set capacity to 0 at whichever time makes sense for the vehicle? It's somewhat straightforward in `or-tools`. |
What I want is to make the TextPanel disabled if `payStub` has a value, and not disabled if `payStub` does not have a value.
In my React code, I have the following:
const [payStub, setPayStub] = useState(() => {
if (isNewPayStub) {
return get(user, 'stub', '');
}
return stubEdit.value?.name || '';
});
const stubIsValid = Boolean(trim(payStub));
and on the React side I have the following:
<!-- begin snippet: js hide: false console: true babel: false -->
<!-- language: lang-html -->
return (
<Panel
onClose={onClose}
onSave={handleSave}
addNewText={isNewPayStub ? 'Add New PayStub' : ''}
>
....
<TextPanel
handleChange={setPayStub}
isValid={stubIsValid}
isRequired
label="Stub"
placeholder="Enter STUB"
value={payStub}
/>
<!-- end snippet -->
The TextPanel receives a `disabled` property, and when I add it as `disabled: {stubIsValid}`, the condition is met as soon as the user enters the first character, which makes the TextPanel disabled; that is not what I want (the user should be able to finish entering the payStub).
How do I fix this situation? |
Ideally you should follow the answer by @Leeroy Hannigan, but if you just have to get your version working: I created a function (with nodejs18) in the AWS console, renamed the file from index.mjs to index.js and copy-pasted the code below, and I was able to get past the error you have described (I changed exports.handler in your code to module.exports.handler as below).
const aws = require("aws-sdk");
module.exports.handler = async (event) => {
console.log('Hello!');
// some code
}; |
Well, after 4 hours I realised I was missing the semicolon at the end of the <Text> ....Expense Screen</Text> for each of them.
I've written a unit test to limit a method's line count, but it's reporting incorrect results and failing for everything.
I may not have got some of the math or detection right, and I'm wondering where I went wrong.
I'm looping through all *.cs files, then through each line, marking the start of a method (declaration) and counting how many lines it takes to reach the closing statement (`}`):
public class LongMethodLineCountTest
{
private const int MaxLinesForMethod = 60;
[Test]
public Task Check_All_Methods_Line_Count()
{
var path = Directory.GetParent(Directory.GetCurrentDirectory())?.Parent?.Parent.Parent.FullName;
foreach (var file in Directory.GetFiles(path, "*.cs", SearchOption.AllDirectories))
{
var lines = File.ReadAllLines(file);
var lastMethodDecLineNumber = 0;
var lastMethodDecName = string.Empty;
var lineNumber = 0;
var methods = new Dictionary<string, int>();
foreach (var line in lines)
{
lineNumber++;
if (line.Length < 5)
{
continue;
}
var isMethodDecLine = line.Contains("private void") ||
line.Contains("public Task") ||
line.Contains("private Task") ||
line.Contains("public async") ||
line.Contains("private async");
var isMethodEndLine = line[4..] == "}";
if (isMethodDecLine)
{
lastMethodDecLineNumber = lineNumber;
lastMethodDecName = line;
}
else if (isMethodEndLine)
{
methods[lastMethodDecName] = lineNumber - lastMethodDecLineNumber;
}
}
var methodsTooBig = methods.Where(x => x.Value > MaxLinesForMethod).ToList();
if (methodsTooBig.Count != 0)
{
Console.WriteLine(file);
Console.WriteLine(string.Join(",", methodsTooBig.Select(x => x.Key)));
}
Assert.That(methods.Values, Has.All.LessThanOrEqualTo(MaxLinesForMethod));
}
return Task.CompletedTask;
}
} |
Limiting method length? |
|c#| |
{"Voters":[{"Id":4712734,"DisplayName":"DuncG"},{"Id":466862,"DisplayName":"Mark Rotteveel"},{"Id":1431,"DisplayName":"Juha Syrjälä"}]} |
I am scraping messages about power plant unavailability and converting them into timeseries and storing them in a sql server database. My current structure is the following.
* `Messages`: publicationDate datetime, messageSeriesID nvarchar, version int, messageId identity
The primary key is on `(messageSeriesId, version)`
* `Units`: messageId int, area nvarchar, fueltype nvarchar, unitname nvarchar tsId identity
The primary key is on `tsId`. There is a foreign key relation on tsId between this table and `Messages`. The main reason for this table is that one message can contain information about multiple power plants.
* `Timeseries`: tsId int, delivery datetime, value decimal
I have a partition scheme based on delivery, each partition contains a month of data. The primary key is on `(tsId, delivery)` and it's partitioned along the monthly partition scheme. There is a foreign key on `tsId` to `tsId` in the `Units` table.
The `Messages` and `Units` tables contain around a million rows each. The `Timeseries` table contains about 500 million rows.
Now, every time I insert a new batch of data, one row goes into the `Messages` table, between one and a few (4) go into the `Units` table, and a lot (up to 100.000s) go into the `Timeseries` table.
The problem I'm encountering is that inserts into the `Timeseries` table are too slow (100.000 rows take up to a minute). I already made some improvements on this by setting the fillfactor to 80 instead of 100 when rebuilding the index there. However its still too slow.
And I am a bit puzzled, because the way I understand it is this: every partition contains all rows with delivery in that month, but the primary key is on `tsId` first and `delivery` second. So to insert data in this partition, it should simply be placed at the end of the partition (since `tsId` is the identity column and thus increasing by one every transaction).
The time series that I am trying to insert spans 3 years and therefore 36 partitions. If I, however, create a time series with the same length that falls within a single partition, the insert is notably faster (around 1.5 seconds). Likewise, if I create an empty time series table (`timeseries_test`) with the same structure as the original one, then inserts are also very fast (also for inserting data that spans 3 years). However, querying is done based mainly on delivery, so I don't think partitioning by `tsId` is a good idea.
If anyone has a suggestion on the structure or methods to improve querying it would be greatly appreciated. |
Unable to install ‘audio.whisper’ package from GitHub in RStudio despite correct Rtools installation |
|openai-whisper|rtools|remotes| |
null |
I’m going to recommend a slightly different approach that I think gets at the functionality you are looking for.
First, as noted by others, an interface is a TypeScript language feature, not a JavaScript language feature. It's used for static type checking at compile time, and that's it; there is no object or class that gets created in the resulting JavaScript. As such, you can't extend it. If you wanted to be able to extend it you'd need to use a class instead of an interface.
What you can do, however, is define special functions that will evaluate a given instance of your Product interface and inform TypeScript whether or not it has additional properties. These special functions are referred to as type guards (https://www.typescriptlang.org/docs/handbook/advanced-types.html).
So, for instance, you could write a *hasAbc* type guard like so:
function hasAbc (p : Product): p is Product & { abc: string } {
return _.isString((p as any).abc);
}
Now if you use this function in an if-block, the typescript compiler will be happy when you reference the field *abc* from a Product:
function work(p: Product) {
…
if (hasAbc(p)) {
// no type script error!
const abc = p.abc;
…
}
…
} |
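If you end up needing several of these guards, a generic helper along the following lines can cut the boilerplate. This is only a sketch (not part of the pattern above), and it assumes a plain `typeof` check is enough rather than lodash's `_.isString`:

    // Sketch of a reusable guard: narrows `obj` to "T plus a string property K".
    function hasStringProp<T extends object, K extends string>(
      obj: T,
      key: K
    ): obj is T & Record<K, string> {
      return typeof (obj as Record<string, unknown>)[key] === "string";
    }

    function workAlt(p: Product) {
      if (hasStringProp(p, "abc")) {
        const abc = p.abc; // no TypeScript error
      }
    }

The call site then reads the same way as *hasAbc*, but the helper works for any string-valued property name.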
I want to click on the checkbox. I have tried different methods, but they are not working. Any solution?
@FindBy(xpath="//label[@for='Tnc']")
Still, it's not getting the exact checkbox element. Instead it is clicking on the Terms and Conditions hyperlink and opening the pop-up page.
I want to click on the checkbox.
Not able to identify the checkbox element ::before and ::after in Selenium |
|java|selenium-webdriver|pseudo-element| |
null |
I'm trying to use a constexpr constructor in C++17 with a lambda that uses `std::tie` to initialize fields in a class from a tuple.
The code is similar to this:
```
#include <tuple>
enum class Format {
UINT8,
UINT16
};
struct FormatInfo {
const char* name = nullptr;
int maxVal = 0;
constexpr explicit FormatInfo(Format fmt) {
auto set = [this](const auto&... args) constexpr {
std::tie(name, maxVal) = std::make_tuple(args...);
};
switch(fmt) {
case Format::UINT8: set("uint8", 255); break;
case Format::UINT16: set("uint16", 65535); break;
}
}
};
int main() {
FormatInfo info(Format::UINT8); // ok
constexpr FormatInfo info2(Format::UINT8); // fails
}
```
Calling the constructor as constexpr fails, with an error that there is a call to a non-constexpr function inside `set`, even though both `std::tie` and `std::make_tuple` should be constexpr.
Making the lambda itself constexpr (`constexpr auto set = ...`) also fails with an error that `this` is not a constant expression.
Is there any way to make this work in C++17? |
Using lambda function in constexpr constructor |
As far as I know, and from reading the official documentation, updates will only be triggered if the value is different.
> The set function that lets you update the state to a different value and trigger a re-render.
However, in my testing with version 18.2, there is a situation that triggers repeated rendering: [enter image description here](https://i.stack.imgur.com/VBhav.png)
demo: https://playcode.io/1796380
I want to know what the mechanism is here and whether I need to be concerned about it.
Well, after 4 hours I realised I was missing the semicolon at the end of the Text component for each of them.
After updating to version 5.10.2 as suggested by Filip, I was getting a PackageReferenceId error, which was later addressed by creating the new protected containers first and then adding the data, instead of just passing arrays to them. The code below is a little more detailed than what is available on GitHub:
require_once(__DIR__ . '/vendor/autoload.php');
use SellingPartnerApi\Api\OrdersV0Api;
use SellingPartnerApi\Model\OrdersV0\ConfirmShipmentRequest;
use SellingPartnerApi\Model\OrdersV0\PackageDetail;
use SellingPartnerApi\Model\OrdersV0\ConfirmShipmentOrderItem;
$order_id = 'your_order_id_here';
$order_items_list = $this->fn_GetOrderItems($order_id);
$order_item = $order_items_list->getPayload()->getOrderItems();
$shipment_items = new ConfirmShipmentOrderItem();
foreach ($order_item as $item) {
if( $item['quantity_shipped']> 0) {
echo "Items in order " . $order_id . " already marked shipped";
return;
} else {
$shipment_items->setOrderItemId($item->getOrderItemId());
$shipment_items->setQuantity($item->getQuantityOrdered());
}
}
$package_details = new PackageDetail();
$package_details->setCarrierCode('your_carrier_code');
$package_details->setCarrierName('your_shipping_carrier');
$package_details->setTrackingNumber('your_tracking_number');
$package_details->setShippingMethod('your_shipping_method');
$package_details->setPackageReferenceId('1');
$package_details->setShipDate(gmdate('Y-m-d\TH:i:s\Z', strtotime('your_ship_date')));
$package_details->setOrderItems([$shipment_items]); //must be in brackets to make it an array
$payload = new ConfirmShipmentRequest();
$payload ->setPackageDetail($package_details);
$payload ->setMarketplaceId('your_market_placeid');
$apiInstance = new OrdersV0Api($this->config); //your config settings
try {
$apiInstance->confirmShipment($order_id, $payload);
echo "Order " . $order_id . " Fulfilled";
return;
} catch (Exception $e) {
echo $order_id . ' Exception when calling OrdersV0Api->confirmShipment: ', $e->getMessage(), PHP_EOL;
return;
} |
null |
I am new to Docker, and I have the Dockerfile below which Azure Pipelines is using to build an image and push it to an Azure Container Registry.
// Dockerfile
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["poc.frontend/poc.frontend.csproj", "poc.frontend/"]
RUN dotnet restore "poc.frontend/poc.frontend.csproj"
COPY ["Tests/poc.frontend/poc.frontend.unittests/poc.frontend.unittests.csproj", "poc.tests/"]
RUN dotnet restore "poc.tests/poc.frontend.unittests.csproj"
COPY . .
WORKDIR "/src/poc.frontend"
RUN dotnet build "poc.frontend.csproj" -c Release
FROM build AS test
LABEL test=true
WORKDIR "/src/poc.tests"
RUN dotnet build "poc.frontend.unittests.csproj" -c Release
RUN dotnet test --test-adapter-path "bin/Release/net6.0/NUnit3.TestAdapter.dll" -c Release --results-directory /testresults --logger "trx;LogFileName=test_results.trx" "poc.frontend.unittests.csproj"
FROM build AS publish
WORKDIR "/src/poc.frontend"
RUN dotnet publish "poc.frontend.csproj" -c Release -o /app/publish /p:UseAppHost=false
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "poc.frontend.dll"]
When the build pipeline uses this Dockerfile as part of the Docker@2 task, it skips the test part. After building the application it goes immediately to the publish part.
// build-template.yaml
// ... other tasks
- task: Docker@2
displayName: 'Build and push an image to ACR'
inputs:
command: buildAndPush
repository: ${{parameters.imageRepository}}
dockerFile: ${{parameters.dockerFile}}
containerRegistry: sc-acr-sp
buildContext: $(Build.SourcesDirectory)
tags: v$(Build.BuildId)
This is what the logs show.
...
#17 [build 9/9] RUN dotnet build "poc.frontend.csproj" -c Release
#17 0.285 MSBuild version 17.3.2+561848881 for .NET
#17 1.000 Determining projects to restore...
#17 1.398 All projects are up-to-date for restore.
##[debug]Agent environment resources - Disk: / Available 18735.00 MB out of 74244.00 MB, Memory: Used 883.00 MB out of 6932.00 MB, CPU: Usage 15.91%
#17 3.825 /src/poc.frontend/Repositories/Mock/MockBasketRepository.cs(15,35): warning CS1998: This async method lacks 'await' operators and will run synchronously. Consider using the 'await' operator to await non-blocking API calls, or 'await Task.Run(...)' to do CPU-bound work on a background thread. [/src/poc.frontend/poc.frontend.csproj]
#17 3.826 /src/poc.frontend/Repositories/Mock/MockBasketRepository.cs(22,35): warning CS1998: This async method lacks 'await' operators and will run synchronously. Consider using the 'await' operator to await non-blocking API calls, or 'await Task.Run(...)' to do CPU-bound work on a background thread. [/src/poc.frontend/poc.frontend.csproj]
#17 3.939 poc.frontend -> /src/poc.frontend/bin/Release/net6.0/poc.frontend.dll
#17 3.956
#17 3.956 Build succeeded.
#17 3.957
#17 3.957 /src/poc.frontend/Repositories/Mock/MockBasketRepository.cs(15,35): warning CS1998: This async method lacks 'await' operators and will run synchronously. Consider using the 'await' operator to await non-blocking API calls, or 'await Task.Run(...)' to do CPU-bound work on a background thread. [/src/poc.frontend/poc.frontend.csproj]
#17 3.957 /src/poc.frontend/Repositories/Mock/MockBasketRepository.cs(22,35): warning CS1998: This async method lacks 'await' operators and will run synchronously. Consider using the 'await' operator to await non-blocking API calls, or 'await Task.Run(...)' to do CPU-bound work on a background thread. [/src/poc.frontend/poc.frontend.csproj]
#17 3.957 2 Warning(s)
#17 3.957 0 Error(s)
#17 3.957
#17 3.957 Time Elapsed 00:00:03.53
#17 DONE 4.0s
#18 [publish 1/2] WORKDIR /src/poc.frontend
#18 DONE 0.0s
...
What should I do so that the test stage doesn't get ignored? |
test Stage is Being Skipped |
|azure-devops|dockerfile| |
To work with an `SDDL` (Security Descriptor Definition Language) you first need to know the structure.
From [MS Learn - Security Descriptor String Format](https://learn.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format)
> The format is a null-terminated string with tokens to indicate each of the four main components of a security descriptor:
> * owner (O:),
> * primary group (G:),
> * DACL (D:),
> * and SACL (S:)
**DACL (Discretionary Access Control List)**
A DACL is a list of Access Control Entries (ACEs) that dictate who can access a specific object and what actions they can perform with it.
The term "discretionary" implies that the object’s owner has control over granting access and defining the level of access.
**SACL (System Access Control List)**
A SACL is a set of access control entries (ACEs) that specify the security events to be audited for users or system processes attempting to access an object. These objects can include files, registry keys, or other system resources.
**Structure of a Security Descriptor**
This is a simple example of a Security Descriptor String (`SDDL`):
~~~
"O:LAG:BUD:(A;;RPWPCCDCLCSWRCWDWOGA;;;S-1-5-32-545)"
~~~
* O:owner_sid
* G:group_sid (primary group)
* D:dacl_flags(string_ace1)(string_ace2)... (string_acen)
* S:sacl_flags(string_ace1)(string_ace2)... (string_acen)
When assigning permissions you are using the `DACL` part of the `SDDL`.
Every entry in a `SDDL` is called an `ACE` (Access Control Entry).
This particular example doesn't have a SACL (S: is missing).
Instead of SIDs for O: and G:, the constants `LA` (Local Administrator) and `BU` (Builtin Users) are used.
See [MS Learn - SID Strings](https://learn.microsoft.com/en-us/windows/win32/secauthz/sid-strings)
~~~
ConvertFrom-SddlString "O:LAG:BUD:(A;;RPWPCCDCLCSWRCWDWOGA;;;S-1-5-32-545)"
Owner : EXAMPLEHOST\Administrator
Group : BUILTIN\Users
DiscretionaryAcl : {BUILTIN\Users: AccessAllowed (ChangePermissions, CreateDirectories,
ExecuteKey, GenericAll, GenericExecute, GenericWrite, ListDirectory,
ReadExtendedAttributes, ReadPermissions, TakeOwnership, Traverse,
WriteData, WriteExtendedAttributes, WriteKey)}
SystemAcl : {}
~~~
Each `ACE-string` in the `DACL` follows the structure of
~~~
ace_type;ace_flags;rights;object_guid;inherit_object_guid;account_sid;(resource_attribute)
~~~
See [MS Learn - ACE Strings](https://learn.microsoft.com/en-us/windows/win32/secauthz/ace-strings)
**Constructing an ACE**
So, we want to add additional `ACE-strings` into the `DACL` (or change, remove or replace).
This might be done by changing the `SDDL` using string manipulation. But I don't know how to integrate the `SDDL` back into a .Net object in that way.
The relevant fields for adding an ACE to a DACL are:
* ace_type: Indicates the type of ACE (e.g., A for access allowed, D for access denied).
* ace_flags: Flags specifying inheritance and other properties.
* rights: Specifies the access rights granted or denied.
* account_sid: The Security Identifier (SID) of the user or group.
The order of these fields matters.
Example ACE string for granting read access to a specific user:
~~~
(A;;GA;;;S-1-5-32-545)
~~~
* A: Access allowed.
* GA: Grant all permissions.
* S-1-5-32-545 (Well known SID for BUILTIN\Users)
Now that the basics are set, this is where the answers above come in, using some .Net magic.
The `RawDescriptor` is already available from the `ConvertFrom-SddlString` cmdlet.
~~~
$sddl = ConvertFrom-SddlString "O:LAG:BUD:(A;;RPWPCCDCLCSWRCWDWOGA;;;S-1-5-32-545)"
$sddl.RawDescriptor
IsContainer : False
IsDS : False
ControlFlags : DiscretionaryAclPresent, SelfRelative
Owner : S-1-5-21-XXXXXXX-500
Group : S-1-5-32-545
SystemAcl :
DiscretionaryAcl : {System.Security.AccessControl.CommonAce}
IsSystemAclCanonical : True
IsDiscretionaryAclCanonical : True
BinaryLength : 96
$sddl.RawDescriptor.DiscretionaryAcl
BinaryLength : 24
AceQualifier : AccessAllowed
IsCallback : False
OpaqueLength : 0
AccessMask : 269353023
SecurityIdentifier : S-1-5-32-545
AceType : AccessAllowed
AceFlags : None
IsInherited : False
InheritanceFlags : None
PropagationFlags : None
AuditFlags : None
~~~
Constructing a bare `ACE`-object can be done as above. But the example is incomplete and my knowledge of .Net doesn't permit me to find out what's missing :/
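For reference, here is a rough sketch of what that bare construction could look like. It is only an assumption-laden sketch (it assumes the `CommonAce(AceFlags, AceQualifier, accessMask, SecurityIdentifier, isCallback, opaque)` constructor), not a verified, complete solution:
~~~
# Rough sketch only - building a stand-alone ACE object
$sid  = [System.Security.Principal.SecurityIdentifier]"S-1-5-32-545"
$mask = 268435456   # same style of value as the AccessMask shown above (GENERIC_ALL bit)
$ace  = [System.Security.AccessControl.CommonAce]::new(
            [System.Security.AccessControl.AceFlags]::None,
            [System.Security.AccessControl.AceQualifier]::AccessAllowed,
            $mask, $sid, $false, $null)   # $false = not a callback ACE, $null = no opaque data
$ace
~~~
Getting a hand-built ACE like this back into a DACL is the awkward part (`RawAcl` exposes `InsertAce()`, while the `DiscretionaryAcl` used here is designed around `AddAccess()`), which is one more reason the `SecurityDescriptor` route below is easier.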
The other option provided above, is working with a `SecurityDescriptor` object instead. Which is already provided for us in the `RawDescriptor` :)
~~~
$sddl.RawDescriptor.DiscretionaryAcl.AddAccess("Allow", "S-1-5-32-546", 268435456,"None","None")
$sddl.RawDescriptor.GetSddlForm([System.Security.AccessControl.AccessControlSections]::All)
O:LAG:BUD:(A;;CCDCLCSWRPWPRCWDWOGA;;;BU)(A;;GA;;;BG)
~~~
See [MS Learn - DiscretionaryAcl.AddAccess Method](https://learn.microsoft.com/en-us/dotnet/api/system.security.accesscontrol.discretionaryacl.addaccess?view=net-8.0)
The "only" thing missing from this answer for now is how to construct the access mask, which lurks in [MS Learn - ObjectAccessRule Class](https://learn.microsoft.com/en-us/dotnet/api/system.security.accesscontrol.objectaccessrule?view=net-8.0).
The mask could of course be copied from an already existing ACE.
I will get back on this if I learn how...
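In the meantime, a hedged sketch of how such a mask can be assembled from the well-known generic-rights bits (assumed values from WinNT.h; note that the 268435456 used in the `AddAccess()` call above is exactly the GENERIC_ALL bit):
~~~
# Hedged sketch - generic access rights bits (assumed WinNT.h values)
$GENERIC_ALL     = 0x10000000   # 268435456, the value used in AddAccess() above
$GENERIC_EXECUTE = 0x20000000
$GENERIC_WRITE   = 0x40000000
$GENERIC_READ    = 0x80000000   # shows as a negative Int32 in Windows PowerShell - still the same bit

# Individual rights are combined with -bor into a single mask...
$mask = $GENERIC_WRITE -bor $GENERIC_EXECUTE   # 0x60000000

# ...and the mask goes in as the third argument, exactly like the AddAccess() call above
$sddl.RawDescriptor.DiscretionaryAcl.AddAccess("Allow", "S-1-5-32-546", $mask, "None", "None")
~~~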
When constructing a `SDDL` for usage with `WINRM`, there is actually a graphical tool for that.
(To be continued) |
Is it possible to do some recursion(?) in TypeScript to reference the same Columns type but with a different generic type N instead of T? Please consider the example below.
```
type Column<T> = {
render: (item: T, rowIndex: number) => React.ReactNode;
};
```
**Generic type N here is only for example purposes.** I want to type this Container type to receive a prop `items`, which can be an array of N or a function that receives the current T type and returns an array of a new type N.
```
type ColumnsContainer<T> = {
items: ((item: T, rowIndex: number) => N[]) | N[];
columns: Columns<N>;
};
```
```
type ColumnDefinition<T> = ColumnsContainer<T> | Column<T>;
```
```
type Columns<T> = ColumnDefinition<T>[];
```
```
type GridProps<T> = {
items: T[];
columns: Columns<T>;
};
```
What I want to achieve is:
```
type Cargo = { locFrom: string; locTo: string; totalQty: number; models: Model[] };
type Model = { modelName: string; qty: number; price: number; alternatives: Alternative[] };
type Alternative = { manufacturer: string; name: string; year: number };
const items: Cargo[] = [
{
totalQty: 299,
locFrom: 'London',
locTo: 'Dublin',
models: [
{
modelName: 'Peugeot',
price: 15000,
qty: 299,
alternatives: [
{ manufacturer: 'VW', name: 'Volkswagen', year: 1990 },
{ manufacturer: 'Audi', name: 'Audi', year: 1998 },
],
},
],
},
];
const columns: GridProps<Cargo> = {
items,
columns: [
{ render: item => item.totalQty },
{ render: item => item.locFrom },
{ render: item => item.locTo },
{
items: item => item.models,
columns: [
{
render: item => item.modelName,
},
{ items: item => item.alternatives },
{ render: item => item.price },
{ render: item => item.qty },
],
},
],
};
``` |
SQL Server Data Model and Insert Performance |
Your output:
```lang-none
this is thread 1
this is thread 2
main exists
thread 2 exists
thread 1 exists
thread 1 exists
```
Before even getting to *"it prints "thread 1 exists" twice"*, notice that it prints after "main exists": that alone can lead to unpredictable results.
--
First, let's tidy up your code:
```cpp
#include <thread>
#include <iostream>
#include <future>
#include <syncstream>
void log(const char* str)
{
std::osyncstream ss(std::cout);
ss << str << std::endl;
}
void worker1(std::future<int> fut)
{
log("this is thread 1");
fut.get();
log("thread 1 exists");
}
void worker2(std::promise<int> prom)
{
log("this is thread 2");
prom.set_value(10);
log("thread 2 exits");
}
int main()
{
std::promise<int> prom;
std::future<int> fut = prom.get_future();
// Fire the 2 threads:
std::thread t1(worker1, std::move(fut));
std::thread t2(worker2, std::move(prom));
t1.join();
t2.join();
log("main exits");
}
```
Key points:
* CRITICAL: Replace the `while` loop and `detach()` with `join()` in the `main()` to ensure that the main thread waits for all child threads to finish before exiting.
* Trim the `#include` lines to include only what's necessary - For better practice.
* Remove unused variables - For better practice.
* Remove the unused `using namespace` directive - For better practice.
* In addition, I would also replace the `printf()` calls with `std::osyncstream`.
[Demo][1]
Now, the output is:
```lang-none
this is thread 1
this is thread 2
thread 2 exits
thread 1 exits
main exits
```
--
UPDATE:
Due to your comment (*"The detach used but not join here is the requirements from test. I cannot change that."*), here is a solution that works with `detach()`:
```cpp
#include <thread>
#include <iostream>
#include <future>
#include <syncstream>
#include <mutex>
#include <condition_variable>
std::mutex mtx{};
std::condition_variable cv{};
uint8_t workers_finished{ 0 }; // Counter for finished workers
void log(const char* str)
{
std::osyncstream ss(std::cout);
ss << str << std::endl;
}
void worker1(std::future<int> fut)
{
log("this is thread 1");
fut.get();
log("thread 1 exits");
std::lock_guard lock(mtx);
++workers_finished;
cv.notify_one(); // Signal main thread
}
void worker2(std::promise<int> prom)
{
log("this is thread 2");
prom.set_value(10);
log("thread 2 exits");
std::lock_guard lock(mtx);
++workers_finished;
cv.notify_one(); // Signal main thread
}
int main()
{
std::promise<int> prom;
std::future<int> fut = prom.get_future();
// Fire the 2 threads:
std::thread t1(worker1, std::move(fut));
std::thread t2(worker2, std::move(prom));
t1.detach();
t2.detach();
{
std::unique_lock lock(mtx);
while (workers_finished < 2) {
cv.wait(lock); // Wait until notified (or spurious wakeup)
}
}
log("main exits");
}
```
[Demo][1]
Now, the output is: (The same)
```lang-none
this is thread 1
this is thread 2
thread 2 exits
thread 1 exits
main exits
```
[1]: https://onlinegdb.com/UboKp_Bpf |
{"OriginalQuestionIds":[63808813],"Voters":[{"Id":16791505,"DisplayName":"Paolo"},{"Id":17562044,"DisplayName":"Sunderam Dubey"},{"Id":1431,"DisplayName":"Juha Syrjälä"}]} |
I am currently working on a Chrome extension designed to scrape PDF files from specific websites, modify them (splitting them into multiple files, removing sensitive information, etc.), and render them on the fly. For this I want to use pdf-lib for the PDF manipulation and pdfjs-dist for rendering. I use webpack to bundle my JS and all my dependencies, but I get this error when trying to load the extension on a website in Chrome:
```
chrome-extension://c…build/pdf.mjs:18148 Uncaught SyntaxError: Unexpected token 'export'
chrome-extension://c…pdf.sandbox.mjs:239 Uncaught SyntaxError: Unexpected token 'export'
chrome-extension://c…df.worker.mjs:57169 Uncaught SyntaxError: Unexpected token 'export'
```
I used Babel to transpile ES6 modules; here are my config files:
webpack.config.js:
```
const path = require('path');
module.exports = {
entry: './src/index.js',
output: {
filename: 'bundle.js',
path: path.resolve(__dirname, 'dist'),
},
resolve: {
extensions: ['.js', '.mjs', '.json'],
},
module: {
rules: [
{
test: /\.worker\.js$/,
use: { loader: 'worker-loader' },
},
{
test: /\.m?js$/,
exclude: /node_modules\/(?!(pdfjs-dist)\/).*/,
use: {
loader: 'babel-loader',
options: {
presets: ['@babel/preset-env'],
},
},
},
],
},
mode: 'development',
};
```
babel.config.json:
```
{
"presets": [
[
"@babel/preset-env",
{
"loose": true,
"modules": false
}
]
]
}
```
I have tried loading the modules directly inside the Chrome extension, inside an HTML file, but nothing seems to work for pdf.js... Thank you so much for your help.
Importing pdf.js in a chrome extension setting: "Uncaught SyntaxError: Unexpected token 'export'" |
|webpack|google-chrome-extension|pdfjs-dist| |
null |
I am working on a microservice application developed in C# ASP.NET Core targeting .NET 6.0 framework. During security checks on my application, the security team identified an issue regarding "Improper Error handling."
The recommendation from the security team is that the application should not expose any detailed error handling messages to users, as this could potentially reveal sensitive implementation details.
Currently, when a 400 response is encountered, the error message includes detailed internal information, such as stack traces and error codes, which should not be exposed to users. Here is an example of the error message:
{
"type": "https://tools.ietf.org/html/rfc7231#section-6.5.1",
"title": "One or more validation errors occurred.",
"status": 400,
"traceId": "some guid",
"errors": {
"$.Id": [
"The JSON value could not be converted to System.Int64. Path: $.Id |
LineNumber: 0 | BytePositionInLine: 41."
]
}
}
What I Need:
I need to modify the error message returned in the BadRequest response to a generic message such as "Some error occurred. Please contact the support team with log details."
What I Have Tried:
I attempted to configure the InvalidModelStateResponseFactory in the MVC services to create a custom BadRequest response. However, I encountered issues as the Errors property is not writable in the ValidationProblemDetails class.
services.AddMvc().ConfigureApiBehaviorOptions(options =>
{
options.InvalidModelStateResponseFactory = context =>
{
var problems = new CustomBadRequest(context);
return new BadRequestObjectResult(problems);
};
});
public class CustomBadRequest : ValidationProblemDetails
{
public CustomBadRequest(ActionContext context) : base(context.ModelState)
{
Detail = this.Detail;
Instance = this.Instance;
Status = 400;
Title = this.Title;
Type = this.Type;
Errors = "Unexpected Error Occurs";
}
}
I also tried creating a middleware to modify the response, but encountered difficulties as the new message was appended to the existing error message, rather than replacing it. Additionally, I faced an error stating: "System.InvalidOperationException: 'The response headers cannot be modified because the response has already started.'"
await _next(context);
if (context.Response.StatusCode == (int)HttpStatusCode.BadRequest)
{
byte[] newStringData = Encoding.UTF8.GetBytes("This is a new string message.");
await context.Response.Body.WriteAsync(newStringData, 0, newStringData.Length);
}
I have also tried using an attribute.
Seeking Solution:
I would appreciate any guidance or suggestions on how to properly modify the BadRequest error message in ASP.NET Core microservices.
Thank you for your assistance.
|
**Resolved: Docker build error "failed to solve: the Dockerfile cannot be empty"**
After further investigation, I realized that the issue was caused by not saving the changes made in my code editor (VS Code) before attempting to build the Docker image.
It turns out that the Docker build process requires the Dockerfile and any related files to be saved in order to be properly recognized. Once I saved the changes made to my Dockerfile and other related files in VS Code, the `docker build` command worked as expected without any errors.
I apologize for the confusion and appreciate the assistance provided. Hopefully, this solution helps others who might encounter a similar issue in the future.
|
I have the following problem:
I have the URL to a picture 'HTTP://WWW.ROLANDSCHWAIGER.AT/DURCHBLICK.JPG' saved in my database. I think you see the problem here: The URL is in uppercase. Now I want to display the picture in the SAP GUI, but for that, I have to convert it to lowercase.
I have the following code from a tutorial, but without the conversion:
*&---------------------------------------------------------------------*
*& Report ZDURCHBLICK_24035
*&---------------------------------------------------------------------*
*&
*&---------------------------------------------------------------------*
REPORT zdurchblick_24035.
TABLES: zproject_24035.
PARAMETERS pa_proj TYPE zproject_24035-projekt OBLIGATORY.
DATA gs_project TYPE zproject_24035.
*Controls
DATA: go_container TYPE REF TO cl_gui_custom_container.
DATA: go_picture TYPE REF TO cl_gui_picture.
START-OF-SELECTION.
WRITE: / 'Durchblick 3.0'.
SELECT SINGLE * FROM zproject_24035 INTO @gs_project WHERE projekt = @pa_proj.
WRITE gs_project.
IF sy-subrc = 0.
WRITE 'Wert im System gefunden'.
ELSE.
WRITE 'Kein Wert gefunden'.
ENDIF.
WRITE : /'Es wurden', sy-dbcnt, 'Werte gefunden'.
AT LINE-SELECTION.
zproject_24035 = gs_project.
CALL SCREEN 9100.
*&---------------------------------------------------------------------*
*& Module CREATE_CONROLS OUTPUT
*&---------------------------------------------------------------------*
*&
*&---------------------------------------------------------------------*
MODULE create_conrols OUTPUT.
* SET PF-STATUS 'xxxxxxxx'.
* SET TITLEBAR 'xxx'.
IF go_container IS NOT BOUND.
CREATE OBJECT go_container
EXPORTING
container_name = 'BILD'.
CREATE OBJECT go_picture
EXPORTING
parent = go_container.
CALL METHOD go_picture->load_picture_from_url
EXPORTING
url = gs_project-bild.
ENDIF.
ENDMODULE.
|
{"Voters":[{"Id":340478,"DisplayName":"6006604"},{"Id":466862,"DisplayName":"Mark Rotteveel"},{"Id":1431,"DisplayName":"Juha Syrjälä"}]} |
Hi, I've found a Pine Script v5 indicator that loops through a list of CSV price levels input by the user and plots horizontal lines. Is it possible to set an alert at the point of plotting each level in the loop, so that I'm alerted when price crosses each level? TIA
Pine Script v5 - in my loop that draws a horizontal line, I have tried various syntax options (see below), but I'm not sure the logic is right, as I want an alert to sit at each price level. z_lvl1 is the price I want the alert at. Each loop iteration draws a line at that iteration's price, and I just want an alert set on the line so it triggers when price reaches it...
is_crossed = ta.cross(close, z_lvl1) //str.format("{0,number,####.#####}",str.tonumber(lvl)))
Trigger an alert when the condition is met
alertcondition(is_crossed, "Price crossed ")
alertcondition(ta.cross(close, z_lvl1), title="Price crossed", message="Price crossed")
alertcondition(condition=priceAlert, message="Price Crossed")
alert(ta.cross(close, z_lvl1), title="Buy Alert", alert_type="Price", frequency="once_per_bar")
alert(close = z_lvl1, title="Price Alert", alert_type="Price", frequency="once_per_bar") |