|javascript|html|reactjs|
[In this demo](https://stackblitz.com/edit/kxgmi3?file=src%2Fstyles.scss) I'm trying to style the `mat-expansion-panel-header`'s `::before` pseudo-element like this (style rules added to styles.scss): ``` mat-expansion-panel-header { position: relative; } mat-expansion-panel-header::before { content: ''; position: absolute; top: 0px; left: 0px; bottom: 0px; right: 0px; z-index: 1000; background-color: black; } ``` If the above rule took effect, the `mat-expansion-panel-header` items in the demo would just be black. However, the rule does not take effect, and when I look in the developer tools I don't see the `::before` element. Any ideas on how to fix this?
Styling the ::before element on a mat-expansion-panel-header?
|javascript|html|css|angular|angular-material|
If you use [JSON_VALUE][1] (available since MySQL 8.0.21), this works as you expect: ``` CREATE TABLE participants ( id INT AUTO_INCREMENT PRIMARY KEY, puuid CHAR(36) NOT NULL, data JSON, project INT AS (JSON_VALUE(data, '$.project')), schoolCode VARCHAR(255) AS (JSON_VALUE(data, '$.schoolCode')), UNIQUE KEY (puuid) ); ``` This insert then works and gives the expected result: `insert into participants(puuid,data) values('aasd1','{"project":null}');` See https://stackoverflow.com/a/75743288/131391 [1]: https://dev.mysql.com/doc/refman/8.0/en/json-search-functions.html#function_json-value
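The key point is that `JSON_VALUE` yields SQL `NULL` for a JSON `null`, so the generated column behaves like an ordinary nullable column. As a quick sanity check against the table above (a sketch; `'aasd1'` is just the sample row inserted earlier):

```
-- project comes back as SQL NULL here, not as the JSON literal null
SELECT puuid, project, project IS NULL AS is_sql_null
FROM participants
WHERE puuid = 'aasd1';
```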
Regardless of what px value I choose, nothing happens. The only time anything changes is when I adjust either the paddingLeft or the paddingRight parameter, but even then something peculiar happens: if I try to set both paddings, nothing happens. So the trick only works when setting one kind of padding.

```
<Button
    variant='secondary'
    sx={{
        fontSize: '0.9rem',
        fontWeight: 500,
        m: '1rem',
        px: '0.1rem', // nothing happens
        outline: 'none',
        '&:focus': {
            outline: 'none',
        },
    }}
>
    enable notifications
</Button>
```
Material UI button's padding can't be adjusted
|reactjs|material-ui|jsx|styled-jsx|
# Context

I have a server with a bunch of routes which may take a JSON body as input and may answer with a JSON response as output, and I want to quickly test these routes and various combinations of them. The commands I am using manually look like:

```bash
curl -H "Content-Type:application/json" -d @- http://$HOST:$PORT/route1 | jq
```

I want to be able to compose these routes, piping them. Currently, I use a small script ./jqr which looks like:

```bash
case $1 in
route1)
jq 'filter_input_route1' | curl -H "Content-Type:application/json" -d @- http://$HOST:$PORT/route1 | jq
;;
# ...
```

And it can be used like this:

```bash
./jqr route1 < share_example_a.json | jq 'some_manual_processing'
```

Which is better, but far from perfect.

# Problem

Ideally, I would like to extract the curl invocation into a jq function rather than a shell program, so as to be able to do

```bash
jq 'route1|some_manual_processing' < share_example_a.json
```

It may not look like much, but in addition to saving alternating calls of jq and ./jqr, piping and copying is a lot more powerful in jq than in bash. Say I have routes `add` and `disable` with no output and a route `list` with no input; I could do things like:

```bash
echo '{"name":"toto","enabled":true}' | jq 'route_add,route_list|.[]|select(.name=="toto")'
```

```bash
jq 'route_list|.[]|select(.name=="toto")|route_disable'
```

where `def filter_input_route_disable : {"id":.id}`.

# What I tried and looked up

Again, because piping and copying is much more convenient in jq than in bash, making a wrapper script around jq to augment its capabilities seems highly impractical and time-consuming. jq's manual, while otherwise very thorough, is very sparse when it comes to custom functions and modules. I did manage to find some other sources to learn how to write them, but during my testing in `jq-1.6`, I didn't manage to shell out from a jq custom function. It seems to be a popular feature request more than a decade old, approved by stedolan https://github.com/jqlang/jq/issues/147 which still wasn't implemented somewhat recently https://github.com/jqlang/jq/issues/1101 . I assume there are privilege issues involved: https://github.com/jqlang/jq/pull/1005 . Additionally, there might be issues with order of operations and (a)synchronicity; maybe the jq language makes assumptions not compatible with what I want to do, and maybe it would not work well with awaiting results from a network request, I really have no clue. One possible solution might be to write a jq function in C rather than in jq, like some built-in functions are, but I have no idea where to start. Am I even going about my issue the right way? I know of Postman and such; I'm not much of a clickety kid myself, and I'd rather use my ./jqr solution, but I'm open to using some other curl frontend with good keyboard control and decent POSIX compatibility, preferably TUI.

# Subsequent questions

Assuming a crude combination of curl and jq is the right way, is there anything I am missing? Like a jq version or fork capable of shelling out? Assuming I didn't miss anything, is it worth attempting to write jq custom functions in C? Would it take more than a handful of easy lines per route, and less than an hour of constant overhead, learning and setting things up?
EDIT: I just double checked, and I'm actually wrong. Since the argument is neither a floating-point number nor an integer, left alignment should already be the default. This had to be a compiler bug, since simply changing the compiler version gave the [right output](https://godbolt.org/z/x1zaehbEq). `{0:30}` means that argument `0` is formatted with a minimum field width of `30`. For the separator (`|`) to be aligned you can use `{0:<30}`: ```c++20 std::format("|{0:<30}|{1:30.5f}|\n", "p", std::numbers::pi) ``` This aligns argument `0` to the left and adds padding so that the field width is `30`. ([cppreference](https://en.cppreference.com/w/cpp/utility/format/spec#Fill_and_align)) Live on [Compiler Explorer](https://godbolt.org/z/Y9Kjx8zTo).
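For completeness, the standard fill-and-align syntax also accepts a fill character before the alignment specifier; the snippet below is just an illustrative sketch of that:

```c++20
#include <format>
#include <iostream>

int main() {
    // '*' is the fill character, '<' left-aligns, 30 is the minimum field width
    std::cout << std::format("|{0:*<30}|\n", "p");
    // prints "p" padded with 29 '*' characters between the '|' separators
}
```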
I'm encountering an issue while trying to build a Docker image for my Python application. Whenever I run the `docker build` command, I'm getting the following error: *ERROR: failed to solve: the Dockerfile cannot be empty* **Here's my Dockerfile:** ```Dockerfile # Use an official Python runtime as the base image FROM python:latest # Set the working directory in the container WORKDIR /app # Copy the current directory contents into the container at /app COPY . . # Install any needed dependencies specified in requirements.txt # RUN pip install --no-cache-dir -r requirements.txt # Make port 80 available to the world outside this container EXPOSE 80 # Define environment variable ENV NAME World # Run name.py when the container launches CMD ["python", "name.py"] ``` And here's the content of my `name.py` file: `print("Hello World!")` I'm confused as to why Docker is complaining that the Dockerfile is empty when it contains instructions. Any ideas on what might be causing this issue? **Additional Information:** Docker version: Docker version 25.0.3 Operating system: Windows
null
I am scraping messages about power plant unavailability, converting them into timeseries, and storing them in a SQL Server database. My current structure is the following.

> **messages**: (publicationDate datetime, messageSeriesID nvarchar, version int, messageId identity)
>
> The primary key is on (messageSeriesId, version)
>
> **units**: (messageId int, area nvarchar, fueltype nvarchar, unitname nvarchar, tsId identity)
>
> The primary key is on tsId. There is a foreign key relation on tsId between this table and messages. The main reason for this table is that one message can contain information about multiple power plants.
>
> **timeseries** (tsId int, delivery datetime, value decimal)
>
> I have a partition scheme based on delivery; each partition contains a month of data. The primary key is on (tsId, delivery) and it is partitioned along the monthly partition scheme. There is a foreign key on tsId to tsId in the units table.

The messages and units tables contain around a million records each. The timeseries table contains about 500 million records. Now, every time I insert a new batch of data, one row goes into the messages table, between one and a few (4) go into the units table, and a lot (up to 100,000s) go into the timeseries table. The problem I'm encountering is that inserts into the timeseries table are too slow (100,000 records take up to a minute). I already made some improvements by setting the fill factor to 80 instead of 100 when rebuilding the index there. However, it's still too slow. And I am a bit puzzled, because the way I understand it is this: every partition contains all rows with delivery in that month, but the primary key is on tsId first and delivery second. So to insert data in this partition, it should simply be placed at the end of the partition (since tsId is the identity column and thus increases by one every transaction). The timeseries that I am trying to insert spans 3 years and therefore 36 partitions. If I, however, create a timeseries of the same length that falls within a single partition, the insert is notably faster (around 1.5 seconds). Likewise, if I create an empty timeseries table (timeseries_test) with the same structure as the original one, then inserts are also very fast (also for inserting data that spans 3 years). However, querying is done based mainly on delivery, so I don't think partitioning by tsId is a good idea. If anyone has suggestions on the structure or methods to improve querying, it would be greatly appreciated.
null
I understand what the function does, but I don't really know what e.preventDefault() does. I know it prevents the default behaviour of the keydown event, but I don't really know which behaviour that is; I haven't been able to find it. Reference article: https://hidde.blog/using-javascript-to-trap-focus-in-an-element/ **Trap focus function**

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-js -->

    trapFocus(document.getElementById('dialog-backdrop'));

    function trapFocus(element) {
      const focusableLms = element.querySelectorAll('a[href]:not([disabled]),button:not([disabled]),textarea:not([disabled]), input[type="text"]:not([disabled]), input[type="radio"]:not([disabled]),input[type="checkbox"]:not([disabled]), select:not([disabled])');
      const firstFocusableLm = focusableLms[0];
      const lastFocusableLm = focusableLms[focusableLms.length - 1];

      element.addEventListener('keydown', (e) => {
        const isTabPressed = (e.key === 'Tab');

        if (!isTabPressed) {
          return;
        }

        if (e.shiftKey) /* shift + tab */ {
          if (document.activeElement === firstFocusableLm) {
            lastFocusableLm.focus();
            e.preventDefault();
          }
        } else /* tab */ {
          if (document.activeElement === lastFocusableLm) {
            firstFocusableLm.focus();
            e.preventDefault();
          }
        }
      });
    }

<!-- language: lang-html -->

    <div id="dialog-backdrop" class="dialog-backdrop">
      <div class="alert-dialog" id="alert-dialog" role="alertdialog" aria-label="Confirm discard changes." aria-describedby="alert-dialog__desc">
        <img src="img/garbage-collector-2.jpg" alt="" />
        <button aria-label="Close dialog." type="button" class="alert-dialog__cancel-btn" id="alert-dialog__cancel-btn">
          <span aria-hidden="true" class="material-symbols-outlined">cancel</span>
        </button>
        <p class="alert-dialog__desc" id="alert-dialog__desc">Are you sure you want to discard all changes made in form?</p>
        <div>
          <button id="alert-dialog__confirmation-btn" type="button">Yes</button>
          <button id="alert-dialog__discard-btn" type="button">No</button>
        </div>
      </div>
    </div>

<!-- end snippet -->

If I remove it, it only traps focus for two of the three button elements in the dialog. It completely ignores the close button and only cycles between the Yes and No buttons.
```
if (tid < n) {
    gain = in_degree[neigh]*out_degree[tid] + out_degree[neigh]*in_degree[tid]/total_weight;
    // here, say node 0 moves to node 2
    atomicExch(&node_community[0], node_community[2]); // because node 0 is in node 2 now
    atomicAdd(&in_degree[2], in_degree[0]);            // because node 0 is in node 2 now
    atomicAdd(&out_degree[2], out_degree[0]);          // because node 0 is in node 2 now
}
```

This is the process. The problem: during the calculation of gain, all threads should see the updated value of node 2 (the values of 2 plus the values of 0), but threads only see the previous value of 2. How do I solve that? Here is the output:

    node is: 0
    node is: 1
    node is: 2
    node is: 3
    node is: 4
    node is: 5

    // HERE IS THE PROBLEM (UPDATED VALUES ARE NOT VISIBLE TO THE REST OF THE THREADS THAT EXECUTED BEFORE THE ATOMIC WRITE)
    updated_node is: 0         // this should be 2
    updated_values are: 48,37  // this should be (48+15 (values of 2)) + (37+12 (values of 0))

I have tried using __syncthreads(), __threadfence(), and shared memory for reading and writing values. Can anyone tell me what the issue could be?
SQL Server Data Model and Insert Performance
|sql-server|partitioning|
The question seems incomplete, as the code is not present. I'm adding the relevant details as per my understanding. A gauge metric works on the latest value evaluated by the method/field given to it (it can go up or down, or remain the same).

> w.r.t. the question: if the field's value is still 'x' for pod 1, then it's obvious that it will emit 'x'. If different pods somehow share the value via some external source, it should emit 'y'.

Other than that, there are different kinds of metrics, for example COUNTERS, which can have a CountingMode of CUMULATIVE or STEP. CUMULATIVE -> keeps aggregating the count. STEP -> resets the counter metric to 0 on a fixed interval. REF: [MICROMETER][1] [CountingMode][2] [1]: https://www.baeldung.com/micrometer/ [2]: https://www.javadoc.io/doc/io.micrometer/micrometer-core/1.0.6/io/micrometer/core/instrument/simple/CountingMode.html
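For illustration, here is a minimal sketch of both meter types using Micrometer's in-memory SimpleMeterRegistry; the meter names and the queueSize state are invented for the example:

```
import io.micrometer.core.instrument.Clock;
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.simple.CountingMode;
import io.micrometer.core.instrument.simple.SimpleConfig;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;
import java.util.concurrent.atomic.AtomicInteger;

public class MeterSketch {
    public static void main(String[] args) {
        // STEP mode: counters report per-interval counts instead of a running total
        SimpleConfig stepConfig = new SimpleConfig() {
            @Override public String get(String key) { return null; } // use defaults
            @Override public CountingMode mode() { return CountingMode.STEP; }
        };
        SimpleMeterRegistry registry = new SimpleMeterRegistry(stepConfig, Clock.SYSTEM);

        // Gauge: always reports the latest value of the tracked state
        AtomicInteger queueSize = new AtomicInteger(0);
        Gauge.builder("queue.size", queueSize, AtomicInteger::get).register(registry);
        queueSize.set(42); // the gauge now reads 42, regardless of what it read before

        // Counter: in STEP mode, count() reports the last completed interval,
        // so it may read 0.0 until the current step rolls over
        Counter requests = registry.counter("requests.total");
        requests.increment();

        System.out.println(registry.get("queue.size").gauge().value()); // 42.0
        System.out.println(requests.count());
    }
}
```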
I have folder called preprocessed_data_png where the npy file of the images and annotations are. When I try to train the model I get the below error. RuntimeError: Given groups=1, weight of size [32, 3, 3, 3], expected input[32, 416, 416, 3] to have 3 channels, but got 416 channels instead Convert DICOM to PNG ``` def preprocess_data(scan_dir, annotation_dir, output_dir): dict = getUID_path(scan_dir) annotation_paths = [os.path.join(annotation_dir, f) for f in os.listdir(annotation_dir) if f.endswith('.xml')] dicom_paths = [] dicom_names = [] for full_path in annotation_paths: base_path = os.path.basename(full_path) dcm_path, dcm_name = dict[base_path[:-4]] _, ext = os.path.splitext(dcm_name) if ext in ['.dcm']: dicom_names.append(dcm_name) dicom_paths.append(dcm_path) for dcm_path, dcm_name, annotation_path in zip(dicom_paths,dicom_names,annotation_paths): im = pydicom.dcmread(dcm_path) im = im.pixel_array.astype(float) rescaled_image = (np.maximum(im,0)/im.max())*255 final_image = np.uint8(rescaled_image) final_image = Image.fromarray(final_image) name_without_extension, ext = os.path.splitext(dcm_name) if ext.lower() == '.dcm': dcm_name = name_without_extension # final_image.save(output_dir,dcm_name+'.png') # Save the image in the specified folder location with the correct format final_image.save(os.path.join(output_dir, dcm_name+'.png'), format='PNG') # Copy annotation file to output directory with the same name as DICOM image copyfile(annotation_path, os.path.join(output_dir, dcm_name+'.xml')) # Preprocess data scan_dir = "/Dataset/Scans/Lung_Dx-B0001" annotation_dir = "/Dataset/Annotations/B0001" # Ensure the output directory exists, create it if not output_dir = "/Dataset/preprocessed_png/" if not os.path.exists(output_dir): os.makedirs(output_dir) preprocess_data(scan_dir, annotation_dir, output_dir) ``` Preprocess Code ``` import os import cv2 import numpy as np from xml.etree import ElementTree as ET def read_png_file(png_path): # Read PNG image using OpenCV image = cv2.imread(png_path, cv2.IMREAD_GRAYSCALE) return image def parse_annotation(annotation_file): tree = ET.parse(annotation_file) root = tree.getroot() bounding_boxes = [] for obj in root.findall('object'): xmin = int(obj.find('bndbox').find('xmin').text) ymin = int(obj.find('bndbox').find('ymin').text) xmax = int(obj.find('bndbox').find('xmax').text) ymax = int(obj.find('bndbox').find('ymax').text) bounding_boxes.append([xmin, ymin, xmax, ymax]) return np.array(bounding_boxes) def resize_and_normalize(scan, annotation): # Resize scan to 416x416 and normalize pixel values to [0, 1] resized_scan = cv2.resize(scan, (416, 416)) normalized_scan = resized_scan / 255.0 # Normalize bounding box coordinates normalized_annotation = annotation / np.array([scan.shape[1], scan.shape[0], scan.shape[1], scan.shape[0]]) return normalized_scan, normalized_annotation def convert_to_yolo_labels(annotations): yolo_labels = [] for annotation in annotations: x_center = (annotation[0] + annotation[2]) / 2.0 y_center = (annotation[1] + annotation[3]) / 2.0 width = annotation[2] - annotation[0] height = annotation[3] - annotation[1] yolo_label = [0, x_center, y_center, width, height] yolo_labels.append(yolo_label) return np.array(yolo_labels) def preprocess_data(scan_dir, output_dir): for filename in os.listdir(scan_dir): if filename.endswith('.png'): base_name = os.path.splitext(filename)[0] png_path = os.path.join(scan_dir, filename) annotation_path = os.path.join(scan_dir, base_name + '.xml') # Read PNG scan scan = read_png_file(png_path) # 
Parse annotation XML file annotations = parse_annotation(annotation_path) # Resize and normalize scan and bounding boxes resized_scan, resized_annotations = resize_and_normalize(scan, annotations) # Convert to YOLO-style labels yolo_labels = convert_to_yolo_labels(resized_annotations) # Save preprocessed data np.save(os.path.join(output_dir, f"{base_name}_scan.npy"), resized_scan) np.save(os.path.join(output_dir, f"{base_name}_labels.npy"), yolo_labels) # Set directories scan_dir = "/Dataset/preprocessed_png" // folder where png and xml files are output_dir = "/Dataset/preprocessed_data_png" # Ensure output directory exists os.makedirs(output_dir, exist_ok=True) # Preprocess data preprocess_data(scan_dir, output_dir) ``` Model Code ``` class YOLOv7(nn.Module): def __init__(self, num_classes): super(YOLOv7, self).__init__() self.num_classes = num_classes # Define convolutional layers for feature extraction self.conv1 = nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3, stride=1, padding=1) self.conv2 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, stride=1, padding=1) self.conv3 = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=1, padding=1) # Define fully connected layers for classification and bounding box regression self.fc1 = nn.Linear(128 * 64 * 64, 1024) self.fc2 = nn.Linear(1024, 256) self.fc3 = nn.Linear(256, num_classes + 5) # 5 for bounding box coordinates def forward(self, x): # Feature extraction x = F.relu(self.conv1(x)) x = F.max_pool2d(x, kernel_size=2, stride=2) x = F.relu(self.conv2(x)) x = F.max_pool2d(x, kernel_size=2, stride=2) x = F.relu(self.conv3(x)) x = F.max_pool2d(x, kernel_size=2, stride=2) # Flatten the feature map x = x.view(-1, 128 * 64 * 64) # Fully connected layers for classification and bounding box regression x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) x = self.fc3(x) return x ``` main.py ``` # Define the dataset class to load preprocessed data class CustomDataset(Dataset): def __init__(self, data_dir): self.data_dir = data_dir self.image_files = [f for f in os.listdir(data_dir) if f.endswith('_scan.npy')] self.annotation_files = [f for f in os.listdir(data_dir) if f.endswith('_labels.npy')] def __len__(self): return len(self.image_files) def __getitem__(self, idx): image_file = os.path.join(self.data_dir, self.image_files[idx]) annotation_file = os.path.join(self.data_dir, self.annotation_files[idx]) image = np.load(image_file) annotation = np.load(annotation_file) # Convert grayscale image to 3 channels (if needed) if len(image.shape) == 2: image = np.stack((image,) * 3, axis=-1) print(image.shape) # Convert to tensor image = torch.from_numpy(image).float() annotation = torch.from_numpy(annotation).float() return image, annotation # Define training parameters batch_size = 32 num_classes = 1 lr = 0.001 num_epochs = 10 # Create dataset and dataloader train_dataset = CustomDataset("/content/drive/MyDrive/Dataset/preprocessed_data_png/") train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True) # Initialize model model = YOLOv7(num_classes) # Define loss function and optimizer criterion = nn.MSELoss() optimizer = torch.optim.Adam(model.parameters(), lr=lr) # Training loop for epoch in range(num_epochs): model.train() total_loss = 0.0 for images, targets in train_loader: # Forward pass outputs = model(images) # Compute loss loss = criterion(outputs, targets) # Backward pass and optimization optimizer.zero_grad() loss.backward() optimizer.step() total_loss += loss.item() print(f"Epoch 
[{epoch+1}/{num_epochs}], Loss: {total_loss/len(train_loader):.4f}")

    # Evaluation
    model.eval()
    eval_loss = 0.0
    eval_accuracy = 0.0
    y_true = []
    y_pred = []
    with torch.no_grad():
        for images, targets in train_loader:
            outputs = model(images)
            eval_loss += criterion(outputs, targets).item()

            # Convert outputs and targets to numpy arrays
            outputs_np = outputs.detach().cpu().numpy().round()
            targets_np = targets.detach().cpu().numpy().round()

            # Flatten arrays
            outputs_flat = outputs_np.flatten()
            targets_flat = targets_np.flatten()

            # Calculate accuracy
            eval_accuracy += calculate_accuracy(outputs_flat, targets_flat)

            # Collect true and predicted labels for F1 score calculation
            y_true.extend(targets_flat)
            y_pred.extend(outputs_flat)

    eval_loss /= len(train_loader)
    eval_accuracy /= len(train_loader)
    f1 = f1_score(y_true, y_pred)
    print(f"Epoch [{epoch+1}/{num_epochs}], Evaluation Loss: {eval_loss:.4f}, Accuracy: {eval_accuracy:.4f}, F1 Score: {f1:.4f}")
```

I basically have DICOM files and XML annotations. When I tried with the DICOM files I got the same error. I then converted the files to PNG and created the .npy files in the main dataset folder, but I am still getting the same error. I am trying to run this on YOLOv7.
RuntimeError: Given groups=1, weight of size [64, 1, 3, 3], expected input[1, 3, 416, 416] to have 1 channels, but got 3 channels instead
|channel|yolo|dicom|
null
The following C code tries to synchronize threads with a mutual-exclusion lock: each thread adds one to the variable g_Count, so eventually g_Count should equal THREAD_NUM. But for some reason, g_Count is always less than the total number of threads. Why?

```C
#include <windows.h>
#include <stdio.h>

const unsigned int THREAD_NUM = 100;
unsigned int g_Count = 0;
HANDLE g_Mutex;

DWORD WINAPI ThreadFunc(LPVOID);

int main()
{
    HANDLE hThread[THREAD_NUM];
    g_Mutex = CreateMutex(NULL, FALSE, NULL);
    for (int i = 0; i < THREAD_NUM; i++)
    {
        hThread[i] = CreateThread(NULL, 0, ThreadFunc, &i, 0, NULL);
    }
    WaitForMultipleObjects(THREAD_NUM, hThread, TRUE, INFINITE);
    for(int i=0; i < THREAD_NUM; i++ )
        CloseHandle(hThread[i]);
    CloseHandle(g_Mutex);
    printf("%d", g_Count);
    return 0;
}

DWORD WINAPI ThreadFunc(LPVOID p)
{
    DWORD result;
    result = WaitForSingleObject(g_Mutex, INFINITE);
    if (result == WAIT_OBJECT_0)
    {
        g_Count++;
        printf("Thread %d plus 1 to g_Count.\n", GetCurrentThreadId());
        ReleaseMutex(g_Mutex);
    }
    return 0;
}
```

It outputs something like:

    Thread 27084 plus 1 to g_Count.
    Thread 18916 plus 1 to g_Count.
    Thread 12236 plus 1 to g_Count.
    3
Windows multithreading with CreateMutex
|c|windows|multithreading|
null
In recent Emacs versions (24), Semantic is able to do this. 0. If necessary, activate Semantic mode with <kbd>M-x semantic-mode RET</kbd>. 1. Bring up the Symref buffer with <kbd>C-c , g</kbd>. 2. Press <kbd>C-c C-e</kbd> to open all references. 3. Rename with <kbd>R</kbd>. 4. Save all the edited buffers with <kbd>C-x s !</kbd>
For completeness, here is another technique equivalent to `[0..]` for generating an infinite stream of natural numbers: naturals = 0 : allthefollowingnat naturals where allthefollowingnat (current : successors) = immediateSuccessor : allthefollowingnat successors where immediateSuccessor=current+1 While this technique is arguably overkill for generating a stream of natural numbers, it follows a template that is useful for defining various streams where [the next values depend on previous ones][1]. For instance, here is a [Fibonacci stream][2], defined by using the [as-pattern][3] (`@`): fibstream = 0 : 1 : allthefollowingfib fibstream where allthefollowingfib (previous : values@(current : _)) = next : allthefollowingfib values where next = current+previous [1]: https://en.wikipedia.org/wiki/Recurrence_relation [2]: https://wiki.haskell.org/index.php?title=The_Fibonacci_sequence&oldid=65208#With_direct_self-reference [3]: https://stackoverflow.com/a/30326349/19661910
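As a quick sanity check, both streams can be sampled lazily from GHCi (the expected results are shown as comments):

    -- after loading the definitions above:
    take 10 naturals   -- [0,1,2,3,4,5,6,7,8,9]
    take 10 fibstream  -- [0,1,1,2,3,5,8,13,21,34]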
I'm encountering an issue in my SwiftUI application where the UsernameView is presented again, even for users who have completed the onboarding process, specifically after they force-close the app and reopen it. Otherwise it is not presented.

```
@MainActor
final class RootViewModel: ObservableObject {
    @Published private(set) var userUsername: [UserUsername] = []
    @Published var hasUsername: Bool = false

    func getUsername() async throws {
        guard let authUser = try? AuthenticationManager.shared.getAuthenticatedUser() else { return }
        _ = try await UserManager.shared.getUsername(userId: authUser.uid)
        // ... process username ...
        hasUsername = true // Set flag after successful retrieval
    }
}

struct RootView: View {
    @StateObject private var viewModel = RootViewModel()
    @State private var showSignInView: Bool = false

    var body: some View {
        ZStack {
            if showSignInView {
                NavigationStack {
                    AuthenticationView(showSignInView: $showSignInView)
                }
            } else if !viewModel.hasUsername {
                UsernameView(hasUsername: $viewModel.hasUsername)
            } else {
                TabbarView(showSignInView: $showSignInView)
            }
        }
        .onAppear {
            let authUser = try? AuthenticationManager.shared.getAuthenticatedUser()
            self.showSignInView = authUser == nil ? true : false
            Task {
                // ... sign-in check and getUserData ...
                if let _ = authUser {
                    DispatchQueue.main.async {
                        showSignInView = false // Dismiss AuthenticationView
                    }
                }
            }
        }
    }
}
```
SwiftUI: UsernameView unexpectedly reappearing after force closing and reopening app
|swiftui|
{"OriginalQuestionIds":[63999871],"Voters":[{"Id":209103,"DisplayName":"Frank van Puffelen","BindingReason":{"GoldTagBadge":"google-cloud-firestore"}}]}
For the old JDBC jar artifacts: To [**install**] 3rd party JARs to the [**local**] repo:

```
# `mvn install:install-file` to install a custom file `install-file` [artifact] into the local repo
# For more info about (maven-install-plugin) parameters:
# https://maven.apache.org/plugins/maven-install-plugin/install-file-mojo.html
# https://maven.apache.org/guides/mini/guide-3rd-party-jars-local.html
# [install > local repo]

# Example: com.microsoft.sqlserver:sqljdbc4:pom:3.0
groupId=com.microsoft.sqlserver
artifactId=sqljdbc4
version=3.0
packaging=jar
file=/...path_to_jar.../sqljdbc4-3.0.jar
pomFile=/...path_to_pom.../sqljdbc4-3.0.pom
localRepositoryPath=/Users/~/.m2/repository

mvn install:install-file \
    -DgroupId=$groupId \
    -DartifactId=$artifactId \
    -Dversion=$version \
    -Dpackaging=$packaging \
    -Dfile=$file \
    -DpomFile=$pomFile \
    -DlocalRepositoryPath=$localRepositoryPath
```

To [**deploy**] 3rd party JARs to a [**remote**] repo:

```
# `mvn deploy:deploy-file` to deploy a custom file `deploy-file` [artifact] to the remote repo
# For more info about (maven-deploy-plugin) parameters:
# https://maven.apache.org/plugins/maven-deploy-plugin/deploy-file-mojo.html
# https://maven.apache.org/guides/mini/guide-3rd-party-jars-remote.html
# [deploy > remote repo]

# Example: com.microsoft.sqlserver:sqljdbc4:pom:3.0
groupId=com.microsoft.sqlserver
artifactId=sqljdbc4
version=3.0
packaging=jar
file=/...path_to_jar.../sqljdbc4-3.0.jar
pomFile=/...path_to_pom.../sqljdbc4-3.0.pom
createChecksum=true
repositoryId=your_company-3rdparty-repo
url=https://nexus.your_company.com/content/repositories/thirdparty

mvn deploy:deploy-file \
    -DgroupId=$groupId \
    -DartifactId=$artifactId \
    -Dversion=$version \
    -Dpackaging=$packaging \
    -Dfile=$file \
    -DpomFile=$pomFile \
    -DcreateChecksum=$createChecksum \
    -DrepositoryId=$repositoryId \
    -Durl=$url
```
In 2024, this is pretty simple if you're working with `PreviewView` with CameraX. You can directly [get][1] a Bitmap from the `PreviewView` instance. ``` val previewView = findViewById<PreviewView>(R.id.previewView) val imageBitmap = previewView?.bitmap ``` [1]: https://developer.android.com/reference/androidx/camera/view/PreviewView#getBitmap()
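One caveat: `getBitmap()` returns null until the preview stream has actually started, so it is worth guarding for that. A small sketch (`handleBitmap` is a made-up handler, not part of CameraX):

```
previewView?.bitmap?.let { bitmap ->
    handleBitmap(bitmap) // hypothetical: process or save the snapshot
} ?: Log.w("Preview", "Preview not streaming yet, no bitmap available")
```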
I fixed the issue myself by: - removing the temporary var seq and using a variable that is created with the object itself: `attack_animation = noone;` - only assigning attack_animation a value when the conditions are met: `attack_animation = layer_sequence_create("Instances", x, y, sAttackLeft);` - destroying the attack animation as soon as its time of 12 frames ends, using `layer_sequence_destroy(attack_animation);` A consolidated sketch of these steps follows below.
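Putting those steps together, a minimal GML sketch could look like this (the event placement and the end-of-sequence check are assumptions about where this code runs; the names come from the answer above):

```
// Create event: no animation exists yet
attack_animation = noone;

// When the attack conditions are met:
attack_animation = layer_sequence_create("Instances", x, y, sAttackLeft);

// When the 12-frame sequence has finished:
if (attack_animation != noone) {
    layer_sequence_destroy(attack_animation);
    attack_animation = noone; // reset so the next attack can create a fresh sequence
}
```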
I would use [tag:geopandas] to read the [KML][1] file, then make a [`Map`][2] with an initial view that is based on the [`centroid`][3] of the unified polygon(s) : ``` import fiona import geopandas as gpd from ipyleaflet import GeoData, Map, basemaps fiona.drvsupport.supported_drivers["LIBKML"] = "rw" gdf = gpd.read_file("file.kml") CENTER = ( gdf.dissolve() # .to_crs("EPSG:xxxx") ? .centroid.get_coordinates() .stack()[::-1].tolist() ) m = Map(center=CENTER, basemap=basemaps.Esri.WorldStreetMap, zoom=15) gdata = GeoData( geo_dataframe=gdf, style={"color": "black", "fillColor": "red", "fillOpacity": 0.8}, ) m.add(gdata) ``` Output (`m`): [![enter image description here][4]][4] [1]: https://en.wikipedia.org/wiki/Keyhole_Markup_Language [2]: https://ipyleaflet.readthedocs.io/en/latest/map_and_basemaps/map.html#ipyleaflet.Map [3]: https://geopandas.org/en/stable/docs/reference/api/geopandas.GeoSeries.centroid.html [4]: https://i.stack.imgur.com/OXkAA.png
As for the IP: it seems to be a public IP address, and right now you can only connect via localhost. So you need to check that the firewall has an inbound rule opening TCP 15432, and then configure port forwarding in your NAT. To check whether this IP:port is open from the Internet, you can use ping.eu.
I am getting the below error while creating any sample dataframe: >24/03/10 11:30:40 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0) (Siva executor driver): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed) Attached environment variables: ![image](https://i.stack.imgur.com/w3MQ5.png) Attached Path variables: ![image](https://i.stack.imgur.com/cczwm.png)
What I want is to make the TextPanel disabled if `payStub` has a value, and not disabled if `payStub` does not have a value. In my React code, I have the following:

    const [payStub, setPayStub] = useState(() => {
      if (isNewPayStub) {
        return get(user, 'stub', '');
      }
      return stubEdit.value?.name || '';
    });

    const stubIsValid = Boolean(trim(payStub));

and on the rendering side I have the following:

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-html -->

    return (
      <Panel
        onClose={onClose}
        onSave={handleSave}
        addNewText={isNewPayStub ? 'Add New PayStub' : ''}
      >
      ....
      <TextPanel
        handleChange={setPayStub}
        isValid={stubIsValid}
        isRequired
        label="Stub"
        placeholder="Enter STUB"
        value={payStub}
      />

<!-- end snippet -->

The TextField receives a `disabled` property, and when I add it as `disabled: {stubIsValid}`, the condition is met as soon as the user enters the first character, which makes the TextPanel disabled. That is not what I want (the user should be able to finish entering the payStub). How do I fix this situation?
How to handle the states when disabling my component in React?
|javascript|reactjs|react-hooks|
|java|spring-boot|kubernetes|fabric8|
In my Shiny application, depending on the size of the screen, some of the legend information for my ggplotly interactive plots is cropped. Is there a way to fix this? Here's a small example: ``` library(shiny) library(ggplot2) library(plotly) library(tidyverse) df <- tibble( year = rep(c(2015:2023), 8), number = c(10:18, 20:12, 13:21, 14:22, 19:11,5:13, 18:10, 8:16), region = c(rep("Region 1", 9), rep("Region 2", 9), rep("Region 3", 9), rep("Region 4", 9), rep("Region 5", 9), rep("Region 6", 9), rep("Region 7", 9), rep("Region 8", 9)) ) # Define UI for application that draws a histogram ui <- fluidPage( # Application title titlePanel("Example for StackOverflow"), # Sidebar with a slider input for number of bins sidebarLayout( sidebarPanel( sliderInput("bins", "Number of bins:", min = 1, max = 50, value = 30) ), # Show a plot of the generated distribution mainPanel( uiOutput("plotpanel") ) ) ) # Define server logic required to draw a histogram server <- function(input, output) { output$plotlyObject <- renderPlotly({ p <- ggplot(df, aes(x = year, y = number, group = region, color = region)) + geom_line(stat= "identity") ggplotly(p) %>% plotly::layout( xaxis = list( title = list(text = "Year", font = list(size = 14, family = "Arial black")), tickfont = list(family = "Arial black"), tickangle = -45 ), yaxis = list( title = list(text = "Number of Patients", font = list(size = 14, family = "Arial black")), tickfont = list(family = "Arial black") ), legend = list( orientation = "h", y = -.25, x = -.1 ) ) %>% plotly::config(displaylogo = FALSE, modeBarButtonsToRemove = list('hoverClosestCartesian', 'hoverCompareCartesian', 'autoScale2d', 'lasso2d', 'select2d', 'zoom2d' )) }) output$plotpanel <- renderUI({ wellPanel( plotlyOutput("plotlyObject", height = "400px"), style = "padding: 5px; border-color: darkgray" ) }) } # Run the application shinyApp(ui = ui, server = server) ``` Depending on the size of the browser window, sometimes the legend displays correctly but often it does not: [![plot with cropped legend][1]][1] [![plot with cropped legend][2]][2] [1]: https://i.stack.imgur.com/QnqPt.png [2]: https://i.stack.imgur.com/DpySv.png Are there settings in plotly::layout that will fix this? I haven't found a solution.
null
```
x = c(1, 2, 2, 3, 3, 3, 4, 4, 5)
x.tab = table(x)
plot(x.tab, xlim = c(0, 10), xaxp=c(0, 10, 10))
```

(Unfortunately, I do not have enough reputation to post an image, but the resulting graph has only tick marks 1 to 5, instead of the intended 0 to 10.)

Why does R just ignore xaxp? I understand that I could factor x to levels 0:10 (i.e. add x = factor(x, levels=0:10)) and that would be a solution, but why doesn't the parameter xaxp work as intended? And by the way, how can I extract the frequencies as a vector from x.tab without complicated magic?
I have an application using the classic [clean architecture](https://jasontaylor.dev/clean-architecture-getting-started/) with all the dependencies set up as described (i.e. flowing inward). I have an external service I wish to use in my project, so I defined the functionality for interacting with that service (objects that interact with the service, some interfaces, etc.) in the `Infrastructure` layer. Currently the `Presentation` layer is consuming the external service directly from the `Infrastructure` layer. A question about this would be: **Is the `Presentation` layer communicating directly with the `Infrastructure` layer 'acceptable', or must everything go via the `Application` layer?** Ideally I would like the `Presentation` layer to call the `Application` layer so that I can reuse some of the functionality it has available (for things such as validation, amongst others), but then, with the `Infrastructure` layer not knowing about the `Application` layer, I would need to define the external service objects in the `Application` layer. I would rather avoid doing so, as these are service-specific rather than application-specific. I would much rather have them defined in `Infrastructure`, closest to where they are used. So: **Is there a way to reuse the interfaces defined in the `Infrastructure` layer for this external service, or will I just have to suck it up and accept some duplication (i.e. define interfaces in the `Application` layer that my external service in the `Infrastructure` layer implements)?**
In clean architecture, is the Presentation layer allowed to communicate directly with the Infrastructure layer?
|asp.net-core|asp.net-web-api|asp.net-core-webapi|clean-architecture|
{"Voters":[{"Id":328193,"DisplayName":"David"},{"Id":9214357,"DisplayName":"Zephyr"},{"Id":466862,"DisplayName":"Mark Rotteveel"}],"SiteSpecificCloseReasonIds":[13]}
C is a general-purpose programming language used for system programming (OS and embedded), libraries, games, and cross-platform. This tag should be used with general questions concerning the C language, as defined in the ISO 9899 standard (the latest version, 9899:2018, unless otherwise specified; also tag version-specific requests with c89, c99, c11, etc.). C is distinct from C++, and it should not be combined with the C++ tag without a specific reason.
null
I think it can be approached like this:

    import 'package:flutter/material.dart';

    DateTime setDateForDayOfWeek(DateTime date, int desiredDayOfWeek) {
      int difference = desiredDayOfWeek - date.weekday;

      /// Adjust the difference if it's negative
      if (difference < 0) {
        difference += 7;
      }

      return date.add(Duration(days: difference));
    }

    void main() {
      DateTime currentDate = DateTime.now();

      /// Setting the desired day of the week (e.g., Wednesday = 3)
      int desiredDayOfWeek = 3;

      DateTime desiredDate = setDateForDayOfWeek(currentDate, desiredDayOfWeek);

      print('Desired Date: $desiredDate');
    }
null
If there are things like constants or other changes that can be improved, I recommend applying Flutter's own recommended fixes. Run this in your terminal and that's it. *[Flutter fix][1]* dart fix --apply [1]: https://docs.flutter.dev/tools/flutter-fix
I want to get a voice call function in my app (using Flutter). Here is my code and log.

```
import 'package:cloud_firestore/cloud_firestore.dart';
import 'package:flutter/material.dart';
import 'package:firebase_auth/firebase_auth.dart';
import 'MainScreen.dart';
import 'package:firebase_database/firebase_database.dart';
import 'package:agora_rtc_engine/agora_rtc_engine.dart';
import 'package:permission_handler/permission_handler.dart';
import 'dart:core';

// ChatRoom model
class ChatRoom {
  final String title;
  final int limit;
  final bool isPrivate;
  final String password;
  final String ownerId;
  final String ownerPhotoUrl;
  final String ownerNickname;

  ChatRoom({
    required this.title,
    required this.limit,
    required this.isPrivate,
    required this.password,
    required this.ownerId,
    required this.ownerPhotoUrl,
    required this.ownerNickname,
  });

  Map<String, dynamic> toMap() {
    return {
      'title': title,
      'limit': limit,
      'isPrivate': isPrivate,
      'password': password,
      'ownerId': ownerId,
      'ownerPhotoUrl': ownerPhotoUrl,
      'ownerNickname': ownerNickname,
    };
  }
}

class VoiceChatRoomScreen extends StatefulWidget {
  final String roomId;

  VoiceChatRoomScreen({Key? key, required this.roomId}) : super(key: key);

  @override
  _VoiceChatRoomScreenState createState() => _VoiceChatRoomScreenState();
}

class _VoiceChatRoomScreenState extends State<VoiceChatRoomScreen> {
  String appId = "af65ec64fd244043a786ba6b820fa01f";
  int uid = 0; // uid of the local user
  int? _remoteUid; // uid of the remote user
  bool _isJoined = false; // Indicates if the local user has joined the channel
  late RtcEngine agoraEngine; // Agora engine instance

  final GlobalKey<ScaffoldMessengerState> scaffoldMessengerKey = GlobalKey<ScaffoldMessengerState>(); // Global key to access the scaffold

  showMessage(String message) {
    scaffoldMessengerKey.currentState?.showSnackBar(SnackBar(
      content: Text(message),
    ));
  }

  List<Map<String, dynamic>> members = [];
  String ownerId = '';
  final DatabaseReference _userStatusRef = FirebaseDatabase.instance.reference().child('usersStatus');
  List<String> onlineMembers = [];

  @override
  void initState() {
    super.initState();
    // Keep the existing methods that load the chat room info and member list.
    _loadRoomInfo();
    _loadRoomMembers();

    setupVoiceSDKEngine().then((_) {
      join(); // Ensure join is called after setupVoiceSDKEngine is completed
    });
  }

  Future<void> setupVoiceSDKEngine() async {
    // retrieve or request microphone permission
    await [Permission.microphone].request();

    //create an instance of the Agora engine
    agoraEngine = createAgoraRtcEngine();
    await agoraEngine.initialize(RtcEngineContext(
        appId: appId
    ));

    // Register the event handler
    agoraEngine.registerEventHandler(
      RtcEngineEventHandler(
        onJoinChannelSuccess: (RtcConnection connection, int elapsed) {
          showMessage("Local user uid:${connection.localUid} joined the channel");
          setState(() {
            _isJoined = true;
          });
          print("Local user joined the channel");
        },
        onUserJoined: (RtcConnection connection, int remoteUid, int elapsed) {
          showMessage("Remote user uid:$remoteUid joined the channel");
          setState(() {
            _remoteUid = remoteUid;
          });
          print("Remote user joined the channel with uid: $remoteUid");
        },
        onUserOffline: (RtcConnection connection, int remoteUid, UserOfflineReasonType reason) {
          showMessage("Remote user uid:$remoteUid left the channel");
          setState(() {
            _remoteUid = null;
          });
          print("Remote user left the channel with uid: $remoteUid");
        },
      ),
    );
  }

  Future<void> join() async {
    ChannelMediaOptions options = const ChannelMediaOptions(
      clientRoleType: ClientRoleType.clientRoleBroadcaster,
      channelProfile: ChannelProfileType.channelProfileCommunication,
    );

    await agoraEngine.joinChannel(
      token: widget.roomId,
      channelId: widget.roomId,
      options: options,
      uid: uid,
    );
    print("Channel join request sent with channelId: ${widget.roomId}");
  }

  // Method that loads the chat room info
  Future<void> _loadRoomInfo() async {
    DocumentSnapshot roomSnapshot = await FirebaseFirestore.instance
        .collection('chatRooms')
        .doc(widget.roomId)
        .get();

    if (roomSnapshot.exists) {
      Map<String, dynamic> roomData = roomSnapshot.data() as Map<String, dynamic>;
      setState(() {
        ownerId = roomData['ownerId'];
        print("Owner ID: $ownerId"); // Log the room owner's ID
      });
    }
  }

  @override
  void dispose() {
    // Clean up WebRTC resources
    // Logic for leaving the chat room. For example, a 'leave' message could be sent to the server.
    leaveChatRoom().then((_) {
      // After leaving the chat room, return to the main screen.
      Navigator.of(context).pushAndRemoveUntil(
        MaterialPageRoute(builder: (context) => MainScreen()),
        (Route<dynamic> route) => false,
      );
    });
    agoraEngine.leaveChannel();
    super.dispose();
  }

  Future<void> leaveChatRoom() async {
    final userId = FirebaseAuth.instance.currentUser?.uid;
    if (userId != null) {
      // Set the user offline
      // Delete the 'chatRooms/{roomId}/members/{userId}' document
      await FirebaseFirestore.instance
          .collection('chatRooms')
          .doc(widget.roomId)
          .collection('members')
          .doc(userId)
          .delete();
      setState(() {
        _isJoined = false;
        _remoteUid = null;
      });
      agoraEngine.leaveChannel();
    }
  }

  Future<void> _loadRoomMembers() async {
    FirebaseFirestore.instance
        .collection('chatRooms')
        .doc(widget.roomId)
        .collection('members')
        .snapshots()
        .listen((snapshot) async {
      // Fetch the list of user IDs that are currently online.
      // Update the member list.
      List<Map<String, dynamic>> updatedMembers = [];
      for (var doc in snapshot.docs) {
        String memberId = doc.id;
        Map<String, dynamic> memberData = doc.data() as Map<String, dynamic>;
        updatedMembers.add(memberData);
      }
      if (mounted) {
        setState(() {
          members = updatedMembers;
        });
      }
    });
  }

  @override
  Widget build(BuildContext context) {
    // The UI is the same as before, with added logic to check whether a member is the room owner.
    return Scaffold(
      appBar: AppBar(
        title: Text('Voice Chat Room'),
      ),
      body: members.isEmpty
          ? Center(child: CircularProgressIndicator())
          : ListView.builder(
              itemCount: members.length,
              itemBuilder: (context, index) {
                bool isOwner = members[index]['uid'] == ownerId; // Determine whether this member is the room owner
                return ListTile(
                  leading: CircleAvatar(
                    backgroundImage: NetworkImage(members[index]['photoUrl']),
                    radius: 25,
                  ),
                  title: Text(members[index]['nickname']),
                  trailing: isOwner
                      ? Text('Owner', style: TextStyle(color: Colors.red)) // Show an "Owner" label for the room owner
                      : null,
                );
              },
            ),
    );
  }
}
```

...and the log says I successfully joined the voice channel:

```
[info] [iris_rtc_api_engine.cc:407] api name RtcEngine_initialize_0320339 result 0 outdata {"result":0}
[info] [iris_rtc_api_engine.cc:343] api name RtcEngine_setAppType params "{"appType":4}"
[info] [iris_rtc_api_engine.cc:407] api name RtcEngine_setAppType result 0 outdata {"result":0}
[info] [iris_rtc_api_engine.cc:343] api name RtcEngine_registerEventHandler_5fc0465 params "{}"
[info] [iris_rtc_api_engine.cc:395] api name RtcEngine_registerEventHandler_5fc0465 extened params "{"event":10796528752}"
[info] [iris_rtc_api_engine.cc:407] api name RtcEngine_registerEventHandler_5fc0465 result 0 outdata {"result":0}
[info] [iris_rtc_api_engine.cc:341] api name RtcEngine_joinChannel_cdbb747 params "{"token":"qOEc***************GPsO","channelId":"qOEcnJcSyg1I4DgSGPsO","uid":0,"options":{"clientRoleType":1,"channelProfile":0}}"
[info] [iris_rtc_api_engine.cc:407] api name RtcEngine_joinChannel_cdbb747 result 0 outdata {"result":0}
flutter: Channel join request sent with channelId: qOEcnJcSyg1I4DgSGPsO
```

>> But the voice call doesn't work in my voice room... what do I have to do?

```
E/libc (17700): Access denied finding property "net.dns1"
E/libc (17700): Access denied finding property "net.dns2"
E/libc (17700): Access denied finding property "net.dns3"
E/libc (17700): Access denied finding property "net.dns4"
```

>> And whenever I speak into the phone, these messages appear...

I want to get a voice call function in my Flutter app.
agora.io voice call doesn't work in Flutter
|flutter|agora.io|
null
Try this. Instead of

    Else 'Found
        MsgBox "Found at: " & fVal.Address
        .Range("A:A" & (fVal.Row)).Copy
        Sheets("Sheet2").Activate
        Sheets("Sheet2").Range("B1").PasteSpecial xlPasteValues
    End If

use these lines:

    Else 'Found
        MsgBox "Found at: " & fVal.Address
        Sheets("Sheet2").Range("B" & Sheets("Sheet2").Cells(Rows.Count, "B").End(xlUp).Row).Value = fVal.Value
    End If
I have a smart contract deployed to the Sepolia network. One of its methods emits 3 events. Please take a look at https://sepolia.etherscan.io/tx/0x31f77360db4cd51f5db7954d143d0fb514b96d71fa409c17b3412f78c11ee6a3#eventlog as an example. There are 3 logs; 2 of them are `Transfer`, but the last one doesn't have any name, as shown in the screenshot below. [![enter image description here][1]][1] The first two `Transfer` events are emitted from the `@openzeppelin/contracts/token/ERC20/ERC20.sol` code, while the last one was emitted by this contract; the code is: ``` event SectorCreated(address owner, int x, int y, uint256 width, uint256 height, uint256 price); ... emit SectorCreated(msg.sender, x, y, width, height, price); ``` I am not sure why the `SectorCreated` event name is not shown. What is the difference between the `Transfer` and `SectorCreated` events? [1]: https://i.stack.imgur.com/6TFEJ.png
Issue with ggplotly legend being cropped in R Shiny application
|r|ggplot2|shiny|plotly|ggplotly|
The codebase was like this in a non-async¹ method: ```csharp Thread.Sleep(500); ``` A colleague refactored it to be like this: ```csharp Task.Delay(500).Wait(); ``` Given the relatively small delay, is there really an advantage to using the async version (``Task.Delay()``)? Does it change if the delay is a bunch of milliseconds or a bunch of seconds? More importantly... isn't the ``.Wait()`` defeating the purpose of making it async? To provide more context, this executes in a background process that runs on average every minute. Isn't this second approach just wasting more resources? (as some comments [here][1] seem to indicate) **[Edit]** The `await` is not used in this case because the method is NOT `async`. My colleague prefers `Task.Delay(N).Wait()` to `Thread.Sleep(N)`. Where the method is async he used `await` and not `.Wait()`. In this specific case the method is called when an API endpoint is hit; it sends a request to an external service and tries 3 or 5 times (with a delay) until it gets the result (a kind of polling). ¹ <sub>Originally the method was mentioned to be async.</sub> [1]: https://stackoverflow.com/questions/27820626/difference-between-task-delay-and-new-task-thread-sleep "Difference between Task.Delay() and new Task(()=>Thread.Sleep())"
My answer is not exactly the solution you are looking for, but it solved my problem. I wanted to do the same thing you want: when I retrieved the URL of a video in MinIO, the URL containing the Docker service name did not work in the local browser. I could not find a way to make it localhost, but I realized the service name is only resolvable inside Docker, so of course the URL did not work locally. The solution is to set the endpoint of the minioClient as below: `endPoint: 'host.docker.internal'` The URL now works both from Docker and from the local machine.
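In context, a minimal client-configuration sketch (using the `minio` npm package; the port and credentials below are placeholder defaults, not values from the question):

```
const Minio = require('minio');

const minioClient = new Minio.Client({
  endPoint: 'host.docker.internal', // resolvable from containers and from the host
  port: 9000,              // assumption: default MinIO port
  useSSL: false,
  accessKey: 'minioadmin', // assumption: default credentials
  secretKey: 'minioadmin',
});
```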
My goal is to add button to the right hand side of a section header in a UITableView. However, when I made my new target iOS15, I found that I had to adopt UIButton configuration to set the padding to the add icon in the button. The problem is that, configuration doesn't seem to have a "frame" property and passing a configuration to a button then setting its frame builds to show no button at all. My new, faulty code is: func tableView (_ tableView: UITableView, viewForHeaderInSection section: Int) -> UIView? { let frame = tableView.frame let headerView = UIView(frame: CGRect(x:0, y: 0, width: frame.size.width, height: frame.size.height)) // create custom view let sectionName = hasSections ? sectionTasks[section].sectionName : nil if sectionName != nil { let view = UIView() let label = UILabel() ... // Label config, skipping here let tableWidth = tableView.frame.width label.frame = CGRect(x: 5, y: 0, width: tableWidth, height: 30) headerView.addSubview(label) } // My faulty button var buttonConfig = UIButton.Configuration.filled() buttonConfig.contentInsets = NSDirectionalEdgeInsets(top: 5, leading: 5, bottom: 5, trailing: 5) buttonConfig.image = UIImage(named: "add_AsTemplate") let xPos = tableView.frame.width-40-15 let button = UIButton(configuration: buttonConfig) button.frame = CGRect(x: xPos, y: 0, width: 40, height: 40) button.tag = section button.addTarget(self, action: #selector(BaseTaskListVC.sectionNewTask_BtnPressed), for: .touchUpInside) // add selector called by clicking on the button headerView.addSubview(button) return headerView } My original button code (skipping context of parent function): let xPos = tableView.frame.width-40-15 let button = UIButton(frame: CGRect(x: xPos, y: 0, width: 40, height: 40)) button.tag = section button.imageEdgeInsets = UIEdgeInsets(top: 5, left: 5, bottom: 5, right: 5 ) button.setImage(UIImage(named: "add_AsTemplate"), for: UIControl.State.normal) button.addTarget(self, action: #selector(BaseTaskListVC.sectionNewTask_BtnPressed), for: .touchUpInside) headerView.addSubview(button) I've read the Apple documentation but it doesn't seem to address this explicitly. All the other SO threads on this show how to use configuration but not configuration + frame. Original code: [![Original code at runtime][1]][1] With new configuration approach: [![With new configuration approach][2]][2] [1]: https://i.stack.imgur.com/gHtA8.png [2]: https://i.stack.imgur.com/45OjO.png
In this project I want to create a Pixel Coloring book. ``` <ScrollViewer HorizontalScrollBarVisibility="Auto" VerticalScrollBarVisibility="Auto"> <ItemsControl ItemsSource="{Binding ColorMatrix}"> <ItemsControl.ItemTemplate> <DataTemplate> <ItemsControl ItemsSource="{Binding}"> <ItemsControl.ItemsPanel> <ItemsPanelTemplate> <StackPanel Orientation="Horizontal"/> </ItemsPanelTemplate> </ItemsControl.ItemsPanel> <ItemsControl.ItemTemplate> <DataTemplate> <pxl:Pixel Style="{StaticResource Pixel}" PrimaryColor="{Binding Converter={StaticResource ColorConverter}}"/> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> </ScrollViewer> ``` `ColoringBookView.xaml` This is just a view of the Color\[\]\[\] array. ColorMatrix is Color\[\]\[\] type ``` public class ColorConverter : IValueConverter { public object Convert(object value, Type targetType, object parameter, CultureInfo culture) { if (value is System.Drawing.Color drawingColor) { return Color.FromArgb(drawingColor.A, drawingColor.R, drawingColor.G, drawingColor.B); } return Colors.Black; } public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture) { return null; } } ``` `ColoreConverter.cs` ``` public class Pixel : CheckBox { public static readonly DependencyProperty PrimaryColorProperty = DependencyProperty.Register("PrimaryColor", typeof(Color), typeof(Pixel)); public static readonly DependencyProperty AccentColorProperty = DependencyProperty.Register("AccentColor", typeof(Color), typeof(Pixel)); public static readonly DependencyProperty IsDrawedProperty = DependencyProperty.Register("IsDrawed", typeof(bool), typeof(Pixel)); public Color PrimaryColor { get { return (Color)GetValue(PrimaryColorProperty); } set { SetValue(PrimaryColorProperty, value); } } public Color AccentColor { get { return (Color)GetValue(AccentColorProperty); } set { SetValue(AccentColorProperty, value); } } public bool IsDrawed { get { return (bool)GetValue(IsDrawedProperty); } set { SetValue(IsDrawedProperty, value); } } } ``` `Pixel.cs` This is a CustomControl for WPF. ``` <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:pxl="clr-namespace:PXL.Core.Theme" xmlns:converters="clr-namespace:PXL.Core.Converters"> <Style TargetType="{x:Type pxl:Pixel}" x:Key="Pixel"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type pxl:Pixel}"> <Grid> <Rectangle Height="30" Width="30"> <Rectangle.Fill> <SolidColorBrush Color="{TemplateBinding PrimaryColor}"/> </Rectangle.Fill> </Rectangle> <TextBlock Text="Pixel" Background="{TemplateBinding PrimaryColor}" Foreground="{TemplateBinding PrimaryColor}"/> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style> </ResourceDictionary> ``` `Pixel.xaml` I have troubles with databinding, but I don't understand what's wrong. The TextBlock.Background and TextBlock.Foreground in Pixel.xaml doesn't change.
How can I solve this problem with DataBinding?
I created a new add-in as suggested and compared it with mine. I did the following things and the problem was solved:

* copied the newly generated commands HTML and TS files to my project.
* updated office-addin-debugging to the new major version.

I made some other alterations too, but rolling back anything except those two still leaves the taskpane able to open, which makes me think those two are what solved the problem.

In more detail, the code that seems to have solved the issue is the following line at the end of the commands script, which was missing from my old generated file (I had not changed the commands script since its generation a year or so ago):

```js
// Register the function with Office.
Office.actions.associate('action', action);
```

If there is anything I'm missing, please just comment and I will edit. If you are having similar problems and this was not sufficient, please comment and I will try to troubleshoot further.
I am programming an app with .NET MAUI and use the Mopups NuGet package. A popup window is opened, and from there a navigation page should be opened by a button. Unfortunately, when clicking the button, no navigation page appears. Only when I close the Mopup window is the navigation page displayed. But if I close the popup window first, as I was told to do, the navigation page is not displayed either. How can I solve this? Thank you very much.

```
using Mopups.Services;
using Mopups.Pages;

namespace FinanceTracker.Pages;

public partial class ExpenseEntryWindow : PopupPage
{
    public ExpenseEntryWindow()
    {
        InitializeComponent();
    }

    private void btnAbbrechen_Clicked(System.Object sender, System.EventArgs e)
    {
        MopupService.Instance.PopAsync();
    }

    private async void btnNavigation_Clicked(object sender, EventArgs e)
    {
        await Navigation.PushAsync(new NewPage1());
    }
}
```
Open navigation page within Mopup window in .NET Maui
|.net|xamarin|xamarin.forms|maui|
react spring - Is there a simpler, less complicated fade in and out solution?
I am currently trying to reduce cumulative layout shift by using an adjusted local font as a fallback. For the regular font this works quite well. But in the above-the-fold content of the website, we also have a couple of bold sentences. Is there any clean way to also give the other font weights a fallback in Bootstrap 5?

Of course, outside of Bootstrap I could just name the bold font "Mulish Bold" and add a fallback for it. But in Bootstrap, I would have to add overrides for a number of elements like `.fw-bold`, `b`, `strong`, etc., which seems quite ugly.

    @font-face {
        font-family: 'Mulish Regular Fallback';
        src: local(Tahoma);
        size-adjust: 103%;
        ascent-override: 100%;
        descent-override: 23%;
        line-gap-override: normal;
    }

    @font-face {
        font-family: 'Mulish';
        font-display: swap;
        font-weight: 400;
        src: url('../fonts/Mulish-Regular.ttf');
    }

    @font-face {
        font-family: 'Mulish';
        font-display: swap;
        font-weight: 700;
        src: url('../fonts/Mulish-Bold.ttf');
    }

    // In Bootstrap Variables:
    $font-family-sans-serif: 'Mulish', 'Mulish Regular Fallback', sans-serif;
Delete trigger runs for each deleted row when deleting multiple rows in the edit tab
I have the following problem: I have the URL to a picture, "HTTP://WWW.ROLANDSCHWAIGER.AT/DURCHBLICK.JPG", saved in my database. I think you see the problem here: the URL is in uppercase. Now I want to display the picture in the SAP GUI, but for that I have to convert the URL to lowercase. I have the following code from a tutorial, but without the conversion:

    *&---------------------------------------------------------------------*
    *& Report ZDURCHBLICK_24
    *&---------------------------------------------------------------------*
    *&
    *&---------------------------------------------------------------------*
    REPORT zdurchblick_24.

    TABLES: zproject_24.

    PARAMETERS pa_proj TYPE zproject_24-projekt OBLIGATORY.

    DATA gs_project TYPE zproject_24.

    *Controls
    DATA: go_container TYPE REF TO cl_gui_custom_container.
    DATA: go_picture TYPE REF TO cl_gui_picture.

    START-OF-SELECTION.

      WRITE: / 'Durchblick 3.0'.

      SELECT SINGLE * FROM zproject_24035
        INTO @gs_project
        WHERE projekt = @pa_proj.

      WRITE gs_project.

      IF sy-subrc = 0.
        WRITE 'Wert im System gefunden'.
      ELSE.
        WRITE 'Kein Wert gefunden'.
      ENDIF.

      WRITE : /'Es wurden', sy-dbcnt, 'Werte gefunden'.

    AT LINE-SELECTION.
      zproject_24 = gs_project.
      CALL SCREEN 9100.

    *&---------------------------------------------------------------------*
    *& Module CREATE_CONROLS OUTPUT
    *&---------------------------------------------------------------------*
    *&
    *&---------------------------------------------------------------------*
    MODULE create_conrols OUTPUT.
    * SET PF-STATUS 'xxxxxxxx'.
    * SET TITLEBAR 'xxx'.

      IF go_container IS NOT BOUND.
        CREATE OBJECT go_container
          EXPORTING
            container_name = 'BILD'.

        CREATE OBJECT go_picture
          EXPORTING
            parent = go_container.

        CALL METHOD go_picture->load_picture_from_url
          EXPORTING
            url = gs_project-bild.
      ENDIF.

    ENDMODULE.
ABAP: convert database char to lowercase
|database|character|converters|abap|sap|
I was getting the same error. I tried a lot of fixes from the internet, but nothing worked. Thanks to our teacher's help, I was able to fix it.

The error I was getting said that

> my worker node has python3.10 while the driver has python3.11

That's not the exact wording of the error, but you get the point; it's the same error.

What I had to do was navigate to `/usr/bin/` and execute `ls -l` there. I got the following output:

```
lrwxrwxrwx 1 root root    7 Feb 12 19:50 python -> python3
lrwxrwxrwx 1 root root    9 Oct 11  2021 python2 -> python2.7
-rwxr-xr-x 1 root root  14K Oct 11  2021 python2.7
-rwxr-xr-x 1 root root 1.7K Oct 11  2021 python2.7-config
lrwxrwxrwx 1 root root   16 Oct 11  2021 python2-config -> python2.7-config
lrwxrwxrwx 1 root root   10 Feb 12 19:50 python3 -> python3.10
-rwxr-xr-x 1 root root  15K Feb 12 19:50 python3.11
-rwxr-xr-x 1 root root 3.2K Feb 12 19:50 python3.11-config
lrwxrwxrwx 1 root root   17 Feb 12 19:50 python3-config -> python3.11-config
-rwxr-xr-x 1 root root 2.5K Apr  8  2023 python-argcomplete-check-easy-install-script
-rwxr-xr-x 1 root root  383 Apr  8  2023 python-argcomplete-tcsh
lrwxrwxrwx 1 root root   14 Feb 12 19:50 python-config -> python3-config
```

Notice the line `lrwxrwxrwx 1 root root 10 Feb 12 19:50 python3 -> python3.10`.

I realized that my `python3` was pointing to python3.10 even though I had python3.11 installed, so it should be pointing to python3.11 instead. If that's also the case for you, then you've found the problem, and the **following fix should work just fine for you.**

1. **Locate Python 3.11:** First, ensure that Python 3.11 is installed on your system, then locate its path. You can usually find it in `/usr/bin/` or `/usr/local/bin/`. Let's assume it's in `/usr/bin/python3.11`.

2. **Update the Symbolic Link:** You need to update the python3 symbolic link to point to Python 3.11. You can do this using the ln command in the terminal. Open a terminal and type:<br> `sudo ln -sf /usr/bin/python3.11 /usr/bin/python3`

3. **Verify the Update:** After running the command, you can verify that the symbolic link has been updated correctly by typing: <br> `ls -l /usr/bin/python3` <br> <br> This should show something like:<br> `lrwxrwxrwx 1 root root XX XXX XX:XX /usr/bin/python3 -> /usr/bin/python3.11` <br><br>This indicates that python3 now points to Python 3.11.

Now, when you run your PySpark code, it should use Python 3.11 on both the worker nodes and the driver, resolving the version mismatch issue.
Why isn't the event name shown on sepolia.etherscan?
|ethereum|blockchain|solidity|smartcontracts|
Yes it is allowed, *modulo bugs in GCC*.

The compiler
---

GCC follows the [Itanium ABI](https://github.com/itanium-cxx-abi/cxx-abi), which is actually platform-independent, despite the name. Here is the Itanium ABI mission statement:

> we want users to be able to build relocatable objects with different compilers and link them together, and if possible even to ship common DSOs.

Note there are no separate ABI specifications for separate versions of the C++ standard. There is one specification that works for them all.

The library
---

Here is the mission statement of libstdc++ as far as versioning is concerned:

> Extending existing, stable ABIs. Versioning gives subsequent releases of library binaries the ability to add new symbols and add functionality, all the while retaining compatibility with the previous releases in the series. Thus, program binaries linked with the initial release of a library binary will still run correctly if the library binary is replaced by carefully-managed subsequent library binaries. This is called forward compatibility.

The library supports not one, but two different ABIs. There was a change in the C++11 standard that necessitated an ABI split. However, as the documentation points out, [the choice of ABI to use is independent of the `-std` option used to compile your code](https://gcc.gnu.org/onlinedocs/libstdc++/manual/using_dual_abi.html). This is ancient history, however. You are going to use the new C++11-compatible ABI, which is the default, about 100% of the time, unless you need to maintain an old piece of software built with the pre-C++11-compatible ABI.

The real life
---

The open source ecosystem has zillions of C++ libraries that are used in all kinds of products. No one coordinates the `-std` option between maintainers of different libraries. Everybody upstream uses what they want/need, and downstream the libraries are built with whatever options are there, and linked together with no problem. It all just works.

I personally run Gentoo, which is a rolling release distro. I fetch whatever stable release of a software component is available directly from that library's GitHub or wherever it is stored, and compile it with whatever compiler version I currently have. I can recompile any library using any compiler version at any time. The system still works just fine. Without this kind of cross-standard, cross-version compatibility, a rolling release would never ever have a chance to work.

Conclusion
---

Is it 100% safe? You decide. There are compiler bugs in this area (you have found one) and sometimes people get bitten by them. Then again, there are compiler bugs in all areas, but people still use compilers.
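To make this concrete, here is a minimal sketch of the cross-`-std` linking described above (the file names are mine; it assumes g++ with the default libstdc++ ABI for both invocations):

```cpp
// greet.cpp, built with: g++ -std=c++17 -c greet.cpp
#include <string>

std::string greet() { return "hello"; }
```

```cpp
// main.cpp, built with: g++ -std=c++11 -c main.cpp
#include <string>

std::string greet();  // defined in the C++17-compiled translation unit

int main() { return greet() == "hello" ? 0 : 1; }
```

Linking the two objects with `g++ greet.o main.o` and running the result works, because both translation units follow the same Itanium ABI and, by default, the same libstdc++ ABI for `std::string`, regardless of the `-std` level.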
If your use case fits the description of a 'variant' or a 'mod' of the original app then you can avoid the hassle of renaming your packages, and can leave alone the `namespace` _and_ the `applicationId`, and instead define the **`applicationIdSuffix`** in your `build.gradle`, e.g.:

```bash
defaultConfig {
    namespace "com.original.app"
    applicationId "com.original.app"
    versionCode 42
    versionName "0.42"

    applicationIdSuffix '.insidersBuild'
    ################### ^^^^^^^^^^^^^^^^
}
```

*Keep in mind*: If you don't explicitly define the `applicationId` in your `build.gradle`'s `defaultConfig`, then the value is inferred from the package attribute in your `AndroidManifest.xml`.

<sub>*See also:*</sub>

- https://developer.android.com/build/build-variants#groovy
  + see about [*flavors*][1] right below

[1]: https://developer.android.com/build/build-variants#product-flavors
Set Size for UIButton with new iOS15 Configuration Approach
|swift|uikit|uibutton|ios15|
I encountered an error while using the TextField component. I've tried to determine whether the issue lies in my source code or in the MUI TextField component, but I was told they couldn't provide assistance, so I'm posting here.

The problem occurs when the parent tag wrapping the TextField becomes hidden and then toggles back. An error message "ResizeObserver loop completed with undelivered notifications" appears, but it doesn't occur in all cases. It only happens when using certain props of the TextField, such as `multiline` with (`rows` | `maxRows` | `minRows`).

Currently, to work around this, I've changed `{props.children}` to `{props.value === props.index && props.children}`. However, since the previous state is not retained and the content is recreated on tab changes, I'm also considering not using MUI.

Could the issue be with the TextField component itself? (Changing from `hidden` to `display: none` and toggling still results in the same error.)

```
const tabList = [
  { id: "id1", label: "label1", el: <><TextField multiline rows={4} /></> },
  { id: "id2", label: "label2", el: <>test</> },
];

const CustomTabPanel = (props: TabPanelProps) => (
  <Box
    className="fx1 mt10"
    hidden={props.value !== props.index}
  >
    {props.value === props.index && props.children}
    {/* TextField Prop:multiline, rows -> using error : ResizeObserver loop completed with undelivered notifications */}
    {/* {props.children} */}
  </Box>
);

const App = (): JSX.Element => {
  const [tab, setTab] = useState(0);

  const selTab = (event: React.SyntheticEvent, newValue: number) => {
    setTab(newValue);
  };

  return (
    <Box className="fx1 fullBox flex-column">
      <Tabs value={tab} onChange={selTab}>
        {tabList.map((row, i) => (
          <Tab key={`Tab-${i}`} label={row.label} />
        ))}
      </Tabs>
      {tabList.map((row, i) => (
        <CustomTabPanel key={`CustomTabPanel-${i}`} value={tab} index={i}>
          {row.el}
        </CustomTabPanel>
      ))}
    </Box>
  );
};
```
Material UI (MUI) TextField props: multiline, (rows | maxRows | minRows)
|material-ui|textfield|
To work with an `SDDL` (Security Descriptor Definition Language) you first need to know its structure. From [MS Learn - Security Descriptor String Format](https://learn.microsoft.com/en-us/windows/win32/secauthz/security-descriptor-string-format):

> The format is a null-terminated string with tokens to indicate each of the four main components of a security descriptor:
> * owner (O:),
> * primary group (G:),
> * DACL (D:),
> * and SACL (S:)

**DACL (Discretionary Access Control List)**

A DACL is a list of Access Control Entries (ACEs) that dictate who can access a specific object and what actions they can perform with it. The term "discretionary" implies that the object's owner has control over granting access and defining the level of access.

**SACL (System Access Control List)**

A SACL is a set of access control entries (ACEs) that specify the security events to be audited for users or system processes attempting to access an object. These objects can include files, registry keys, or other system resources.

**Structure of the SDDL**

This is a simple example of a security descriptor string (`SDDL`):

~~~
"O:LAG:BUD:(A;;RPWPCCDCLCSWRCWDWOGA;;;S-1-5-32-545)"
~~~

* O:owner_sid
* G:group_sid (primary group)
* D:dacl_flags(string_ace1)(string_ace2)... (string_acen)
* S:sacl_flags(string_ace1)(string_ace2)... (string_acen)

When assigning permissions you are using the `DACL` part of the `SDDL`. Every entry in an `SDDL` list is called an `ACE` (Access Control Entry). This particular example doesn't have a SACL (S: is missing). Instead of SIDs for O: and G:, the constants `LA` (Local Administrator) and `BU` (Builtin Users) are used. See [MS Learn - SID Strings](https://learn.microsoft.com/en-us/windows/win32/secauthz/sid-strings)

~~~
ConvertFrom-SddlString "O:LAG:BUD:(A;;RPWPCCDCLCSWRCWDWOGA;;;S-1-5-32-545)"

Owner            : EXAMPLEHOST\Administrator
Group            : BUILTIN\Users
DiscretionaryAcl : {BUILTIN\Users: AccessAllowed (ChangePermissions, CreateDirectories, ExecuteKey, GenericAll, GenericExecute, GenericWrite, ListDirectory,
                   ReadExtendedAttributes, ReadPermissions, TakeOwnership, Traverse, WriteData, WriteExtendedAttributes, WriteKey)}
SystemAcl        : {}
~~~

Each `ACE-string` in the `DACL` follows the structure of

~~~
ace_type;ace_flags;rights;object_guid;inherit_object_guid;account_sid;(resource_attribute)
~~~

See [MS Learn - ACE Strings](https://learn.microsoft.com/en-us/windows/win32/secauthz/ace-strings)

**Constructing an ACE**

So, we want to add additional `ACE-strings` into the `DACL` (or change, remove, or replace them). This might be done by changing the `SDDL` using string manipulation, but I don't know how to integrate the `SDDL` back into a .NET object that way.

The relevant fields for adding an ACE to a DACL are:

* ace_type: Indicates the type of ACE (e.g., A for access allowed, D for access denied).
* ace_flags: Flags specifying inheritance and other properties.
* rights: Specifies the access rights granted or denied.
* account_sid: The Security Identifier (SID) of the user or group.

The order of these fields matters.

Example ACE string granting full access to a specific group:

~~~
(A;;GA;;;S-1-5-32-545)
~~~

* A: Access allowed.
* GA: Grant all permissions.
* S-1-5-32-545 (well-known SID for BUILTIN\Users)

Now that the basics are set, this is where the answers above come in, using some .NET magic. The `RawDescriptor` is already available from the `ConvertFrom-SddlString` cmdlet.
~~~
$sddl = ConvertFrom-SddlString "O:LAG:BUD:(A;;RPWPCCDCLCSWRCWDWOGA;;;S-1-5-32-545)"

$sddl.RawDescriptor

IsContainer                 : False
IsDS                        : False
ControlFlags                : DiscretionaryAclPresent, SelfRelative
Owner                       : S-1-5-21-XXXXXXX-500
Group                       : S-1-5-32-545
SystemAcl                   :
DiscretionaryAcl            : {System.Security.AccessControl.CommonAce}
IsSystemAclCanonical        : True
IsDiscretionaryAclCanonical : True
BinaryLength                : 96

$sddl.RawDescriptor.DiscretionaryAcl

BinaryLength       : 24
AceQualifier       : AccessAllowed
IsCallback         : False
OpaqueLength       : 0
AccessMask         : 269353023
SecurityIdentifier : S-1-5-32-545
AceType            : AccessAllowed
AceFlags           : None
IsInherited        : False
InheritanceFlags   : None
PropagationFlags   : None
AuditFlags         : None
~~~

Constructing a bare `ACE` object can be done as above. But the example is incomplete, and my knowledge of .NET doesn't permit me to find out what's missing :/

The other option provided above is working with a `SecurityDescriptor` object instead, which is already provided for us in the `RawDescriptor` :)

~~~
$sddl.RawDescriptor.DiscretionaryAcl.AddAccess("Allow", "S-1-5-32-546", 268435456, "None", "None")

$sddl.RawDescriptor.GetSddlForm([System.Security.AccessControl.AccessControlSections]::All)

O:LAG:BUD:(A;;CCDCLCSWRPWPRCWDWOGA;;;BU)(A;;GA;;;BG)
~~~

See [MS Learn - DiscretionaryAcl.AddAccess Method](https://learn.microsoft.com/en-us/dotnet/api/system.security.accesscontrol.discretionaryacl.addaccess?view=net-8.0)

The "only" thing still missing in this answer is how to construct the mask, which lurks in [MS Learn - ObjectAccessRule Class](https://learn.microsoft.com/en-us/dotnet/api/system.security.accesscontrol.objectaccessrule?view=net-8.0)

The mask could of course be copied from an already existing ACE. I will get back on this if I learn how...

When constructing an `SDDL` for usage with `WINRM`, there is actually a graphical tool for that.

(To be continued)
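Until then, here is a rough sketch of constructing a mask by hand. The variable names are my own, not an official enum; the values are the well-known Win32 generic-rights bits, and the `AddAccess` call simply mirrors the one shown above. For instance, the magic number `268435456` used earlier is the GENERIC_ALL bit (`0x10000000`):

~~~
# Well-known Win32 generic access rights (illustrative variable names)
$GENERIC_ALL     = 0x10000000   # = 268435456, the value used above
$GENERIC_EXECUTE = 0x20000000
$GENERIC_WRITE   = 0x40000000

# Build a mask by OR-ing the bits, e.g. execute + write,
# and grant it to BUILTIN\Power Users (SID chosen arbitrarily)
$mask = $GENERIC_EXECUTE -bor $GENERIC_WRITE
$sddl.RawDescriptor.DiscretionaryAcl.AddAccess("Allow", "S-1-5-32-547", $mask, "None", "None")

$sddl.RawDescriptor.GetSddlForm([System.Security.AccessControl.AccessControlSections]::All)
~~~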
I am not sure you can overwrite the modules as they are loaded. What you can do, though, is wrap the `nn.Module` in a function that walks the module tree and replaces `nn.Conv2d` with another layer implementation (for example here `nn.Identity`). The only trick is that child layers are identified by compound keys. For example, `model.layer1[0].conv2` has keys `"layer1"`, `"0"`, and finally `"conv2"`.

Gather the `nn.Conv2d` layers and split their compound keys:

    convs = []
    for k, v in model.named_modules():
        if isinstance(v, nn.Conv2d):
            convs.append(k.split('.'))

Build a recursive function that resolves the parent module of a compound key:

    inspect = lambda m, k: inspect(getattr(m, k[0]), k[1:]) if len(k)>1 else m

Finally, you can iterate over the collected keys and replace the layers:

    for k in convs:
        setattr(inspect(model, k), k[-1], nn.Identity())

You will see that all `nn.Conv2d` layers (whatever their depth) have been replaced:

    >>> model.layer1[0].conv2
    Identity()
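Putting the pieces together, here is a minimal end-to-end sketch (it assumes `torchvision` is installed; the choice of ResNet-18 is arbitrary, purely for illustration):

    import torch.nn as nn
    from torchvision.models import resnet18

    model = resnet18()

    # collect the compound keys of every Conv2d in the tree
    convs = []
    for k, v in model.named_modules():
        if isinstance(v, nn.Conv2d):
            convs.append(k.split('.'))

    # resolve the parent module that owns the last key segment
    inspect = lambda m, k: inspect(getattr(m, k[0]), k[1:]) if len(k) > 1 else m

    # swap each Conv2d for an Identity
    for k in convs:
        setattr(inspect(model, k), k[-1], nn.Identity())

    print(model.layer1[0].conv2)  # Identity()

Note that the keys are collected first, rather than mutating while iterating `named_modules()`, which avoids modifying the module tree during traversal.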
{"Voters":[{"Id":157247,"DisplayName":"T.J. Crowder"},{"Id":3776927,"DisplayName":"derpirscher"},{"Id":466862,"DisplayName":"Mark Rotteveel"}],"SiteSpecificCloseReasonIds":[18]}
I started to learn WireMock. My first experience is not very positive. Here's a failing MRE: ```java import com.github.tomakehurst.wiremock.WireMockServer; import org.junit.jupiter.api.Test; public class GenericTest { @Test void test() { new WireMockServer(8090); } } ``` ```xml <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.wiremock</groupId> <artifactId>wiremock</artifactId> <version>3.5.1</version> <scope>test</scope> </dependency> ``` ```lang-none java.lang.NoClassDefFoundError: org/eclipse/jetty/util/thread/ThreadPool ``` I debugged it a little: ```java public WireMockServer(int port) { this(/* -> this */ wireMockConfig() /* <- throws */.port(port)); } ``` ```java // WireMockConfiguration // โ†“ throwing inline private ThreadPoolFactory threadPoolFactory = new QueuedThreadPoolFactory(); public static WireMockConfiguration wireMockConfig() { return /* implicit no-args constructor */ new WireMockConfiguration(); } ``` ```java package com.github.tomakehurst.wiremock.jetty; import com.github.tomakehurst.wiremock.core.Options; import com.github.tomakehurst.wiremock.http.ThreadPoolFactory; // โ†“ package org.eclipse does not exist, these lines are in red import org.eclipse.jetty.util.thread.QueuedThreadPool; import org.eclipse.jetty.util.thread.ThreadPool; public class QueuedThreadPoolFactory implements ThreadPoolFactory { @Override public ThreadPool buildThreadPool(Options options) { return new QueuedThreadPool(options.containerThreads()); } } ``` My conclusion: 1. WireMock has a dependency on `org.eclipse` 2. WireMock doesn't include this dependency in its artifact 3. I have to provide it manually I even visited their [GitHub][1] to see for myself if the dependency is marked as `provided`, but they use Gradle, and I don't know Gradle My question: 1. What is the rationale behind the decision to exclude the dependency from the artifact? [1]: https://github.com/wiremock/wiremock/tree/master
Why is org.eclipse not included in the WireMock artifact?
|java|wiremock|
Tom wants to find the maximum value of an array after doubling a single element. He defines f(i) as the maximum value of the array after doubling the i-th element. Tom wants to find the values of f(1) to f(n), where n is the length of the array. For example, if the array is 1, 3, 2, 5, 4, then the output would be: 5, 6, 5, 10, 8.
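One way to compute all f(i) in O(n), sketched below in Python (names are my own, and this is just one possible approach): track the two largest values so that the maximum of the array excluding any single element is available in constant time, then f(i) = max(that maximum, 2 * a[i]).

```python
def doubled_maxima(a):
    # track the two largest values in one pass (m1 >= m2)
    m1, m2 = float('-inf'), float('-inf')
    for x in a:
        if x > m1:
            m1, m2 = x, m1
        elif x > m2:
            m2 = x
    out = []
    for x in a:
        rest = m2 if x == m1 else m1  # max of the array excluding this element
        out.append(max(rest, 2 * x))
    return out

print(doubled_maxima([1, 3, 2, 5, 4]))  # [5, 6, 5, 10, 8]
```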