I implemented a basic setup of node-oidc-provider from this [github example][1] with the Mongo adapter. It worked earlier when deployed behind HTTPS; however, after deploying the same code to a different server, the token endpoint request fails. I added a listener to the provider and attached the logs below.

```
provider.addListener('server_error', (etx, error) => {
  console.error(JSON.stringify(error, null, 2));
});
```

```
{
  "request": {
    "method": "POST",
    "url": "/token",
    "header": {
      "host": "--hosted app url--",
      "x-forwarded-proto": "https",
      "x-real-ip": "--private-ip--",
      "x-forwarded-for": "--private-ip--",
      "connection": "close",
      "content-length": "311",
      "sec-ch-ua": "\"Not A(Brand\";v=\"99\", \"Google Chrome\";v=\"121\", \"Chromium\";v=\"121\"",
      "accept": "application/json",
      "content-type": "application/x-www-form-urlencoded",
      "sec-ch-ua-mobile": "?0",
      "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/121.0.0.0 Safari/537.36",
      "sec-ch-ua-platform": "\"macOS\"",
      "origin": "--requested web app url--",
      "sec-fetch-site": "same-site",
      "sec-fetch-mode": "cors",
      "sec-fetch-dest": "empty",
      "referer": "--requested web app url--",
      "accept-encoding": "gzip, deflate, br",
      "accept-language": "en-GB,en-US;q=0.9,en;q=0.8"
    }
  },
  "response": {
    "status": 500,
    "message": "Internal Server Error",
    "header": {
      "content-security-policy": "default-src 'self';base-uri 'self';font-src 'self' https: data:;frame-ancestors 'self';img-src 'self' data:;object-src 'none';script-src 'self';script-src-attr 'none';style-src 'self' https: 'unsafe-inline';upgrade-insecure-requests",
      "cross-origin-embedder-policy": "require-corp",
      "cross-origin-opener-policy": "same-origin",
      "cross-origin-resource-policy": "*",
      "x-dns-prefetch-control": "off",
      "x-frame-options": "SAMEORIGIN",
      "strict-transport-security": "max-age=15552000; includeSubDomains",
      "x-download-options": "noopen",
      "x-content-type-options": "nosniff",
      "origin-agent-cluster": "?1",
      "x-permitted-cross-domain-policies": "none",
      "referrer-policy": "no-referrer",
      "x-xss-protection": "0",
      "vary": "Origin",
      "access-control-allow-origin": "--requested web app url--",
      "access-control-allow-credentials": "true",
      "cache-control": "no-store",
      "content-type": "application/json; charset=utf-8"
    }
  },
  "app": {
    "subdomainOffset": 2,
    "proxy": true,
    "env": "production"
  },
  "originalUrl": "/token",
  "req": "<original node req>",
  "res": "<original node res>",
  "socket": "<original node socket>"
}
```

On the existing server we sometimes get:

```
MongoServerError: E11000 duplicate key error collection: basemodels index: payload.grantId_1 dup key: { payload.grantId: "F5DIll55oARfHVxRdgoUWFFe8XbQQ0H8_FijtT3KB9_" }
keyPattern: { 'payload.grantId': 1 },
keyValue: { 'payload.grantId': 'F5DIll55oARfHVxRdgoUWFFe8XbQQ0H8_FijtT3KB9_' },

Error [ERR_HTTP_HEADERS_SENT]: Cannot set headers after they are sent to the client
    at new NodeError (node:internal/errors:405:5)
    at ServerResponse.setHeader (node:_http_outgoing:648:11)
    at Cookies.set (/app/node_modules/cookies/index.js:148:13)
    at ContextSession.save (/app/node_modules/koa-session/lib/context.js:341:22)
    at ContextSession.commit (/app/node_modules/koa-session/lib/context.js:244:16)
    at session (/app/node_modules/koa-session/index.js:46:20)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async cors (/app/node_modules/@koa/cors/index.js:61:32)
```

Please let me know if I missed anything.
[1]: https://github.com/panva/node-oidc-provider/blob/270af1da83dda4c49edb4aaab48908f737d73379/example/standalone.js
I want to add an SRT to an MP4 with a matching name from the same folder, and I want to script it so it keeps doing this for all files in the folder. This works if I do **1 at a time**, but I have over 1,000 to do. How can I create a batch to do this?

```
ffmpeg -i *source*.mp4 -i *source*.srt -c copy -c:s mov_text -metadata:s:s:0 language=eng *destination*.mp4
```

```
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1280x694 [SAR 1:1 DAR 640:347], q=2-31, 1385 kb/s, 23.98 fps, 23.98 tbr, 24k tbn (default)
  Metadata:
    handler_name    : VideoHandler
    vendor_id       : [0][0][0][0]
Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 192 kb/s (default)
  Metadata:
    handler_name    : SoundHandler
    vendor_id       : [0][0][0][0]
Stream #0:2(eng): Subtitle: mov_text (tx3g / 0x67337874)
  Metadata:
    encoder         : Lavc60.40.100 mov_text
[out#0/mp4 @ 000001b40337b680] video:1027151KiB audio:142295KiB subtitle:31KiB other streams:0KiB global headers:0KiB muxing overhead: 0.297458%
size= 1172956KiB time=01:35:42.13 bitrate=1673.4kbits/s speed= 763x
```
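For illustration, here is a minimal Python sketch of one way to batch this: loop over the folder and run the same ffmpeg command for every `.mp4` that has a matching `.srt`. The folder path and the `_subbed` output naming are assumptions, not part of the original command:

```python
import subprocess
from pathlib import Path

folder = Path(r"C:\videos")  # hypothetical folder containing the mp4/srt pairs

for mp4 in folder.glob("*.mp4"):
    srt = mp4.with_suffix(".srt")   # same base name, .srt extension
    if not srt.exists():
        continue                    # skip files without a matching subtitle
    out = mp4.with_name(mp4.stem + "_subbed.mp4")  # assumed output name
    subprocess.run([
        "ffmpeg", "-i", str(mp4), "-i", str(srt),
        "-c", "copy", "-c:s", "mov_text",
        "-metadata:s:s:0", "language=eng",
        str(out),
    ], check=True)  # stop on the first ffmpeg failure
```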
Adding SRT subtitle to multiple MP4
|ffmpeg|mp4|subtitle|srt|
Use adjusted local fallback font with Bootstrap 5 for different font weights
I'm sorry, it was my mistake in that I didn't clarify the problem and the relationships between my tables well. I solved the problem as follows:

```php
public function edit(User $user, Termin $termin)
{
    return $user->id === $termin->created_by_id
        || $user->team->id === $termin->call_center_id
        || $user->id === $termin->assigned_to_id;
}

public function view(User $user, Termin $termin)
{
    return $user->id === $termin->created_by_id
        || $user->team->id === $termin->call_center_id
        || $user->id === $termin->assigned_to_id;
}
```

Controller:

```php
public function show($id)
{
    $termin = Termin::findOrFail($id);
    if (Auth::user()->role[0]->id != 1) {
        $this->authorize('view', $termin);
    }
    return view('admin.termins.show', compact('termin'));
}

public function edit($id)
{
    $termin = Termin::findOrFail($id);
    if (Auth::user()->role[0]->id != 1) {
        $this->authorize('edit', $termin);
    }
    // rest of the code...
}
```

Thank you @afcy
I want to show a new QWidget, which is set in a QScrollArea that has a grid layout; a QVBoxLayout is then added to the grid layout. The program allows adding labels dynamically to this QVBoxLayout. I want the scroll area to have a fixed size, and when the labels' total height exceeds the scroll area's height, the scrollbar should appear (with no change in the scroll area's size). With my code, adding labels causes the height of the QWidget (window) to increase. The scrollbar can be seen at the beginning, but at some point it disappears, as if the QScrollArea were no longer there.

```c++
QWidget* playlistWindow = new QWidget();
playlistWindow->setAcceptDrops(true);
playlistWindow->resize(200, 300);

QScrollArea* scrollArea = new QScrollArea();
scrollArea->setWidget(playlistWindow);

QVBoxLayout* layout = new QVBoxLayout();
QGridLayout* gridLayout = new QGridLayout();
layout->setAlignment(Qt::AlignTop);

scrollArea->setLayout(gridLayout);
gridLayout->addLayout(layout, 0, 0);
scrollArea->setMaximumHeight(300);
scrollArea->show();
```
Strange issue when recursively concatenating to string
|javascript|csv|recursion|spotify|
I have an 18 GB `.parquet` file with ~300M rows of accounting data (which I cannot share), split into 53 row groups. My task is to 'clean' the data by retaining in each cell only specific words from a dictionary. Reading the file is trouble free. Processing the file ends in a segmentation fault on a 20-core, 128GB RAM Ubuntu 22.04 desktop.

Using Python and the Dask library, I convert the data into a Dask dataframe with the following columns:

```
['rowid', 'txid', 'debit', 'credit', 'effective_date', 'entered_date',
 'user_id', 'transaction', 'memo', 'type', 'account', 'total_amt']
```

The columns to be cleaned in this file are `memo`, `type`, and `account`. My approach is to take each of those columns and apply a `filter_field` and a `hash_field` method to them:

```
if isinstance(data, dd.DataFrame):
    result = [data[col].apply(lambda x: self.filter_field(text=x, word_dict=word_dict), meta=data[col])
              for col in memo_columns]
    for i, col in enumerate(memo_columns):  # this loop seems to be req'd to assign values
        data[col] = result[i]

    # second: hash the name/id fields
    id_cols = name_columns + id_columns + account_columns
    result = [data[col].apply(lambda x: self.hash_field(text=x), meta=data[col])
              for col in id_cols]
    for i, col in enumerate(id_cols):
        data[col] = result[i]

    del result
    gc.collect()
```

`filter_field` takes each cell, removes symbols, then checks whether the remaining words are in a dictionary; words that are not are dropped. `hash_field` is just `shake_256(text.encode(encoding='utf8')).hexdigest(20)`. I know the filtering is sound because everything runs fine on near-identical files of up to 128M rows.

Two things happen when I run this larger file:

1. At some point I get a segmentation fault. Sometimes it happens early in the processing, sometimes later.
2. Rarely are more than 3 cores processing at more than 30-50%, and when there are more, the additional cores sit at ~1-2% (observed via htop).

What I would like to know:

1. Is there a better approach than the looping/vectorizing I used; or
2. How can I get more cores working on the process?

Notes: I tried various approaches with partition size changes, varying both number and size. There was no visible improvement in processing and the large file still threw a seg fault.
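For reference on question 1, a minimal sketch of a per-partition alternative, assuming `filter_field`, `hash_field`, and the column lists are the same objects as above: `map_partitions` hands each worker one whole pandas partition, which usually parallelizes better than many small per-column `Series.apply` tasks.

```python
# Sketch only: filter_field/hash_field are the same (picklable) methods used above.
def clean_partition(df, word_dict, memo_columns, id_cols, filter_field, hash_field):
    # Plain pandas inside each partition; runs once per partition on a worker.
    for col in memo_columns:
        df[col] = df[col].map(lambda x: filter_field(text=x, word_dict=word_dict))
    for col in id_cols:
        df[col] = df[col].map(lambda x: hash_field(text=x))
    return df

# data = data.map_partitions(
#     clean_partition, word_dict, memo_columns, id_cols,
#     self.filter_field, self.hash_field,
# )
# Without an explicit meta=, Dask infers the output schema from a tiny sample.
```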
I'm attempting to pull NuGet packages from Telerik's NuGet repository into an Azure build pipeline; however, none of the pipeline configuration attempts I've made seem to work. I either receive an error stating my nuget.config is not formatted correctly or a 401 error when connecting to the repository.

The below configuration section is from my build definition. I've tried using NuGetCommand as well as NuGetRestore:

```
- task: NuGetToolInstaller@1
  inputs:
    versionSpec: '5.0.2'

- task: NuGetAuthenticate@1
  inputs:
    nuGetServiceConnections: 'Telerik_v3'

- task: NuGetCommand@2
  inputs:
    command: 'restore'
    restoreSolution: '**/*.sln'
    feedsToUse: 'config'
    nugetConfigPath: './XXXXXXX/nuget.config' #'$(System.DefaultWorkingDirectory)/XXXXXXX/NuGet.config'
    externalFeedCredentials: 'Telerik_v3'

# - task: NuGetRestore@1
#   inputs:
#     solution: '**/*.sln'
#     selectOrConfig: 'config'
#     nugetConfigPath: './XXXXXXX/nuget.config'
```

Here is my nuget.config:

```
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="NuGet" value="https://api.nuget.org/v3/index.json" protocolVersion="3"/>
    <add key="Telerik_NuGet" value="https://nuget.telerik.com/v3/index.json" protocolVersion="3" />
  </packageSources>
</configuration>
```

Here is the error I receive:

> NuGet.Protocol.Core.Types.FatalProtocolException: Unable to load the service index for source https://nuget.telerik.com/v3/index.json. ---> System.Net.Http.HttpRequestException: Response status code does not indicate success: 401 (Unauthorized).

Tweaking the configuration slightly, I get this error:

> [error]The nuget command failed with exit code(1) and error(NuGet.Configuration.NuGetConfigurationException: NuGet.Config is not valid XML. Path: 'D:\a\1\Nuget\tempNuGet_966.config'. ---> System.Xml.XmlException: An error occurred while parsing EntityName. Line 10, position 46.

I based my build tasks on the NuGetCommand@2 and NuGetRestore@1 documentation as well as what I found on Telerik's site.
Just a simple confusion: how do I update my *@Model* data structure after I make a container? Like this:

```
@Model
final class Item {
    var timestamp: Date

    init(timestamp: Date) {
        self.timestamp = timestamp
    }
}
```

Everything works fine, but after adding a new attribute:

```
@Model
final class Item {
    var timestamp: Date
    var newAtt: String

    init(timestamp: Date, newAtt: String) {
        self.timestamp = timestamp
        self.newAtt = newAtt
    }
}
```

my application can't open and I get a long error (part of it):

> error: addPersistentStoreWithType:configuration:URL:options:error: returned error NSCocoaErrorDomain (134110) CoreData: error: addPersistentStoreWithType:configuration:URL:options:error: returned error NSCocoaErrorDomain (134110) error: userInfo: CoreData: error: userInfo: error: sourceURL :

What's the problem with the update? It seems I can't change the data structure after setting up the container; is anything wrong?
How to modify or update a data structure using SwiftData?
|swift-data|
After investing numerous hours, I have identified both the cause of this issue and its solution. Avoid utilizing `nextTick()` before invoking `useFetch`, as it triggers fetching in client-side rendering (CSR), causing you to forfeit SEO benefits (resulting in no fetched data in server-side rendering (SSR) mode on the source page). To resolve this issue promptly, simply upgrade your Nuxt installation:

```bash
npx nuxi upgrade
```

The version that has proven effective for me is `"nuxt": "^3.10.3"`.
So I'm trying to run an R CMD check for a package I want to upload to CRAN, but when running it in my Windows 11 cmd I get the following error:

```
* using log directory 'C:/Users/Zick/Documents/YEAB/YEAB-master/YEAB.Rcheck'
* using R version 4.3.3 (2024-02-29 ucrt)
* using platform: x86_64-w64-mingw32 (64-bit)
* R was compiled by
    gcc.exe (GCC) 12.3.0
    GNU Fortran (GCC) 12.3.0
* running under: Windows 11 x64 (build 22631)
* using session charset: UTF-8
* using option '--as-cran'
* checking for file 'YEAB/DESCRIPTION' ... OK
* this is package 'YEAB' version '0.1.0'
* package encoding: UTF-8
* checking CRAN incoming feasibility ... WARNING
  Maintainer: 'Emmanuel Alcala <jealcala@gmail.com>'
  New submission
  Non-FOSS package license (CC-BY-4.0 + file LICENSE)
  Unknown, possibly misspelled, fields in DESCRIPTION:
    'Extdata'
  Strong dependencies not in mainstream repositories:
    rethinking
  The Date field is over a month old.
* checking package namespace information ... OK
* checking package dependencies ... OK
* checking if this is a source package ... OK
* checking if there is a namespace ... OK
* checking for executable files ... OK
* checking for hidden files and directories ... OK
* checking for portable file names ... OK
* checking serialization versions ... OK
* checking whether package 'YEAB' can be installed ... ERROR
  Installation failed.
  See 'C:/Users/Zick/Documents/YEAB/YEAB-master/YEAB.Rcheck/00install.out' for details.
* DONE
Status: 1 ERROR, 1 WARNING
See 'C:/Users/Zick/Documents/YEAB/YEAB-master/YEAB.Rcheck/00check.log' for details.
```

And when I open the "00install.out" file I see this error:

```
* installing *source* package 'YEAB' ...
** using staged installation
** R
** data
** inst
** byte-compile and prepare package for lazy loading
Error : 'library' is not an exported object from 'namespace:foreach'
Error: unable to load R code in package 'YEAB'
Execution halted
ERROR: lazy loading failed for package 'YEAB'
* removing 'C:/Users/Zick/Documents/YEAB/YEAB-master/YEAB.Rcheck/YEAB'
```

The problem is that I'm not importing any function named "library" from the "foreach" namespace. What's more, if I'm not wrong, the "library" function comes from the "base" package, so I don't understand why I'm getting this error.

My NAMESPACE file looks like this:

```
export(KL_div)
export(ab_range_normalization)
export(addalpha)
export(balci2010)
export(berm)
export(biexponential)
export(box_dot_plot)
export(bp_km)
export(bp_opt)
export(ceiling_multiple)
export(curv_index_fry)
export(curv_index_int)
export(den_histogram)
export(entropy_kde2d)
export(eq_hyp)
export(event_extractor)
export(exhaustive_lhl)
export(exhaustive_sbp)
export(exp_fit)
export(f_table)
export(fleshler_hoffman)
export(fwhm)
export(gaussian_fit)
export(gell_like)
export(get_bins)
export(hist_over)
export(hyperbolic_fit)
export(ind_trials_opt)
export(mut_info_discret)
export(mut_info_knn)
export(n_between_intervals)
export(objective_bp)
export(read_med)
export(sample_from_density)
export(trapezoid_auc)
export(unity_normalization)
export(val_in_interval)
importFrom(foreach,"%dopar%")
importFrom(foreach,foreach)
importFrom(MASS,kde2d)
importFrom(Polychrome,createPalette)
importFrom(cluster, clusGap)
importFrom(dplyr,between)
importFrom(dplyr,lag)
importFrom(infotheo,discretize)
importFrom(infotheo,mutinformation)
importFrom(magrittr,"%>%")
importFrom(minpack.lm,nls.lm)
importFrom(rmi,knn_mi)
importFrom(scales,show_col)
importFrom(sfsmisc,integrate.xy)
importFrom(zoo,rollapply)
importFrom(ggplot2, ggplot, aes, geom_point)
importFrom(grid, unit)
importFrom(gridExtra, grid.arrange)
importFrom(rethinking, HPDI)
importFrom(stats, median, optim, coef, fitted, approx, integrate, quantile)
importFrom("grDevices", "col2rgb", "grey", "rgb")
importFrom("graphics", "abline", "arrows", "axis", "box", "boxplot", "grid",
           "hist", "layout", "lines", "mtext", "par", "polygon",
           "stripchart", "text")
importFrom("stats", "approx", "bw.SJ", "coef", "cor", "density", "fitted",
           "integrate", "lm", "loess", "median", "na.omit", "nls",
           "nls.control", "optim", "pnorm", "quantile", "rbinom", "runif")
importFrom("utils", "read.table", "stack", "tail", "write.csv")
importFrom(grid, unit, gpar, grid.polygon, grid.text, grid.layout,
           grid.newpage, pushViewport, viewport, grid.rect, grid.points,
           grid.xaxis, grid.yaxis, grid.segments)
importFrom(minpack.lm, "nls.lm.control")
importFrom(utils, "stack", "write.csv")
importFrom(ggplot2, theme_gray, element_line, element_blank, element_text)
importFrom(cluster, pam)
importFrom(dplyr, group_by, summarise)
```

And my DESCRIPTION file looks like this:

```
Package: YEAB
Title: Package to Analyze Data from Analysis of Behavior Experiments
Version: 0.1.0
Authors@R: c(
    person("Emmanuel", "Alcala", , "jealcala@gmail.com", role = c("aut", "cre")),
    person("Rodrigo", "Sosa", , "rsosas@up.edu.mx", role = "aut")
  )
Description: This is a colletion of functions aimed to analyze data from
    behavioral experiments from MED output and others. It also have functions
    to fit exponential or hyperbolic models from delay discounting tasks,
    exponential mixtures to IRTs, Gaussian plus ramp model for peak procedures
    data, etc.
License: CC0
Encoding: UTF-8
Roxygen: list(markdown = TRUE)
RoxygenNote: 7.2.3
Imports:
    cluster,
    doParallel,
    rethinking,
    dplyr,
    foreach,
    ggplot2,
    grid,
    gridExtra,
    infotheo,
    ks,
    magrittr,
    minpack.lm,
    Polychrome,
    rmi,
    scales,
    sfsmisc,
    VGAM,
    MASS,
    zoo
License: CC-BY-4.0 + file LICENSE
Extdata: inst/extdata/fi60_raw.csv, inst/extdata/fi60_raw
Author: 'Emmanuel Alcala [aut, cre], Rodrigo Sosa [aut]'
Maintainer: Emmanuel Alcala <jealcala@gmail.com>
Date: 2023-09-27
```

I've looked all over the internet for a solution and tried to update the "foreach" package to its latest version, but to no avail.

What is stranger still is that if I delete these lines:

```
importFrom(foreach,"%dopar%")
importFrom(foreach,foreach)
```

from the NAMESPACE file, I still get the same 'library' is not an exported object from 'namespace:foreach' error. Has anybody else run into this problem? I'm quite new to R package development, so I'm sorry if I turn out to be making a rookie mistake here.
```c++
template <>
inline bool
_Sp_counted_base<_S_atomic>::_M_add_ref_lock_nothrow() noexcept
{
    // Perform lock-free add-if-not-zero operation.
    _Atomic_word __count = _M_get_use_count();
    do
    {
        if (__count == 0)
            return false;
        // Replace the current counter value with the old value + 1, as
        // long as it's not changed meanwhile.
    }
    while (!__atomic_compare_exchange_n(&_M_use_count, &__count, __count + 1,
                                        true, __ATOMIC_ACQ_REL,
                                        __ATOMIC_RELAXED));
    return true;
}
```

This is `_M_add_ref_lock_nothrow()` in c++/11/bits/shared_ptr_base.h.

Question: `__count` only gets assigned a value at the start (not inside the loop). Is it possible for this loop to become infinite? Hoping for an answer, thank you!
```python
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager
from selenium.webdriver.chrome.options import Options
import time

options = Options()
# Reuse an existing Chrome Beta profile so the browser session keeps its data.
options.add_argument("user-data-dir=C:\\Users\\yourusername\\AppData\\Local\\Google\\Chrome Beta\\User Data")
options.add_argument("profile-directory=Default")

# Download a matching chromedriver and start the browser.
driver_service = Service(ChromeDriverManager().install())
driver = webdriver.Chrome(service=driver_service, options=options)

# Define the search keyword
keyword = "hello"
driver.get(f"https://www.bing.com/search?q={keyword}")

time.sleep(10)  # Wait for 10 seconds
```
I've got a Postgres DB with a sequence table that has columns like so:

| id | name | sequence |
| ------ | ---- | ------------------------------------------------ |
| 191212 | seq1 | gtagagctttttttgatgttctatcatcttgggggggaaa |
| 414124 | seq2 | cccggtttaatatttggttttttgggagggagagagagggggggagag |

I'm wondering if anyone knows of a way of doing an efficient substring search on this table which would also allow for fuzzy matching. For example, the following searches, only looking for >80% matches, would produce the following results:

| search seq | matches on seq | match index start | % match |
| --------------- | -------------- | ----------------- | ------- |
| tcttgggggggaaa | seq1 | 26 | 100% |
| tcttgggggggaaat | seq1 | 26 | 93% |
| tcttgggggtggaaa | seq1 | 26 | 93% |
| tcttggggggaaa | seq1 | 26 | 92% |
| ttttttt | seq1 | 7 | 100% |
| ttttttt | seq2 | 17 | 85% |

I'm defining:

```
% match = (number of exact letter matches) / (length of search seq)
```

Thanks!
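To make the metric concrete, here is a minimal Python sketch of the % match definition above, scanning every alignment of the search sequence against a stored sequence. It is only the scoring rule, not an efficient SQL-side solution, and this simple position-by-position version does not handle the overhang/insertion cases in the second and third example rows, which need alignment-style scoring:

```python
def best_match(search_seq: str, target: str) -> tuple[int, float]:
    """Return (0-based start index, % match) of the best alignment."""
    best_start, best_score = -1, 0.0
    n = len(search_seq)
    for start in range(len(target) - n + 1):
        window = target[start:start + n]
        # exact letter matches at the same positions
        matches = sum(a == b for a, b in zip(search_seq, window))
        score = matches / n
        if score > best_score:
            best_start, best_score = start, score
    return best_start, best_score

# best_match("tcttgggggggaaa", "gtagagctttttttgatgttctatcatcttgggggggaaa")
# -> (26, 1.0), matching the first table row above
```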
I've got a horizontal range of numbers in cells, and a vertical range of numbers in some other cells. I want a product of those ranges. For example, cells A1:D1 contain 1, 2, 3, 4 and cells A2:A5 contain the values 5, 6, 7, 8. I expect to get an answer of 70 (1·5 + 2·6 + 3·7 + 4·8 = 70). I run =SUMPRODUCT(A1:D1, A2:A5) and it gives me #VALUE. Same problem for other simple examples like this. I have no idea why. According to all the sources I've read (including ChatGPT), this formula should work? I tried various cell range lengths and values, in different positions, but got the same problem.
Why does my simple Excel formula =SUMPRODUCT(A1:D1, A2:A5) return #VALUE error?
|excel|excel-formula|sumproduct|
How could a Python async function become effective before its parameter is evaluated?
I have a piece of MATLAB code that I want to translate to Python. Here is the relevant part of the code (assuming x is a list):

```
% The size of x
N = size(x);
N = N(2);
```

The issue is the second line. I understand the index syntax if it were, say, `x(2)`, but I don't understand `N(2)`. I've tried looking up what this means in the documentation, but all I could find was about indexing lists. Any help is appreciated, thanks.
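For comparison, a minimal Python sketch of the same two lines (the array contents here are made up): `size(x)` returns a vector of dimension sizes, and `N(2)` then picks its second element, the number of columns.

```python
import numpy as np

x = np.array([[10, 20, 30, 40]])  # hypothetical 1-by-4 "row vector"

N = x.shape   # like MATLAB's size(x): here the tuple (1, 4)
N = N[1]      # like N(2): the second element, i.e. 4
              # (Python indexes from 0 where MATLAB indexes from 1)

# For a plain Python list, size(x) in MATLAB would be [1, len(x)],
# so the whole pattern collapses to just N = len(x).
```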
Variable syntax in Matlab
|matlab|
*(please give me an explanation if you are downvoting)*

# Rationale

An array is a data structure passed by reference. This means that the array holds a reference to a set of variables at an address in **RAM** memory; when it is passed to a method and the values inside are modified, the modification is done at that same address in **RAM**, and thus modifying the values will reflect the change everywhere the array's reference was passed. In order to modify it without passing the changes back to the original memory address, the contents of the array must be copied to a different memory address.

# Implementation

One way of copying the contents of the array is to create an immutable array and copy the contents into it. Another way is to use ```Array.Copy()```. A third way is to create a contiguous memory block at a certain location and move the contents of the array at once into the newly created block.

```
public static Data[] DeepCopy<Data>(Data[] data)
{
    // CREATE AN IMMUTABLE ARRAY AND COPY THE DATA OF THE ORIGINAL ARRAY
    ImmutableArray<Data> copy = data.ToImmutableArray<Data>();

    // PARSE THE IMMUTABLE ARRAY TO A NORMAL ARRAY AND RETURN
    return copy.ToArray<Data>();
}

public static Data[] CopyMemoryChunk<Data>(Data[] data)
{
    // CREATE A CONTIGUOUS MEMORY BLOCK AND ALLOCATE THE VALUES OF THE ARRAY
    // AT THE MEMORY ADDRESS WHERE THE CONTIGUOUS MEMORY BLOCK IS CREATED
    Memory<Data> data_obj = new Memory<Data>(data);

    // PARSE THE CONTENTS OF THE CONTIGUOUS MEMORY BLOCK TO AN ARRAY AND RETURN
    return data_obj.ToArray();
}
```

# RESULT

* At small sets (sub 1000 integers), it is faster to copy using ```CopyTo()``` or to make a deep copy of the original array using an immutable array than to create a contiguous memory block and store the values at the contiguous memory block's address.
[![enter image description here][1]][1]

<br/>
<br/>

* At bigger sets (10000 and over), copying the values using ```CopyTo()``` or using an immutable array to create a deep copy is slower than creating a contiguous memory block and storing the values at the contiguous memory block's address.

[![enter image description here][2]][2]

<br/>

# CODE

```
using System.Collections;
using System.Collections.Immutable;
using System.Diagnostics;
using System.Dynamic;
using System.Security.Cryptography.X509Certificates;

namespace DeepCopy
{
    class Program
    {
        public static Data[] DeepCopy<Data>(Data[] data)
        {
            // CREATE AN IMMUTABLE ARRAY AND COPY THE DATA OF THE ORIGINAL ARRAY
            ImmutableArray<Data> copy = data.ToImmutableArray<Data>();

            // PARSE THE IMMUTABLE ARRAY TO A NORMAL ARRAY AND RETURN
            return copy.ToArray<Data>();
        }

        public static Data[] CopyMemoryChunk<Data>(Data[] data)
        {
            // CREATE A CONTIGUOUS MEMORY BLOCK AND ALLOCATE THE VALUES OF THE ARRAY
            // AT THE MEMORY ADDRESS WHERE THE CONTIGUOUS MEMORY BLOCK IS CREATED
            Memory<Data> data_obj = new Memory<Data>(data);

            // PARSE THE CONTENTS OF THE CONTIGUOUS MEMORY BLOCK TO AN ARRAY AND RETURN
            return data_obj.ToArray();
        }

        public static void Main()
        {
            int number_of_tests = 100000;
            int number_of_elements = 100;

            CopyUsingDeepCopy(true, number_of_elements);
            CopyUsingContinguousMemoryBlock(true, number_of_elements);
            TestSpeed(number_of_tests, number_of_elements);
        }

        public static void TestSpeed(int number_of_tests, int number_of_elements)
        {
            long TotalCopyUsingDeepCopyTime = 0;
            long TotalCopyUsingContinguousMemoryBlockTime = 0;
            long TotalArrayCopyTime = 0;
            Stopwatch s = new Stopwatch();

            for (int i = 0; i < number_of_tests; i++)
            {
                s.Start();
                CopyUsingDeepCopy(false, number_of_elements);
                s.Stop();
                TotalCopyUsingDeepCopyTime += s.ElapsedTicks;
                s.Reset();

                s.Start();
                CopyUsingContinguousMemoryBlock(false, number_of_elements);
                s.Stop();
                TotalCopyUsingContinguousMemoryBlockTime += s.ElapsedTicks;
                s.Reset();

                s.Start();
                int[] deep_copy = new int[number_of_elements];
                int[] values = new int[number_of_elements];
                for (int count = 0; count < number_of_elements; count++)
                {
                    values[count] = count;
                }
                values.CopyTo(deep_copy, 0);
                s.Stop();
                TotalArrayCopyTime += s.ElapsedTicks;
                s.Reset();
            }

            Console.WriteLine("Average timer ticks taken for deep copy: " + (double)(TotalCopyUsingDeepCopyTime / number_of_tests));
            Console.WriteLine("Average timer ticks taken for contiguous memory block: " + (double)(TotalCopyUsingContinguousMemoryBlockTime / number_of_tests));
            Console.WriteLine("Average timer ticks taken for 'ArrayCopy()': " + (double)(TotalArrayCopyTime / number_of_tests));
        }

        public static void CopyUsingDeepCopy(bool print, int number_of_elements)
        {
            if (print == true)
            {
                Console.WriteLine("\n\nCopying using deep copy:\n");
            }

            int[] values = new int[number_of_elements];
            for (int i = 0; i < number_of_elements; i++)
            {
                values[i] = i;
            }

            int[] copy = DeepCopy<int>(values);
            values[0] = 123456;

            if (print == true)
            {
                PrintValues(values, copy);
            }
        }

        public static void CopyUsingContinguousMemoryBlock(bool print, int number_of_elements)
        {
            if (print == true)
            {
                Console.WriteLine("\n\nCopying using contiguous memory block:\n");
            }

            int[] values = new int[number_of_elements];
            for (int i = 0; i < number_of_elements; i++)
            {
                values[i] = i;
            }

            int[] copy = CopyMemoryChunk<int>(values);
            values[0] = 123456;

            if (print == true)
            {
                PrintValues(values, copy);
            }
        }

        public static void PrintValues(int[] arr_1, int[] arr_2)
        {
            Console.WriteLine("Values in original object:\n");
            for (int i = 0; i < arr_1.Length; i++)
            {
                if (i != arr_1.Length - 1)
                {
                    Console.Write(arr_1[i] + ", ");
                }
                else
                {
                    Console.Write(arr_1[i]);
                }
            }
            Console.WriteLine("\n\n");

            Console.WriteLine("Values in copy object:\n");
            for (int i = 0; i < arr_2.Length; i++)
            {
                if (i != arr_2.Length - 1)
                {
                    Console.Write(arr_2[i] + ", ");
                }
                else
                {
                    Console.Write(arr_2[i]);
                }
            }
            Console.WriteLine("\n\n");
        }
    }
}
```

<br/>

# SPECS

The program was run using the following specs:

[![specs][3]][3]

[1]: https://i.stack.imgur.com/HE4Qc.png
[2]: https://i.stack.imgur.com/wDp9U.png
[3]: https://i.stack.imgur.com/VLEyo.png
I think SCSS syntax allows you to nest the ```&::before``` pseudo-element inside ```mat-expansion-panel-header```:

```scss
mat-expansion-panel-header {
    position: relative;

    &::before {
        content: '';
        position: absolute;
        top: 0px;
        left: 0px;
        bottom: 0px;
        right: 0px;
        z-index: 1000;
        background-color: black;
    }
}
```

(Note the `&`: without it, the nested rule compiles to the descendant selector `mat-expansion-panel-header ::before` instead of targeting the header itself.)

Is this how it's supposed to look?

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/TYTVG.png
I'm trying to run and deploy the Dalle Playground on my local machine using an AMD GPU. I'm on Windows 11 with a WSL instance running. ([Link to Dalle Playground repo][1])

System OS: Windows 11 Pro - Version 21H1 - OS Build 22000.675
WSL Version: WSL 2
WSL Kernel: 5.10.16.3-microsoft-standard-WSL2
WSL OS: Ubuntu 20.04 LTS
GPU: AMD Radeon RX 6600 XT
CPU: AMD Ryzen 5 3600XT (32GB RAM)

I have been able to deploy the backend and frontend successfully, but it runs off the CPU. It gives me this warning:

```
--> Starting DALL-E Server. This might take up to two minutes.
2022-06-12 01:16:33.012306: I external/org_tensorflow/tensorflow/core/tpu/tpu_initializer_helper.cc:259] Libtpu path is: libtpu.so
2022-06-12 01:16:37.581440: I external/org_tensorflow/tensorflow/compiler/xla/service/service.cc:174] XLA service 0x5a4e760 initialized for platform Interpreter (this does not guarantee that XLA will be used). Devices:
2022-06-12 01:16:37.581474: I external/org_tensorflow/tensorflow/compiler/xla/service/service.cc:182]   StreamExecutor device (0): Interpreter, <undefined>
2022-06-12 01:16:37.587860: I external/org_tensorflow/tensorflow/compiler/xla/pjrt/tfrt_cpu_pjrt_client.cc:176] TfrtCpuClient created.
2022-06-12 01:16:37.588478: I external/org_tensorflow/tensorflow/stream_executor/tpu/tpu_platform_interface.cc:74] No TPU platform found.
WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.)
```

I have been trying to make my GPU accessible to my WSL instance, but I can't work out what I'm doing wrong. For use with the GPU, I entered the following command from [pytorch][2]:

```
pip3 install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm4.5.2
```

I confirmed that PyTorch is installed correctly by running the sample PyTorch code supplied on their [website][2]. After doing some digging, I realised that the **rock-dkms** package needed to be installed too, so I followed the advice on [this website][3] and installed it successfully, after a lot of issues. When I try to check ROCm for my GPU, this is what comes up:

```
$ /opt/rocm/bin/rocminfo
ROCk module is NOT loaded, possibly no GPU devices

$ /opt/rocm/opencl/bin/clinfo
Number of platforms:                             1
  Platform Profile:                              FULL_PROFILE
  Platform Version:                              OpenCL 2.1 AMD-APP (3423.0)
  Platform Name:                                 AMD Accelerated Parallel Processing
  Platform Vendor:                               Advanced Micro Devices, Inc.
  Platform Extensions:                           cl_khr_icd cl_amd_event_callback
  Platform Name:                                 AMD Accelerated Parallel Processing
Number of devices:                               0
```

Based on this response, it seems there's definitely some sort of AMD-compatible driver available, and if you look at the attached photo you can see what shows up when I try to query for the GPU. I don't know at this stage whether WSL can see and/or access my GPU or not, as **glxinfo** can identify it but nothing else can (even if it gets my VRAM wrong).

ANY advice would be very helpful. I know this issue may not be project specific, but I tried to include as much information about what I am doing as possible for the best chance of figuring this out.

**Installed AMD GPU libs:**

```
$ sudo apt list|grep -i gpu|grep installed

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

libdrm-amdgpu1/focal-updates,focal-security,now 2.4.107-8ubuntu1~20.04.2 amd64 [installed]
libosdgpu3.4.0/focal,now 3.4.0-6build1 amd64 [installed,automatic]
```

**Installed ROC packages:**

```
$ apt list --installed | grep -i roc

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

hsa-rocr-dev/Ubuntu,now 1.5.0.50100-36 amd64 [installed,automatic]
hsa-rocr/Ubuntu,now 1.5.0.50100-36 amd64 [installed,automatic]
hsakmt-roct-dev/Ubuntu,now 20220128.1.7.50100-36 amd64 [installed,automatic]
hsakmt-roct/Ubuntu,now 20210520.3.071986.40301-59 amd64 [installed,automatic]
libopencv-imgproc4.2/focal,now 4.2.0+dfsg-5 amd64 [installed,automatic]
libpostproc55/focal-updates,focal-security,now 7:4.2.7-0ubuntu0.1 amd64 [installed,automatic]
libprocps8/focal-updates,now 2:3.3.16-1ubuntu2.3 amd64 [installed,automatic]
procps/focal-updates,now 2:3.3.16-1ubuntu2.3 amd64 [installed,automatic]
python3-ptyprocess/focal,now 0.6.0-1ubuntu1 all [installed,automatic]
rock-dkms-firmware/Ubuntu,now 1:4.3-59 all [installed,automatic]
rock-dkms/Ubuntu,now 1:4.3-59 all [installed,automatic]
rocm-clang-ocl/Ubuntu,now 0.5.0.50100-36 amd64 [installed,automatic]
rocm-cmake/Ubuntu,now 0.7.2.50100-36 amd64 [installed,automatic]
rocm-core/Ubuntu,now 5.1.0.50100-36 amd64 [installed,automatic]
rocm-dbgapi/Ubuntu,now 0.64.0.50100-36 amd64 [installed,automatic]
rocm-debug-agent/Ubuntu,now 2.0.3.50100-36 amd64 [installed,automatic]
rocm-dev/Ubuntu,now 5.1.0.50100-36 amd64 [installed,automatic]
rocm-device-libs/Ubuntu,now 1.0.0.50100-36 amd64 [installed,automatic]
rocm-dkms/Ubuntu,now 5.1.0.50100-36 amd64 [installed]
rocm-gdb/Ubuntu,now 11.2.50100-36 amd64 [installed,automatic]
rocm-llvm/Ubuntu,now 14.0.0.22114.50100-36 amd64 [installed,automatic]
rocm-ocl-icd/Ubuntu,now 2.0.0.50100-36 amd64 [installed,automatic]
rocm-opencl-dev/Ubuntu,now 2.0.0.50100-36 amd64 [installed,automatic]
rocm-opencl/Ubuntu,now 2.0.0.50100-36 amd64 [installed,automatic]
rocm-smi-lib/Ubuntu,now 5.0.0.50100-36 amd64 [installed,automatic]
rocm-utils/Ubuntu,now 5.1.0.50100-36 amd64 [installed,automatic]
rocminfo/Ubuntu,now 1.0.0.50100-36 amd64 [installed,automatic]
rocprofiler-dev/Ubuntu,now 1.0.0.50100-36 amd64 [installed,automatic]
roctracer-dev/Ubuntu,now 1.0.0.50100-36 amd64 [installed,automatic]
```

[![Trying to find GPU][4]][4]

**EDIT:** I just ran the following and got a little more information about the current state of my system.

```
$ glxinfo -B
name of display: :0
display: :0  screen: 0
direct rendering: Yes
Extended renderer info (GLX_MESA_query_renderer):
    Vendor: Microsoft Corporation (0xffffffff)
    Device: D3D12 (AMD Radeon RX 6600 XT) (0xffffffff)
    Version: 22.2.0
    Accelerated: yes
    Video memory: 24485MB
    Unified memory: no
    Preferred profile: core (0x1)
    Max core profile version: 4.2
    Max compat profile version: 4.2
    Max GLES1 profile version: 1.1
    Max GLES[23] profile version: 3.1
OpenGL vendor string: Microsoft Corporation
OpenGL renderer string: D3D12 (AMD Radeon RX 6600 XT)
OpenGL core profile version string: 4.2 (Core Profile) Mesa 22.2.0-devel (git-cbcdcc4 2022-06-11 focal-oibaf-ppa)
OpenGL core profile shading language version string: 4.20
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL version string: 4.2 (Compatibility Profile) Mesa 22.2.0-devel (git-cbcdcc4 2022-06-11 focal-oibaf-ppa)
OpenGL shading language version string: 4.20
OpenGL context flags: (none)
OpenGL profile mask: compatibility profile
OpenGL ES profile version string: OpenGL ES 3.1 Mesa 22.2.0-devel (git-cbcdcc4 2022-06-11 focal-oibaf-ppa)
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.10
```

Adding to the confusion even more, glmark2 seems to be able to use my GPU fine; possibly an issue with the dalle program and not WSL?

[![GLmark2 running successfully despite issues with dalle playground][5]][5]

[1]: https://github.com/saharmor/dalle-playground
[2]: https://pytorch.org/get-started/locally/
[3]: https://www.reddit.com/r/linuxquestions/comments/qu8n5d/error_installing_amd_rocm/
[4]: https://i.stack.imgur.com/m1eEz.png
[5]: https://i.stack.imgur.com/9UduF.png

**UPDATE:** This could be solved by ZLUDA, but I have not tested it.
mojo (e.g., App A):

```perl
get '/css/:file' => sub {
    my $c    = shift;
    my $file = $c->param('file');

    # Remove the "A" prefix from URL
    $file =~ s{^A/}{};

    $c->reply->static("public/css/$file");
};
```

nginx:

```nginx
location /A {
    rewrite ^/A(.*) /$1 break;
    proxy_pass http://127.0.0.1:3000;
}
```
I need to read Parquet files written by Pandas using PyArrow (I can't change that for now) using PySpark. I used `spark.read.parquet` to read the Parquet files but got the `Illegal Parquet type: INT64 (TIMESTAMP(NANOS,false))` error:

```
---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input-2-10e38942d3ef> in <cell line: 4>()
----> 4 spark_df = spark.read.parquet('data/tests_dataset')

3 frames
/usr/local/lib/python3.10/dist-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    324             value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
    325             if answer[1] == REFERENCE_TYPE:
--> 326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
    328                     format(target_id, ".", name), value)

Py4JJavaError: An error occurred while calling o32.parquet.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1) (69d72c9d10d9 executor driver): org.apache.spark.sql.AnalysisException: Illegal Parquet type: INT64 (TIMESTAMP(NANOS,false)).
```

I use the latest version of PySpark, `3.5.1`. I also tried the older version `3.4.2` but got the same error. Searching online, someone suggested changing the type of the DateTime in the Pandas writing phase, but I can't do that now. Is there a way to avoid this problem without changing the Parquet files?
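For what it's worth, a minimal PyArrow sketch of one workaround that leaves the original files untouched: read them, then rewrite a copy with the nanosecond timestamps coerced down to microseconds, which Spark can read. The output path is an assumption:

```python
import pyarrow.parquet as pq

# Read the original dataset (a directory of files also works).
table = pq.read_table("data/tests_dataset")

# Write a converted copy; coerce_timestamps downcasts timestamp[ns] columns,
# and allow_truncated_timestamps permits the sub-microsecond precision loss.
pq.write_table(
    table,
    "data/tests_dataset_us.parquet",   # hypothetical output location
    coerce_timestamps="us",
    allow_truncated_timestamps=True,
)

# spark_df = spark.read.parquet("data/tests_dataset_us.parquet")
```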
Pyspark: reading Parquet written with Pandas raises 'Illegal Parquet type: INT64 (TIMESTAMP(NANOS,false))'
|python|apache-spark|pyspark|parquet|
```
Error: EPERM: operation not permitted, unlink 'C:\Users\Bright-Ewuru\Desktop\full stack real estate\server\node_modules\.prisma\client\query_engine-windows.dll.node'
```

This is the error I get when running `npx prisma generate` while working with React, MongoDB and Express. I was expecting no error after running `npx prisma generate`. I also tried restarting my server, but that is not working either.
```
[Inject]
IJSRuntime JS { get; set; }

if (!await JS.InvokeAsync<bool>("confirm", $"Are you sure you want to delete ?"))
{
    return;
}
```
## The background: I'm writing a cross-platform word game helper app in SwiftUI. It's useful for games like Wordle. (a game where you try to guess a 5 letter word by guessing other 5 letter words and using info about the matching letters to zero in on the solution.) The app is also useful for other "multi-Wordle" games like Quordle, Octordle, and Sedecordle (4, 8, and 16 Wordle games at once.) For those multi-Wordle games, the Mac version of the app allows the user to open a separate Wordle window for each sub-game. SwiftUI staggers each new window down and to the right of the previous window. ## The question: How would I tile multiple windows so they neatly fill the screen in a row-major grid in SwiftUI? (Where each new window is placed to the right of the previous one until a row is filled, and then it creates a new row of windows directly below the previous one.) I both want an option to tile windows as they are created, and a "tile windows" menu option for the Windows menu.
I am using the same tools in GIMP a lot, but they are not part of the toolbox; they are mostly tools from the Filter or Color menu (e.g. enhance, exposure, ...). They are not in the list that I can select under Preferences/Toolbox, so is there another way to attach these tools to the toolbox?

Thanks,
Sina

I tried to find them in the list of toolbox tools that can optionally be shown under Preferences/Toolbox, but they do not appear there.
How to add Filter menu and Color menu tools to the Toolbox in GIMP
|filter|colors|toolbox|
What you are looking for in this case is CSS. Blazor is mainly a programming framework running on WebAssembly that mostly replaces JavaScript (JavaScript interop is still needed for some functions). You can center an HTML object on the horizontal axis by creating a CSS class and then using it on the object:

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-css -->

    .my-class {
        display: flex;
        justify-content: center;
    }

<!-- language: lang-html -->

    <MudPaper Class="my-class" Elevation="0">
        Test
    </MudPaper>

<!-- end snippet -->
|c++|c++11|
```
<template>
  <div v-if="loading"><loading></loading></div>
  <div v-else-if="error"><error v-bind="error"/></div>
  <div v-else>
    <div class="title" v-if="!error">
      <h1>Recently Added:</h1>
    </div>
    <section class="adverts">
      <adverts
        @update:isFavorited="updateIsFavorited"
        v-for="advert in adverts.list"
        v-bind:key="advert.id"
        v-bind="advert"
      />
    </section>
  </div>
</template>

<script lang="ts">
import { defineComponent, reactive, watchEffect } from 'vue';
import { useQuery } from '@vue/apollo-composable'
import { GET_ADVERTS } from "@/graphql/advert";
import Adverts from '../components/Adverts.vue';
import Error from '../components/Error.vue';
import Loading from '../components/Error.vue';

interface Advert {
  id: number;
  location: string;
  price: number;
  title: string;
  createdAt: string;
  available: boolean;
  isFavorited: boolean;
}

export default defineComponent({
  name: 'App',
  components: {
    Adverts,
    Error,
    Loading,
  },
  methods: {
    async loadMoreAdverts() {
      if (this.loading) return;
      const accessToken = localStorage.getItem("access_token") || "";
      this.loading = true;
      try {
        const { result, error } = await useQuery(GET_ADVERTS, {
          variables: { accessToken, offset: this.adverts.list.length, limit: 10 }
        });
        if (result) {
          this.adverts.list.push(...result.value.getAdverts);
        } else if (error) {
          console.error(error);
        }
      } catch (error) {
        console.error(error);
      } finally {
        this.loading = false;
      }
    }
  },
  setup() {
    const adverts = reactive({ list: [] as Advert[] });
    const accessToken = localStorage.getItem("access_token") || "";

    const updateIsFavorited = (id: number, isFavorited: boolean) => {
      const index = adverts.list.findIndex((advert) => advert.id === id);
      const updatedAdverts = [...adverts.list];
      updatedAdverts[index] = { ...updatedAdverts[index], isFavorited };
      adverts.list = updatedAdverts;
    };

    const { result, loading, error } = useQuery(GET_ADVERTS, {
      accessToken,
      offset: 1,
      limit: 4
    }, {
      fetchPolicy: 'network-only'
    });
    console.log(result, loading, error);

    watchEffect(() => {
      if (result.value) {
        adverts.list = result.value.getAdverts;
      }
    });

    return {
      adverts,
      loading,
      error,
      updateIsFavorited
    };
  },
  mounted() {
    // this.loadMoreAdverts();
    // const observer = new IntersectionObserver(entries => {
    //   if (entries[0].isIntersecting) {
    //     this.loadMoreAdverts();
    //   }
    // }, { threshold: 1 });
    // observer.observe(this.$refs.loadMoreTrigger);
  },
});
</script>

<style scoped>
.adverts {
  display: flex;
  justify-content: flex-start;
  align-items: center;
  flex-direction: row;
  flex-wrap: wrap;
  gap: 30px 30px;
  margin: 50px 150px;
}

.error {
  height: 100%;
}

.title {
  display: grid;
}

h1 {
  color: rgb(var(--v-theme-text));
  margin: 50px 150px 20px 150px;
  justify-self: flex-start;
}
</style>
```

I have this code; it fetches adverts, but every time on load I see the error message for a second or less. This pattern is from the official documentation (https://apollo.vuejs.org/guide-composable/query.html), and I don't think it should work like that. I think it should display loading but not error, because when I check `error.value` in the script it is null.

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/iaw0e.png

It looks like this.
If someone needs a separate `Dockerfile` for each project, you can put each `Dockerfile` in its project folder and use `docker-compose` to build and run them separately. The trick here is to configure your `docker-compose.yaml` like this:

```yml
# [Solution Folder]/docker-compose.yaml
services:
  app1:
    build:
      context: .
      dockerfile: ./Project1/Dockerfile
    ...
  app2:
    build:
      context: .
      dockerfile: ./Project2/Dockerfile
    ...
```

**Note:** Inside each `Dockerfile`, you should assume you are in the solution folder (not in the project folder). So, for example, you will have a line like this in your `Project1/Dockerfile`:

```lang-sh
COPY ["Project1/Project1.csproj", "Project1/"]
```
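With this layout, each image can then be built or run on its own, e.g. `docker compose build app1` or `docker compose up app2`.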
I have a Symfony application using Doctrine ORM. When creating a new Task entity with relationships and trying to persist it, I get a 500 Internal Server Error:

```
create:1 POST http://127.0.0.1:8000/users/create 500 (Internal Server Error)
```

My `TaskController.php`:

```php
/**
 * @Route("/tasks", name="task_list")
 */
public function listTasks(TaskRepository $taskRepository): Response
{
    $tasks = $taskRepository->findAll();
    dump($tasks);

    return $this->render('task/list.html.twig', [
        'tasks' => $tasks,
    ]);
}

/**
 * @Route("/tasks/create", name="task_create")
 */
public function createTask(Request $request): Response
{
    $task = new Task();
    $task->setCreatedAt(new \DateTime());
    $task->setUpdatedAt(NULL);

    $form = $this->createForm(TaskType::class, $task);
    $form->handleRequest($request);

    if ($form->isSubmitted() && $form->isValid()) {
        $this->entityManager->persist($task); // Use the EntityManager to persist the task
        dd($task);
        $this->entityManager->flush(); // Use the EntityManager to flush changes

        return $this->redirectToRoute('task_list');
    }

    return $this->render('task/create.html.twig', [
        'form' => $form->createView(),
    ]);
}
```

My `routes.yaml`:

```yaml
task_list:
    path: /tasks
    controller: App\Controller\TaskController::listTasks

task_create:
    path: /tasks/create
    controller: App\Controller\TaskController::createTask
```

My entities:

`User` entity:

```php
<?php

namespace App\Entity;

use ApiPlatform\Metadata\ApiResource;
use App\Repository\UserRepository;
use Doctrine\DBAL\Types\Types;
use Doctrine\ORM\Mapping as ORM;

#[ORM\Entity(repositoryClass: UserRepository::class)]
#[ApiResource]
class User
{
    #[ORM\Id]
    #[ORM\GeneratedValue]
    #[ORM\Column]
    private ?int $id = null;

    #[ORM\Column(length: 255, nullable: true)]
    private ?string $username = null;

    #[ORM\Column(length: 255, nullable: true)]
    private ?string $name = null;

    #[ORM\Column(length: 255, nullable: true)]
    private ?string $surname = null;

    #[ORM\Column(length: 255, nullable: true)]
    private ?string $email = null;

    #[ORM\Column(length: 255, nullable: true)]
    private ?string $password = null;

    #[ORM\Column(length: 100, nullable: true)]
    private ?string $role = null;

    #[ORM\Column(type: Types::DATETIME_MUTABLE, nullable: true)]
    private ?\DateTimeInterface $created_at = null;

    public function getId(): ?int { return $this->id; }
    public function setId(int $id): static { $this->id = $id; return $this; }

    public function getUsername(): ?string { return $this->username; }
    public function setUsername(?string $username): static { $this->username = $username; return $this; }

    public function getName(): ?string { return $this->name; }
    public function setName(?string $name): static { $this->name = $name; return $this; }

    public function getSurname(): ?string { return $this->surname; }
    public function setSurname(?string $surname): static { $this->surname = $surname; return $this; }

    public function getEmail(): ?string { return $this->email; }
    public function setEmail(?string $email): static { $this->email = $email; return $this; }

    public function getPassword(): ?string { return $this->password; }
    public function setPassword(?string $password): static { $this->password = $password; return $this; }

    public function getRole(): ?string { return $this->role; }
    public function setRole(?string $role): static { $this->role = $role; return $this; }

    public function getCreatedAt(): ?\DateTimeInterface { return $this->created_at; }
    public function setCreatedAt(?\DateTimeInterface $created_at): static { $this->created_at = $created_at; return $this; }
}
```

`Task` entity:

```php
<?php

namespace App\Entity;

use ApiPlatform\Metadata\ApiResource;
use App\Repository\TaskRepository;
use Doctrine\DBAL\Types\Types;
use Doctrine\ORM\Mapping as ORM;

#[ORM\Entity(repositoryClass: TaskRepository::class)]
#[ApiResource]
class Task
{
    #[ORM\Id]
    #[ORM\GeneratedValue]
    #[ORM\Column]
    private ?int $id = null;

    #[ORM\Column(length: 255)]
    private ?string $title = null;

    #[ORM\Column(type: Types::TEXT, nullable: true)]
    private ?string $description = null;

    #[ORM\Column(length: 100, nullable: true)]
    private ?string $status = null;

    #[ORM\Column(type: Types::DATETIME_MUTABLE, nullable: true)]
    private ?\DateTimeInterface $createdAt = null;

    #[ORM\Column(type: Types::DATETIME_MUTABLE, nullable: true)]
    private ?\DateTimeInterface $updatedAt = null;

    #[ORM\ManyToOne]
    #[ORM\JoinColumn(nullable: false)]
    private ?User $user = null;

    public function getId(): ?int { return $this->id; }
    public function setId(int $id): static { $this->id = $id; return $this; }

    public function getTitle(): ?string { return $this->title; }
    public function setTitle(string $title): static { $this->title = $title; return $this; }

    public function getDescription(): ?string { return $this->description; }
    public function setDescription(?string $description): static { $this->description = $description; return $this; }

    public function getStatus(): ?string { return $this->status; }
    public function setStatus(?string $status): static { $this->status = $status; return $this; }

    public function getCreatedAt(): ?\DateTimeInterface { return $this->createdAt; }
    public function setCreatedAt(?\DateTimeInterface $createdAt): static { $this->createdAt = $createdAt; return $this; }

    public function getUpdatedAt(): ?\DateTimeInterface { return $this->updatedAt; }
    public function setUpdatedAt(?\DateTimeInterface $updatedAt): static { $this->updatedAt = $updatedAt; return $this; }

    public function getUser(): ?User { return $this->user; }
    public function setUser(?User $user): static { $this->user = $user; return $this; }
}
```

Kindly help me.
Now you can use `Trans` without converting every tag to a numbered tag:

```json
"en": {
  "product": {
    "header": "Welcome, <strong>User!</strong>"
  }
}
```

The `strong` tags in the translation file will be replaced with the value of the `strong` key in the `components` prop:

```html
<Trans i18nKey="en.product.header" components={{ strong: <strong /> }} />
```

The result is "Welcome, \<strong>User!\</strong>" rendered as HTML, as you expected.
|git|visual-studio-code|gitignore|
I'm currently developing my first project, a simple paint application using Java Swing and AWT. While implementing the painting functionality, I encountered an issue with accurately capturing mouse movements, especially when moving the mouse quickly. I've designed the application to update the drawing coordinates in response to mouse events (mouseDragged and mouseMoved methods in the PaintPanel class), triggering repaints to render the drawings. However, despite my efforts, I've noticed that fast mouse movements sometimes result in skipped points, leading to gaps in the drawn lines.

Here's my PaintPanel class, which manages the painting functionality:

```
public class PaintPanel extends JPanel implements MouseMotionListener {

    public Point mouseCoordinates;
    boolean painting = false;

    public PaintPanel() {
        this.setPreferredSize(new Dimension(1000, 550));
        this.setBackground(Color.white);
        this.addMouseMotionListener(this);
    }

    public void paintComponent(Graphics g) {
        Graphics2D g2D = (Graphics2D) g;
        g2D.setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON);

        if (painting == false) {
            super.paintComponent(g2D);
        }

        if (mouseCoordinates != null) {
            g2D.setColor(UtilePanel.col);
            g2D.fillOval((int) mouseCoordinates.getX(), (int) mouseCoordinates.getY(),
                    UtilePanel.brushSize, UtilePanel.brushSize);
            this.setCursor(this.getToolkit().createCustomCursor(
                    new BufferedImage(1, 1, BufferedImage.TYPE_INT_ARGB), new Point(), null));
        }
    }

    @Override
    public void mouseDragged(MouseEvent e) {
        mouseCoordinates = e.getPoint();
        painting = true;
        repaint();
    }

    @Override
    public void mouseMoved(MouseEvent e) {
        mouseCoordinates = e.getPoint();
        repaint();
    }
}
```

Here's an image illustrating the issue: [](https://i.stack.imgur.com/vpztn.png)

Additionally, I attempted to incorporate a game loop to continuously poll for mouse input, hoping it would improve the accuracy of mouse movement capturing. However, even with the game loop in place, the problem persists. I'm unsure if my approach to painting by omitting super.paintComponent(g) in paintComponent is the correct way, or whether there's a better way to do it.

Could someone provide insights or suggestions on how to improve the mouse event capturing to guarantee precise rendering, especially during rapid mouse movements? Your assistance would be greatly appreciated. Thank you!
How would I tile windows in a SwiftUI Mac app?
|swift|macos|window|
I have a table called `Nodes` consisting of base doc, current doc, and target doc:

```
BaseDocType . BaseDocID
DocType . DocID
TargetDocType . TargetDocID
..
```

I want to fetch all the related nodes for any specific node. This is what I have so far:

```
With CTE1 (ID, BaseDocType, BaseDocID, DocType, DocID, TargetDocType, TargetDocID) As
(
    Select ID, BaseDocType, BaseDocID, DocType, DocID, TargetDocType, TargetDocID
    From Doc.Nodes
    Where DocType = 8 and DocID = 2

    Union All

    Select a.ID, a.BaseDocType, a.BaseDocID, a.DocType, a.DocID, a.TargetDocType, a.TargetDocID
    From Doc.Nodes a
    Inner Join CTE1 b ON (a.BaseDocType = a.BaseDocType and a.BaseDocID = b.BaseDocID
                          and a.DocType != b.DocType and a.DocID != b.DocID)
)
Select * From CTE1
```

But the query is not working. It says:

> Msg 530, Level 16, State 1, Line 8 The statement terminated. The maximum recursion 100 has been exhausted before statement completion.

![Example][1]

How can I fix this?

[1]: https://i.stack.imgur.com/t3q8u.jpg
I drew out a tree using your example to better understand the problem. A few things (assuming I did it properly) revealed by the tree are:

A) In events 04 and 05 you don't need to store event 01, because in the case of 04: 02 is a parent of 04, and 01 is a parent of 02, so this info is captured implicitly. The same goes for event 05.

B) I'm not sure whether what you are expecting as output matches the tree structure. You say you are expecting:

```
event_id | successors
02       | [04]
```

But as 05 is a child of 04 and therefore takes place after 04, should this read like so?

```
event_id | successors
02       | [04] [05]
```

I'm doubtful that what you are trying to achieve is actually possible in SQL (with the given model). I've worked as an Oracle DBA for longer than I should have, and I would probably say to my clients: don't do it like this. I would spend some time with them to get a better understanding of the business case for the model, or to find a model which fits the requirement better. But as I'm new to PostgreSQL, maybe I don't have enough knowledge to say for sure whether it is possible (or not) in PostgreSQL. I'll see what I can find.
```python
from googleapiclient.discovery import build
from google.oauth2.service_account import Credentials

def GoogeSheetApi():
    SCOPES = ["https://www.googleapis.com/auth/spreadsheets"]
    KEY_PATH = "YOUR-CREDENTIAL PATH"
    SAMPLE_SPREADSHEET_ID = "YOUR-SPREAD-SHEET-ID"

    # Authenticate with a service account key file.
    creds = Credentials.from_service_account_file(KEY_PATH, scopes=SCOPES)
    service = build("sheets", "v4", credentials=creds)
    sheet = service.spreadsheets()

    # Rows to append to the sheet.
    data_for_update = [["01/01/2000", 1000], ["01/01/2000", 2000], ["01/01/2000", 3000]]

    req = sheet.values().append(
        spreadsheetId=SAMPLE_SPREADSHEET_ID,
        range="YourSheetName!A1",
        valueInputOption="RAW",
        body={"values": data_for_update},
    ).execute()
    print(req)

    return "success"
```
I'm using `pygame.mixer.music.play` to load and play a WAV file as follows:

```
from pygame import mixer

mixer.init()
wavfile = "/my/wav/file.wav"
mixer.music.load(wavfile)
start = 0.5
mixer.music.play(start=start)
```

However, `start` seems to have a 1-second resolution. In other words, any float acts as if it were truncated to its whole-number part. This is unexpected because the [documentation of pygame](https://www.pygame.org/docs/ref/music.html#pygame.mixer.music.play) shows that the start position is a float given in seconds. In fact, investigating the source, you can see that `mixer.music.play` is just a thin wrapper around the SDL library's `Mix_FadeInMusicPos` ([documented here](https://wiki.libsdl.org/SDL2_mixer/Mix_FadeInMusicPos)). Those docs also show a float second start time. Is this a limitation of the WAV format or of the library, or am I specifying something incorrectly here?
href="pictures/proxy-image.png" and src="pictures/proxy-image.png" Are linking to the same file They should link to 2 seperate files The href=" is the full size image and src=" is a thumbnail image JonofLeeds
{"Voters":[{"Id":9517769,"DisplayName":"Amira Bedhiafi"},{"Id":13061224,"DisplayName":"siggemannen"},{"Id":839601,"DisplayName":"gnat"}]}
I am working on a name-matching project, and I am using Metaphone3 in Java to check whether two names match phonetically. Metaphone3 works great on English names, but the names I am using are Arabic, so sometimes I give it two alternative spellings that should match and it returns a low percentage. For example, for Muhammed and Mhmad, after calculating the distance ratio between the two encodings using FuzzySearch, it says they match at 73.0. But for Muhammed and Mahmood, it gives 83.0! I am trying to pick a threshold, say: if the ratio is greater than x then they are matched, otherwise they are not. What is the best approach, or the best way, to boost its accuracy with non-English names?

This is my code:

```
private boolean metaphone3Ratio(String nameOne, String nameTwo){
    Metaphone3 metaphone3 = new Metaphone3();
    metaphone3.SetEncodeExact(false);
    metaphone3.SetEncodeVowels(true);

    // Encode the first name phonetically.
    metaphone3.SetWord(nameOne);
    metaphone3.Encode();
    String nameOneEncoded = metaphone3.GetMetaph();

    // Encode the second name phonetically.
    metaphone3.SetWord(nameTwo);
    metaphone3.Encode();
    String nameTwoEncoded = metaphone3.GetMetaph();

    // Compare the two phonetic encodings with a fuzzy ratio.
    return FuzzySearch.ratio(nameOneEncoded, nameTwoEncoded) > 88.0;
}
```
What is the minimum percentage in Metaphone3 that I can use to tell whether these names are matched
|java|large-language-model|metaphone|
I'm working on an Express API project and have come across an issue. I want to import my Mongoose models into the main project as a Git submodule; the idea is to keep my models in a separate repository for modularity across a few APIs. However, I'm running into issues ensuring that the Mongoose connection established in my app.ts is actually used by the models in the submodule.

Has anyone here had experience with setting up Mongoose models as a Git submodule for an Express project? How did you ensure that the submodule models use the same Mongoose connection as the main app? Any tips or advice would be greatly appreciated!

In my main app, I make sure the Mongoose connection is opened before importing any models or handling any routes. After the connection is established, I import the models from the submodule into my route handlers. I've added logs around the database connection process and around the model usage in the routes to confirm that the connection succeeds and the models are loaded. The paths used to import the submodule models are also correct.
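One pattern that avoids relying on the implicit `mongoose` singleton entirely is to export model *factories* from the submodule and pass the connection in from the main app. A minimal sketch, where the file layout and names like `buildUserModel` are illustrative assumptions:

```typescript
// models-submodule/user.ts - export a factory instead of a ready-made model
import { Schema, Connection, Model } from "mongoose";

const userSchema = new Schema({ name: String, email: String });

export function buildUserModel(connection: Connection): Model<any> {
  // Registering the model on the passed-in connection guarantees the
  // submodule uses the exact connection the main app opened.
  return connection.model("User", userSchema);
}

// app.ts - main application
import mongoose from "mongoose";
import { buildUserModel } from "./models-submodule/user";

async function main() {
  const connection = await mongoose
    .createConnection("mongodb://localhost/mydb")
    .asPromise();
  const User = buildUserModel(connection);
  console.log(await User.countDocuments());
}
```

Because the dependency flows one way (app hands the connection to the submodule), the submodule never needs its own copy of mongoose's connection state, which also sidesteps problems caused by duplicate `mongoose` instances in `node_modules`.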
Importing Mongoose Models into main app as Git Submodule
|node.js|typescript|mongodb|mongoose|
If you want to group and reduce the rows, then don't use `SUM` as a window function. Instead, use `SUM` with `GROUP BY`. Try:

```
select SUM(FLOW_AMT) as FLOW_AMT_SUM, ACID, ASSOC_ID, HARMED_BILL_DUE_DATE, TRAN_ID
from table
group by ACID, ASSOC_ID, HARMED_BILL_DUE_DATE, TRAN_ID
```
Difficulty capturing fast mouse movements in Java Swing paint app
|java|swing|awt|mouselistener|
Some time ago I wanted to create a case-insensitive autoloader, so I used the example here: https://www.php-fig.org/psr/psr-4/examples/ ... and modified it a bit.

I added a method to create a pattern for `glob()`. It creates a pattern where every single character appears in both uppercase and lowercase, as long as the character differs by case; if it is the same either way, no extra pattern is needed, so dots and slashes don't get uppercase or lowercase versions. `path/file.php` will look like `[pP][aA][tT][hH]/[fF][iI][lL][eE].[pP][hH][pP]`. It is not perfect, because in some UTF-8 cases there is more than one alternative character; when both the lowercase and uppercase versions differ from the given character, the method also adds the original character to the pattern of possible characters, but just that one and not all alternatives.

```
/**
 * Convert path and filename to a case insensitive pattern for glob().
 *
 * @param string $path
 * @return string
 */
private function createCaseInsensitivePattern(string $path)
{
    $chars = mb_str_split($path);
    $pattern = '';
    foreach ($chars as $char) {
        $lower = mb_strtolower($char);
        $upper = mb_strtoupper($char);
        if ($char == $lower && $char == $upper) {
            $pattern .= $char;
        } else {
            $pattern .= '['.$lower.$upper.($char != $lower && $char != $upper ? $char : '').']';
        }
        $pattern .= $char == '/' || $char == '\\' ? $char : '';
    }
    return $pattern;
}
```

Then I extended the `requireFile()` method with an else case. If the file is not found with the exact case, it simply calls the `createCaseInsensitivePattern()` method and does a `glob()` with the result. If nothing fits, it just returns false.

```
/**
 * If a file exists, require it from the file system.
 *
 * @param string $file The file to require.
 * @return bool True if the file exists, false if not.
 */
protected function requireFile(string $file)
{
    if (file_exists($file)) {
        require $file;
        return true;
    } else {
        $filePattern = $this->createCaseInsensitivePattern($file);
        $possibleFiles = glob($filePattern);
        foreach ($possibleFiles as $currentFile) {
            if (file_exists($currentFile)) {
                require $currentFile;
                return true;
            }
        }
    }
    return false;
}
```

If you only want the `file_exists` check, just remove the two lines that start with `require`.

One last thing: I didn't check the performance. If it is too slow, either add caching somewhere that gets written to a file (then you don't need a connection to a database; just let the whole construct check the cache first), or change it so that only the file name, but not the full path, is matched case-insensitively.
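To illustrate what the pattern builder produces, here is a quick demo, with the method pulled out of the class as a standalone function purely for illustration:

```php
<?php
// Demo: pattern generated for a mixed-case file name (no slashes, so the
// output below is exact for the method as written above).
$pattern = createCaseInsensitivePattern('User.php');
echo $pattern, "\n";      // [uU][sS][eE][rR].[pP][hH][pP]
var_dump(glob($pattern)); // matches user.php, USER.PHP, User.php, ...
```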
I am solving a problem that requires me to write a function that returns a pointer to a two-dimensional array of integers, so I want to understand how memory for a 2D array (or any multi-dimensional array) is allocated using malloc. How is memory freed in this situation? How do pointers work here? I know a bit about all the things I have asked about (malloc, free, pointers, and arrays), so when you explain, don't bother with the basics of the things I have just mentioned, unless it is necessary. I am learning these concepts but I am having a difficult time.
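As a starting point, here is a minimal sketch of the two common layouts, how each is indexed, and how each is freed. The names and dimensions are illustrative, and the first variant assumes C99 support for variably modified types:

```c
#include <stdlib.h>

int main(void) {
    size_t rows = 3, cols = 4;

    /* Option 1: one contiguous block via a pointer to an array of cols ints.
       A single malloc, normal arr1[i][j] indexing, and a single free. */
    int (*arr1)[cols] = malloc(rows * sizeof *arr1);
    if (!arr1) return 1;
    arr1[1][2] = 42;
    free(arr1);

    /* Option 2: an array of row pointers. Two levels of allocation,
       so freeing must also happen in two levels: rows first, then the spine. */
    int **arr2 = malloc(rows * sizeof *arr2);
    if (!arr2) return 1;
    for (size_t i = 0; i < rows; i++)
        arr2[i] = malloc(cols * sizeof *arr2[i]);
    arr2[1][2] = 42;
    for (size_t i = 0; i < rows; i++)
        free(arr2[i]);
    free(arr2);

    return 0;
}
```

With the first layout the whole matrix is one allocation, so it is cache-friendly and one `free` releases everything; the second layout allows ragged rows at the cost of extra allocations and a multi-step cleanup.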
Pygame music play start position seems to have a 1 second resolution
|python|pygame|pygame-mixer|
I am having issues loading HTML into a modal window or sidebar in Google Sheets. I have tried starting a fresh project and I can instantly recreate the issue. Here is the *entire* code:

    function myFunction() {
      var html = HtmlService.createHtmlOutput('<p>Hello, sidebar!</p>');
      SpreadsheetApp.getUi().showSidebar(html);
    }

This results in two errors in the console, `Failed to load resource: the server responded with a status of 503 (Service Unavailable)` and `Unrecognized Content-Security-Policy directive 'require-trusted-types-for'`, and the following image shows the sidebar:

[![][1]][1]

[1]: https://i.stack.imgur.com/P6ipd.png

I have tried multiple browsers, creating a new spreadsheet, and clearing cache and cookies. Google also prompted for permission the first time I ran it, and I allowed that. Does anyone know what could be causing this?
503 error in google apps script sidebar and modal
|google-sheets|google-apps-script|
I created a script to generate a maze and it works as expected, but during coding I discovered I should create object instances differently, so I refactored this:

```lua
local Cell = {
    index = nil,
    coordinates = Vector2.zero,
    wasVisited = false,

    markAsVisited = function (self)
        self.wasVisited = true
    end,

    new = function(self, index: number, coordinates: Vector2)
        local cell = table.clone(self)
        cell.index = index
        cell.coordinates = coordinates
        return cell
    end,
}

return Cell
```

to

```lua
local Cell = {
    index = nil,
    coordinates = Vector2.zero,
    wasVisited = false
}

function Cell:new(index: number, coordinates: Vector2)
    local instance = setmetatable({}, self)
    self.__index = self
    self.index = index
    self.coordinates = coordinates
    return instance
end

function Cell:markAsVisited()
    self.wasVisited = true
end

return Cell
```

Instead of

[![enter image description here][1]][1]

I now get

[![enter image description here][2]][2]

Can someone explain why?

[1]: https://i.stack.imgur.com/RWCAd.png
[2]: https://i.stack.imgur.com/vh8WS.png
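For comparison, here is a sketch of a constructor that assigns the arguments to the newly created `instance` table rather than to `self` (which, in `Cell:new`, is the shared `Cell` prototype); the difference between the two assignments is likely what changed the behaviour between the two versions:

```lua
function Cell:new(index: number, coordinates: Vector2)
    local instance = setmetatable({}, self)
    self.__index = self
    -- assign to the new table, not to the shared prototype
    instance.index = index
    instance.coordinates = coordinates
    instance.wasVisited = false
    return instance
end
```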
You can also use `REGEXP_EXTRACT` to pull out the list content enclosed in the square brackets `[]`, combined with `SPLIT` to separate the extracted string into an array of individual values, using a space `' '` as the delimiter.

```
WITH sample_data AS (
  SELECT 1 AS col1, '[1.2 4.2 6.3 3.1]' AS col2
  UNION ALL
  SELECT 2, '[5.2 5.4 0.3 6.1]'
  UNION ALL
  SELECT 3, '[3.2 0.1 5.3 6.7]'
)
SELECT
  col1,
  col2
FROM (
  SELECT
    col1,
    SPLIT(REGEXP_EXTRACT(col2, r'\[(.*?)\]'), ' ') AS col2
  FROM sample_data
)
```
I am learning about design patterns in Python and wanted to combine the abstract factory with the delegation pattern (to gain deeper insight into how the patterns work). However, I am getting a weird recursion error when combining the two patterns, which I do not understand. The error is:

```
[Previous line repeated 987 more times]
  File "c:\Users\jenny\Documents\design_pattern\creational\abstract_factory.py", line 60, in __getattribute__
    def __getattribute__(self, name: str):
RecursionError: maximum recursion depth exceeded
```

It is raised when `client_with_laptop.display()` is called, even though `client_with_laptop` itself was constructed without error. The code is:

```
from abc import abstractmethod

class ITechnique:  # abstract product
    def display(self):
        pass

    def turn_on(self):
        print("I am on!")

    def turn_off(self):
        print("I am off!")

class Laptop(ITechnique):  # concrete product
    def display(self):
        print("I'm a Laptop!")

class Smartphone(ITechnique):  # concrete product
    def display(self):
        print("I'm a Smartphone!")

class Tablet(ITechnique):  # concrete product
    def display(self):
        print("I'm a Tablet!")

class IFactory:
    @abstractmethod
    def get_hardware():
        pass

class SmartphoneFactory(IFactory):
    def get_hardware(self):
        return Smartphone()

class LaptopFactory(IFactory):
    def get_hardware(self):
        return Laptop()

class TabletFactory(IFactory):
    def get_hardware(self):
        return Tablet()

class Client():
    def __init__(self, factory: IFactory) -> None:
        self._hardware = factory.get_hardware()

    def __getattribute__(self, name: str):
        return getattr(self._hardware, name)

if __name__ == "__main__":
    client_with_laptop = Client(LaptopFactory())
    client_with_laptop.display()

    client_with_tablet = Client(TabletFactory())
    client_with_tablet.display()

    client_with_smartphone = Client(SmartphoneFactory())
    client_with_smartphone.display()
```

When I access the attribute `_hardware` directly and remove the `__getattribute__` section (so, basically, when I remove the delegation pattern), everything works as expected. See below the modified code section, which works:

```
class Client():
    def __init__(self, factory: IFactory) -> None:
        self._hardware = factory.get_hardware()

if __name__ == "__main__":
    client_with_laptop = Client(LaptopFactory())
    client_with_laptop._hardware.display()

    client_with_tablet = Client(TabletFactory())
    client_with_tablet._hardware.display()

    client_with_smartphone = Client(SmartphoneFactory())
    client_with_smartphone._hardware.display()
```

Can anybody explain why the recursion error occurs, or how to fix it? My objectives were (1) to have varying devices depending on the factory used in the client and (2) to be able to call the methods of `_hardware` directly from the client object, e.g. `client.display()`, without typing `client._hardware` all the time. It is not about whether this is a useful approach in practice; I simply want to understand the pattern - and the occurring error - better. :-)
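For reference, attribute delegation is more commonly written with `__getattr__`, which Python only invokes when normal attribute lookup *fails*, so the `self._hardware` access inside it resolves normally instead of re-entering the hook. A minimal sketch of the `Client` class rewritten that way (not the original code):

```python
class Client:
    def __init__(self, factory: IFactory) -> None:
        self._hardware = factory.get_hardware()

    def __getattr__(self, name: str):
        # __getattr__ runs only for attributes NOT found through normal lookup,
        # so self._hardware is found on the instance and no recursion occurs.
        return getattr(self._hardware, name)
```

By contrast, `__getattribute__` intercepts *every* attribute access, including the `self._hardware` lookup inside its own body, which is exactly the infinite loop in the traceback. If `__getattribute__` must be used, the escape hatch is `object.__getattribute__(self, "_hardware")` to bypass the override for that one lookup.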
Adding GPS location EXIF tag to video recordings with AVAssetWriter
|ios|swift|avfoundation|avassetwriter|
|gradle|java-module|jlink|ehcache-3|
```
import easyocr
import pyautogui
import cv2
import numpy as np
import time
from PIL import Image

def detect_numbers(region):
    reader = easyocr.Reader(['en'])  # initialise the OCR model for English

    while True:
        screenshot = pyautogui.screenshot(region=region)

        # Convert to a grayscale PIL image
        screenshot_pil = screenshot.convert('L')

        ocr_results = reader.readtext(np.array(screenshot_pil))

        timestamp = time.strftime("%Y%m%d%H%M%S")
        screenshot_pil.save(f"C:/Users/furka/Pictures/Screenshotsscreenshot_{timestamp}.png")

        filtered_words = []
        for result in ocr_results:
            text = result[1]
            if text.isdigit() and len(text) == 8:  # check whether the text is an eight-digit number
                filtered_words.append((text, result[0][0], result[0][1]))  # store the number plus its top-left and top-right bounding-box corners

        if len(filtered_words) < 2:
            continue

        min_dist = float('inf')
        closest_pair = None

        for i in range(len(filtered_words) - 1):
            for j in range(i + 1, len(filtered_words)):
                dist = levenshtein_distance(filtered_words[i][0], filtered_words[j][0])
                if dist < min_dist:
                    min_dist = dist
                    closest_pair = (filtered_words[i], filtered_words[j])

        if min_dist > 2:
            continue

        winner = closest_pair[0] if closest_pair[0][1] < closest_pair[1][1] else closest_pair[1]

        print(f"Winner: {winner[0]}")

        # Also add the other 8-digit codes on screen to filtered_words and compare them against the winner
        for result in ocr_results:
            text = result[1]
            if text.isdigit() and len(text) == 8 and text != winner[0]:
                filtered_words.append((text, result[0][0], result[0][1]))

        # Click the top-left corner of the winning word
        x, y = winner[1][0], winner[1][1]  # use winner[1][0] and winner[1][1], not winner[1] and winner[2]
        click_x = region[0] + x
        click_y = region[1] + y
        pyautogui.click(click_x, click_y)  # click the winner's top-left corner

        time.sleep(1)

def levenshtein_distance(s1, s2):
    if len(s1) < len(s2):
        return levenshtein_distance(s2, s1)

    if len(s2) == 0:
        return len(s1)

    previous_row = range(len(s2) + 1)
    for i, c1 in enumerate(s1):
        current_row = [i + 1]
        for j, c2 in enumerate(s2):
            insertions = previous_row[j + 1] + 1
            deletions = current_row[j] + 1
            substitutions = previous_row[j] + (c1 != c2)
            current_row.append(min(insertions, deletions, substitutions))
        previous_row = current_row

    return previous_row[-1]

if __name__ == "__main__":
    # Specify the region to scan here (left, top, width, height)
    region = (704, 301, 399, 499)
    detect_numbers(region)
```

I tried converting the image to RGB and to L beforehand to find out which works better, and I also tried Tesseract OCR; although it is a larger library, EasyOCR works more stably in my process (I don't know why). Even though I made some preliminary changes to the captured screenshot (enlarging the image by 400%, using a gray filter, adjusting the contrast), I saw that it still did not work on small fonts. Finally, I selected a region and had it scan only there. That is more stable than scanning the entire screen, but I still haven't solved the problem of detecting small fonts.
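For the small-font problem, the preprocessing you describe can be made explicit before handing the frame to EasyOCR. A sketch, where the scale factor and the choice of Otsu thresholding are assumptions to tune, not a known fix:

```python
import cv2
import numpy as np

def preprocess_for_ocr(screenshot_pil, scale=4):
    """Upscale, grayscale and binarise a screenshot before OCR."""
    img = np.array(screenshot_pil.convert('L'))
    # Enlarge with cubic interpolation so thin glyphs gain pixels.
    img = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
    # Otsu thresholding picks a global cutoff to separate text from background.
    _, img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return img

# usage inside the loop:
# ocr_results = reader.readtext(preprocess_for_ocr(screenshot))
```

Note that upscaling by `scale` also scales the bounding-box coordinates EasyOCR returns, so they would need to be divided by `scale` before being used for clicking.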
Memory allocation for a 2D array in C, using malloc. Freeing and pointers
|arrays|c|pointers|malloc|
I am trying to make a collision system with multiple collider types (e.g. AABB and circle). A unique method for checking collision needs to run for each pair of collider types. To determine the type, I have given each collider an enum stating its type. Then, when the program checks collisions, it can compare the types and run the corresponding method for checking that type of collision. I could do this using an if/else-if chain like so:

```
if (colliderA.ColliderType == ColliderType.Circle && colliderB.ColliderType == ColliderType.Circle)
{
    // Check circle-circle collision
}
else if (colliderA.ColliderType == ColliderType.Circle && colliderB.ColliderType == ColliderType.Box)
{
    // Check circle-box collision
}
else if (...)
{
    // And so on
}
```

But this is bad for several reasons, the most obvious being that it only checks whether A is X and B is Y, but not the other case with the same outcome (A is Y and B is X). It makes more sense to use a switch statement with the enums anyway, but that leads to another issue: getting a unique number that represents the states of two different enums. I came up with a solution that works but has some issues I don't like. My approach converts each enum to its numeric representation and then bit-shifts the first one to create a unique number:

```
int enumValue = ((int)colliderA.ColliderType << 8) + (int)colliderB.ColliderType;
```

This solution has two issues that are bothering me. The first is how difficult it is to read the code when done like this:

```
switch (enumValue)
{
    case 1:
    case 256:
        // Both of these cases are for circle-box collisions
        // They are far apart from each other and it's difficult to tell what they mean at a glance
        break;
}
```

I could do it like this, but it's also a little odd to read:

```
case ((int)ColliderType.Circle << 8) + (int)ColliderType.Box:
case ((int)ColliderType.Box << 8) + (int)ColliderType.Circle:
```

This feels like something that would have a simple solution built into languages like C# (which I'm using), just like enum flags, but even if it doesn't, I feel there must still be a general way to do what I'm looking for efficiently and readably. Can anybody give me some pointers?

TL;DR: Given two enums, is it possible to generate a unique number that represents both their states, where the inputs (A, B) and (B, A) give the same output?
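One readable alternative is to normalise the pair ordering and switch on a tuple, so `(A, B)` and `(B, A)` land on the same case. A minimal sketch, assuming C# 8 or later for tuple patterns (the collider names are taken from the question):

```csharp
// Normalise so the smaller enum value always comes first; then
// (Circle, Box) and (Box, Circle) produce the same tuple.
var pair = colliderA.ColliderType <= colliderB.ColliderType
    ? (colliderA.ColliderType, colliderB.ColliderType)
    : (colliderB.ColliderType, colliderA.ColliderType);

switch (pair)
{
    case (ColliderType.Circle, ColliderType.Circle):
        // circle-circle collision
        break;
    case (ColliderType.Circle, ColliderType.Box):
        // circle-box collision (covers box-circle too, via the normalisation)
        break;
    case (ColliderType.Box, ColliderType.Box):
        // box-box collision
        break;
}
```

On older language versions, a `Dictionary<(ColliderType, ColliderType), Action>` keyed by the same normalised pair achieves a similar effect without bit-shifting.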
Generating a unique value using 2 different enums to act as cases in a switch statement
|c#|enums|switch-statement|
I have added two dependencies: one that can get the Wi-Fi name and one that can run the application in the background. The application runs perfectly in the background, but there the Wi-Fi name comes back null. When the application is open, it can get the Wi-Fi name.

```
import 'dart:async';
import 'dart:developer';
import 'dart:io';

import 'package:awesome_notifications/awesome_notifications.dart';
import 'package:connectivity_plus/connectivity_plus.dart';
import 'package:flutter/material.dart';
import 'package:permission_handler/permission_handler.dart';
import 'package:network_info_plus/network_info_plus.dart';
import 'package:flutter_background_service/flutter_background_service.dart';
import 'package:flutter_local_notifications/flutter_local_notifications.dart';

Future<void> initializeService() async {
  await flutterLocalNotificationsPlugin.initialize(const InitializationSettings(
      android: AndroidInitializationSettings('ic_launcher')));

  var service = FlutterBackgroundService();

  if (Platform.isIOS) {
    await flutterLocalNotificationsPlugin.initialize(
        const InitializationSettings(iOS: DarwinInitializationSettings()));
  }

  if (Platform.isAndroid) {
    final androidInfo = await deviceInfo.getWifiName();
    log("androidInfo======================>$androidInfo");
  }

  await flutterLocalNotificationsPlugin
      .resolvePlatformSpecificImplementation<
          AndroidFlutterLocalNotificationsPlugin>()
      ?.createNotificationChannel(notificationChannel);

  await service.configure(
      iosConfiguration:
          IosConfiguration(onBackground: iosBackground, onForeground: onStart),
      androidConfiguration: AndroidConfiguration(
          onStart: onStart,
          autoStart: true,
          notificationChannelId: "Foreground Service",
          isForegroundMode: true,
          initialNotificationTitle: "Foreground Service Running0..",
          foregroundServiceNotificationId: 90));

  service.startService();
}

@pragma("vm:entry-point")
void onStart(ServiceInstance service) {
  service.on("setAsForeground").listen((event) {
    log("Foreground=======================>");
  });

  service.on("setAsBackground").listen((event) {
    log("Background=======================>");
  });

  service.on("stopService").listen((event) {
    service.stopSelf();
  });

  Timer.periodic(const Duration(seconds: 4), (timer) async {
    final info = NetworkInfo();

    // var storedWIFI = '"AndroidWifi"';
    String storedWIFI = '"JioFiber_5G"';

    Connectivity()
        .onConnectivityChanged
        .listen((ConnectivityResult result) async {
      log("onConnectivityChanged===00001==============>$result");
      if (result == ConnectivityResult.wifi) {
        var wifiName = await info.getWifiName();
        log("wifiName=====B4 condition===========$storedWIFI=========>$wifiName");
        if (storedWIFI == wifiName) {
          AwesomeNotifications().createNotification(
            content: NotificationContent(
              id: 10,
              channelKey: 'general_notification',
              title: 'You are connected to $wifiName',
              body: 'You have checked in.',
              notificationLayout: NotificationLayout.BigText,
            ),
          );
        }
      }
    });

    flutterLocalNotificationsPlugin.show(
        90,
        "Cool Service",
        "Awesom ${DateTime.now()}",
        const NotificationDetails(
            android: AndroidNotificationDetails(
                "Foreground Service", "Foreground Service Running0..",
                ongoing: true)));
  });
}
```

Got the solution finally: I just needed to add the following permission in the AndroidManifest.xml file:

```
<uses-permission android:name="android.permission.ACCESS_BACKGROUND_LOCATION" />
```
This is a simple Java program. I recommend using Notepad and the Command Prompt for it. To do so, you need to add the Java bin path to the system variables (on Windows: Environment Variables under Advanced system settings). Then open a command prompt and enter the commands below:

1. `cd FileFolderPath`
2. `javac oops5.java`
3. `java oops5`
|sql|sql-server|recursion|
I have a JSON response from which I need to extract the list of values for **files** and **folders**; however, both the number of elements in each list and the properties in the JSON response are not fixed, as shown below:

JSON response:

```
"config.yml" : "project: \n files: \n - file1\n - file2\n folders: \n - folder1\n - folder2\n random1: \n random2:\n - redundant1"
```

How can I parse the string and extract the lists for files and folders into a `[]string`?

For reference, config.yml:

```
project:
  files:
    - file1
    - file2
  folders:
    - folder1
    - folder2
  random1:
  random2:
    - redundant1
```

I tried yaml-unmarshalling the string, then json.Marshal and json.Unmarshal to get a nested map, but I am not sure if this is the right way to go about it. Any help/guidance would be greatly appreciated!
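One approach that avoids the YAML-to-JSON round-trip is to unmarshal the embedded YAML string directly into a struct using `gopkg.in/yaml.v3`; keys with no matching struct field, such as `random1` and `random2`, are simply ignored. A sketch, where the struct shape is an assumption based on the sample:

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type Config struct {
	Project struct {
		Files   []string `yaml:"files"`
		Folders []string `yaml:"folders"`
	} `yaml:"project"`
}

func main() {
	// raw would be the string value of the "config.yml" key in the JSON response.
	raw := "project: \n files: \n - file1\n - file2\n folders: \n - folder1\n - folder2\n random1: \n random2:\n - redundant1"

	var cfg Config
	if err := yaml.Unmarshal([]byte(raw), &cfg); err != nil {
		fmt.Println("parse error:", err)
		return
	}
	fmt.Println(cfg.Project.Files)   // [file1 file2]
	fmt.Println(cfg.Project.Folders) // [folder1 folder2]
}
```

If the surrounding JSON also needs parsing, a first pass with `json.Unmarshal` into a `map[string]string` yields the YAML string, which is then fed to `yaml.Unmarshal` as above.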
*Try this single `array-style` formula (don't drag it down) in **Cell_C2**:*

```
=map(A2:A,B2:B,lambda(a,b,if(or(a="",b=""),,max(filter(F:F,E:E=a,F:F<b)))))
```
You could use `grep` within `vapply` with `ifelse`: ``` srch <- paste(df$PartyA, df$PartyB, sep = "|") vapply(srch, \(x) ifelse(is.null(dim(df[, grep(x, names(df))])), df[which(srch == x), grep(x, names(df))], sum(df[which(srch == x), grep(x, names(df))])), numeric(1L)) # Christian|Jewish Muslim|Muslim Muslim|Christian Jewish|Muslim Sikh|Buddhist # 21 93 79 86 55 # to assign the results to a new column: df$newcol <- vapply(srch, \(x) ifelse(is.null(dim(df[, grep(x, names(df))])), df[which(srch == x), grep(x, names(df))], sum(df[which(srch == x), grep(x, names(df))])), numeric(1L)) ```