Took me a while, but I figured it out myself: I needed to change my height value from a percentage to view height (`vh`) in the child box. I then added margins to be able to control the space myself.

```css
.modal {
  width: 50vw;
  height: 30vh;                /* referring to this property */
  margin: 20px auto 20px auto; /* add this to control the space yourself */
  padding: 1rem;
  background-color: aqua;
  border: 1px solid black;
  border-top: 15px solid black;
  border-radius: 10px;
  color: black;
  overflow: scroll;
  overflow-x: hidden;
  display: flex;
  flex-direction: column;
  text-align: left;
}
```

That should fix the issue for anyone looking to do the same thing.
I'm learning Python. I recently learned about Nim and began to study it too. I liked the Python+Pascal syntax and the speed of C. Unfortunately, there is much less educational information and reference material about Nim than there is about Python. I don't quite understand how to properly perform a reverse iteration in Nim. In Python it's easy: for a `for` loop, `var[::-1]` just works. It seems that in Nim it doesn't work that way; something like `var[^1..0]` just gives me an error. I was able to do reverse iteration using `len()` and an `ind` variable that simply gets smaller with each loop. Here is a code example.

```nim
const str: string = "Hello world!"
var str_2: string
var ind: int = len(str)

for i in str:
  ind.inc(-1)
  str_2.add(str[ind])

echo str_2
```

It works, but this is not a Pythonic solution; it feels like reinventing the wheel. Ideally, there should be built-in tools for reverse iteration. Nim experts, what are the main methods of reverse iteration? Sorry for my English, I write with the help of an online translator.

I tried using reverse slicing like `var[^1..0]`, but it just gives an error. I also tried to use the `countdown` function, but that's not what it's meant for. I would like to learn about the main and proper methods of reverse iteration in Nim.
I have a Redis cluster v7.x with 3 masters, each having 1 slave. A function `myFunc` written in Lua is loaded on all 3 master nodes. I am calling this function from my Node.js code (caller.js) using the ioredis client library:

```js
let redis = require('ioredis');

const cluster = new redis.Cluster(
    [
        { port: 5000, host: "localhost" },
        { port: 5001, host: "localhost" },
        { port: 5002, host: "localhost" }
    ]
);

cluster.on('error', function (err) {
    console.log('Redis error ' + err);
});

cluster.fcall("myFunc", "0", "arg1", "arg2", "arg3").then((elements) => {
    console.log(elements);
});
```

When I run the program as `nodejs caller.js`, sometimes it runs and logs the return value, and sometimes it throws the following error:

```
(node:2809539) UnhandledPromiseRejectionWarning: ReplyError: ERR Script attempted to access a non local key in a cluster node script: myFunc, on @user_function:66.
    at parseError (/home/user/node_modules/redis-parser/lib/parser.js:179:12)
    at parseType (/home/user/node_modules/redis-parser/lib/parser.js:302:14)
```

The `myFunc` Lua function uses a sorted set in Redis. The sorted set would be stored on a particular master, not on all the masters. What could be wrong here? How can I make the function call work?
I've been using dropdown with bootstrap, and yesterday I had the idea to use `navbar`. In general, it worked very well, but the location of the menu is wrong, cutting all the menu items by half. (as you can see in the image). [![enter image description here][1]][1] [1]: https://i.stack.imgur.com/fK26m.png I already use the class `dropdown-menu-right` and it solves the problem, but with `navbar`, it didn't work. This is the code of the `nav-link` that contains the wrong dropdown: ``` <li class="nav-item dropdown mx-1"> <a class="nav-link" href="#" id="navbarDropdown-98324708" role="button" data-toggle="dropdown" aria-expanded="false"> Fake button to open the menu </a> <ul class="dropdown-menu dropdown-menu-right" aria-labelledby="navbarDropdown-98324708"> <a class="dropdown-item" href="#"> this is a fake item</a> </ul> </li> ``` Anyone have any idea what I need to do to change the location? (I'm using Bootstrap 4)
Bootstrap navbar with dropdown menu in wrong location
|html|css|bootstrap-4|
Good day, I want to check the folderpermissions on a server and export the results as a csv to a shared folder on the local computer. I thought enter-pssession was the way to go, but it cannot find the folder that I want to check. It looks like it is looking for that folder on the local computer, not on the server. This is what I have got: ``` #Start of script #Get Servername $Servername = Read-Host -Prompt "Servername" $Adminusername = Read-Host -Prompt "Admin user name" #Connect to the server Enter-PSsession $Servername -Credential $Adminusername #Get the Directory $Directory = Read-Host -Prompt "What directory and subfolders do you want to check (f.e. E:\Data\Shared_Data)?" #Define location of export file $path = "\\shared folder on local computer\" $Logdate = Get-Date -Format yyyyMMddHHmm $csvfile = $path + "\Permissions_$Servername_$LogDate.csv" Write-Host Write-Host "THE RESULTS ARE EXPORTED TO EXCEL FILE:" -ForegroundColor White -backgroundcolor darkgreen Write-Host Write-Host "\\shared folder on local computer\\Server_Permissions_$LogDate.csv" -ForegroundColor White -backgroundcolor darkgreen #Scan for directories under shared folder and get permissions for all of them dir -Recurse $Directory | where { $_.PsIsContainer } | % { $path1 = $_.fullname; Get-Acl $_.Fullname | % { $_.access | Add-Member -MemberType NoteProperty '.\Application Data' -Value $path1 -passthru }} | Export-Csv -path $csvfile -NoTypeInformation -inputObject { $_; Write-Progress "Exporting to CSV" "$($_) " } Write-Host Write-Host "Export completed" Write-Host Read-Host -Prompt “Press Enter to return to the menu” #End of script ``` How can I get this to work?
Return more than 2 duplicates in each sublist (2d list)
|python|list|multidimensional-array|jupyter-notebook|pycharm|
I am trying to simulate web site access via C# code. The flow is:

1) HTTP GET the login page. This succeeds.
2) HTTP POST to the login page. This returns status 302 (I disabled auto-redirect in the `HttpClientHandler`). I validated the cookie returned; it has the login cookie.
3) HTTP GET the actual content page. This returns success code 200, but the content is always trimmed. This is the same page to which step 2 redirects. I have tried even with auto-redirect enabled in the `HttpClientHandler`; even then the response is trimmed.

In Postman, when I directly do step 2 allowing redirects, the content comes through properly.

This used to work some time back on the website. It's a PHP-based website and it's protected by Cloudflare; I'm not sure if the Cloudflare part is recent. I checked the headers sent via the browser and replicated them in the code, but it still doesn't seem to work.

[Chrome Browser Request & Response Headers for step 1](https://i.stack.imgur.com/eAA48.png)
[Chrome Browser Request & Response Headers for step 2](https://i.stack.imgur.com/g2MTU.png)
[Chrome Browser Request & Response Headers for step 3](https://i.stack.imgur.com/FNgbT.png)
[From the code I set these headers for step 1](https://i.stack.imgur.com/oM0Fw.png)
[From the code I set these headers for step 2](https://i.stack.imgur.com/7Xxr9.png)
[From the code I set these headers for step 3](https://i.stack.imgur.com/basAU.png)

The response header in code via HttpClient is as below:
[HttpClient header response](https://i.stack.imgur.com/cZ4zN.png)

But this response is truncated. I have enabled automatic decompression of the data. Any idea what might be missing? Interestingly, when posting to the login page via Postman without explicitly adding any other header, the login process works and retrieves the redirected page.
|httpclient|restsharp|
"Could not locate package metadata" error after clicking the configure button of a desktop effect KDE
Since you have explicitly written a decreases clause, Dafny will use that decreases clause. Your assumption that it will compare `x + y` with the tuple `x`, `y` is wrong. It would only have chosen the tuple `x`, `y` if you hadn't provided a decreases clause. In either case it compares the decreases clause of the invoking function/lemma with the decreases clause of the called function/lemma. Hence it compares `x + y` with `x + y` (evaluated at the recursive call's arguments) in this case.

Edit: Dafny doesn't compare the caller's decreases clause with the callee's parameter tuple. It compares the caller's decreases clause with the callee's decreases clause. Since here the caller and callee are the same, it is the same expression. If either of these doesn't have a decreases clause, the decreases clause defaults to the parameter tuple, but Dafny still compares the caller's and callee's decreases clauses. See this example:

```
lemma Test(x: nat, y: nat)
  decreases y
{
  if y > 0 {
    Test(x+1, 0);
  }
}
```

Now take the case when it is called with `x = 3` and `y = 2`. Here `x + y` is 5, and when you call recursively in the last else-if branch it will be `x = 2` and `y = 3`, but `x + y` is still 5. It is not decreasing, hence Dafny is complaining.
This CORS thing can be a headache to set up, especially for the first time, but by being precise you can get it working. In your `Program.cs` file, make the following change. Notice that there are two changes to be made, one above and another below the `var app = ...` statement:

```
builder.Services.AddCors(policyBuilder =>
    policyBuilder.AddDefaultPolicy(policy =>
        policy.WithOrigins("*").AllowAnyHeader().AllowAnyMethod())
);

var app = builder.Build();

app.UseCors();
```

If you are specifying an actual origin, make sure you include the `http` part, for example:

```
... policy.WithOrigins("http://localhost:3000")
```

Your web browser code making the request should also be on point. Here is the request in JavaScript using axios:

```
import axios from "axios";

const api = 'http://localhost:5000'

/**
 *
 * @returns {TransferHeader[]}
 */
export const fetchTransfers = async () => {
    const { data } = await axios({
        method: 'get',
        url: `${api}/api/transfers`,
        // withCredentials: true
    })
    console.log(data)
    return data
}
```

Setting up CORS is only required when the API is being called by the web browser. If the calls are coming from the server, as they are for Next.js server components, setting up CORS may not be required.
If the other widgets in the stack do not require gestures, you can wrap them in an `IgnorePointer`. This way, they will not block the gestures intended for `PhotoView`.

```dart
@override
Widget build(BuildContext context) {
  Orientation orientation = MediaQuery.of(context).orientation;

  return MediaQuery(
    data: MediaQuery.of(context).copyWith(textScaleFactor: 1.0),
    child: Container(
      child: Stack(alignment: Alignment.center, children: [
        PhotoView(...),
        IgnorePointer(
          child: SafeArea(
            child: ...,
          ),
        ),
        IgnorePointer(
          child: Visibility(
            visible: showCaliper,
            child: CustomPaint(
              size: Size(300, 300),
              painter: MyPainter(),
            ),
          ),
        ),
      ]),
    ),
  );
}
```
From the link, it seems that the "short" section refers to the GP-related data section and the "long" one to non-GP-related data. In general the `gp` register points to a section of data that is not big (probably hence the name "short"), and loads from it take less code space because there is no need to give the full address, just the offset from GP. Another advantage of GP is that the memory addresses of this section can be defined at runtime (with the GP value changed accordingly). See examples for GP [here][1] and [here][2].

[1]: https://www.doc.ic.ac.uk/lab/secondyear/spim/node10.html
[2]: https://tool-support.renesas.com/autoupdate/support/onlinehelp/csp/V4.01.00/CS+.chm/Compiler-CCRH.chm/Output/ccrh08c0401y.html
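To make the code-size point concrete, here is a rough MIPS-style sketch (an illustration only; the exact instruction sequences depend on the assembler and ABI, and `var` is a hypothetical symbol). A variable placed in the GP-relative "short" section loads in one instruction, while a variable at an arbitrary 32-bit address needs the address built first:

```asm
# var in the short (.sdata) section: one instruction, offset from $gp
lw   $t0, %gp_rel(var)($gp)

# var in an ordinary (long) data section: two instructions to form the address
lui  $at, %hi(var)
lw   $t0, %lo(var)($at)
```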
I'm using MySQL Workbench v8.0.36 to learn SQL and I have this table named 'Employee' | EmpID | Empname | Gender | Salary | City | RowNum | | -------- | -------- | -------- | -------- | -------- | -------- | | 1 | Arjun | M | 75000 | Pune| 1 | | 1 | Arjun | M | 75000 | Pune| 2 | | 2 | Ekadanta | M | 125000 | Bangalore| 1 | | 3 | Lalita | F | 150000 | Mathura| 1 | | 4 | Madhav | M | 250000 | Delhi| 1 | | 5 | Vishakha | F | 120000 | Mathura| 1 | as you can see, employee no.1's data has been entered twice in this table. Note - Column 'RowNum' isn't in the original 'Employee' table, only meant for demonstration... I can get the desired results by simply using ``` create table newemp as (select distinct * from employee); select * from newemp; ``` but this 'creates' a new table instead of 'deleting' the duplicates from the original table. If I HAD to run a delete operation and delete the duplicate entry in the OG table itself, how can I do that? So far, I tried using two basic methods but at the end whether I use - CTE method ``` with cte as ( select *, row_number() over (partition by empid order by empname) as rownum from employee ) delete from employee where empid in (select empid from cte where rownum > 1); ``` - OR I use count(*) ``` delete from employee where empid in (select empid from (select empid from employee group by empid having count(empid) > 1) as emp2 ) ; ``` IN the 'Employee' table itself, since 'EmpID' or any other column doesn't truly uniquely identify the rows, I am not able to target the duplicate entries in the **WHERE** clause and as a result, the query deletes any and all entries where the relevant conditions are met whereas I want to keep one and delete the rest... AFAIK I cannot target only the duplicate entries and delete only those entries in the absence of a truly unique identifier in the original table, is it somehow possible to use the COUNT of Empid to instruct the workbench to delete only a select number of instances of any given employee's data that has been entered multiple times? PS- sorry if the code or table doesn't come out neatly in the post, not familiar with formatting lines of code.
Powershell: Check folderpermissions on server and export to local computer
|powershell|server|permissions|
Adding several QHorizontalBarSeries to my QChart using the same x-axis (QValueAxis) did not add the bars relative to the values on the x-axis. Instead, all Series I've added have their biggest value fill the entire x-axis. Example data: ``` BLOCKED: {'Category 1': 146, 'Category 2': 31, 'Category 3': 0, 'Category 4': 3, 'Category 5': 14, 'Category 6': 0} FAIL: {'Category 1': 100, 'Category 2': 52, 'Category 3': 6, 'Category 4': 0, 'Category 5': 26, 'Category 6': 3} PASS: {'Category 1': 981, 'Category 2': 1176, 'Category 3': 462, 'Category 4': 81, 'Category 5': 240, 'Category 6': 129} UNEXECUTED: {'Category 1': 3, 'Category 2': 39, 'Category 3': 0, 'Category 4': 0, 'Category 5': 0, 'Category 6': 0} WIP: {'Category 1': 1, 'Category 2': 1, 'Category 3': 0, 'Category 4': 0, 'Category 5': 0, 'Category 6': 0} ``` In this example I create one series for each status (BLOCKED, FAIL, PASS, UNEXECUTED, WIP). The highest value is in PASS - Category 2. Therefore the x-axis should show every bar in relation to these values. My result looks like this: [Example Bar Chart](https://i.stack.imgur.com/uysPP.png) I have tried attaching the x-axis to each series as well as the chart, and the axis at least shows the scale I want, but still all but one series ignore the scale I want to achieve. So far my code for chart generation looks something like this ``` # Extract unique ExecutionStatus values unique_statuses = sorted(self.testexecutions["ExecutionStatus"].unique()) unique_components = sorted(self.testexecutions["Components"].unique(), reverse=True) self.__chart.setTitle("ExecutionStatus By Component") # Create a category axis for the y-axis (Components) y_axis = QtCharts.QBarCategoryAxis() y_axis.setCategories(unique_components) self.__chart.addAxis(y_axis, Qt.AlignLeft) # Create a value axis for the x-axis (Number of TestExecutions) x_axis = QtCharts.QValueAxis() # Group by both 'ExecutionStatus' and 'Components' columns and count occurrences grouped_data = self.testexecutions.groupby(["ExecutionStatus", "Components"]).size().reset_index(name="Count") # Convert the grouped data to a dictionary status_component_amount_dict = {} for status in unique_statuses: status_data = grouped_data[grouped_data["ExecutionStatus"] == status] status_component_amount_dict[status] = dict(zip(status_data["Components"], status_data["Count"])) max_bar_value = self.__get_maximum_bar(status_component_amount_dict) print(f"max_bar_value: {max_bar_value}") x_axis.setRange(0, max_bar_value) x_axis.setLabelFormat("%i") self.__chart.addAxis(x_axis, Qt.AlignBottom) for status in sorted(status_component_amount_dict.keys()): print(f"{status}: {status_component_amount_dict[status]}") series = QtCharts.QHorizontalBarSeries() bar_set = QtCharts.QBarSet(status) for component in unique_components: bar_set.append(status_component_amount_dict[status].get(component, 0)) series.append(bar_set) series.attachAxis(x_axis) series.attachAxis(y_axis) self.__chart.addSeries(series) # Create a legend legend = self.__chart.legend() legend.setAlignment(Qt.AlignTop) self.chart_view.setChart(self.__chart) ``` I still don't know why for example the bars for the status 'WIP' in 'Category 1' fill the entire x-axis, although their value is 1 and the x-axis has the range from 0 to 1176.
I using the following Antlr4 PLSQL grammar files: https://github.com/antlr/grammars-v4/tree/master/sql/plsql. From here I downloaded as follows: ``` wget https://github.com/antlr/grammars-v4/blob/master/sql/plsql/Python3/PlSqlLexerBase.py wget https://github.com/antlr/grammars-v4/blob/master/sql/plsql/Python3/PlSqlParserBase.py wget https://github.com/antlr/grammars-v4/blob/master/sql/plsql/PlSqlLexer.g4 wget https://github.com/antlr/grammars-v4/blob/master/sql/plsql/PlSqlParser.g4 mv PlSql*g4 grammars wget https://www.antlr.org/download/antlr-4.13.1-complete.jar mv antlr-4.13.1-complete.jar lib ``` Giving me : ``` ├── lib │   ├── antlr-4.13.1-complete.jar ├── grammars1 │   ├── PlSqlLexer.g4 │   └── PlSqlParser.g4 ├── PlSqlLexer.g4 ├── PlSqlParser.py ``` When I then run: ``` java -jar ./lib/antlr-4.9.3-complete.jar -Dlanguage=Python3 grammars/*g4 ``` I get the following generated in `grammars`: ``` grammars-v4-master PlSqlLexer.interp PlSqlParserBase.py PlSqlParserListener.py __pycache__ master.zip PlSqlLexer.py PlSqlParser.g4 PlSqlParser.py runPLSQL.py PlSqlLexer.g4 PlSqlLexer.tokens PlSqlParser.interp PlSqlParser.tokens ``` I then create the runPLSQL.py Python script : ``` cd grammars python3 runPLSQL.py ../../plsql/test.sql ``` But this errored with: ``` import pandas Traceback (most recent call last): File "/home/me/try2/grammars/runPLSQL.py", line 11, in <module> from PlSqlParserListener import PlSqlParserListener File "/home/me/try2/grammars/PlSqlParserListener.py", line 6, in <module> from PlSqlParser import PlSqlParser File "/home/me/try2/grammars/PlSqlParser.py", line 14, in <module> from PlSqlParserBase import PlSqlParserBase File "/home/me/try2/grammars/PlSqlParserBase.py", line 1, in <module> {"payload":{"allShortcutsEnabled":false,"fileTree":{"sql/plsql/Python3":{"items":[{"name":"PlSqlLexerBase.py","path":"sql/plsql/Python3/PlSqlLexerBase.py","contentType":"file"} NameError: name 'false' is not defined. Did you mean: 'False'? ``` I had to edit the `PlSqlLexerBase.py` file as below to overcome this and similar errors: 1. Replace `:false` with `:False` 2. Replace `:true` with `:True` 3. Replace `:null` with `:None` But now I get this: ``` import pandas Traceback (most recent call last): File "/home/me/try2/grammars/runPLSQL.py", line 11, in <module> from PlSqlParserListener import PlSqlParserListener File "/home/me/try2/grammars/PlSqlParserListener.py", line 6, in <module> from PlSqlParser import PlSqlParser File "/home/me/try2/grammars/PlSqlParser.py", line 14, in <module> from PlSqlParserBase import PlSqlParserBase ImportError: cannot import name 'PlSqlParserBase' from 'PlSqlParserBase' (/home/me/try2/grammars/PlSqlParserBase.py) ``` The `PlSqlParserBase.py` script starts with: ``` {"payload":{"allShortcutsEnabled":False,"fileTree":{"sql/plsql/Python3":{"items":[{"name":"PlSqlLexerBase.py","path":"sql/plsql/Python3/PlSqlLexerBase.py","contentType":"file"},{"name":"PlSqlParserBase.py","path":"sql/plsql/Python3/PlSqlParserBase.py","contentType":"file"}],"totalCount":2},"sql/plsql":{"items":[{"name":"CSharp","path":"sql/plsql/CSharp","contentTy...... ``` I notice it references relative pathnames, should all the paths/files exist? 
The top of the `runPLSQL.py` script is: ``` import os import pandas from antlr4 import InputStream, ParseTreeWalker from antlr4.CommonTokenStream import CommonTokenStream from pandas import DataFrame #from PlSql.grammar.PlSqlListener import PlSqlListener #from PlSql.grammar.PlSqlLexer import PlSqlLexer #from PlSql.grammar.PlSqlParser import PlSqlParser from PlSqlParserListener import PlSqlParserListener from PlSqlLexer import PlSqlLexer from PlSqlParser import PlSqlParser from tabulate import tabulate class SQLParser(PlSqlListener): ```
What you want to do is not feasible in plantuml. Plantuml's approach is compatible with future evolution towards stronger support of UML packages, i.e. objects are defined and named in packages.

Indeed, `usecase "Xyz" as XYZ` defines the use case in the enclosing graphical scope. Either it's at top-level scope, or within a package. Plantuml then positions and renders the use case in the enclosing scope, taking into account the associations that refer to the use cases.

The syntax that you would like to use in the first model assumes that you could create the use case and decide later on where to position it. This makes sense from a graphical point of view (i.e. one statement to define an object, another to define a graphical constraint for the rendering). While plantuml could definitely have supported this, it would bring several issues:

* The syntax would allow the same use case to be included in several packages, which would then raise the question of which of the graphical instances each association would have to be connected to. Plantuml would then need additional checks to prevent such double use.
* A use case can also be defined implicitly, e.g. `:Actor1: - (UC1)`. This creates the use case at the top level. The proposed syntax would change the way plantuml determines the enclosing context, and additional logic would have to defer the determination of the graphical scope to the end of the diagram.
* Moreover, there is a syntactic ambiguity, since the definition at top level, a reference to the use case in an association at top level, and a second reference to the same use case at top level would all compete with the use in a package. Disambiguating would require additional priority rules and controls.

You may say that all of this could easily be solved with an appropriate redesign. And you would be completely right. However, plantuml was launched and enriched progressively, and there are plenty of other features that are more in demand. Moreover, plant**uml**'s approach is more in line with UML packages. In UML the package is not just a graphical element; it is, more importantly, a namespace. This allows different packages to contain objects with the same name, which can then be disambiguated. E.g. you could then have a `Restaurant.UC1` and a `Cantine.UC1` which refer to two different use cases, and `Student - Restaurant.UC1` vs `Student - Cantine.UC1` would disambiguate `Student - UC1`. The current syntax of plantuml is backwards compatible with such an evolution, while your desired syntax would not support it without disrupting existing models.

This is why you will have to create the use cases in a package if you want to see them in the package.
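For illustration, here is a minimal sketch of the pattern that does work today, with the use case declared inside the package it should appear in (all names are made up for the example):

```
@startuml
left to right direction
actor Student

package Restaurant {
  usecase "Order a meal" as UC1
}

Student -- UC1
@enduml
```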
How can I make an image fit the entire background of a WPF window, i.e. fill the window, using C# code?
I try to get data from the https://climate-api.open-meteo.com service using the Http::get method, but the getBody method of the response returns null, not an array of data:

```php
$response = Http::get('https://climate-api.open-meteo.com/v1/climate', [
    'query' => [
        'latitude'   => $latitude,
        'longitude'  => $longitude,
        'start_date' => $from->format('Y-m-d'),
        'end_date'   => $to->format('Y-m-d'),
    ]
]);

// RETURNS TRUE
\Log::info(varDump($response->successful(), ' -10 $response->successful()::'));

// RETURNS 200
\Log::info(varDump($response->getStatusCode(), ' -11 $response->getStatusCode()::'));

// RETURNS NULL
\Log::info(varDump(json_decode($response->getBody(), true), ' -12 $response->getBody()::'));
```

Making a sample request like:

https://climate-api.open-meteo.com/v1/climate?latitude=40.4165&longitude=-3.7026&start_date=2023-07-09&end_date=2023-07-11&models=CMCC_CM2_VHR4&daily=temperature_2m_max

I get a valid data structure, so I don't see why I get invalid (null) data from the $response->getBody() method?
As pointed out in other answers you can use variables in the template to output it. But you can also do this in PHP: ```php <?php namespace { use SilverStripe\CMS\Model\SiteTree; use SilverStripe\Forms\TextField; /** * @property ?string $PrimaryColour * @property ?string $AccentColour1 * @property ?string $AccentColour2 */ class Page extends SiteTree { private static $db = [ 'PrimaryColour' => 'Varchar', 'AccentColour1' => 'Varchar', 'AccentColour2' => 'Varchar', ]; public function getCMSFields() { $fields = parent::getCMSFields(); $fields->addFieldsToTab('Root.Colours', [ TextField::create('PrimaryColour','Primary Colour'), TextField::create('AccentColour1','Accent Colour 1'), TextField::create('AccentColour2','Accent Colour 2'), ]); return $fields; } } /** * @method Page data() */ class PageController extends \SilverStripe\CMS\Controllers\ContentController { public function init() { parent::init(); // $this->data() === Page // Requirements::customCSS will insert a <style> block into <head> \SilverStripe\View\Requirements::customCSS(" body { --primary: {$this->data()->PrimaryColour}; --accent1: {$this->data()->AccentColour1}; --accent2: {$this->data()->AccentColour2}; } "); } } } ```
The problem is that when you enter the anonymous function passed to *$watch*, the context changes and *"this"* refers to the *window object* (you can check this by adding a `console.log(this);` in the function).

To get around the problem we can use an arrow function, which does not have its own context, so *"this"* will refer to the containing object.

From Alpine 3, by adding an *init()* function to the object, you no longer need to specify an *x-init* attribute: the *init()* function, if present, will be called automatically.

```html
<form x-data="config()">
  <label>
    <input type="radio" value="one_date" x-model="type_date" /> Option A
  </label>
  <label>
    <input type="radio" value="multi_date" x-model="type_date"/> Option B
  </label>
</form>

<script>
  function config() {
    return {
      type_date: "",
      date_option: {dateFormat: "d-m-Y"},
      init() {
        this.$watch("type_date", (value) => {
          console.log(this.date_option);
        });
      }
    };
  }
</script>
```
I asked the App Center helpdesk and got the answer:

- Open the App Center build configuration page, then enable environment variables and add the variable for Java:

  JAVA_HOME set as $(JAVA_HOME_11_X64)
*(please give me an explanation if you are downvoting)* # Implementation A way of copying the contents of the array is to create an immutable array and copy the contents of the array. Another way is to use ```Array.Copy()```. Another way is to create a contiguous memory block at a certain location and move the contents of the array at once at the newly created contiguous memory block. ``` public static Data[] DeepCopy<Data>(Data[]data) { // CREATE AN IMUTABLE ARRAY AND COPY THE DATA OF THE ORIGINAL ARRAY ImmutableArray<Data> copy = data.ToImmutableArray<Data>(); // PARSE THE IMUTABLE ARRAY TO A NORMAL ARRAY AND RETURN return copy.ToArray<Data>(); } public static Data[] CopyMemoryChunk<Data>(Data[] data) { // CREATE A CONTIGUOUS MEMORY BLOCK AND ALLOCATE THE VALUS OF THE ARRAY // AT THE MEMORY ADDRESS WHERE THE CONTIGUOUS MEMORY BLOCK IS CREATED Memory<Data> data_obj = new Memory<Data>(data); // PARSE THE CONTENTS OF THE CONTIGUOUS MEMORY BLOCK TO AN ARRAY AND RETURN return data_obj.ToArray(); } ``` # RESULT * At small sets (sub 1000 integers), is faster to copy the address using ```CopyTo()``` or to make a deep copy if the original array using an immutable array, than creating a contiguous memory block and storing the values at the contiguous memory block's address. [![enter image description here][1]][1] <br/> <br/> * At bigger sets (10000 and over), copying the values using ```CopyTo()``` or using an immutable array to create a deep copy, is slower than creating a contiguous memory block and storing the values at the contiguous memory block's address [![enter image description here][2]][2] <br/> # CODE ``` using System.Collections; using System.Collections.Immutable; using System.Diagnostics; using System.Dynamic; using System.Security.Cryptography.X509Certificates; namespace DeepCopy { class Program { public static Data[] DeepCopy<Data>(Data[]data) { // CREATE AN IMUTABLE ARRAY AND COPY THE DATA OF THE ORIGINAL ARRAY ImmutableArray<Data> copy = data.ToImmutableArray<Data>(); // PARSE THE IMUTABLE ARRAY TO A NORMAL ARRAY AND RETURN return copy.ToArray<Data>(); } public static Data[] CopyMemoryChunk<Data>(Data[] data) { // CREATE A CONTIGUOUS MEMORY BLOCK AND ALLOCATE THE VALUS OF THE ARRAY // AT THE MEMORY ADDRESS WHERE THE CONTIGUOUS MEMORY BLOCK IS CREATED Memory<Data> data_obj = new Memory<Data>(data); // PARSE THE CONTENTS OF THE CONTIGUOUS MEMORY BLOCK TO AN ARRAY AND RETURN return data_obj.ToArray(); } public static void Main() { int number_of_tests = 100000; int number_of_elements = 100; CopyUsingDeepCopy(true, number_of_elements); CopyUsingContinguousMemoryBlock(true, number_of_elements); TestSpeed(number_of_tests, number_of_elements); } public static void TestSpeed(int number_of_tests, int number_of_elements) { long TotalCopyUsingDeepCopyTime = 0; long TotalCopyUsingContinguousMemoryBlockTime = 0; long TotalArrayCopyTime = 0; Stopwatch s = new Stopwatch(); for(int i = 0; i < number_of_tests; i++) { s.Start(); CopyUsingDeepCopy(false, number_of_elements); s.Stop(); TotalCopyUsingDeepCopyTime += s.ElapsedTicks; s.Reset(); s.Start(); CopyUsingContinguousMemoryBlock(false, number_of_elements); s.Stop(); TotalCopyUsingContinguousMemoryBlockTime += s.ElapsedTicks; s.Reset(); s.Start(); int [] deep_copy = new int[number_of_elements]; int[] values = new int[number_of_elements]; for(int count = 0; count < number_of_elements; count++) { values[count] = count; } values.CopyTo(deep_copy, 0); s.Stop(); TotalArrayCopyTime += s.ElapsedTicks; s.Reset(); } Console.WriteLine("Average timer ticks 
took for deep copy: " + (double)(TotalCopyUsingDeepCopyTime / number_of_tests)); Console.WriteLine("Average timer ticks took for contigugous memory block: " + (double)(TotalCopyUsingContinguousMemoryBlockTime / number_of_tests)); Console.WriteLine("Average timer ticks took for 'ArrayCopy()': " + (double)(TotalArrayCopyTime / number_of_tests)); } public static void CopyUsingDeepCopy(bool print, int number_of_elements) { if(print == true) { Console.WriteLine("\n\nCopying using deep copy:\n"); } int[] values = new int[number_of_elements]; for(int i = 0; i < number_of_elements; i++) { values[i] = i; } int[]copy = DeepCopy<int>(values); values[0] = 123456; if(print == true) { PrintValues(values, copy); } } public static void CopyUsingContinguousMemoryBlock(bool print, int number_of_elements) { if(print == true) { Console.WriteLine("\n\nCopying using continguous memory block:\n"); } int[] values = new int[number_of_elements]; for(int i = 0; i < number_of_elements; i++) { values[i] = i; } int[]copy = CopyMemoryChunk<int>(values); values[0] = 123456; if(print == true) { PrintValues(values, copy); } } public static void PrintValues(int[]arr_1, int[]arr_2) { Console.WriteLine("Values in original object:\n"); for(int i = 0; i < arr_1.Length; i++) { if(i != arr_1.Length - 1) { Console.Write(arr_1[i] + ", "); } else { Console.Write(arr_1[i]); } } Console.WriteLine("\n\n"); Console.WriteLine("Values in copy object:\n"); for(int i = 0; i < arr_2.Length; i++) { if(i != arr_2.Length - 1) { Console.Write(arr_2[i] + ", "); } else { Console.Write(arr_2[i]); } } Console.WriteLine("\n\n"); } } } ``` <br/> # SPECS The program was ran using the following specs: [![specs][3]][3] [1]: https://i.stack.imgur.com/HE4Qc.png [2]: https://i.stack.imgur.com/wDp9U.png [3]: https://i.stack.imgur.com/VLEyo.png
Like this:

```sh
{ sleep 1; echo y; sleep 1; } | bash ./script.sh
```
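For a bit of context on why the braces matter (my reading of the snippet, not something stated in the answer): the brace group runs the three commands in sequence in the current shell and sends their combined stdout down the pipe, so the script sees nothing for the first second, then receives the `y` line on its stdin, and the pipe stays open for another second before closing.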
You can easily redeploy your application via the GUI by

- choosing the application you want to redeploy
- visiting the deploy page: https://dashboard.heroku.com/apps/your-app-name/deploy/github

Now scroll to the bottom of the deploy page, find the `Manual deploy` section, and click `Deploy Branch`.
Issue with PySide6 QtCharts with multiple QHorizontalBarSeries using the same x-axis
|python|pyqt|pyside|pyside6|qtcharts|
I have a component which takes some state as props and displays in on the screen. I need the fade out the component, then update the state and only then fade it back in. I've seen plenty of examples for specific cases, but not for mine. The catch is that I have multiple buttons which dictate the value shown on the screen. For this matter, I need the value of the clicked button to be passed to `useSpring`, which I've only found to do this using a unified state object with a temp field. ```javascript import React, { useState } from 'react'; import { useSpring, animated } from 'react-spring'; import DisplayComponent from './DisplayComponent'; const FlexRowContainer = () => { const [state, setState] = useState({ message: 1, temp: 0}); const fadeAnimation = useSpring({ opacity: state.message != state.temp ? 0 : 1, onRest: () => { // Check if we're at the end of the fade-out if (state.message != state.temp) { // Update the state here to trigger the fade-in next setState((prev) => ({...prev, message: prev.temp, toggleFade: false})); } }, }); const handleClick = (n: number) => { // Start the fade-out setState((prev) => ({...prev, temp: n})); }; return ( <div> <button onClick={() => handleClick(1)}>Change number 1</button> <button onClick={() => handleClick(2)}>Change number 2</button> <button onClick={() => handleClick(3)}>Change number 3</button> <animated.div style={fadeAnimation}> <DisplayComponent state={state} /> </animated.div> </div> ); }; export default FlexRowContainer; ``` The code works perfectly, but I remain curious if there isn't a simpler alternative. I tried building something with `useTransition` and the `reverse` field but quickly complicated everything. I'm sure the solution is extremely simple, something like [https://stackoverflow.com/questions/65129251/react-spring-fade-images-in-out-when-prop-changes][1] [1]: https://stackoverflow.com/questions/65129251/react-spring-fade-images-in-out-when-prop-changes
Is there a simpler, less complicated fade-in/fade-out solution?
I'd like to propose my understanding more than a real answer. I can present the question in another light with this snippet:

```cpp
#include <iostream>
#include <type_traits>

struct S
{
    int val;
    int& rval = val;
    int* pval = &val;
};

int main()
{
    std::cout << "(S{}.val)\tint&&\t" << std::boolalpha << std::is_same<decltype((S{}.val)), int&&>::value << '\n';
    std::cout << "(S{}.rval)\tint&\t" << std::boolalpha << std::is_same<decltype((S{}.rval)), int&>::value << '\n';
    std::cout << "(S{}.pval)\tint*&&\t" << std::boolalpha << std::is_same<decltype((S{}.pval)), int*&&>::value << '\n';
}
```

Which outputs

```none
(S{}.val)   int&&   true
(S{}.rval)  int&    true
(S{}.pval)  int*&&  true
```

[Live](https://godbolt.org/z/qb1EcvTK7)

My understanding from https://en.cppreference.com/w/cpp/language/value_category:<br>
`S{}.xxx` are glvalues because they are named. Period.

> a glvalue (“generalized” lvalue) is an expression whose evaluation determines the identity of an object or function;

As `S{}.val` and `S{}.pval` are accessed through the materialization of a temporary, their resources can be reused after the expression's use, making them expiring values: xvalues.

`S{}.rval`, on the other hand, is an immutable "resource", as a reference cannot be reseated (only the resource it references can change), so that would disqualify the expression as an xvalue; it can then only be an lvalue.<br>
Or maybe we can say that a reference is not a resource at all. I'm afraid I'm unable to give a clearer explanation for this case.<br>
A hint from [here](https://stackoverflow.com/questions/70039351/is-a-named-rvalue-reference-ref-a-xvalue): `S{}.rval` is not movable.<br>
NB: this explanation fails if I [add constness](https://godbolt.org/z/5G4GrvG1j): according to my dubious "immutable/non-movable" argument, I would have expected all the expressions to become xvalues. That is visibly not the case.
Why does $response->getBody() of HttpClient return NULL?
|laravel|httpclient|
It looks like you have spaces before your patterns in the file. This space is treated as part of the pattern, not an indentation. It'd be easier to tell this if you included text rather than a screenshot.
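If that is the cause, one possible cleanup is to strip the leading whitespace from the pattern file before using it. A sketch, assuming GNU sed and that the pattern file is called patterns.txt (both assumptions, since the file isn't shown):

```sh
sed -i 's/^[[:space:]]*//' patterns.txt
```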
Is there a way to automatically generate coordinate points for the contour of islands using an existing map? For instance, is there a way to generate these coordinates directly from the points of the OpenStreetMap map (or another), say with a certain number of data points? There are some websites like https://geojson.io/#map=10.01/15.3855/-61.289 where you can build a polygon by clicking manually, but that is clearly too time-consuming for the number of islands I have to do... I also already tried searching for the data on the net, but what I find is usually bad quality, so I want to use the data from the maps directly. Thanks a lot!
Generate GeoJSON polygons of island coasts using OpenStreetMap or another map source
|python|dictionary|selenium-webdriver|openstreetmap|geojson|
I’m having a problem in which onUploadProgress configuration setting from Axios isn’t being called at all. This is in the context of react native: ```js const addListing = (listing, onUploadProgress) => { // Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryuL67FWkv1CA const data = new FormData(); data.append('title', listing.title); data.append('price', listing.price); data.append('categoryId', listing.category.value); data.append('description', listing.description); listing.images.forEach((image, index) => data.append('images', { name: 'image' + index, type: 'image/jpeg', uri: image, }) ); if (listing.location) data.append('location', JSON.stringify(listing.location)); return client.post(endpoint, data, { headers: { 'Content-Type': 'multipart/form-data' }, onUploadProgress: (progress) => onUploadProgress(progress.loaded / progress.total), }); }; ``` Looking at axios docs https://axios-http.com/docs/req_config it says: ```js // `onUploadProgress` allows handling of progress events for uploads // browser only onUploadProgress: function (progressEvent) { // Do whatever you want with the native progress event }, ``` Is it saying that it isn’t supposed to work on react-native (only browsers) ?
Does Axios ‘onUploadProgress’ work on react-native?
|react-native|axios|
You may have run this using Docker Desktop searching for postgres? If so their port bindings aren't correct. I would manually run the `docker run` command with the correct binding as follows: ``` docker run --name postgres -e POSTGRES_PASSWORD=password -d -p 5432:5432 postgres ``` That way you can correctly connect to the container on your host machine using `localhost:5432`
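Assuming the container started successfully with that command, you can sanity-check the port binding from the host with any Postgres client, for example with the standard psql client:

```
psql -h localhost -p 5432 -U postgres
```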
{"Voters":[{"Id":15332650,"DisplayName":"Stu"},{"Id":14249087,"DisplayName":"itprorh66"},{"Id":272109,"DisplayName":"David Makogon"}],"SiteSpecificCloseReasonIds":[]}
> However, now that the abstractions have settled in a new location, is it fine to use either? It's better not to do so. The [documentation](https://docs.python.org/3/library/typing.html#typing.Sequence) specifically says: >class typing.Sequence(Reversible[T_co], Collection[T_co]) > > Deprecated alias to collections.abc.Sequence. > > Deprecated since version 3.9: collections.abc.Sequence now supports subscripting ([]) If it's deprecated, it may be removed in the next versions of Python. So go with `collections.abc` generic classes.
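For illustration, a minimal sketch of the recommended spelling on Python 3.9+ (the function is made up for the example):

```python
from collections.abc import Sequence

def total(values: Sequence[int]) -> int:
    # Accepts any sequence of ints: list, tuple, range, ...
    return sum(values)
```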
You can use an analytic function and conditional aggregation to count the number of invalid div's and exclude those: ```lang-sql SELECT s.nbr, s.model FROM process p INNER JOIN ( SELECT * FROM ( SELECT s.*, COUNT( CASE WHEN s.div IN ('AR91', 'AR10', 'AG55', 'AZ56', 'CZ12') THEN 1 END ) OVER (PARTITION BY s.nbr) AS num_div FROM service s ) WHERE num_div = 0 ) s ON p.nbr = s.nbr WHERE p.unit = 'MC' AND p.car = 'M' AND p.bank = '1' AND p.paid in ('NY', 'NJ') AND s.paid = 'NY' AND TO_DATE(s.ymd DEFAULT NULL ON CONVERSION ERROR, 'YYYYMMDD') BETWEEN DATE '2024-03-03' AND DATE '2024-03-11' ``` Which, for the sample data, outputs: | NBR | MODEL | | :---|:-----| | 600 | 5 | | 601 | 7 | [fiddle](https://dbfiddle.uk/5miYMr0x)
I have the following Oracle SQL query:

```sql
SELECT user
FROM global_users user
WHERE user.status = 'ACTIVE'
  AND user.description IS NOT NULL
  AND user.updatedGoodsDate BETWEEN '2024-03-10 20:09:53' AND '2024-03-10 20:09:53'
  AND ROWNUM <= 13
```

I tried to port the query to Postgres with Spring Data JPA:

```java
@Query("SELECT user FROM global_users user WHERE user.status = :status AND user.description IS NOT NULL AND user.updatedGoodsDate BETWEEN :startDate AND :endDate")
List<Users> findTopUsers(@Param("status") TransactionStatus status,
                         @Param("startDate") OffsetDateTime start,
                         @Param("endDate") OffsetDateTime end,
                         @Param("count") long count);
```

But I can't use a LIMIT clause in Postgres. Is there some way to edit the query and get the same result? For example, can this be implemented with a subquery?

Test project: https://github.com/rcbandit111/oracle_rownum_pg_migration_poc/blob/main/src/main/java/com/test/portal/account/platform/repository/DataRepository.java
This should do it:

```js
const div = document.createElement('div');
document.body.appendChild(div);
Vue.createApp(componentDefinition).mount(div)
```

Please note you've asked an [XY question](https://en.wikipedia.org/wiki/XY_problem). Most likely, if you had asked about the problem you're actually trying to solve by placing a component at the end of `<body />`, I would have given you a more useful answer, solving the actual problem.
As explained in the official documentation of [`size()`](https://www.mathworks.com/help/matlab/ref/size.html),

> `sz = size(A)` returns a row vector whose elements are the lengths of the corresponding dimensions of A. For example, if `A` is a 3-by-4 matrix, then `size(A)` returns the vector `[3 4]`.

In MATLAB, the first dimension is the row and the second is the column. Since `N` is a row vector, `N(2)` returns the 2nd element of this vector. As explained in the documentation, that is the length of the 2nd dimension, i.e. the number of columns. It is equivalent to getting the second output parameter `sz2` as follows.

> `[sz1,...,szN] = size(___)` returns the lengths of the queried dimensions of A separately.
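As a small worked example (the matrix here is arbitrary):

```matlab
A = rand(3, 4);        % 3 rows, 4 columns
N = size(A);           % N = [3 4]
nCols = N(2);          % 4, the number of columns
[sz1, sz2] = size(A);  % sz1 = 3, sz2 = 4
```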
I believe it depends. My field of work primarily relies on cloud infrastructure scalability, so I work with AWS rather than GCP, which excels in ML. I can confidently say that both perform well and are reliable. AWS has a wider range of scalable services than GCP. As for pricing, GCP is very transparent with its pricing model, while AWS's pricing model is a little complex. You can read more at the following link: https://www.veritis.com/blog/aws-vs-azure-vs-gcp-the-cloud-platform-of-your-choice/
Trying to build an application statically using libxml2, I get errors like "undefined reference to `__imp_xmlTextReaderRead'". Removing the -static option, it works perfectly. The command line I used is:

> gcc -o myapp.exe `xml2-config --cflags` -I /mingw64/include myapp.c `xml2-config --libs` -static
Errors linking statically the libxml2
|linker|libxml2|
The way the address is expressed `[7:2] addr`, suggests 8-bit data accessed 4 bytes at a time. You are storing 32 bit data, so the address to the memory needs to be adjusted in the module like this `memory[addr >> 2]`. RTL ``` module InstMem ( input [7:2] addr, output wire [31:0] instruction ); reg [31:0] memory[63:0]; assign instruction= memory[addr >> 2]; initial begin memory[0] = 32'h00007033; // and r0, r0, r0 32'h00000000 memory[1] = 32'h00100093; // addi r1, r0, 1 32'h00000001 memory[2] = 32'h00200113; // addi r2, r0, 2 32'h00000002 memory[3] = 32'h00308193; // addi r3, r1, 3 32'h00000004 memory[4] = 32'h00408213; // addi r4, r1, 4 32'h00000005 memory[5] = 32'h00510293; // addi r5, r2, 5 32'h00000007 memory[6] = 32'h00610313; // addi r6, r2, 6 32'h00000008 memory[7] = 32'h00718393; // addi r7, r3, 7 32'h0000000B memory[8] = 32'h00208433; // add r8, r1, r2 32'h00000003 memory[9] = 32'h404404b3; // sub r9, r8, r4 32'hfffffffe memory[10] = 32'h00317533; // and r10, r2, r3 32'h00000000 memory[11] = 32'h0041e5b3; // or r11, r3, r4 32'h00000005 memory[12] = 32'h0041a633; // if r3 is less than r4 then r12 = 1 32'h00000001 memory[13] = 32'h007346b3; // nor r13, r6, r7 32'hfffffff4 memory[14] = 32'h4d34f713; // andi r14, r9, "4D3" 32'h000004D2 memory[15] = 32'h8d35e793; // ori r15, r11, "8d3" 32'hfffff8d7 memory[16] = 32'h4d26a813; // if r13 is less than 32'h000004D2 then r16 = 1 32'h00000000 memory[17] = 32'h4d244893; // nori r17, r8, "4D2" 32'hfffffb2C end endmodule ``` Testbench ``` module instr_memtb() ; reg [7:2] addr; wire [31:0] instruction; InstMem instant(.addr(addr),.instruction(instruction)); initial begin addr = 6'h00; // Initial address (will be overridden by force command in simulation) #1; $display("ins = %h",instruction); addr = 6'h04; // Initial address (will be overridden by force command in simulation) #1; $display("ins = %h",instruction); addr = 6'h08; // Initial address (will be overridden by force command in simulation) #1; $display("ins = %h",instruction); end endmodule ``` Produces On Eda Playground Cadence ``` xcelium> run ins = 00007033 ins = 00100093 ins = 00200113 ``` The testbench vectors seem to access the memory as intended for the first few vectors. Remember that the byte oriented address, delivering dwords (32 bits) skips by 4 each increment (in hex) 0,4,8,C,10,14....
This can be handled in Angular on the component side instead of the HTML by using this routine:

- A reference to the collection is made.
- A subscription to that reference is created.
- We then check each document to see if the field we are looking for is empty.
- Once the empty field is found, we store that document ID in a class variable.

First, create a service .ts component to handle the backend work (this component can be useful to many other components): Firestoreservice. This component will contain these exported routines (see this post for that component, firestore.service.ts: https://stackoverflow.com/questions/51336131/how-to-retrieve-document-ids-for-collections-in-angularfire2s-firestore ).

In your component, import this service and make a

```ts
private firestore: Firestoreservice;
```

reference in your constructor. Done!

```ts
this.firestoreData = this.firestore.getCollection(this.dataPath);  // where dataPath = the collection name
this.firestoreData.subscribe(firestoreData => {
  for (let dID of firestoreData) {
    if (dID.field1 == this.ClassVariable1) {
      if (dID.field2 == '') {
        this.ClassVariable2 = dID.id;  // assign the Firestore documentID to a class variable for use later
      }
    }
  }
});
```
I was trying to add a p5.js sketch to my [Svelte](https://svelte.dev/) project, when I noticed that the sketch seemed to be running much slower than it was in the p5.js [web editor](https://editor.p5js.org/). To test my suspicions, I timed the execution of the draw call with `performance.now()`. Results in the p5.js web editor yielded an average execution time of `~10`ms. In my Svelte project, execution took about `80`ms.

To clear any doubts that this was Svelte related, I created a new directory, threw in a boilerplate `html` file, imported p5.min.js from the CDN, and finally linked the `sketch.js`:

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Document</title>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.9.2/p5.min.js"></script>
</head>
<body>
    <script src="./sketch.js"></script>
</body>
</html>
```

Then I ran my project using (npm) `serve` and observed that the execution times were still the same.

It's not `frameRate` related (that should only dictate how often the draw call is made; I tried experimenting with different framerates anyway, and the time it took for the draw call to execute stayed consistent throughout).

[This](https://editor.p5js.org/codingtrain/sketches/OPYPc4ueq) is the sketch I was experimenting with.

My suspicion (although it is kind of a stab in the dark) is that it has to do with the `WebGL` mode used in that sketch (`createCanvas(600, 600, WEBGL);`), and that the web editor version is hardware accelerated but my local project version is not. That's kind of a weird idea though, as to my understanding hardware acceleration should be enabled browser-wide.

If anyone could elaborate as to why there is such a massive discrepancy in performance, and how I can make the sketch on my website run as smoothly as the web editor one, it'd be hugely appreciated.
|python|python-typing|standard-library|python-3.11|pep|
I am building a docker image using the below Dockerfile and the command "docker build -t tiler_test -f Dockerfile ." The building works fine. However when I try to run the docker file with "docker run -p 8080:8080 tiler_test:latest" I get the following error message which I've tried many thing to resolve w/o success. Any help appreciated. #ERROR MESSAGE node:internal/modules/cjs/loader:1473 return process.dlopen(module, path.toNamespacedPath(filename)); ^ Error: libGLX.so.0: cannot open shared object file: No such file or directory at Module._extensions..node (node:internal/modules/cjs/loader:1473:18) at Module.load (node:internal/modules/cjs/loader:1207:32) at Module._load (node:internal/modules/cjs/loader:1023:12) at Module.require (node:internal/modules/cjs/loader:1235:19) at require (node:internal/modules/helpers:176:18) at Object.<anonymous> (/usr/lib/node_modules/tileserver-gl/node_modules/@maplibre/maplibre-gl- native/platform/node/index.js:5:12) at Module._compile (node:internal/modules/cjs/loader:1376:14) at Module._extensions..js (node:internal/modules/cjs/loader:1435:10) at Module.load (node:internal/modules/cjs/loader:1207:32) at Module._load (node:internal/modules/cjs/loader:1023:12) { code: 'ERR_DLOPEN_FAILED' } Node.js v20.11.1 ########################Dockerfile################################ # Use an Ubuntu base image FROM ubuntu:latest # Install dependencies RUN apt-get update && \ apt-get install -y wget curl git unzip build-essential python3 # Install Node.js RUN curl -sL https://deb.nodesource.com/setup_20.x | bash - && \ apt-get install -y nodejs # Clone TileServer-GL repository RUN git clone https://github.com/klokantech/tileserver-gl.git /tileserver-gl # Change working directory WORKDIR /tileserver-gl # Install TileServer-GL dependencies RUN npm install RUN npm install -g npm@10.5.0 RUN npm install -g tileserver-gl WORKDIR / COPY ./config.json /config.json COPY ./test.mbtiles /test.mbtiles COPY templates /templates COPY styles /styles COPY fonts /fonts # Expose the default TileServer-GL port EXPOSE 8080 ENV PORT 8080 ENTRYPOINT ["tileserver-gl"] CMD "tileserver-gl --file test.mbtiles" ############################Dockerfile######################################## I tried running the docker image with: "docker build -t tiler_test -f Dockerfile ." and get the error message cited in description. I was expecting for the CMD to run and be able to access the resulting tileserver-gl map tile server.
tileserver-gl running within docker container - run error
|node.js|docker|ubuntu|tileserver-gl|
|robotics|ros2|
I have been trying to get rid of the extra space on the right side of the page by media queries but it doesn't work, there is a specific component that gives this space where its original layout depends on 3*3 grid list then I resized into to 1 column by media query but it still shows white space on the right side, here is the code: ``` .machines { background-color: rgba(245, 245, 245, 0.908); margin: auto; padding: 1rem 1rem 0 1rem; width: 100%; } .machines .content { width: 1240px; margin: auto; display: flex; flex-direction: column; justify-content: center; align-items: center; padding: 3rem; } .machines h2 { font-size: 3rem; color: rgb(155, 11, 11); padding: 2rem; } .machines .imageList { width: 1240px; padding: 3rem; display: grid; grid-template-columns: repeat(3, 1fr); grid-row-gap: 50px; text-align: center; } .machines .imageList img { width: 300px; height: 300px; border-radius: 10px; } .machines .imageList p{ font-family:'Gill Sans', 'Gill Sans MT', Calibri, 'Trebuchet MS', sans-serif; font-weight: 800; font-size: 1.3rem; } .machines .imageList img:hover { opacity: 0.5; } .machines .imageList .longImg img{ width: 130px; height: 300px; border-radius: 10px; } @media screen and (max-width:940px) { .machines .imageList { display: block; grid-template-columns: 1fr; } .machines .content { width: 100%; padding: 0; } .machines .imageList p{ padding-bottom: 2rem; } } ```
Performance of sketch drastically decreases outside of the P5 Web Editor
|webgl|p5.js|
> And since array name a is originally &a[0], it will be a int** type variable.

You are wrong. Arrays used in expressions, with rare exceptions, are indeed converted to pointers to their first elements. But in this case the type of the elements of the array is `int[3]`. So the array `a` used in expressions is converted to a pointer of the type `int ( * )[3]`.

As for this code snippet

```
int* p;
p = a; //possible error, from int** to int*
printf("*p : %d\n", *p);
```

the values of the addresses of the expressions `a`, `a[0]` and `&a[0][0]` (that is, the address of the extent of memory occupied by the array) are all equal to each other. So, using that value and interpreting it as if it were `&a[0][0]`, you will get the value of the first element `a[0][0]`.
```go
ctx, span := trace.Tracer.Start(ctx, "name",
    trace.WithAttributes(attribute.String("faas.trigger", "http.server")))
defer span.End()
```

OR

```go
import semconv "go.opentelemetry.io/otel/semconv/v1.4.0"

ctx, span := trace.Tracer.Start(ctx, "name",
    trace.WithAttributes(semconv.FaaSTriggerKey.String("http.server")))
defer span.End()
```
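Both forms set the same `faas.trigger` attribute on the span; the `semconv` constant simply avoids typos in the attribute key and keeps the value aligned with that version of the OpenTelemetry semantic conventions.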
I am scraping messages about power plant unavailability and converting them into timeseries and storing them in a sql server database. My current structure is the following. * `Messages`: publicationDate datetime, messageSeriesID nvarchar, version int, messageId identity The primary key is on `(messageSeriesId, version)` * `Units`: messageId int, area nvarchar, fueltype nvarchar, unitname nvarchar tsId identity The primary key is on `tsId`. There is a foreign key relation on tsId between this table and `Messages`. The main reason for this table is that one message can contain information about multiple power plants. * `Timeseries`: tsId int, delivery datetime, value decimal I have a partition scheme based on delivery, each partition contains a month of data. The primary key is on `(tsId, delivery)` and it's partitioned along the monthly partition scheme. There is a foreign key on `tsId` to `tsId` in the `Units` table. The `Messages` and `Units` tables contain around a million rows each. The `Timeseries` table contains about 500 million rows. Now, every time I insert a new batch of data, one row goes into the `Messages` table, between one and a few (4) go into the `Units` table, and a lot (up to 100.000s) go into the `Timeseries` table. The problem I'm encountering is that inserts into the `Timeseries` table are too slow (100.000 rows take up to a minute). I already made some improvements on this by setting the fillfactor to 80 instead of 100 when rebuilding the index there. However its still too slow. And I am a bit puzzled, because the way I understand it is this: every partition contains all rows with delivery in that month, but the primary key is on `tsId` first and `delivery` second. So to insert data in this partition, it should simply be placed at the end of the partition (since `tsId` is the identity column and thus increasing by one every transaction). The time series that I am trying to insert spans 3 years and therefore 36 partitions. If I, however, create a time series with the same length that falls within a single partition the insert is notable faster (around 1.5 second). Likewise if I create an empty time series table (`timeseries_test`) with the same structure as the original one, then inserts are also very fast (also for inserting data that spans 3 years). However, querying is done based mainly on delivery, so I don't think partitioning by `tsId` is a good idea. If anyone has a suggestion on the structure or methods to improve inserts it would be greatly appreciated. 
Create Table statements: CREATE TABLE [dbo].[remit_messages]( [publicationDate] [datetime2](0) NOT NULL, [version] [int] NOT NULL, [messageId] [int] IDENTITY(1,1) NOT NULL, [messageSeriesId] [nvarchar](36) NOT NULL, PRIMARY KEY CLUSTERED ( [messageSeriesId] ASC, [version] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO /****** Object: Index [dbo_remit_messages_messageId] Script Date: 2024-03-30 13:26:36 ******/ CREATE UNIQUE NONCLUSTERED INDEX [dbo_remit_messages_messageId] ON [dbo].[remit_messages] ( [messageId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] GO CREATE TABLE [dbo].[remit_units]( [tsId] [int] IDENTITY(1,1) NOT NULL, [fuelTypeId] [int] NOT NULL, [areaId] [int] NOT NULL, [messageId] [int] NOT NULL, [unitName] [nvarchar](200) NULL, PRIMARY KEY CLUSTERED ( [messageId] ASC, [tsId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY], CONSTRAINT [dbo_remit_tsId] UNIQUE NONCLUSTERED ( [tsId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] GO /****** Object: Index [dbo_remit_units_tsid] Script Date: 2024-03-30 13:30:39 ******/ CREATE NONCLUSTERED INDEX [dbo_remit_units_tsid] ON [dbo].[remit_units] ( [tsId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] GO ALTER TABLE [dbo].[remit_units] WITH CHECK ADD FOREIGN KEY([messageId]) REFERENCES [dbo].[remit_messages] ([messageId]) ON UPDATE CASCADE ON DELETE CASCADE GO CREATE TABLE [dbo].[remit_ts]( [tsId] [int] NOT NULL, [delivery] [datetime2](0) NOT NULL, [available] [decimal](11, 3) NULL, [unavailable] [decimal](11, 3) NULL, PRIMARY KEY CLUSTERED ( [delivery] ASC, [tsId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [MonthlyPartitionScheme]([delivery]) ) ON [MonthlyPartitionScheme]([delivery]) GO /****** Object: Index [idx_remit_ts_delivery_inc] Script Date: 2024-03-30 13:33:34 ******/ CREATE NONCLUSTERED INDEX [idx_remit_ts_delivery_inc] ON [dbo].[remit_ts] ( [delivery] ASC ) INCLUDE([tsId],[unavailable],[available]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 80) ON [MonthlyPartitionScheme]([delivery]) GO ALTER TABLE [dbo].[remit_ts] WITH CHECK ADD FOREIGN KEY([tsId]) REFERENCES [dbo].[remit_units] ([tsId]) ON UPDATE CASCADE ON DELETE CASCADE GO
> So each `a[n]` is a `int*` type variable. No. The type of `a[n]` is `int [3]`, which is an array of 3 `int`. `int *` is a pointer to an `int`. These are different things. In memory, an array of 3 `int` has bytes for 3 `int` objects, and those bytes contain the values of the `int`. In memory, an `int *` has bytes for an address, and those bytes contain the values of the address. In an expression, `a[n]` will be automatically converted to the address of its first element except when it is the operand of `sizeof` or of unary `&`. This conversion is performed by taking the address of `a[n][0]`, and then the result will be an `int *`. But, to work with C properly, you must remember that `a[n]` is actually an array, even though this conversion is automatically performed. > And since array name `a` is originally `&a[0]`, it will be a `int**` type variable. No. The type of `a` is `int [3][3]`. As with `a[n]`, in an expression, it will be automatically converted to a pointer to its first element except when it is the operand of `sizeof` or unary `&`. The result will be `&a[0]`, and the type of that will be `int (*)[3]`, which is a pointer to an array of 3 `int`. This will **not** be further converted to `int **`. The result of the automatic conversion will be a pointer to a place in memory where there are bytes representing 3 `int`. An `int **` would point to a place in memory where there are bytes representing an address. > `p = a; //possible error, from int** to int*` Since the result of the automatic conversion of `a` has type `int (*)[3]` and `p` has type `int *`, this attempts to assign an `int (*)[3]` to `int *`. This violates the constraints for simple assignment in C 2018 6.5.16.1 1, so the C implementation is required to issue a diagnostic message for it. Then the behavior is not defined by the C standard. However, most C implementations will convert the `int (*)[3]` to `int *` and produce a pointer to the first `int` in `a`.
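To make the distinction concrete, here is a small sketch (not part of the original answer; the variable names are mine) showing what the compiler actually sees:

```c
#include <stdio.h>

int main(void)
{
    int a[3][3] = {0};

    /* sizeof is one of the exceptions: it sees the array types, not pointers. */
    printf("sizeof a    = %zu\n", sizeof a);    /* 9 * sizeof(int) */
    printf("sizeof a[0] = %zu\n", sizeof a[0]); /* 3 * sizeof(int) */

    int (*row)[3] = a;  /* a converts to int (*)[3], not to int ** */
    int *elem = a[0];   /* a[0] converts to int * */

    /* Both point to the same place in memory, but with different types. */
    printf("row  = %p\n", (void *) row);
    printf("elem = %p\n", (void *) elem);

    return 0;
}
```

If you instead write `int **pp = a;`, a conforming compiler has to diagnose it for the same reason as `p = a;` above.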
I can't use the built-in MySQL database in Laravel 8, I have to authenticate with an external ORACLE database. I have a LoginController with this code:

```
$oracle = new OracleController();
$conn = $oracle->trconnect();

$sql = "select USERNAME, USERID from OFUSERS where USERNAME = '$fnev'";
$result = $oracle->trlekerdez($conn, $sql);
oci_close($conn);

if ($result) {
    request()->session()->regenerate();
    return redirect()->route('main')->with('success', 'Logged in successfully!');
} else {
    return redirect()->route('login')->with('username', $fnev)->withErrors(['doflogin' => 'User not found!']);
}
```

This works fine, but when I use this route with **->middleware('auth')**, I can't reach the main route:

`Route::get('/main', [DofController::class, 'main'])->middleware('auth')->name('main');`

Can I use middleware('auth')? If not, then what is the solution? How can I authenticate a user with Oracle?
Laravel middleware auth with external database
|php|laravel|oracle-database|middleware|
null
The parser solution is okay, but maybe this helps someone with a case like mine: my issue was `eslint` caching, so I removed the file and added it again.
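If a stale cache is the culprit, another way to clear it (assuming ESLint's default cache file name, `.eslintcache` in the project root — your setup may differ) is simply:

```
rm .eslintcache
```

and then re-run ESLint.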
When I use auth() with attempt() using Tymon JWTAuth, I'm always getting an HTTP OK (200) as a result with the following code. The user is logged in in spite of invalid credentials, and even an empty database table. ``` <?php namespace App\Http\Controllers; use Illuminate\Http\Request; use Illuminate\Support\Facades\Log; use Illuminate\Support\Facades\Hash; use Tymon\JWTAuth\Facades\JWTAuth; use App\Models\Member; class Login extends Controller { public function login() { try { $creds = request(['email', 'password']); $token = auth()->guard('member')->attempt($creds); if (is_null($token)) { return response()->json(['error' => 'Invalid credentials'], 401); } else { return response()->json(['token' => $token], 200); } } catch (Exception $error) { Log::error('Error logging in!'); return response()->json(['error' => 'Error logging in!'], 500); } } } ``` I've tried everything I can come to think of (still learning Laravel), and have used Google and ChatGPT trying to fix it. No dice. I'm kinda expecting attempt() not to give a token when it shouldn't be able to validate the credentials.
Laravel login (auth()->attempt()) never returns null
|php|laravel|laravel-10|
null
I tried all the solutions here but nothing worked. What worked for me was adding the includes to the Include path in the C/C++ Configurations (this is different from C_Cpp->Default:Include Path):

- Ctrl+Shift+P
- C/C++: Edit Configurations (UI)
- scroll down to Include path
- add all the necessary include paths, and also `${workspaceFolder}/**`
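For reference, the UI stores these settings in `.vscode/c_cpp_properties.json`; a minimal sketch of what the result can look like (the configuration name and the extra include path below are placeholders, not values from the original answer):

```json
{
    "configurations": [
        {
            "name": "Linux",
            "includePath": [
                "${workspaceFolder}/**",
                "/path/to/your/extra/includes"
            ]
        }
    ],
    "version": 4
}
```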
Same issue as this question: https://stackoverflow.com/questions/78060477/apps-script-is-not-working-for-3-hours-already posted earlier this morning.

**Situation:** There does not appear to be any way around the issue until Google fixes it. As you can see from Google's Issue Tracker (https://issuetracker.google.com/issues/326802543), it has been impacting users in many places around the world. In the interest of adding some more information (as many people are trying to get answers), I am including the following link with Google's confirmation of the issue: https://support.cloud.google.com/portal/system-status?product=WORKSPACE.

Screenshot from the above site:

[![Screenshot][1]][1]

**Update:** Here in my region of the USA, my Apps Script projects are now accessible for the first time this morning. But this is apparently not universal, because the upvotes on Google's Issue Tracker continue to increase.

  [1]: https://i.stack.imgur.com/C1Fnc.png
White Space appearing on the right side of my app when opening from mobile
|css|reactjs|media-queries|whitespace|removing-whitespace|
null
Write a custom `JsonConverter<Location>` that resolves locations by name and register it through `JsonSerializerOptions`:

    public class ObjectOfInterest
    {
        [JsonPropertyName("Name")]
        public string Name { get; set; } = string.Empty;

        [JsonPropertyName("CurrentLocation")]
        public Location CurrentLocation { get; set; } = new();
    }

The converter stores the locations indexed by their name:

    public class LocationConverter : JsonConverter<Location>
    {
        private readonly Dictionary<string, Location> _locationDictionary = new();

        public LocationConverter(IEnumerable<Location> locations)
        {
            foreach (var location in locations)
            {
                _locationDictionary[location.Name] = location;
            }
        }

        public override Location Read(ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
        {
            string locationName = reader.GetString();

            if (_locationDictionary.TryGetValue(locationName, out Location location))
            {
                return location;
            }

            throw new KeyNotFoundException($"Location '{locationName}' not found.");
        }

        public override void Write(Utf8JsonWriter writer, Location value, JsonSerializerOptions options)
        {
            writer.WriteStringValue(value.Name);
        }
    }

Note that the converter is registered through the options rather than with a `[JsonConverter]` attribute on the property: an attribute-specified converter must have a public parameterless constructor, and this one needs the list of locations.

Finally, you can use the LocationConverter like that:

    var locations = JsonSerializer.Deserialize<Location[]>(locationsJson);

    var options = new JsonSerializerOptions();
    options.Converters.Add(new LocationConverter(locations));

    var objects = JsonSerializer.Deserialize<ObjectOfInterest[]>(objectsJson, options);
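For illustration only, the JSON this setup expects could look like the following (the names are made up, and any other fields on `Location` are omitted since that class is not shown here; it is assumed to expose a `Name` property that binds to the `"Name"` field):

`locationsJson`:

```json
[
    { "Name": "Warehouse" },
    { "Name": "Harbor" }
]
```

`objectsJson`:

```json
[
    { "Name": "Crate 42", "CurrentLocation": "Warehouse" },
    { "Name": "Forklift", "CurrentLocation": "Harbor" }
]
```

With that input, `objects[0].CurrentLocation` ends up referencing the same `Location` instance that was deserialized from `locationsJson`, because the converter hands back the instances it was given.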
> 1. How to put the data directly inside the code file (for the purpose of this example), as a list of terms?

Either as a fact:

```prolog
rows([row('A', 150), row('B', 300), row('C', 50)]).
```

then `rows(Rows)`, or as individual facts:

```prolog
row('A', 150).
row('B', 300).
row('C', 50).
```

and then a findall to gather those up.

----

> 2. For the query, is there anyway to achieve what I wanted without having to put everything inside a script file?

Your findall line does not declare what `Row` is:

```prolog
?- Rows = [row('A', 150), row('B', 300), row('C', 50)],
   findall(Row, (member(row(Name, Value), Rows), Value > 50), Filtered).
           ^
           |
           |_ this is an uninitialized variable, so your output is
              Filtered = [_, _], a list of two uninitialized variables.
```

so you could fix it as Paulo Moura answers, or you could do:

```prolog
?- Rows = [row('A', 150), row('B', 300), row('C', 50)],
   findall(row(Name, Value), (member(row(Name, Value), Rows), Value > 50), Filtered).
```

Or you could use `include/3` with a test predicate:

```prolog
row_value_test(row(_, Value)) :-
    Value > 50.

?- Rows = [row('A', 150), row('B', 300), row('C', 50)],
   include(row_value_test, Rows, Filtered).
```
Trying my hand at Flask as a prototyping method and encountering some issues. Basically I have my app.py starting a subprocess (tracking.py) that does some user tracking. I have a couple of values that should go to a database every now and again for safekeeping (I did this with a .json before, but figured the database would be cleaner). Those values are all gathered in tracking.py.

Now I'm sure this is a rookie mistake, but when I import `from flask_sqlalchemy import SQLAlchemy` in tracking.py, I get a ModuleNotFoundError: No module named 'flask_sqlalchemy'. This seems odd because I have no problem importing it from app.py:

```
from flask import Flask, render_template, url_for, send_file
from flask_sqlalchemy import SQLAlchemy
import threading
import atexit
import subprocess
```

and both are in the same venv. Happy to provide any further info, but I assume it's a dumb oversight. I tried uninstalling and reinstalling (making sure I installed with pip3) and looked at the documentation for SQLAlchemy, but couldn't find anything helpful.
You are using a native query (NativeSearchQuery, which is from the previous version of Spring Data Elasticsearch that used the now-deprecated `RestHighLevelClient`), and a native query can and must be used when Spring Data Elasticsearch cannot produce the desired queries, scripts etc.

So the answer is no. Spring Data Elasticsearch cannot do that.
The emulator certificate is installed in the system store using the "tmpfs" manual method: <https://pswalia2u.medium.com/install-burpsuites-or-any-ca-certificate-to-system-store-in-android-10-and-11-38e508a5541a>

The proxy is configured with the following commands:

    adb -s 127.0.0.1:5555 shell settings put global http_proxy localhost:3333
    adb -s 127.0.0.1:5555 reverse tcp:3333 tcp:8081

Here are videos showing the issue:

https://drive.google.com/file/d/1565SgozUA_YLpl94BLitTbZAg67ommuA/view?usp=sharing

https://drive.google.com/file/d/10pQqmaRgnv9Gn6yaF04FEtRJ67xuZ_mN/view?usp=sharing

https://drive.google.com/file/d/17b_SdULvOR2pJxRi4zznZPeX4XesYgsm/view?usp=sharing

I have read many blogs, but none of them have been helpful.
No HTTPS access in Genymotion emulator after configuring proxy
|proxy|android-emulator|genymotion|
null