Q: Confused about PowerShell parameter documentation

I'm not sure whether I'm confused about how some of the PowerShell documentation regarding parameters works, or whether the documentation has some mistakes. For example, the docs for Get-Verb [1] say that the -Group parameter accepts pipeline input at position 0. However, the Inputs section says "None". Also, this doesn't work:

"common" | get-verb

Since the docs say that -Group accepts pipeline input at position 0, doesn't that imply that the above code should work?

The docs for ForEach-Object [2] say that the position of the -InputObject parameter is "named". However, it accepts the -InputObject parameter at position 0:

get-service | foreach {$_.name}

Shouldn't that parameter's doc say Position: 0?

[1] https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/get-verb?view=powershell-7.3
[2] https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/foreach-object?view=powershell-7.3

A: Regarding the first question, the docs are in fact showing inaccurate information, and GitHub issue #9522 was raised to correct this.

Looking at Get-Help we see the following:

PS ..\pwsh> Get-Help Get-Verb -Parameter * | Select-Object Name, Position

name  position
----  --------
Group 0
Verb  1

Yet, if we look at the autogenerated param block using ProxyCommand.Create:

[System.Management.Automation.ProxyCommand]::Create((Get-Command Get-Verb))

we can see the following (the positions are reversed from what the doc is showing us):

param(
    [Parameter(Position=0, ValueFromPipeline=$true, ValueFromPipelineByPropertyName=$true)]
    [string[]]
    ${Verb},

    [Parameter(Position=1, ValueFromPipeline=$true, ValueFromPipelineByPropertyName=$true)]
    [ValidateSet('Common','Communications','Data','Diagnostic','Lifecycle','Other','Security')]
    [string[]]
    ${Group}
)

We can also clearly see that -Verb is actually position 0 by trying positional binding with one valid verb:

Get-Verb Add

As for 'common' | Get-Verb not working: it does work, as long as we're also binding -Verb in the call to the cmdlet, either by value from pipeline by property name or by positional binding:

# All verbs in the `Common` group. Works fine.
# Positional binding on `-Verb` (Position 0)
'common' | Get-Verb *

# The `Add` verb in the `Common` group. Also works fine.
# `ValueFromPipelineByPropertyName` on both params
[pscustomobject]@{ Group = 'common'; Verb = 'Add' } | Get-Verb

Regarding the second statement / question:

...However, it accepts the -InputObject parameter at Position 0.

This is incorrect; -InputObject can in fact be bound either from the pipeline or by name. Also note that this parameter is not intended to be used manually in most cases. Position 0 is actually the -Process parameter, and we can clearly see it by testing this code:

ForEach-Object { $_ } -InputObject 'hello'
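If you'd rather verify the parameter metadata as compiled into the cmdlet than rely on the web docs, one option (a quick sketch of mine, not part of the original answer) is to read it from Get-Command. Note that common parameters such as -Verbose will also appear, and for cmdlets with several parameter sets there is one ParameterAttribute per set:

# Show each parameter's declared position and pipeline behavior.
(Get-Command Get-Verb).Parameters.Values | ForEach-Object {
    $attr = $_.Attributes |
        Where-Object { $_ -is [System.Management.Automation.ParameterAttribute] } |
        Select-Object -First 1
    [pscustomobject]@{
        Name              = $_.Name
        Position          = $attr.Position
        ValueFromPipeline = $attr.ValueFromPipeline
    }
}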
Q: Proper way to get TAI timestamp in Kotlin

I want to get the TAI timestamp in Kotlin in "seconds":"nanoseconds" format. This is my current solution, and I'm sure there must be a better way to achieve this:

import java.time.Instant
import java.time.temporal.ChronoUnit

fun main() {
    val epochNanoseconds = ChronoUnit.NANOS.between(Instant.EPOCH, Instant.now())
    val epochSeconds = epochNanoseconds / 1000000000
    val remainingNanoSeconds = (epochNanoseconds.toDouble() / 1000000000 - epochSeconds).toString().split(".")[1]
    println("$epochSeconds:$remainingNanoSeconds")
}

Example output:

1670190945:027981042861938477

Is there any way to get the seconds and remaining nanoseconds directly from java.time.Instant, or any other library available to achieve this conveniently? I think even my solution is not entirely correct:

val remainingNanoSeconds = (epochNanoseconds.toDouble()/1000000000 - epochSeconds).toString().split(".")[1]

This gives me the seconds, not the nanoseconds.

A: Yes, you can use the toEpochMilli() method of the Instant class to obtain the number of milliseconds since the start of the epoch as a long value. You can then use this long value to calculate the number of seconds and the remaining nanoseconds. The following example demonstrates how this can be done:

Java:

long epochMilli = Instant.now().toEpochMilli();
long secondsRemaining = epochMilli / 1000;
long nanosecondsRemaining = (epochMilli % 1000) * 1000000;

System.out.println("Seconds remaining: " + secondsRemaining);
System.out.println("Nanoseconds remaining: " + nanosecondsRemaining);

Kotlin:

val epochMilli = Instant.now().toEpochMilli()
val secondsRemaining = epochMilli / 1000
val nanosecondsRemaining = (epochMilli % 1000) * 1000000

println("Seconds remaining: $secondsRemaining")
println("Nanoseconds remaining: $nanosecondsRemaining")
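One caveat (my note, not from the original answer): toEpochMilli() only carries millisecond precision, so the computed nanoseconds will always be a multiple of 1,000,000. Instant also exposes the two components the question asks for directly, via epochSecond and nano. A minimal sketch; note this is the UTC/POSIX epoch time, since java.time does not track the leap-second offset a true TAI timestamp would need:

import java.time.Instant

fun main() {
    val now = Instant.now()
    // epochSecond: whole seconds since 1970-01-01T00:00:00Z
    // nano: the 0..999_999_999 nanosecond remainder within that second
    println("${now.epochSecond}:${now.nano.toString().padStart(9, '0')}")
}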
Q: Difficulty implementing a C++ array of type abstract base class while having multiple different derived classes in it

We are coding a board game for an OOP C++ class where different spaces need to have different types of member data, but we also want to be able to quickly access the data at any space. The design we have right now consists of a "board" array that is length 40 and of type "Space" pointer. Space is a fully abstract class with one virtual function in it, doSpaceAction(). The idea is that we will then create 7 different classes derived from this base class, each with the appropriate member data as well as its own unique implementation of doSpaceAction().

The issue comes when trying to access the member data of the derived class after the pointer to the instance of the derived class has been assigned to a space on the board. Below is some code that is analogous to what we are trying to do. We're all pretty new to OOP, so we're not sure whether this is just a bad design for what we are trying to accomplish or whether we just don't have the best footing when it comes to syntax. I know the compiler can tell which derived-class pointer is stored, because it can call the right version of doSpaceAction(), but I don't know what the proper syntax, if any, would be to access the member data.

#include <iostream>

class Space{
public:
    virtual void doSpaceAction()=0;
};

class spaceType1 : public Space{
public:
    int data1;
    int data2;

    spaceType1(int data1 = 0, int data2 = 0){
        this->data1 = data1;
        this->data2 = data2;
    }

    void doSpaceAction(){
        std::cout<<data1<<std::endl;
    }
};

class spaceType2 : public Space{
public:
    int data3;
    int data4;

    spaceType2(int data1 = 0, int data2 = 0){
        this->data3 = data1;
        this->data4 = data2;
    }

    void doSpaceAction(){
        std::cout<<data3<<std::endl;
    }
};

int main(){
    Space * Board[40];
    spaceType1 * temp1 = new spaceType1(100, 100);
    Board[0] = temp1;
    spaceType2 * temp2 = new spaceType2(200, 200);
    Board[1] = temp2;

    Board[0]->doSpaceAction(); // outputs 100 as expected
    Board[1]->doSpaceAction(); // outputs 200 as expected

    // These two lines do not work
    //Board[0]->data1
    //Board[1]->data3
}

A: If you want to access the data1 and data3 member variables of your spaceType1 and spaceType2 classes, respectively, you will need to cast the pointers stored in the Board array to the appropriate derived class. For example, you could use a dynamic cast to convert a Space* to a spaceType1* and access data1 like this:

spaceType1* st1 = dynamic_cast<spaceType1*>(Board[0]);
if (st1) {
    // The cast was successful, so we can access data1
    std::cout << st1->data1 << std::endl;
}

However, this approach is somewhat problematic because it relies on the programmer to manually check the type of each Space object before trying to access its member variables. This can be error-prone, and it makes it difficult to use the Board array in a type-safe way.

A better approach would be to use polymorphism to access the member variables of the derived classes. You could do this by adding a virtual member function to the Space class that returns the data you want to access, and then implementing this function in each of the derived classes. For example:

class Space {
public:
    virtual int getData() const = 0;
};

class spaceType1 : public Space {
public:
    int data1;
    int data2;

    spaceType1(int data1 = 0, int data2 = 0) {
        this->data1 = data1;
        this->data2 = data2;
    }

    void doSpaceAction() {
        std::cout << data1 << std::endl;
    }

    int getData() const override {
        return data1;
    }
};

class spaceType2 : public Space {
public:
    int data3;
    int data4;

    spaceType2(int data1 = 0, int data2 = 0) {
        this->data3 = data1;
        this->data4 = data2;
    }

    void doSpaceAction() {
        std::cout << data3 << std::endl;
    }

    int getData() const override {
        return data3;
    }
};

With this design, you can access the member variables of the derived classes by calling the getData() function on each Space object, like this:

int main() {
    Space* Board[40];
    spaceType1* temp1 = new spaceType1(100, 100);
    Board[0] = temp1;
    spaceType2* temp2 = new spaceType2(200, 200);
    Board[1] = temp2;

    Board[0]->doSpaceAction(); // outputs 100 as expected
    Board[1]->doSpaceAction(); // outputs 200 as expected

    // Now we can access data1 and data3 using the getData() function
    std::cout << Board[0]->getData() << std::endl; // outputs 100
    std::cout << Board[1]->getData() << std::endl; // outputs 200
}

This approach is more type-safe and easier to use than the previous one: it lets you read each space's data through the base-class interface without knowing the object's concrete type.
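One further improvement worth considering (my suggestion, not part of the original answer): every new here has no matching delete, so the spaces leak. Since C++11 you can let smart pointers own the board's spaces. A minimal sketch reusing the classes above, assuming Space is also given a virtual destructor (virtual ~Space() = default;) so that deleting a derived object through a base pointer is well-defined:

#include <array>
#include <iostream>
#include <memory>

int main() {
    // The board owns its spaces; memory is released automatically.
    std::array<std::unique_ptr<Space>, 40> board{};
    board[0] = std::make_unique<spaceType1>(100, 100);
    board[1] = std::make_unique<spaceType2>(200, 200);

    std::cout << board[0]->getData() << '\n'; // 100
    std::cout << board[1]->getData() << '\n'; // 200
}   // all spaces freed here, no delete needed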
Q: Could I loop through 3 arrays and join them to one list?

Could I loop through 3 arrays and join them into one list?

list1 = ['test1','test2','test3']
list2 = ['2022-12-12T16:44','2022-12-12T13:45','2022-12-12T22:57']
list3 = ['low','medium','high']

Can I get something like this?

result = [
    ['test1','2022-12-12T16:44','low'],
    ['test2','2022-12-12T13:45','medium'],
    ['test3','2022-12-12T22:57','high']
]

A: zip allows you to iterate simultaneously over several iterables (truncating to the length of the shortest iterable):

list4 = [[a, b, c] for a, b, c in zip(list1, list2, list3)]

# [['test1', '2022-12-12T16:44', 'low'],
#  ['test2', '2022-12-12T13:45', 'medium'],
#  ['test3', '2022-12-12T22:57', 'high']]
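If you want lists rather than tuples without spelling out the three loop variables, the same result can be had by converting each tuple that zip yields (a minor variation on the answer above):

list1 = ['test1', 'test2', 'test3']
list2 = ['2022-12-12T16:44', '2022-12-12T13:45', '2022-12-12T22:57']
list3 = ['low', 'medium', 'high']

# zip() yields tuples; wrap each in list() to get a list of lists.
result = [list(row) for row in zip(list1, list2, list3)]
print(result)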
Q: lxml: XPath works in Chrome but not in lxml

I'm trying to scrape information from this episode wiki page on Fandom, specifically the episode title in Japanese, 謀略Ⅳ:ドライバーを奪還せよ!:

Conspiracy IV: Recapture the Driver! (謀略Ⅳ:ドライバーを奪還せよ!, Bōryaku Fō: Doraibā o Dakkan seyo!)

I wrote this XPath, which selects the text in Chrome: //div[@class='mw-parser-output']/span/span[@class='t_nihongo_kanji']/text(), but it does not work in lxml when I do this:

import requests
from lxml import html

getPageContent = lambda url : html.fromstring(requests.get(url).content)

content = getPageContent("https://kamenrider.fandom.com/wiki/Conspiracy_IV:_Recapture_the_Driver!")

JapaneseTitle = content.xpath("//div[@class='mw-parser-output']/span/span[@class='t_nihongo_kanji']/text()")
print(JapaneseTitle)

I had already written these XPaths to scrape other parts of the page, which are working:

//h2[@data-source='name']/center/text(), the episode title in English.
//div[@data-source='airdate']/div/text(), the air date.
//div[@data-source='writer']/div/a, the episode writer a element.
//div[@data-source='director']/div/a, the episode director a element.
//p[preceding-sibling::h2[contains(span,'Synopsis')] and following-sibling::h2[contains(span,'Plot')]], all the p elements under the Synopsis section.

A: As with all questions of this sort, start by breaking down your XPath into smaller expressions.

Let's start with the first expression...

>>> content.xpath("//div[@class='mw-parser-output']")
[<Element div at 0x7fbf905d5400>]

Great, that works! But if we add the next component from your expression...

>>> content.xpath("//div[@class='mw-parser-output']/span")
[]

...we don't get any results. It looks like the <div> element matched by the first component of your expression doesn't have any immediate descendants that are <span> elements.

If we select the relevant element in Chrome, choose "inspect element" and then "copy full xpath", we get:

/html/body/div[4]/div[3]/div[2]/main/div[3]/div[2]/div/span/span[1]

And that looks like it should match. But if we match it (or at least a similar element) using lxml, we see a different path:

>>> res = content.xpath('//span[@class="t_nihongo_kanji"]')[0]
>>> tree = content.getroottree()
>>> tree.getpath(res)
'/html/body/div[4]/div[3]/div[2]/main/div[3]/div[2]/div/p[1]/span/span[1]'

The difference is here:

/html/body/div[4]/div[3]/div[2]/main/div[3]/div[2]/div/p[1] <-- extra <p> element

One solution is simply to ignore the difference in structure by sticking a // in the middle of the expression, so that we have something like:

>>> content.xpath("(//div[@class='mw-parser-output']//span[@class='t_nihongo_kanji'])[1]/text()")
['謀略Ⅳ:ドライバーを奪還せよ!']
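A related pitfall worth knowing (my note, not from the original answer): @class='...' in XPath is an exact string comparison, so it silently fails when an element carries several classes. The usual workaround is the contains/normalize-space idiom:

import requests
from lxml import html

content = html.fromstring(requests.get(
    "https://kamenrider.fandom.com/wiki/Conspiracy_IV:_Recapture_the_Driver!"
).content)

# Matches elements whose class *list* contains t_nihongo_kanji, even when
# other classes are present (e.g. class="t_nihongo_kanji foo").
title = content.xpath(
    "//span[contains(concat(' ', normalize-space(@class), ' '),"
    " ' t_nihongo_kanji ')]/text()"
)
print(title[:1])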
Q: JavaScript hover via addEventListener

I have one box (#fB) and one checkbox (#chck). I'm trying to put a hover effect on this box based on whether the checkbox is checked or unchecked. I've written an IF condition, but the hover is triggered in the FALSE case too. I've tried putting .pointerEvents = "none"; in the FALSE branch, but nothing happens. Any advice on where the problem is? Thank you very much.

document.querySelector("#chck").addEventListener("click", changer);

var check = document.querySelector("#chck");
var box = document.querySelector("#fB");

function changer(){
    if(check.checked){
        box.addEventListener("mouseover", function(){
            box.style.background = "green";
        });
        box.addEventListener("mouseout", function(){
            box.style.background = "purple";
        });
    }else{
        box.removeEventListener("mouseover", function(){
            box.style.background = "green";
        });
        box.removeEventListener("mouseout", function(){
            box.style.background = "purple";
        });
    }
};

A: It looks like your code is removing the event listeners from the box element when the checkbox is not checked. However, you are passing anonymous functions to the removeEventListener method, which won't work because those functions are not the same as the functions that were added as event listeners.

To fix this, you can store references to the event listener functions in variables, and then pass those variables to the removeEventListener method. Here is an example of how you might do this:

document.querySelector("#chck").addEventListener("click", changer);

var check = document.querySelector("#chck");
var box = document.querySelector("#fB");

// Event listener functions
var mouseOver = function() {
  box.style.background = "green";
};

var mouseOut = function() {
  box.style.background = "purple";
};

function changer() {
  if (check.checked) {
    // Add the event listeners
    box.addEventListener("mouseover", mouseOver);
    box.addEventListener("mouseout", mouseOut);
  } else {
    // Remove the event listeners
    box.removeEventListener("mouseover", mouseOver);
    box.removeEventListener("mouseout", mouseOut);
  }
}
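An alternative that avoids adding and removing listeners entirely (my suggestion, not part of the original answer) is to register the hover listeners once and gate their effect on the checkbox's current state:

var check = document.querySelector("#chck");
var box = document.querySelector("#fB");

// The listeners stay attached; the checkbox simply switches them on or off.
box.addEventListener("mouseover", function () {
  if (check.checked) box.style.background = "green";
});
box.addEventListener("mouseout", function () {
  if (check.checked) box.style.background = "purple";
});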
Q: App crashes because of PendingIntent when targeting Android 12

The app crashed because of the Nearby Messages API when targeting Android 12. Here is the crash log:

2021-10-07 18:59:44.916 10343-10384/com.example.nearbymessagescanner E/AndroidRuntime: FATAL EXCEPTION: GoogleApiHandler
    Process: com.example.nearbymessagescanner, PID: 10343
    java.lang.IllegalArgumentException: com.example.nearbymessagescanner: Targeting S+ (version 31 and above) requires that one of FLAG_IMMUTABLE or FLAG_MUTABLE be specified when creating a PendingIntent.
    Strongly consider using FLAG_IMMUTABLE, only use FLAG_MUTABLE if some functionality depends on the PendingIntent being mutable, e.g. if it needs to be used with inline replies or bubbles.
        at android.app.PendingIntent.checkFlags(PendingIntent.java:375)
        at android.app.PendingIntent.getActivityAsUser(PendingIntent.java:458)
        at android.app.PendingIntent.getActivity(PendingIntent.java:444)
        at android.app.PendingIntent.getActivity(PendingIntent.java:408)
        at com.google.android.gms.common.api.GoogleApiActivity.zaa(com.google.android.gms:play-services-base@@17.5.0:4)
        at com.google.android.gms.common.GoogleApiAvailability.zaa(com.google.android.gms:play-services-base@@17.5.0:116)
        at com.google.android.gms.common.api.internal.GoogleApiManager.zaa(com.google.android.gms:play-services-base@@17.5.0:252)
        at com.google.android.gms.common.api.internal.GoogleApiManager$zaa.zaa(com.google.android.gms:play-services-base@@17.5.0:109)
        at com.google.android.gms.common.api.internal.GoogleApiManager$zaa.onConnectionFailed(com.google.android.gms:play-services-base@@17.5.0:75)
        at com.google.android.gms.common.internal.zai.onConnectionFailed(com.google.android.gms:play-services-base@@17.5.0:2)
        at com.google.android.gms.common.internal.BaseGmsClient$zzf.zza(com.google.android.gms:play-services-basement@@17.5.0:6)
        at com.google.android.gms.common.internal.BaseGmsClient$zza.zza(com.google.android.gms:play-services-basement@@17.5.0:21)
        at com.google.android.gms.common.internal.BaseGmsClient$zzc.zzc(com.google.android.gms:play-services-basement@@17.5.0:11)
        at com.google.android.gms.common.internal.BaseGmsClient$zzb.handleMessage(com.google.android.gms:play-services-basement@@17.5.0:49)
        at android.os.Handler.dispatchMessage(Handler.java:106)
        at android.os.Looper.loopOnce(Looper.java:201)
        at android.os.Looper.loop(Looper.java:288)
        at android.os.HandlerThread.run(HandlerThread.java:67)

This exception happens even though I added the flag PendingIntent.FLAG_MUTABLE or PendingIntent.FLAG_IMMUTABLE to the pendingIntent:

private fun backgroundSubscribe() {
    Log.d(TAG, "Subscribing for background updates.")
    val options = SubscribeOptions.Builder().setStrategy(Strategy.BLE_ONLY).build()
    messagesClient.subscribe(pendingIntent, options)
}

private val pendingIntent: PendingIntent
    get() = PendingIntent.getBroadcast(
        this,
        0,
        Intent(this, BeaconMessageReceiver::class.java),
        PendingIntent.FLAG_MUTABLE
    )

This is a sample app that can reproduce the issue by clicking the subscribe button in the app. I am using version 18.0.0 of play-services-nearby:

implementation 'com.google.android.gms:play-services-nearby:18.0.0'

A: It sounds strange, but the fix is adding the Work Manager dependency, version 2.7.0+:

implementation "androidx.work:work-runtime:2.7.0"

You have to update dependencies so that they support the Android 12 breaking changes (I had to update some third parties). Check that on their GitHub and documentation pages.

Also, some libraries use the permission <uses-permission android:name="com.google.android.gms.permission.AD_ID"/>, which is required for Android 12. Please check the documentation for this permission.

Also, check Google's issue tracker for library-specific issues related to Android 12.

Maybe I missed something, but all this helped me to migrate. Good luck :)

A: I was able to resolve this issue for a Xamarin project by adding the Xamarin.AndroidX.Work.Runtime NuGet package.

A: Installing Xamarin.AndroidX.Work.Runtime 2.7.0 resolved the issue for me. Please make sure you have the right version so you don't run into some of the problems I did. Happy coding!

A: Updating the dependency alone should fix the issue. 18.1.0+ seems to fix this (looking at the decompiled code) and 17.0.0 does not. (Not sure about the versions in between.)

implementation("com.google.android.gms:play-services-base:18.1.0")
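Note that the crash above is raised from inside Play Services itself (see GoogleApiActivity in the trace), so updating the dependencies as described is the real fix there. For PendingIntents you create in your own code, a common defensive pattern (a general sketch of mine, not specific to this library) is to add FLAG_IMMUTABLE only on API levels that have it:

import android.app.PendingIntent
import android.content.Context
import android.content.Intent
import android.os.Build

// On API 31+ one of FLAG_IMMUTABLE / FLAG_MUTABLE is mandatory;
// FLAG_IMMUTABLE itself only exists on API 23+.
fun immutableBroadcastPendingIntent(context: Context, intent: Intent): PendingIntent {
    val flags = if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
        PendingIntent.FLAG_UPDATE_CURRENT or PendingIntent.FLAG_IMMUTABLE
    } else {
        PendingIntent.FLAG_UPDATE_CURRENT
    }
    return PendingIntent.getBroadcast(context, 0, intent, flags)
}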
Q: How can I get Gmail label names into a list? In Google Sheets or otherwise

In Google Apps Script I'm trying to take the Gmail labels and print them in a clean list or line where I can suffix every line, to easily make filters in bulk. Like this:

labelname1, labelname2, labelname3, etc.

Any help is greatly appreciated. This is as close as I've gotten. I'm mainly trying to get it to print to Google Sheets, but anywhere works, as long as it's in the correct format.

// Logs all of the names of your labels
function retrieveLabelNames() {
  var labels = GmailApp.getUserLabels();
  for (var i = 0; i < labels.length; i++) {
    Logger.log(labels[i].getName());
  }
}

A: You can use the Array.join() method to combine the names of the labels into a single string, separated by a comma and a space:

function retrieveLabelNames() {
  var labels = GmailApp.getUserLabels();
  var labelNames = labels.map(function(label) {
    return label.getName();
  });
  var labelString = labelNames.join(", ");
  Logger.log(labelString);
}

You can then write this string to a cell in a Google Sheet using the SpreadsheetApp.getActiveSheet() and Range.setValue() methods:

function retrieveLabelNames() {
  var labels = GmailApp.getUserLabels();
  var labelNames = labels.map(function(label) {
    return label.getName();
  });
  var labelString = labelNames.join(", ");
  var sheet = SpreadsheetApp.getActiveSheet();
  var cell = sheet.getRange("A1");
  cell.setValue(labelString);
}
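Since the goal is to suffix every line to build filters in bulk, it may be handier to write one label per row instead of a single comma-separated cell. A small variation (my sketch, using the same Apps Script services) with Range.setValues():

function writeLabelNamesToRows() {
  var labelNames = GmailApp.getUserLabels().map(function(label) {
    return label.getName();
  });
  var sheet = SpreadsheetApp.getActiveSheet();
  // setValues() expects a 2-D array: one inner array per row.
  // (Assumes at least one label exists.)
  var rows = labelNames.map(function(name) {
    return [name];
  });
  sheet.getRange(1, 1, rows.length, 1).setValues(rows);
}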
Q: Sum similar keys in an array of objects

I have an array of objects like the following:

[
  { 'name': 'P1', 'value': 150 },
  { 'name': 'P1', 'value': 150 },
  { 'name': 'P2', 'value': 200 },
  { 'name': 'P3', 'value': 450 }
]

I need to add up all the values for objects with the same name. (Probably also other mathematical operations, like calculating the average.) For the example above the result would be:

[
  { 'name': 'P1', 'value': 300 },
  { 'name': 'P2', 'value': 200 },
  { 'name': 'P3', 'value': 450 }
]

A: First iterate through the array and push the 'name' into another object's property. If the property exists, add the 'value' to the value of the property; otherwise initialize the property to the 'value'. Once you build this object, iterate through the properties and push them to another array.

Here is some code:

var obj = [
    { 'name': 'P1', 'value': 150 },
    { 'name': 'P1', 'value': 150 },
    { 'name': 'P2', 'value': 200 },
    { 'name': 'P3', 'value': 450 }
];

var holder = {};

obj.forEach(function(d) {
    if (holder.hasOwnProperty(d.name)) {
        holder[d.name] = holder[d.name] + d.value;
    } else {
        holder[d.name] = d.value;
    }
});

var obj2 = [];

for (var prop in holder) {
    obj2.push({ name: prop, value: holder[prop] });
}

console.log(obj2);

Hope this helps.

A: An ES6 approach to group by name:

You can convert your array of objects to a Map by using .reduce(). The Map has key-value pairs, where each key is the name and each value is the accumulated sum of values for that particular name key. You can then easily convert the Map back into an array using Array.from(), where you provide a mapping function that takes the keys/values of the Map and converts them into objects:

const arr = [ { 'name': 'P1', 'value': 150 }, { 'name': 'P1', 'value': 150 }, { 'name': 'P2', 'value': 200 }, { 'name': 'P3', 'value': 450 } ];

const res = Array.from(arr.reduce(
    (m, {name, value}) => m.set(name, (m.get(name) || 0) + value), new Map
), ([name, value]) => ({name, value}));
console.log(res);

The above is quite compact and not necessarily the easiest to read. I would suggest putting it into a function so it's clearer what it's doing. If you're after more self-documenting code, using for...of can make the above easier to understand. The below function is also useful if you want to group or sum on keys with spaces in them:

const arr = [ { 'name': 'P1', 'value': 150 }, { 'name': 'P1', 'value': 150 }, { 'name': 'P2', 'value': 200 }, { 'name': 'P3', 'value': 450 } ];

const sumByKey = (arr, key, value) => {
    const map = new Map();
    for(const obj of arr) {
        const currSum = map.get(obj[key]) || 0;
        map.set(obj[key], currSum + obj[value]);
    }
    const res = Array.from(map, ([k, v]) => ({[key]: k, [value]: v}));
    return res;
}

console.log(sumByKey(arr, 'name', 'value')); // 'name' = value to group by, 'value' = value to sum

Grouping by more than just name:

Here's an approach that should work if you have other overlapping properties besides name (the keys/values need to be the same in both objects for them to "group"). It involves iterating through your array and reducing it to a Map which holds key-value pairs. Each key of the new Map is a string of all the property values you want to group by; if the object's key already exists, you know it is a duplicate, which means you can add the object's current value to the stored object. Finally, you can use Array.from() to transform your Map of key-value pairs into an array of just values:

const arr = [{'name':'P1','value':150},{'name':'P1','value':150},{'name':'P2','value':200},{'name':'P3','value':450}];

const res = Array.from(arr.reduce((acc, {value, ...r}) => {
    const key = JSON.stringify(r);
    const current = acc.get(key) || {...r, value: 0};
    return acc.set(key, {...current, value: current.value + value});
}, new Map).values());
console.log(res);

A: You can use Array.reduce() to accumulate results during each iteration.

var arr = [{'name':'P1','value':150},{'name':'P1','value':150},{'name':'P2','value':200},{'name':'P3','value':450}];

var result = arr.reduce(function(acc, val){
    var o = acc.filter(function(obj){
        return obj.name==val.name;
    }).pop() || {name:val.name, value:0};

    o.value += val.value;
    acc.push(o);
    return acc;
},[]);

console.log(result);

A: I see these complicated reduces with Array.from, Map and Set; this is far simpler:

const arr = [
    { 'name': 'P1', 'value': 150 },
    { 'name': 'P1', 'value': 150 },
    { 'name': 'P2', 'value': 200 },
    { 'name': 'P3', 'value': 450 }
]

const summed = arr.reduce((acc, cur, i) => {
    const item = i > 0 && acc.find(({name}) => name === cur.name)
    if (item) item.value += cur.value;
    else acc.push({ name: cur.name, value: cur.value }); // don't push cur here
    return acc;
}, [])
console.log(arr); // not modified
console.log(summed);

A: For some reason when I ran the code by @Vignesh Raja, I obtained the "sum" but also duplicated items, so I had to remove the duplicates as described below.

Original array:

arr = [{name: "LINCE-01", y: 70},
       {name: "LINCE-01", y: 155},
       {name: "LINCE-01", y: 210},
       {name: "MIRAFLORES-03", y: 232},
       {name: "MIRAFLORES-03", y: 267}]

Using @VigneshRaja's code:

var result = arr.reduce(function(acc, val){
    var o = acc.filter(function(obj){
        return obj.name==val.name;
    }).pop() || {name:val.name, y:0};

    o.y += val.y;
    acc.push(o);
    return acc;
},[]);

console.log(result);

First outcome:

result: [{name: "LINCE-01", y: 435},
         {name: "LINCE-01", y: 435},
         {name: "LINCE-01", y: 435},
         {name: "MIRAFLORES-03", y: 499},
         {name: "MIRAFLORES-03", y: 499}]

Removing the duplicates:

var finalresult = result.filter(function(itm, i, a) {
    return i == a.indexOf(itm);
});
console.log(finalresult);

Finally, I obtained what I wished:

finalresult = [{name: "LINCE-01", y: 435},
               {name: "MIRAFLORES-03", y: 657}]

A: One more solution which is clean, I guess:

var obj = [
    { 'name': 'P1', 'value': 150 },
    { 'name': 'P1', 'value': 150 },
    { 'name': 'P2', 'value': 200 },
    { 'name': 'P3', 'value': 450 }
];

var result = [];
Array.from(new Set(obj.map(x => x.name))).forEach(x => {
    result.push(obj.filter(y => y.name === x).reduce((output, item) => {
        let val = output[x] === undefined ? 0 : output[x];
        output[x] = item.value + val;
        return output;
    }, {}));
})

console.log(result);

If you need to keep the object structure the same, then:

var obj = [
    { 'name': 'P1', 'value': 150 },
    { 'name': 'P1', 'value': 150 },
    { 'name': 'P2', 'value': 200 },
    { 'name': 'P2', 'value': 1000 },
    { 'name': 'P3', 'value': 450 }
];

let output = [];
let uniqueName = Array.from(new Set(obj.map(x => x.name)));
uniqueName.forEach(n => {
    output.push(obj.filter(x => x.name === n).reduce((a, item) => {
        let val = a['name'] === undefined ? item.value : a['value'] + item.value;
        return {name: n, value: val};
    }, {}));
});

console.log(output);

A: I have customized Mr Nick Parsons' answer (thanks for the idea), in case you need to sum multiple key values:

var arr = [{'name':'P1','value':150,'apple':10},{'name':'P1','value':150,'apple':20},{'name':'P2','value':200,'apple':30},{'name':'P3','value':450,'apple':40}];

var res = Object.values(arr.reduce((acc, {value, apple, ...r}) => {
    var key = Object.entries(r).join('-');
    acc[key] = (acc[key] || {...r, apple: 0, value: 0});
    return (acc[key].apple += apple, acc[key].value += value, acc);
}, {}));

console.log(res);

A: (function () {
    var arr = [
        {'name': 'P1', 'age': 150},
        {'name': 'P1', 'age': 150},
        {'name': 'P2', 'age': 200},
        {'name': 'P3', 'age': 450}
    ];
    var resMap = new Map();
    var result = [];
    arr.map((x) => {
        if (!resMap.has(x.name))
            resMap.set(x.name, x.age);
        else
            resMap.set(x.name, (x.age + resMap.get(x.name)));
    })
    resMap.forEach((value, key) => {
        result.push({
            name: key,
            age: value
        })
    })
    console.log(result);
})();

A: let ary = [
    { 'key1': 'P1', 'key2': 150 },
    { 'key1': 'P1', 'key2': 150 },
    { 'key1': 'P2', 'key2': 200 },
    { 'key1': 'P3', 'key2': 450 }
]

// result array
let newAry = []
for (let index = 0; index < ary.length; index++) {
    for (let index2 = index + 1; index2 < ary.length; index2++) {
        if (ary[index2].key1 == ary[index].key1) {
            console.log('match');
            ary[index].key2 += ary[index2].key2;
            newAry = ary.filter(function (e) {
                return e !== ary[index2];
            });
        }
    }
}
console.log(newAry)

A: Here is a more generic version for this question:

/**
 * @param {(item: T) => string} keyFn
 * @param {(itemA: T, itemB: T) => T} mergeFn
 * @param {T[]} list
 */
function compress(keyFn, mergeFn, list) {
    return Array.from(
        list
            .reduce((map, item) => {
                const key = keyFn(item);

                return map.has(key) // if key already existed
                    ? map.set(key, mergeFn(map.get(key), item)) // merge two items together
                    : map.set(key, item); // save item in map
            }, new Map())
            .values()
    );
}

const testcase = [
    { name: "P1", value: 150 },
    { name: "P1", value: 150 },
    { name: "P2", value: 200 },
    { name: "P3", value: 450 },
];

console.log(
    compress(
        /* user defines which is the unique key */
        ({ name }) => name,
        /* how to merge two items together */
        (a, b) => ({ name: a.name, value: a.value + b.value }),
        /* array */
        testcase
    )
)

A: The method that Vignesh Raja posted will let you sum various values in an array of objects by key or by another property; that approach works well here.

A: I like this approach for readability:

const original = [
    { name: 'P1', value: 150 },
    { name: 'P1', value: 150 },
    { name: 'P2', value: 200 },
    { name: 'P3', value: 450 },
];

const aggregate = {};
original.forEach((item) => {
    if (aggregate[item.name]) {
        aggregate[item.name].value += item.value;
    } else {
        aggregate[item.name] = item;
    }
});

const summary = Object.values(aggregate);

console.log(original)
console.log(aggregate)
console.log(summary);

A: My data was not as organized as the example data. I faced a few issues, such as wanting to total the number of unique strings in a group, and I have extra data that I don't care to tally. I also found it a bit hard to substitute real-world values for some of the answers' demo data, depending on code readability.

I decided I like @NickParsons' answer best --> https://stackoverflow.com/a/57477448/5079799 and decided to add to it. I wasn't sure if this should be a new question like "Sum similar specific keys and/or sum strings in arrays", but this post seemed close enough that I felt an answer here was best.

I decided that using arrays for strings was best, and I made it an option for my function to de-duplicate the array if desired. But now you can use arr.length for the number of hits and, as in my case, I will also have further uses for that array as a collection based on the group. I will likely be adding extra iterations over my array of objects, but I like having this master function and then making smaller logical groups of objects. I have a lot of cross-referencing/tallying, and when I tried making one large iteration where I did all the logic, it became a total mess quickly.

Here is some code:

var arr = [
    { 'group': 'A', 'batch': 'FOO_A', 'name': 'Unique_ID_1', 'value': 150 },
    { 'group': 'A', 'batch': 'FOO_A', 'name': 'Unique_ID_11', 'value': 150 },
    { 'group': 'A', 'batch': 'FOO_B', 'name': 'Unique_ID_2', 'value': 150 },
    { 'group': 'A', 'batch': 'FOO_B', 'name': 'Unique_ID_22', 'value': 150 },
    { 'group': 'B', 'batch': 'BAR_A', 'name': 'Unique_ID_A1', 'value': 150 },
    { 'group': 'B', 'batch': 'BAR_A', 'name': 'Unique_ID_A11', 'value': 150 },
    { 'group': 'B', 'batch': 'BAR_B', 'name': 'Unique_ID_B2', 'value': 150 },
    { 'group': 'B', 'batch': 'BAR_B', 'name': 'Unique_ID_B22', 'value': 150.016 },
]

const sumByKey = (arr, key, value, DeDup_Str_Arr, TO_Fixed) => {
    var Is_Int = false
    if (!isNaN(TO_Fixed) && (function (x) { return (x | 0) === x; })(parseFloat(TO_Fixed))) {
        Is_Int = true
    }
    const map = new Map();
    for (const obj of arr) {
        const currSum = map.get(obj[key]) || 0;
        var val = obj[value]
        if (typeof val === 'string' || val instanceof String) {
            val = val + ","
        }
        map.set(obj[key], currSum + val);
    }
    if (Is_Int == true) {
        var res = Array.from(map, ([k, v]) => ({ [key]: k, [value]: Number(Number(v).toFixed(2)) }));
    } else {
        var res = Array.from(map, ([k, v]) => ({ [key]: k, [value]: v }));
    }
    var val = res[0][value]
    if (typeof val === 'string' || val instanceof String) {
        for (var ai = 0; ai < res.length; ++ai) {
            var obj = res[ai]
            var val = obj[value]
            var vals_arr = val.split(",")
            vals_arr[0] = vals_arr[0].substring(1) // Removing leading 0
            vals_arr.pop() // trailing ","
            if (DeDup_Str_Arr == true) {
                let unique = [];
                for (let element of vals_arr) {
                    if (Array.isArray(element) == true) {
                        for (let elm of element) {
                            if (!unique.includes(elm)) {
                                unique.push(elm)
                            }
                        }
                    } else {
                        if (!unique.includes(element)) {
                            unique.push(element)
                        }
                    }
                }
                obj[value] = unique
            } else {
                obj[value] = vals_arr
            }
        }
    }
    return res;
}

console.log(sumByKey(arr, 'batch', 'value'))
console.log(sumByKey(arr, 'batch', 'value', null, 2))
console.log(sumByKey(arr, 'batch', 'value', null, "2"))
console.log(sumByKey(arr, 'group', 'batch', null))
console.log(sumByKey(arr, 'group', 'batch', true))
console.log(sumByKey(arr, 'group', 'batch', true, "2"))
console.log(sumByKey(arr, 'group', 'batch', true, 2))
console.log(sumByKey(arr, 'group', 'value', true, 2))
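For completeness, recent JavaScript engines (Node 21+, current evergreen browsers) ship a built-in grouping helper that shortens all of the above considerably. A sketch of mine using Object.groupBy; check runtime support before relying on it:

const arr = [
    { name: 'P1', value: 150 },
    { name: 'P1', value: 150 },
    { name: 'P2', value: 200 },
    { name: 'P3', value: 450 },
];

// Object.groupBy(arr, fn) -> { P1: [...], P2: [...], P3: [...] };
// then sum each group's values.
const result = Object.entries(Object.groupBy(arr, ({ name }) => name))
    .map(([name, items]) => ({
        name,
        value: items.reduce((sum, { value }) => sum + value, 0),
    }));

console.log(result); // [{name: 'P1', value: 300}, {name: 'P2', value: 200}, {name: 'P3', value: 450}]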
I faced a few issues such as I wanted to total the number of unique strings in a group and I have extra data that I don't care to tally.\nI also found it a bit hard to subsitute some of the answers demo data obj values for real world values depending on code readability. I deceded i like @NickParsons answer best --> https://stackoverflow.com/a/57477448/5079799 and decided to add to it. Wasn't sure if this should be a new question like \"Sum similar specific keys and/or sum strings in arrays\" but this post seemed to be close enough that I felt an answer here was best.\nI decided that using Arrays for string was best, and I made it an option for my function to DeDup array if desired. But now you can use arr.length for number of hits and as in my case, I will then also have further uses for that array as a collection based on the group.\nI will likely be adding extra iterations over my Arr_Of_Objs but I like having this master function and then making smaller logical groups of Objs. I have a lot of cross-referencing/tallying and I tried making one large iteration where I did all the logic and it became a total mess quickly.\nHere is some code:\n\n\nvar arr = [\n{\n 'group': 'A',\n \"batch\": \"FOO_A\",\n \"name\": \"Unique_ID_1\",\n 'value': 150,\n},\n{\n 'group': 'A',\n \"batch\": \"FOO_A\",\n \"name\": \"Unique_ID_11\",\n 'value': 150,\n},\n{\n 'group': 'A',\n \"batch\": \"FOO_B\",\n \"name\": \"Unique_ID_2\",\n 'value': 150,\n},\n{\n 'group': 'A',\n \"batch\": \"FOO_B\",\n \"name\": \"Unique_ID_22\",\n 'value': 150,\n},\n{\n 'group': 'B',\n \"batch\": \"BAR_A\",\n \"name\": \"Unique_ID_A1\",\n 'value': 150,\n},\n{\n 'group': 'B',\n \"batch\": \"BAR_A\",\n \"name\": \"Unique_ID_A11\",\n 'value': 150,\n},\n{\n 'group': 'B',\n \"batch\": \"BAR_B\",\n \"name\": \"Unique_ID_B2\",\n 'value': 150,\n},\n{\n 'group': 'B',\n \"batch\": \"BAR_B\",\n \"name\": \"Unique_ID_B22\",\n 'value': 150.016,\n},\n]\n\nconst sumByKey = (arr, key, value, DeDup_Str_Arr, TO_Fixed) => {\nvar Is_Int = false\nif (!isNaN(TO_Fixed) && (function (x) { return (x | 0) === x; })(parseFloat(TO_Fixed))) {\n Is_Int = true\n}\nconst map = new Map();\nfor (const obj of arr) {\n const currSum = map.get(obj[key]) || 0;\n var val = obj[value]\n if (typeof val === 'string' || val instanceof String) {\n val = val + \",\"\n }\n map.set(obj[key], currSum + val);\n}\nif (Is_Int == true) {\n var res = Array.from(map, ([k, v]) => ({ [key]: k, [value]: Number(Number(v).toFixed(2)) })); // \n} else {\n var res = Array.from(map, ([k, v]) => ({ [key]: k, [value]: v }));\n}\nvar val = res[0][value]\nif (typeof val === 'string' || val instanceof String) {\n for (var ai = 0; ai < res.length; ++ai) {\n var obj = res[ai]\n var val = obj[value]\n var vals_arr = val.split(\",\")\n vals_arr[0] = vals_arr[0].substring(1) //Removing leading 0\n vals_arr.pop() //trailing \",\"\n if (DeDup_Str_Arr == true) {\n let unique = [];\n for (let element of vals_arr) {\n if (Array.isArray(element) == true) {\n for (let elm of element) { if (!unique.includes(elm)) { unique.push(elm) } }\n } else {\n if (!unique.includes(element)) { unique.push(element) }\n }\n }\n obj[value] = unique\n } else {\n obj[value] = vals_arr\n }\n }\n}\nreturn res;\n}\n\nconsole.log(sumByKey(arr, 'batch', 'value'))\nconsole.log(sumByKey(arr, 'batch', 'value', null, 2))\nconsole.log(sumByKey(arr, 'batch', 'value', null, \"2\"))\nconsole.log(sumByKey(arr, 'group', 'batch', null))\nconsole.log(sumByKey(arr, 'group', 'batch', true))\nconsole.log(sumByKey(arr, 'group', 'batch', true, 
\"2\"))\nconsole.log(sumByKey(arr, 'group', 'batch', true, 2))\nconsole.log(sumByKey(arr, 'group', 'value', true, 2))\n\n\n\n" ]
[ 40, 29, 6, 2, 1, 1, 1, 0, 0, 0, 0, 0, 0 ]
[ "let arr = [\n {'name':'P1','value':150,'apple':10},\n {'name':'P1','value':150,'apple':20},\n {'name':'P2','value':200,'apple':30},\n {'name':'P2','value':600,'apple':30},\n {'name':'P3','value':450,'apple':40}\n];\n\nlet obj = {}\n\narr.forEach((item)=>{\n if(obj[item.name]){\n obj[item.name].value = obj[item.name].value + item.value\n }else{\n obj[item.name] = item\n }\n})\n\nlet valuesArr = Object.values(obj)\nconsole.log(valuesArr);\n\nOutput\n\n[\n{ name: \"P1\", value: 300, apple: 10 },\n{ name: \"P2\", value: 800, apple: 30 },\n{ name: \"P3\", value: 450, apple: 40 } ]\n\n", "\n\nlet data= [\n {\n 'key1': 'P1',\n 'key2': 150\n },\n {\n 'key1': 'P1',\n 'key2': 150\n },\n {\n 'key1': 'P2',\n 'key2': 200\n },\n {\n 'key1': 'P3',\n 'key2': 450\n }\n ]\n \n var holder = []\n data.forEach( index => {\n const data = holder.find( i => i.key1=== index.key1)\n if(!data){\n holder.push({key1:index.key1,key2:index.key2})\n }else{\n data.key2 = parseInt(data.key2) + parseInt(index.key2)\n }\n });\n \n console.log(holder);\n\n\n\n" ]
[ -1, -2 ]
[ "arrays", "javascript", "object" ]
stackoverflow_0024444738_arrays_javascript_object.txt
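All of the answers above implement the same underlying pattern: accumulate sums into a key-indexed structure, then convert it back into a list of objects. For readers coming from Python, here is a minimal sketch of that same pattern using collections.defaultdict; the function and variable names are illustrative, not taken from any answer above.

from collections import defaultdict

def sum_by_key(items, key, value):
    # Group dicts by `key` and sum their `value` fields.
    totals = defaultdict(int)  # missing keys start at 0
    for item in items:
        totals[item[key]] += item[value]
    # Rebuild the original list-of-dicts shape (dicts preserve insertion order).
    return [{key: k, value: v} for k, v in totals.items()]

data = [
    {'name': 'P1', 'value': 150},
    {'name': 'P1', 'value': 150},
    {'name': 'P2', 'value': 200},
    {'name': 'P3', 'value': 450},
]
print(sum_by_key(data, 'name', 'value'))
# [{'name': 'P1', 'value': 300}, {'name': 'P2', 'value': 200}, {'name': 'P3', 'value': 450}]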
Q: Trying to overflow HTML tags instead of wrapping is breaking the flex spacing I'm applying CSS to the pre code selector in order to make styled code blocks,like you'd see on GitHub or elsewhere. I'm using flexbox for the layout, and I have two "panel" divs side-by-side inside of a "box" div, one of which has a code block (Which is just code inside of <pre><code> tags), and the "box" div is inside of a main "container" div. The basic CSS I have is... .*, *:before, *:after { margin: 0; padding: 0; box-sizing: inherit; } html { box-sizing: border-box; } body { display: flex; align-items: center; justify-content: center; } pre code { display: inline-block; overflow: auto; white-space: pre; padding: 1rem; margin: 1rem; } .container { display: flex; flex-direction: column; margin: 1rem; gap: 1rem; } .box { display: flex; flex-direction: row; gap: 1rem; } .panel { display: flex; flex-direction: column; flex: 0.5; padding: 1rem; } The two panels should be equal width, due to flex: 0.5, however the right panel expands to fit the block, rather than the block shrinking to fit the panel. If I set white-space: pre-wrap on pre code, I get the desired layout behavior, but then of course the code is word-wrapped, which I don't want. And of course, if I use white-space: pre and add a dedicated width to the pre code, I get the desired behavior, where the code block has a horizontal scrollbar. I can't use a dedicated width, because I need the block to fit any panel it's inside of. Setting width: 100% on pre code does nothing at all, for some reason. Just to make sure I wasn't causing this error myself by doing something somewhere else, I put together this snippet to confirm my issue. https://codepen.io/evprkr/pen/poKQXJr A: It looks like the issue you're experiencing is caused by the fact that the pre and code elements are inline-block elements, which means that they will not take up the full width of their parent container. Instead, they will only be as wide as their content. One solution to this problem would be to change the display property of the pre and code elements to block, which will cause them to take up the full width of their parent container. Here is an updated version of your CSS that includes this change: * { margin: 0; padding: 0; box-sizing: inherit; } html { box-sizing: border-box; } body { display: flex; align-items: center; justify-content: center; } pre, code { display: block; overflow: auto; white-space: pre; padding: 1rem; margin: 1rem; } .container { display: flex; flex-direction: column; margin: 1rem; gap: 1rem; } .box { display: flex; flex-direction: row; gap: 1rem; } .panel { display: flex; flex-direction: column; flex: 0.5; padding: 1rem; } This should cause the code block to take up the full width of the .panel element, and the scrollbar should appear when the code block overflows. I hope this helps! Let me know if you have any other questions.
Trying to overflow HTML tags instead of wrapping is breaking the flex spacing
I'm applying CSS to the pre code selector in order to make styled code blocks,like you'd see on GitHub or elsewhere. I'm using flexbox for the layout, and I have two "panel" divs side-by-side inside of a "box" div, one of which has a code block (Which is just code inside of <pre><code> tags), and the "box" div is inside of a main "container" div. The basic CSS I have is... .*, *:before, *:after { margin: 0; padding: 0; box-sizing: inherit; } html { box-sizing: border-box; } body { display: flex; align-items: center; justify-content: center; } pre code { display: inline-block; overflow: auto; white-space: pre; padding: 1rem; margin: 1rem; } .container { display: flex; flex-direction: column; margin: 1rem; gap: 1rem; } .box { display: flex; flex-direction: row; gap: 1rem; } .panel { display: flex; flex-direction: column; flex: 0.5; padding: 1rem; } The two panels should be equal width, due to flex: 0.5, however the right panel expands to fit the block, rather than the block shrinking to fit the panel. If I set white-space: pre-wrap on pre code, I get the desired layout behavior, but then of course the code is word-wrapped, which I don't want. And of course, if I use white-space: pre and add a dedicated width to the pre code, I get the desired behavior, where the code block has a horizontal scrollbar. I can't use a dedicated width, because I need the block to fit any panel it's inside of. Setting width: 100% on pre code does nothing at all, for some reason. Just to make sure I wasn't causing this error myself by doing something somewhere else, I put together this snippet to confirm my issue. https://codepen.io/evprkr/pen/poKQXJr
[ "It looks like the issue you're experiencing is caused by the fact that the pre and code elements are inline-block elements, which means that they will not take up the full width of their parent container. Instead, they will only be as wide as their content.\nOne solution to this problem would be to change the display property of the pre and code elements to block, which will cause them to take up the full width of their parent container.\nHere is an updated version of your CSS that includes this change:\n* {\n margin: 0;\n padding: 0;\n box-sizing: inherit;\n}\n\nhtml {\n box-sizing: border-box;\n}\n\nbody {\n display: flex;\n align-items: center;\n justify-content: center;\n}\n\npre, code {\n display: block;\n overflow: auto;\n white-space: pre;\n padding: 1rem;\n margin: 1rem;\n}\n\n.container {\n display: flex;\n flex-direction: column;\n margin: 1rem;\n gap: 1rem;\n}\n\n.box {\n display: flex;\n flex-direction: row;\n gap: 1rem;\n}\n\n.panel {\n display: flex;\n flex-direction: column;\n flex: 0.5;\n padding: 1rem;\n}\n\nThis should cause the code block to take up the full width of the .panel element, and the scrollbar should appear when the code block overflows.\nI hope this helps! Let me know if you have any other questions.\n" ]
[ 0 ]
[]
[]
[ "css", "flexbox", "html" ]
stackoverflow_0074681388_css_flexbox_html.txt
Q: How to plot multiple data frames on a single violin plot next to each other? I have two data frames, and the shapes of the two data frames are not the same. I want to plot the two data frame values of the violin plots next to each other instead of overlapping. import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns data1 = { 'DT' : np.random.normal(-1, 1, 100), 'RF' : np.random.normal(-1, 1, 110), 'KNN' : np.random.normal(-1, 1, 120) } maxsize = max([a.size for a in data1.values()]) data_pad1 = {k:np.pad(v, pad_width=(0,maxsize-v.size,), mode='constant', constant_values=np.nan) for k,v in data1.items()} df1 = pd.DataFrame(data_pad1) # data frame data2 = { 'DT' : np.random.normal(-1, 1, 50), 'RF' : np.random.normal(-1, 1, 60), 'KNN' : np.random.normal(-1, 1, 80) } maxsize = max([a.size for a in data2.values()]) data_pad2 = {k:np.pad(v, pad_width=(0,maxsize-v.size,), mode='constant', constant_values=np.nan) for k,v in data2.items()} df2 = pd.DataFrame(data_pad2) # dataframe2 #plotting fig, ax = plt.subplots(figsize=(15, 6)) ax = sns.violinplot(data=df1, color="blue") ax = sns.violinplot(data=df2, color="red") plt.show() Here is my output image. But I want to get each blue and red violin plot next to each other instead of overlapping. A: I suggest relabeling the columns in each dataframe to reflect the dataframe number, e.g.: data2 = { 'DT2' : np.random.normal(-1, 1, 50), 'RF2' : np.random.normal(-1, 1, 60), 'KNN2' : np.random.normal(-1, 1, 80) } You may then: concatenate both dataframes: df = pd.concat([df1, df2], axis=1) define your own palette: my_palette = {"DT1": "blue", "DT2": "red", "KNN1": "blue", "KNN2": "red", "RF1": "blue", "RF2": "red"} and then force the plotting order using the order parameter: sns.violinplot(data=df, order = ['DT1', 'DT2', 'KNN1', 'KNN2', 'RF1', 'RF2'], palette=my_palette) This yields the following result: EDIT: You may manually set the labels to replace each label pair (e.g. DT1, DT2) with a single label (e.g. DT): locs, labels = plt.xticks() # Get the current locations and labels. plt.xticks(np.arange(0.5, 4.5, step=2)) # Set label locations. plt.xticks([0.5, 2.5, 4.5], ['DT', 'KNN', 'RF']) # Set text labels. This yields:
How to plot multiple data frames on a single violin plot next to each other?
I have two data frames, and the shapes of the two data frames are not the same. I want to plot the two data frame values of the violin plots next to each other instead of overlapping. import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns data1 = { 'DT' : np.random.normal(-1, 1, 100), 'RF' : np.random.normal(-1, 1, 110), 'KNN' : np.random.normal(-1, 1, 120) } maxsize = max([a.size for a in data1.values()]) data_pad1 = {k:np.pad(v, pad_width=(0,maxsize-v.size,), mode='constant', constant_values=np.nan) for k,v in data1.items()} df1 = pd.DataFrame(data_pad1) # data frame data2 = { 'DT' : np.random.normal(-1, 1, 50), 'RF' : np.random.normal(-1, 1, 60), 'KNN' : np.random.normal(-1, 1, 80) } maxsize = max([a.size for a in data2.values()]) data_pad2 = {k:np.pad(v, pad_width=(0,maxsize-v.size,), mode='constant', constant_values=np.nan) for k,v in data2.items()} df2 = pd.DataFrame(data_pad2) # dataframe2 #plotting fig, ax = plt.subplots(figsize=(15, 6)) ax = sns.violinplot(data=df1, color="blue") ax = sns.violinplot(data=df2, color="red") plt.show() Here is my output image. But I want to get each blue and red violin plot next to each other instead of overlapping.
[ "I suggest relabeling the columns in each dataframe to reflect the dataframe number, e.g.:\ndata2 = {\n 'DT2' : np.random.normal(-1, 1, 50),\n 'RF2' : np.random.normal(-1, 1, 60),\n 'KNN2' : np.random.normal(-1, 1, 80)\n}\n\nYou may then:\n\nconcatenate both dataframes:\ndf = pd.concat([df1, df2], axis=1)\n\ndefine your own palette:\nmy_palette = {\"DT1\": \"blue\", \"DT2\": \"red\",\"KNN1\": \"blue\", \"KNN2\": \"red\", \"RF1\": \"blue\", \"RF2\": \"red\"}\n\nand then force the plotting order using the order parameter:\nsns.violinplot(data=df, order = ['DT1', 'DT2', 'KNN1', 'KNN2', 'RF1', 'RF2'], palette=my_palette)\n\n\nThis yields the following result:\n\nEDIT:\nYou may manually set the labels to replace each label pair (e.g. DT1, DT2) with a single label (e.g. DT):\nlocs, labels = plt.xticks() # Get the current locations and labels.\nplt.xticks(np.arange(0.5, 4.5, step=2)) # Set label locations.\nplt.xticks([0.5, 2.5, 4.5], ['DT', 'KNN', 'RFF']) # Set text labels.\n\nThis yields:\n\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "python", "violin_plot" ]
stackoverflow_0074680995_matplotlib_python_violin_plot.txt
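A further option the answer above does not mention: seaborn can place paired violins side by side on its own if the data is reshaped to long form and the source frame is passed as hue. This is a minimal sketch under the assumption that both frames share the same column names, as in the question; equal-length columns are used here for brevity, but melt() copes with the question's NaN padding as well.

import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
df1 = pd.DataFrame({k: rng.normal(-1, 1, 100) for k in ('DT', 'RF', 'KNN')})
df2 = pd.DataFrame({k: rng.normal(-1, 1, 100) for k in ('DT', 'RF', 'KNN')})

# Reshape to long form and tag each row with its source frame.
long_df = pd.concat([
    df1.melt(var_name='model', value_name='score').assign(source='df1'),
    df2.melt(var_name='model', value_name='score').assign(source='df2'),
])

# hue= draws one violin per source, side by side within each model.
sns.violinplot(data=long_df, x='model', y='score', hue='source')
plt.show()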
Q: Remove features with whitespace in sklearn Countvectorizer with char_wb I am trying to build char level ngrams using sklearn's CountVectorizer. When using analyzer='char_wb' the vocab has features with whitespaces around it. I want to exclude the features/words with whitespaces. from sklearn.feature_extraction.text import CountVectorizer vectorizer = CountVectorizer(binary=True, analyzer='char_wb', ngram_range=(4, 5)) vectorizer.fit(['this is a plural']) vectorizer.vocabulary_ the vocabulary from the above code is [' thi', 'this', 'his ', ' this', 'this ', ' is ', ' a ', ' plu', 'plur', 'lura', 'ural', 'ral ', ' plur', 'plura', 'lural', 'ural '] I have tried using other analyzers e.g. word and char. None of those gives the kind of feature i need. A: I hope you get an improved answer because I'm confident this answer is a bit of a bad hack. I'm not sure it does what you want, and what it does is not very efficient. It does produce your vocabulary though (probably)! import re def my_analyzer(s): out=[] for w in re.split(r"\W+", s): if len(w) < 5: out.append(w) else: for l4 in re.findall(r"(?=(\w{4}))", w): out.append(l4) for l5 in re.findall(r"(?=(\w{5}))", w): out.append(l5) return out from sklearn.feature_extraction.text import CountVectorizer vectorizer = CountVectorizer(binary=True, analyzer=my_analyzer) vectorizer.fit(['this is a plural']) print(vectorizer.vocabulary_) # {'this': 6, 'is': 1, 'a': 0, 'plur': 4, 'lura': 2, 'ural': 7, 'plura': 5, 'lural': 3} corpus = [ 'This is the first document.', 'This document is the second document.', 'And this is the third one.', 'Is this the first document?', ] vectorizer.fit(corpus) print(vectorizer.vocabulary_) #{'This': 3, 'is': 15, 'the': 22, 'firs': 11, 'irst': 14, 'first': 12, 'docu': 7, 'ocum': 17, 'cume': 5, 'umen': 26, 'ment': 16, 'docum': 8, 'ocume': 18, 'cumen': 6, 'ument': 27, '': 0, 'seco': 20, 'econ': 9, 'cond': 4, 'secon': 21, 'econd': 10, 'And': 1, 'this': 25, 'thir': 23, 'hird': 13, 'third': 24, 'one': 19, 'Is': 2}
Remove features with whitespace in sklearn Countvectorizer with char_wb
I am trying to build char-level n-grams using sklearn's CountVectorizer. When using analyzer='char_wb' the vocab has features with whitespace around them. I want to exclude the features/words with whitespace. from sklearn.feature_extraction.text import CountVectorizer vectorizer = CountVectorizer(binary=True, analyzer='char_wb', ngram_range=(4, 5)) vectorizer.fit(['this is a plural']) vectorizer.vocabulary_ The vocabulary from the above code is [' thi', 'this', 'his ', ' this', 'this ', ' is ', ' a ', ' plu', 'plur', 'lura', 'ural', 'ral ', ' plur', 'plura', 'lural', 'ural '] I have tried using other analyzers, e.g. word and char. None of those gives the kind of features I need.
[ "I hope you get an improved answer because I'm confident this answer is a bit of a bad hack. I'm not sure it does what you want, and what it does is not very efficient. It does produce your vocabulary though (probably)!\nimport re\n\ndef my_analyzer(s):\n out=[]\n for w in re.split(r\"\\W+\", s):\n if len(w) < 5:\n out.append(w)\n else:\n for l4 in re.findall(r\"(?=(\\w{4}))\", w):\n out.append(l4)\n for l5 in re.findall(r\"(?=(\\w{5}))\", w):\n out.append(l5)\n return out\n\nfrom sklearn.feature_extraction.text import CountVectorizer\n\nvectorizer = CountVectorizer(binary=True, analyzer=my_analyzer)\n\nvectorizer.fit(['this is a plural'])\nprint(vectorizer.vocabulary_)\n# {'this': 6, 'is': 1, 'a': 0, 'plur': 4, 'lura': 2, 'ural': 7, 'plura': 5, 'lural': 3}\n\ncorpus = [\n 'This is the first document.',\n 'This document is the second document.',\n 'And this is the third one.',\n 'Is this the first document?',\n]\nvectorizer.fit(corpus)\nprint(vectorizer.vocabulary_)\n#{'This': 3, 'is': 15, 'the': 22, 'firs': 11, 'irst': 14, 'first': 12, 'docu': 7, 'ocum': 17, 'cume': 5, 'umen': 26, 'ment': 16, 'docum': 8, 'ocume': 18, 'cumen': 6, 'ument': 27, '': 0, 'seco': 20, 'econ': 9, 'cond': 4, 'secon': 21, 'econd': 10, 'And': 1, 'this': 25, 'thir': 23, 'hird': 13, 'third': 24, 'one': 19, 'Is': 2}\n\n" ]
[ 0 ]
[]
[]
[ "countvectorizer", "python", "scikit_learn", "tfidfvectorizer" ]
stackoverflow_0074638757_countvectorizer_python_scikit_learn_tfidfvectorizer.txt
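The answer's hand-rolled analyzer works, but there is a lighter variant: build the stock char_wb analyzer with build_analyzer() and simply filter out any n-gram containing a space. This is a sketch of that idea; the vocabulary shown in the final comment is what I would expect given the question's example.

from sklearn.feature_extraction.text import CountVectorizer

# Build the stock char_wb analyzer, then drop every n-gram that contains a space.
base = CountVectorizer(analyzer='char_wb', ngram_range=(4, 5)).build_analyzer()

def no_space_analyzer(doc):
    return [gram for gram in base(doc) if ' ' not in gram]

# ngram_range is ignored once a callable analyzer is supplied, so it lives on `base`.
vectorizer = CountVectorizer(binary=True, analyzer=no_space_analyzer)
vectorizer.fit(['this is a plural'])
print(sorted(vectorizer.vocabulary_))
# ['lura', 'lural', 'plur', 'plura', 'this', 'ural']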
Q: SOLVED; Chromium Webdriver with "--no-sandbox" is opening a fully transparent/invisible Chrome window The relevant code is as follows: # find the Chromium profile with website caches for the webdriver chrome_options = Options() profile_filepath = "user-data-dir=" + "/home/hephaestus/.config/chromium/Profile1" chrome_options.add_argument(str(profile_filepath)) # put chromium into --no-sandbox mode as a workaround for "DevToolsActivePort file doesn't exist" chrome_options.add_argument("--no-sandbox") # start an automatic Chrome tab and go to embervision.live; wait for page to load driver = webdriver.Chrome("./chromedriver", options=chrome_options) When I run this Python code (and import the needed libraries), I get the screenshot below. Chromium that was opened with the above code is on the right, and is transparent and glitching out. Desktop view with Chromium webdriver tab glitching out on the right I am able to enter web addresses and interact with the page, but I just can't see any of it. I'm not sure why. I deleted and re-downloaded Selenium and Chromium, to no avail. I had to add the "--no-sandbox" option because I was getting another error that said "DevToolsActivePort file doesn't exist". I'm not sure what else is causing this issue. Any help is appreciated. Thank you! A: So I found a solution that works for me! Uninstall and reinstall Chromium completely. When reinstalling, check that your Chromium version matches with Selenium (which I didn't even know was a thing). DO NOT run your Python code as a sudo user. I did "sudo python3 upload_image.py" and got the "DevToolsActivePort file doesn't exist" error. When I ran just "python3 upload_image.py", it did not raise the error. Do not use the option "--no-sandbox" when running as a non-sudo user ("python3 upload_image.py"). For some reason, the "--no-sandbox" option also broke my Chromium browser in the same transparent/invisible way as I posted above. Hope this helps someone in the future!
SOLVED; Chromium Webdriver with "--no-sandbox" is opening a fully transparent/invisible Chrome window
The relevant code is as follows: # find the Chromium profile with website caches for the webdriver chrome_options = Options() profile_filepath = "user-data-dir=" + "/home/hephaestus/.config/chromium/Profile1" chrome_options.add_argument(str(profile_filepath)) # put chromium into --no-sandbox mode as a workaround for "DevToolsActivePort file doesn't exist" chrome_options.add_argument("--no-sandbox") # start an automatic Chrome tab and go to embervision.live; wait for page to load driver = webdriver.Chrome("./chromedriver", options=chrome_options) When I run this Python code (and import the needed libraries), I get the screenshot below. Chromium that was opened with the above code is on the right, and is transparent and glitching out. Desktop view with Chromium webdriver tab glitching out on the right I am able to enter web addresses and interact with the page, but I just can't see any of it. I'm not sure why. I deleted and re-downloaded Selenium and Chromium, to no avail. I had to add the "--no-sandbox" option because I was getting another error that said "DevToolsActivePort file doesn't exist". I'm not sure what else is causing this issue. Any help is appreciated. Thank you!
[ "So I found a solution that works for me!\n\nUninstall and reinstall Chromium completely. When reinstalling, check that your Chromium version matches with Selenium (which I didn't even know was a thing).\n\nDO NOT run your Python code as a sudo user. I did \"sudo python3 upload_image.py\" and got the \"DevToolsActivePort file doesn't exist\" error. When I ran just \"python3 upload_image.py\", it did not raise the error.\n\nDo not use the option \"--no-sandbox\" when running as a non-sudo user (\"python 3 upload_image.py\"). For some reason, the \"--no-sandbox\" option also broke my Chromium browser in the same transparent/infinite way as I posted above.\n\n\nHope this helps someone in the future!\n" ]
[ 0 ]
[]
[]
[ "chromium", "python", "selenium", "webdriver" ]
stackoverflow_0074593964_chromium_python_selenium_webdriver.txt
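For completeness, here is a minimal sketch of the same setup on Selenium 4+, where a matching driver binary is resolved automatically and no --no-sandbox flag is passed; the profile path and URL are placeholders, not values from the question.

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
# Placeholder profile path -- point this at your own Chromium profile.
options.add_argument("user-data-dir=/home/user/.config/chromium/Profile1")
# Deliberately no --no-sandbox: run the script as a regular (non-root) user instead.

driver = webdriver.Chrome(options=options)  # Selenium 4+ locates a matching driver itself
try:
    driver.get("https://example.com")
    print(driver.title)
finally:
    driver.quit()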
Q: Generating random INTERVAL I have a function below that returns a random INTERVAL between a range of hours, which appears to be working fine but is currently limited to hours only. I would like to expand this functionality to also support returning a random INTERVAL for days or minutes by passing in a literal (i.e. 'DAY', 'MINUTE' or 'SECOND') For example if I call random_interval (1,4, 'DAY') I would get something like this +000000002 11:24:43.000000000 or if I call random_interval (20,40, 'MINUTE') I would get something like +000000000 00:24:44.000000000 Thanks in advance to all who answer and for your time and expertise. CREATE OR REPLACE FUNCTION random_interval( p_min_hours IN NUMBER, p_max_hours IN NUMBER ) RETURN INTERVAL DAY TO SECOND IS BEGIN RETURN floor(dbms_random.value(p_min_hours, p_max_hours)) * interval '1' hour + floor(dbms_random.value(0, 60)) * interval '1' minute + floor(dbms_random.value(0, 60)) * interval '1' second; END random_interval; / SELECT random_interval(1, 10) as random_val FROM dual CONNECT BY level <= 10 order by 1 RANDOM_VAL +000000000 01:04:03.000000000 +000000000 03:14:52.000000000 +000000000 04:39:42.000000000 +000000000 05:00:39.000000000 +000000000 05:03:28.000000000 +000000000 07:03:19.000000000 +000000000 07:06:13.000000000 +000000000 08:50:55.000000000 +000000000 09:10:02.000000000 +000000000 09:26:44.000000000 A: Try giving this a shot instead CREATE OR REPLACE FUNCTION random_interval( p_min IN NUMBER, p_max IN NUMBER, p_period VARCHAR2 ) RETURN INTERVAL DAY TO SECOND IS BEGIN IF p_period = 'HOUR' THEN RETURN floor(dbms_random.value(p_min, p_max)) * interval '1' hour + floor(dbms_random.value(0, 60)) * interval '1' minute + floor(dbms_random.value(0, 60)) * interval '1' second; ELSIF p_period = 'DAY' THEN RETURN floor(dbms_random.value(p_min, p_max)) * interval '1' day + floor(dbms_random.value(0, 24)) * interval '1' hour + floor(dbms_random.value(0, 60)) * interval '1' minute + floor(dbms_random.value(0, 60)) * interval '1' second; ELSIF p_period = 'MINUTE' THEN RETURN floor(dbms_random.value(p_min, p_max)) * interval '1' minute + floor(dbms_random.value(0, 60)) * interval '1' second; ELSIF p_period = 'SECOND' THEN RETURN floor(dbms_random.value(p_min, p_max)) * interval '1' second; ELSE RETURN NULL; END IF; END random_interval; / SELECT random_interval(1, 10, 'DAY') as random_val FROM dual CONNECT BY level <= 10 order by 1 RANDOM_VAL +000000003 02:46:09.000000000 +000000004 19:19:56.000000000 +000000002 11:24:43.000000000 +000000002 16:20:44.000000000 +000000001 22:24:30.000000000 +000000002 15:14:38.000000000 +000000003 00:48:03.000000000 +000000003 18:08:13.000000000 +000000002 01:05:34.000000000 +000000002 08:12:19.000000000 A: You don't need a user-defined function as you can use the built-in functions DBMS_RANDOM.VALUE(lower_bound, upper_bound) and NUMTODSINTERVAL(amount, duration): SELECT NUMTODSINTERVAL( DBMS_RANDOM.VALUE(1, 3), 'MINUTE' ) FROM DUAL; Which will generate a random interval greater than or equal to 1 minute and less than 3 minutes (with a random amount of seconds). 
If you did want to wrap it into a function then: CREATE FUNCTION random_interval( p_min IN NUMBER, p_max IN NUMBER, p_duration IN VARCHAR2 ) RETURN INTERVAL DAY TO SECOND IS BEGIN RETURN NUMTODSINTERVAL(DBMS_RANDOM.VALUE(p_min, p_max), p_duration); END; / If you want the seconds to be an integer then: CREATE OR REPLACE FUNCTION random_interval( p_min IN NUMBER, p_max IN NUMBER, p_duration IN VARCHAR2 ) RETURN INTERVAL DAY TO SECOND IS v_interval INTERVAL DAY TO SECOND := NUMTODSINTERVAL(DBMS_RANDOM.VALUE(p_min, p_max), p_duration); BEGIN RETURN ( EXTRACT(DAY FROM v_interval) * 24 * 60 * 60 + EXTRACT(HOUR FROM v_interval) * 60 * 60 + EXTRACT(MINUTE FROM v_interval) * 60 + FLOOR(EXTRACT(SECOND FROM v_interval)) ) * INTERVAL '1' SECOND; END; / fiddle
Generating random INTERVAL
I have a function below that returns a random INTERVAL between a range of hours, which appears to be working fine but is currently limited to hours only. I would like to expand this functionality to also support returning a random INTERVAL for days or minutes by passing in a literal (i.e. 'DAY', 'MINUTE' or 'SECOND') For example if I call random_interval (1,4, 'DAY') I would get something like this +000000002 11:24:43.000000000 or if I call random_interval (20,40, 'MINUTE') I would get something like +000000000 00:24:44.000000000 Thanks in advance to all who answer and for your time and expertise. CREATE OR REPLACE FUNCTION random_interval( p_min_hours IN NUMBER, p_max_hours IN NUMBER ) RETURN INTERVAL DAY TO SECOND IS BEGIN RETURN floor(dbms_random.value(p_min_hours, p_max_hours)) * interval '1' hour + floor(dbms_random.value(0, 60)) * interval '1' minute + floor(dbms_random.value(0, 60)) * interval '1' second; END random_interval; / SELECT random_interval(1, 10) as random_val FROM dual CONNECT BY level <= 10 order by 1 RANDOM_VAL +000000000 01:04:03.000000000 +000000000 03:14:52.000000000 +000000000 04:39:42.000000000 +000000000 05:00:39.000000000 +000000000 05:03:28.000000000 +000000000 07:03:19.000000000 +000000000 07:06:13.000000000 +000000000 08:50:55.000000000 +000000000 09:10:02.000000000 +000000000 09:26:44.000000000
[ "Try giving this a shot instead\n\nCREATE OR REPLACE FUNCTION random_interval(\n p_min IN NUMBER,\n p_max IN NUMBER, \n p_period VARCHAR2\n ) RETURN INTERVAL DAY TO SECOND\n IS\n BEGIN\n IF p_period = 'HOUR' THEN \n RETURN floor(dbms_random.value(p_min, p_max)) * interval '1' hour\n + floor(dbms_random.value(0, 60)) * interval '1' minute\n + floor(dbms_random.value(0, 60)) * interval '1' second;\n ELSE IF p_period = 'DAY' THEN\n RETURN floor(dbms_random.value(p_min, p_max)) * interval '1' day\n + floor(dbms_random.value(0, 24)) * interval '1' hour\n + floor(dbms_random.value(0, 60)) * interval '1' minute\n + floor(dbms_random.value(0, 60)) * interval '1' second; \n ELSE IF p_period = 'MINUTE' THEN\n RETURN floor(dbms_random.value(p_min, p_max)) * interval '1' minute\n + floor(dbms_random.value(0, 60)) * interval '1' second; \n ELSE IF p_period = 'SECOND' THEN\n RETURN floor(dbms_random.value(p_min, p_max)) * interval '1' second;\n ELSE \n RETURN NULL;\n END IF;\nEND random_interval;\n/\n\nSELECT random_interval(1, 10, 'DAY') as random_val FROM dual CONNECT BY level <= 10 order by 1\n\nRANDOM_VAL\n+000000003 02:46:09.000000000\n+000000004 19:19:56.000000000\n+000000002 11:24:43.000000000\n+000000002 16:20:44.000000000\n+000000001 22:24:30.000000000\n+000000002 15:14:38.000000000\n+000000003 00:48:03.000000000\n+000000003 18:08:13.000000000\n+000000002 01:05:34.000000000\n+000000002 08:12:19.000000000\n\n", "You don't need a user-defined function as you can use the built-in functions DBMS_RANDOM.VALUE(lower_bound, upper_bound) and NUMTODSINTERVAL(amount, duration):\nSELECT NUMTODSINTERVAL(\n DBMS_RANDOM.VALUE(1, 3),\n 'MINUTE'\n )\nFROM DUAL;\n\nWhich will generate a random interval greater than or equal to 1 minute and less than 3 minutes (with a random about of seconds).\nIf you did want to wrap it into a function then:\nCREATE FUNCTION random_interval(\n p_min IN NUMBER,\n p_max IN NUMBER,\n p_duration IN VARCHAR2\n) RETURN INTERVAL DAY TO SECOND\nIS\nBEGIN\n RETURN NUMTODSINTERVAL(DBMS_RANDOM.VALUE(p_min, p_max), p_duration);\nEND;\n/\n\nIf you want the seconds to be an integer then:\nCREATE OR REPLACE FUNCTION random_interval(\n p_min IN NUMBER,\n p_max IN NUMBER,\n p_duration IN VARCHAR2\n) RETURN INTERVAL DAY TO SECOND\nIS\n v_interval INTERVAL DAY TO SECOND := NUMTODSINTERVAL(DBMS_RANDOM.VALUE(p_min, p_max), p_duration);\nBEGIN\n RETURN ( EXTRACT(DAY FROM v_interval) * 24 * 60 * 60\n + EXTRACT(HOUR FROM v_interval) * 60 * 60\n + EXTRACT(MINUTE FROM v_interval) * 60\n + FLOOR(EXTRACT(SECOND FROM v_interval))\n ) * INTERVAL '1' SECOND;\nEND;\n/\n\nfiddle\n" ]
[ 2, 0 ]
[]
[]
[ "function", "intervals", "oracle" ]
stackoverflow_0074675273_function_intervals_oracle.txt
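Outside the database, the NUMTODSINTERVAL idea maps directly onto a uniform random draw scaled by the unit's length in seconds. Below is a minimal Python sketch of the same logic, flooring to whole seconds; the unit table and function name are illustrative, not part of either answer.

import random
from datetime import timedelta

# Seconds per supported unit -- mirrors NUMTODSINTERVAL's duration argument.
UNIT_SECONDS = {'DAY': 86400, 'HOUR': 3600, 'MINUTE': 60, 'SECOND': 1}

def random_interval(p_min, p_max, unit):
    # Uniform draw in [p_min, p_max) of the given unit, floored to whole seconds.
    seconds = random.uniform(p_min, p_max) * UNIT_SECONDS[unit]
    return timedelta(seconds=int(seconds))  # int() floors non-negative values

print(random_interval(1, 4, 'DAY'))       # e.g. 2 days, 11:24:43
print(random_interval(20, 40, 'MINUTE'))  # e.g. 0:24:44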
Q: The task is: Given a group of integers, find the maximum value among them all, how do I do it without arrays using C++? The task is: Given a group of integers, find the maximum value among them all. Input Given an integer N (1≤N≤10^5) – the number of integers to be entered. The following line contains N space-separated integers (−10^18≤ai≤10^18) — ai is the value of the ith integer. Output Print the answer to the task. Examples input 4 3 5 1 4 output 5 and this is my code, which gives me a wrong answer on test #3: #include<iostream> using namespace std; int main() { long long b,c=0; int a; cin>>a; for (int i = 0; i<a ; i++) { cin>>b; if (b>=c) c=b; else c=c; } cout<<c; } A: The inputted integers can be negative. When all the integers are negative, your code will print out 0 since that is bigger than all the input integers. A possible way to fix this is to set c to -10^18. Another way would be to somehow initialize c as the first value.
The task is: Given a group of integers, find the maximum value among them all, how do I do it without arrays using C++?
The task is: Given a group of integers, find the maximum value among them all. Input Given an integer N (1≤N≤10^5) – the number of integers to be entered. The following line contains N space-separated integers (−10^18≤ai≤10^18) — ai is the value of the ith integer. Output Print the answer to the task. Examples input 4 3 5 1 4 output 5 and this is my code, which gives me a wrong answer on test #3: #include<iostream> using namespace std; int main() { long long b,c=0; int a; cin>>a; for (int i = 0; i<a ; i++) { cin>>b; if (b>=c) c=b; else c=c; } cout<<c; }
[ "The inputted integers can be negative. When all the integers are negative, your code will print out 0 since that is bigger than all the input integers. A possible way to fix this is to set c to -10^18. Another way would be to somehow initialize c as the first value.\n" ]
[ 1 ]
[]
[]
[ "c++" ]
stackoverflow_0074681402_c++.txt
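The answer's second suggestion — seed the running maximum from the first value instead of 0 — is the robust one, since it needs no assumption about the input range. A short Python sketch of that pattern for illustration (Python only for brevity; the same structure carries over to the C++ loop above):

def max_without_arrays(values):
    # Track a running maximum, seeded from the first value rather than 0.
    it = iter(values)
    best = next(it)          # seeding with 0 would be wrong for all-negative input
    for v in it:
        if v > best:
            best = v
    return best

print(max_without_arrays([3, 5, 1, 4]))   # 5
print(max_without_arrays([-7, -2, -9]))   # -2, not 0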
Q: How to validate and sanitize an array of data in PHP? I want to validate and sanitize data which comes from a POST array. My POST data is something like this: Array ( [category_name] => fsdfsfwereq34 [subCategory] => Array ( [0] => sdfadsffasfasdf [1] => sdfasfdsafadsf [2] => safdfdasfas ) [category-submitted] => TRUE ) I can validate and sanitize category_name and this is how I do it in PHP. if (!empty( $_POST['category_name'])) { $category = filter_input(INPUT_POST, 'category_name', FILTER_SANITIZE_STRING); } else { $category = NULL; } But I do not know how to do it for the values of the subCategory array. Can anybody tell me how I can do this? Hope somebody may help me out. Thank you. A: You can use foreach to loop through all values of an array. In the loop you can push your filtered values to a new array $subCategories (e.g.). E.g.: $subCategory = $_POST['subCategory']; $subCategories = array(); if (!empty( $subCategory ) && is_array( $subCategory ) ) { foreach( $subCategory as $key => $value ) { $subCategories[] = filter_var( $value, FILTER_SANITIZE_STRING ); } } A: You can loop through the array, or for an array you can use this: http://php.net/manual/en/function.filter-input-array.php the same way. A: You can loop through the array, or for an array you can use this: http://php.net/manual/en/function.filter-input-array.php the same way. filter_input_array is wrong when talking about user-defined local arrays; use filter_var_array instead.
How to validate and sanitize an array of data in PHP?
I want to validate and sanitize data which comes from a POST array. My POST data is something like this: Array ( [category_name] => fsdfsfwereq34 [subCategory] => Array ( [0] => sdfadsffasfasdf [1] => sdfasfdsafadsf [2] => safdfdasfas ) [category-submitted] => TRUE ) I can validate and sanitize category_name and this is how I do it in PHP. if (!empty( $_POST['category_name'])) { $category = filter_input(INPUT_POST, 'category_name', FILTER_SANITIZE_STRING); } else { $category = NULL; } But I do not know how to do it for the values of the subCategory array. Can anybody tell me how I can do this? Hope somebody may help me out. Thank you.
[ "You can use foreach to loop through all values of an array. In the loop you can push your filtered values to a new array $subCategory (e.g.).\nE.g.:\n$subCategory = $_POST['subCategory'];\n$subcategories = array();\nif (!empty( $subCategory ) && is_array( $subCategory ) ) {\n foreach( $subCategory as $key => $value ) {\n $subCategories[] = filter_var( $value, FILTER_SANITIZE_STRING )\n }\n}\n\n", "You can loop through or for array you use this:\nhttp://php.net/manual/en/function.filter-input-array.php the same way.\n", "\nYou can loop through or for array you use this:\nhttp://php.net/manual/en/function.filter-input-array.php the same way.\n\nfilter_input_array is wrong when talking about user defined local arrays, use filter_var_array instead.\n" ]
[ 3, 3, 0 ]
[]
[]
[ "arrays", "php", "sanitization", "validation" ]
stackoverflow_0029841311_arrays_php_sanitization_validation.txt
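The looping pattern in the first answer generalizes beyond PHP: walk the structure and sanitize the leaves. As a language-neutral illustration, here is a hedged Python sketch that applies the same idea recursively, with html.escape standing in for the sanitizer; the function name and sample data are made up for the example.

from html import escape

def sanitize(value):
    # Recursively escape HTML in strings inside nested lists/dicts.
    if isinstance(value, str):
        return escape(value.strip())
    if isinstance(value, list):
        return [sanitize(v) for v in value]
    if isinstance(value, dict):
        return {k: sanitize(v) for k, v in value.items()}
    return value

post = {'category_name': 'fsdf<b>q34', 'subCategory': ['sdfads', '<script>x</script>']}
print(sanitize(post))
# {'category_name': 'fsdf&lt;b&gt;q34', 'subCategory': ['sdfads', '&lt;script&gt;x&lt;/script&gt;']}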
Q: How to find the index of an array where summation is greater than a target value? Suppose I have a 1D array sorted in descending order, like: arr = np.array([10, 10, 8, 5, 4, 4, 3, 2, 2, 2]) I want the index value, where the summation of this array starting from 0 to that index is greater than or equal to a specified target value. For example, let the target value be 40: index=0 (0) => sum=10 (10) index=1 (0,1) => sum=20 (10+10) index=2 (0,1,2) => sum=28 (10+10+8) index=3 (0,1,2,3) => sum=33 (10+10+8+5) index=4 (0,1,2,3,4) => sum=37 (10+10+8+5+4) index=5 (0,1,2,3,4,5) => sum=41 (10+10+8+5+4+4) and finally I want to get the index value 5, since the sum 41 is greater than the target value 40. How can I do this in the most Pythonic and appropriate way, so it can work with large numbers and large-sized arrays. A: To find the index of an array where the summation reaches a target value in Python, you can use a for loop to iterate over the elements in the array and keep track of the running total. When the running total is greater than or equal to the target value, you can return the index at which that occurred. # define the target value target = 10 # define the array arr = [1, 2, 3, 4, 5, 6, 7] # initialize the running total to 0 and the index to -1 total = 0 index = -1 # iterate over the elements in the array for i in range(len(arr)): total += arr[i] if total >= target: index = i break # print the index where the summation first reaches the target value print(index) # 3 In this example, the index where the summation of the array first reaches the target value is 3. This is because the summation of the first four elements in the array (1 + 2 + 3 + 4 = 10) reaches the target value of 10. A: using numpy: import numpy as np # Create the array arr = np.array([10, 10, 8, 5, 4, 4, 3, 2, 2, 2]) # Compute the cumulative sum of the elements in the array cumsum = np.cumsum(arr) # Find the index of the first element in the cumulative sum that is greater than or equal to the target value index = np.argmax(cumsum >= 40) # Print the result print(index) # Output: 5
How to find the index of an array where summation is greater than a target value?
Suppose I have a 1D array sorted in descending order, like: arr = np.array([10, 10, 8, 5, 4, 4, 3, 2, 2, 2]) I want the index value, where the summation of this array starting from 0 to that index is greater than or equal to a specified target value. For example, let the target value be 40: index=0 (0) => sum=10 (10) index=1 (0,1) => sum=20 (10+10) index=2 (0,1,2) => sum=28 (10+10+8) index=3 (0,1,2,3) => sum=33 (10+10+8+5) index=4 (0,1,2,3,4) => sum=37 (10+10+8+5+4) index=5 (0,1,2,3,4,5) => sum=41 (10+10+8+5+4+4) and finally I want to get the index value 5, since the sum 41 is greater than the target value 40. How can I do this in most Pythonic and appropriate way, so it can work with large numbers and large sized arrays.
[ "To find the index of an array where the summation is greater than a target value in Python, you can use a for loop to iterate over the elements in the array and keep track of the running total. When the running total is greater than the target value, you can return the index at which that occurred.\n# define the target value\ntarget = 10\n\n# define the array\narr = [1, 2, 3, 4, 5, 6, 7]\n\n# initialize the running total to 0 and the index to -1\ntotal = 0\nindex = -1\n\n# iterate over the elements in the array\nfor i in range(len(arr)):\n total += arr[i]\n if total > target:\n index = i\n break\n\n# print the index where the summation is greater than the target value\nprint(index) # 3\n\nIn this example, the index where the summation of the array is greater than the target value is 3. This is because the summation of the first three elements in the array (1 + 2 + 3) is greater than the target value of 10.\n", "using numpy:\nimport numpy as np\n\n# Create the array\narr = np.array([10, 10, 8, 5, 4, 4, 3, 2, 2, 2])\n\n# Compute the cumulative sum of the elements in the array\ncumsum = np.cumsum(arr)\n\n# Find the index of the first element in the cumulative sum that is greater than or equal to the target value\nindex = np.argmax(cumsum >= 40)\n\n# Print the result\nprint(index) # Output: 5\n\n" ]
[ 0, 0 ]
[]
[]
[ "arrays", "numpy", "python" ]
stackoverflow_0074681382_arrays_numpy_python.txt
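One caveat worth knowing about the numpy answer: np.argmax returns 0 when no element of cumsum meets the target, which is indistinguishable from a genuine hit at index 0. A sketch using np.searchsorted avoids that ambiguity and is O(log n) on the cumulative sums; this relies on the values being non-negative, so cumsum is non-decreasing, as in the question's data.

import numpy as np

arr = np.array([10, 10, 8, 5, 4, 4, 3, 2, 2, 2])
target = 40

cumsum = np.cumsum(arr)

# First position where cumsum >= target; returns len(cumsum) if none qualifies.
idx = int(np.searchsorted(cumsum, target))
if idx == len(cumsum):
    print('the total never reaches the target')
else:
    print(idx)  # 5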
Q: Getting content from a DM in discord.py So I want to know if it is possible that a bot gets the content sent to it in a DM and sends it to a specified channel on a server. So basically you DM the bot the word "test" and the bot sends the word in a channel of a server. A: Yes, it is possible for a bot to receive a direct message and then repost the message in a specified channel on a server. This can be done using the Discord API. You can do the following: Create a Discord bot and add it to your server. You can do this using the Discord developer portal. Use the Discord API to listen for messages sent to the bot in a DM. You can do this using the message event and the DMChannel class in the Discord API. When the bot receives a DM, use the Discord API to repost the message in the specified channel on the server. You can do this using the send method of the TextChannel class in the Discord API.
Getting content from a DM in discord.py
So I want to know if it is possible that a bot gets the content sent to it in a DM and sends it to a specified channel on a server. So basically you DM the bot the word "test" and the bot sends the word in a channel of a server.
[ "Yes, it is possible for a bot to receive a direct message and then repost the message in a specified channel on a server. This can be done using the Discord API.\nYou can do the following:\n\nCreate a Discord bot and add it to your server. You can do this using the Discord developer portal.\n\nUse the Discord API to listen for messages sent to the bot in a DM. You can do this using the message event and the DMChannel class in the Discord API.\n\nWhen the bot receives a DM, use the Discord API to repost the message in the specified channel on the server. You can do this using the send method of the TextChannel class in the Discord API.\n\n\n" ]
[ 0 ]
[]
[]
[ "discord", "discord.py", "python" ]
stackoverflow_0074681161_discord_discord.py_python.txt
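The answer lists the steps but gives no code, so here is a minimal discord.py 2.x sketch of those three steps. The channel ID and token are placeholders, and the message_content intent must also be enabled for the bot in the developer portal.

import discord

intents = discord.Intents.default()
intents.message_content = True  # required to read message text in discord.py 2.x

client = discord.Client(intents=intents)
TARGET_CHANNEL_ID = 123456789012345678  # placeholder: your server channel's ID

@client.event
async def on_message(message):
    if message.author == client.user:
        return  # ignore the bot's own messages
    # DMs arrive in a DMChannel; forward their text to the server channel.
    if isinstance(message.channel, discord.DMChannel):
        channel = client.get_channel(TARGET_CHANNEL_ID)
        if channel is not None:
            await channel.send(message.content)

client.run('YOUR_BOT_TOKEN')  # placeholder token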
Q: How to calculate distance after key is pressed? Hey so I'm trying to calculate a person's score after they press a key. I have three arrows and I want to find how far the arrow is from the center and use that to find the score. This is what I have so far: import turtle import math sc = turtle.Screen() sc.title("Arrow Game") sc.bgcolor("#C7F6B6") arrow1= turtle.Turtle() arrow1.color("purple") arrow1.shape("arrow") arrow1.shapesize(0.5,1) arrow1.pu() arrow1.goto(-250,-250) def shoot(): arrow1.showturtle() arrow1.fd(500) def movlt(): can.lt(10) x = arrow1.xcor() y = arrow1.ycor() arrow1.lt(10) arrow1.setx(x-5) arrow1.sety(y+5) def movrt(): x = arrow1.xcor() y = arrow1.ycor() arrow1.rt(10) can.rt(10) arrow1.setx(x-5) arrow1.sety(y-5) sc.listen() scs = turtle.Turtle() scs.pu() scs.goto(-140,40) scs.pd() center = 150 xs = arrow1.xcor() - center ys = arrow1.ycor() - center distance=math.sqrt(xs**2 + ys**2) sc.onkeypress(movlt, "q") sc.onkeypress(movrt, "e") sc.onkeypress(shoot, "1") def score(): ptss = 0 if distance > 5: ptss += 10 elif distance < 5: ptss += 6 return ptss gmm = score() ptss = gmm if distance > 5: scs.write("10 pts") elif distance < 5: scs.write("6 pts") The problem I have is that I don't know how to make it wait until the key is pressed. A: To make your code wait until a key is pressed, you can use the turtle.Screen.onkeypress() method. This method takes two arguments: a callback function that will be called when the key is pressed, and the key that you want to listen for. Here is an example of how you can use the onkeypress() method to wait for a key press: import turtle # Create a turtle screen and set the background color. sc = turtle.Screen() sc.bgcolor("#C7F6B6") # Create a turtle and set its color, shape, and position. arrow1 = turtle.Turtle() arrow1.color("purple") arrow1.shape("arrow") arrow1.shapesize(0.5, 1) arrow1.pu() arrow1.goto(-250, -250) # Define a function that will be called when the key is pressed. def shoot(): # Move the turtle forward by 500 units. arrow1.fd(500) # Calculate the distance of the arrow from the center. center = 150 xs = arrow1.xcor() - center ys = arrow1.ycor() - center distance = math.sqrt(xs**2 + ys**2) # Calculate the score based on the distance of the arrow from the center. ptss = 0 if distance > 5: ptss += 10 elif distance < 5: ptss += 6 # Print the score. sc.write("Score: {} pts".format(ptss)) # Listen for the "1" key to be pressed. sc.listen() sc.onkeypress(shoot, "1") In this example, the shoot function is called when the "1" key is pressed. This function calculates the distance of the arrow from the center, calculates the score based on that distance, and prints the score on the turtle screen. You can use this approach to make your code wait until the key is pressed and then calculate and display the score. You may want to adjust the details of how the score is calculated, but this should give you a starting point for implementing the functionality you are looking for. A: To solve your problem, you could use the onkeyrelease event provided by the turtle screen. This event triggers when a key is released, so you can use it to determine when the key press has completed. Here is an example of how you could use it: sc.onkeypress(movlt, "q") sc.onkeypress(movrt, "e") # Use onkeyrelease to determine when the key press is finished sc.onkeyrelease(shoot, "1") You can then move the score and gmm calculations inside of the shoot function, since this is where you want them to be executed. 
This will allow you to calculate the score once the key press is finished and the arrow has reached its final position. def shoot(): arrow1.showturtle() arrow1.fd(500) xs = arrow1.xcor() - center ys = arrow1.ycor() - center distance=math.sqrt(xs**2 + ys**2) def score(): ptss = 0 if distance > 5: ptss += 10 elif distance < 5: ptss += 6 return ptss gmm = score() ptss = gmm if distance > 5: scs.write("10 pts") elif distance < 5: scs.write("6 pts")
How to calculate distance after key is pressed?
Hey so I'm trying to calculate a person's score after they press a key. I have three arrows and I want to find how far the arrow is from the center and use that to find the score. This is what I have so far: import turtle import math sc = turtle.Screen() sc.title("Arrow Game") sc.bgcolor("#C7F6B6") arrow1= turtle.Turtle() arrow1.color("purple") arrow1.shape("arrow") arrow1.shapesize(0.5,1) arrow1.pu() arrow1.goto(-250,-250) def shoot(): arrow1.showturtle() arrow1.fd(500) def movlt(): can.lt(10) x = arrow1.xcor() y = arrow1.ycor() arrow1.lt(10) arrow1.setx(x-5) arrow1.sety(y+5) def movrt(): x = arrow1.xcor() y = arrow1.ycor() arrow1.rt(10) can.rt(10) arrow1.setx(x-5) arrow1.sety(y-5) sc.listen() scs = turtle.Turtle() scs.pu() scs.goto(-140,40) scs.pd() center = 150 xs = arrow1.xcor() - center ys = arrow1.ycor() - center distance=math.sqrt(xs**2 + ys**2) sc.onkeypress(movlt, "q") sc.onkeypress(movrt, "e") sc.onkeypress(shoot, "1") def score(): ptss = 0 if distance > 5: ptss += 10 elif distance < 5: ptss += 6 return ptss gmm = score() ptss = gmm if distance > 5: scs.write("10 pts") elif distance < 5: scs.write("6 pts") The problem I have is that I don't know how to make it wait until the key is pressed.
[ "To make your code wait until a key is pressed, you can use the turtle.Screen.onkeypress() method. This method takes two arguments: a callback function that will be called when the key is pressed, and the key that you want to listen for.\nHere is an example of how you can use the onkeypress() method to wait for a key press:\nimport turtle\n\n# Create a turtle screen and set the background color.\nsc = turtle.Screen()\nsc.bgcolor(\"#C7F6B6\")\n\n# Create a turtle and set its color, shape, and position.\narrow1 = turtle.Turtle()\narrow1.color(\"purple\")\narrow1.shape(\"arrow\")\narrow1.shapesize(0.5, 1)\narrow1.pu()\narrow1.goto(-250, -250)\n\n# Define a function that will be called when the key is pressed.\ndef shoot():\n # Move the turtle forward by 500 units.\n arrow1.fd(500)\n\n # Calculate the distance of the arrow from the center.\n center = 150\n xs = arrow1.xcor() - center\n ys = arrow1.ycor() - center\n distance = math.sqrt(xs**2 + ys**2)\n\n # Calculate the score based on the distance of the arrow from the center.\n ptss = 0\n if distance > 5:\n ptss += 10\n elif distance < 5:\n ptss += 6\n\n # Print the score.\n sc.write(\"Score: {} pts\".format(ptss))\n\n# Listen for the \"1\" key to be pressed.\nsc.listen()\nsc.onkeypress(shoot, \"1\")\n\nIn this example, the shoot function is called when the \"1\" key is pressed. This function calculates the distance of the arrow from the center, calculates the score based on that distance, and prints the score on the turtle screen.\nYou can use this approach to make your code wait until the key is pressed and then calculate and display the score. You may want to adjust the details of how the score is calculated, but this should give you a starting point for implementing the functionality you are looking for.\n", "To solve your problem, you could use the onkeyrelease event provided by the turtle screen. This event triggers when a key is released, so you can use it to determine when the key press has completed. Here is an example of how you could use it:\nsc.onkeypress(movlt, \"q\")\nsc.onkeypress(movrt, \"e\")\n\n# Use onkeyrelease to determine when the key press is finished\nsc.onkeyrelease(shoot, \"1\")\n\nYou can then move the score and gmm calculations inside of the shoot function, since this is where you want them to be executed. This will allow you to calculate the score once the key press is finished and the arrow has reached its final position.\ndef shoot():\n arrow1.showturtle()\n arrow1.fd(500)\n \n xs = arrow1.xcor() - center\n ys = arrow1.ycor() - center\n \n distance=math.sqrt(xs**2 + ys**2)\n \n def score():\n ptss = 0\n if distance > 5:\n ptss += 10\n elif distance < 5:\n ptss += 6\n return ptss\n \n gmm = score()\n ptss = gmm\n \n if distance > 5:\n scs.write(\"10 pts\")\n elif distance < 5:\n scs.write(\"6 pts\")\n\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074681396_python.txt
Q: Spring Configuration Problem: Not a managed type: Entity I am trying to create Entities for a Postgres Database using Spring Data JPA and am getting this error the whole time: Error creating bean with name 'rawDataService': Unsatisfied dependency expressed through field 'rawDataRepository': Error creating bean with name 'rawDataRepository' defined in com.example.testcontroller.repository.RawDataRepository defined in @EnableJpaRepositories declared on ApplicationConfig: Not a managed type: class com.example.testcontroller.entity.RawData PlainJpaConfig: import org.springframework.boot.autoconfigure.domain.EntityScan; import org.springframework.context.annotation.ComponentScan; import org.springframework.context.annotation.Configuration; import org.springframework.context.annotation.Import; @Configuration @Import(InfrastructureConfig.class) @ComponentScan(basePackages = "com.*") @EntityScan(basePackages = "com.*") public class PlainJpaConfig { } InfraStructureConfig: import javax.sql.DataSource; import org.springframework.boot.jdbc.DataSourceBuilder; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder; import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType; import org.springframework.orm.jpa.JpaTransactionManager; import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean; import org.springframework.orm.jpa.vendor.Database; import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter; import org.springframework.transaction.PlatformTransactionManager; import org.springframework.transaction.annotation.EnableTransactionManagement; @Configuration @EnableTransactionManagement public class InfrastructureConfig { @Bean public DataSource dataSource() { DataSourceBuilder dataSourceBuilder = DataSourceBuilder.create(); dataSourceBuilder.driverClassName("org.postgresql.Driver"); dataSourceBuilder.url("jdbc:postgresql://localhost:5460/greensoftdb"); dataSourceBuilder.username("user"); dataSourceBuilder.password("pwd"); return dataSourceBuilder.build(); } @Bean public LocalContainerEntityManagerFactoryBean entityManagerFactory() { HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter(); vendorAdapter.setDatabase(Database.POSTGRESQL); vendorAdapter.setGenerateDdl(true); LocalContainerEntityManagerFactoryBean factory = new LocalContainerEntityManagerFactoryBean(); factory.setJpaVendorAdapter(vendorAdapter); factory.setPackagesToScan("com.*"); factory.setDataSource(dataSource()); return factory; } @Bean public PlatformTransactionManager transactionManager() { JpaTransactionManager txManager = new JpaTransactionManager(); txManager.setEntityManagerFactory(entityManagerFactory().getObject()); return txManager; } } ApplicationConfig: import org.springframework.context.annotation.Configuration; import org.springframework.context.annotation.Import; import org.springframework.data.jpa.repository.config.EnableJpaRepositories; @Configuration @Import({InfrastructureConfig.class, PlainJpaConfig.class}) @EnableJpaRepositories(basePackages = "com.*") public class ApplicationConfig { } RawData: package com.example.testcontroller.entity; import javax.persistence.*; @Entity @Table(name="RawData") public class RawData { //same as in MqttDataModel @Id @GeneratedValue(strategy= GenerationType.AUTO) public Long id; //TODO:mqttID? 
//Same public double timestamp; //Same public double energy_value; //Same public String topics; } RawDataRepository: package com.example.testcontroller.repository; import com.example.testcontroller.entity.RawData; import org.springframework.data.jpa.repository.JpaRepository; import org.springframework.stereotype.Repository; @Repository public interface RawDataRepository extends JpaRepository<RawData, Long> { } package com.example.testcontroller.repository; import com.example.testcontroller.entity.RawData; import org.springframework.data.jpa.repository.JpaRepository; import org.springframework.stereotype.Repository; @Repository public interface RawDataRepository extends JpaRepository<RawData, Long> { } RawDataService: package com.example.testcontroller.service; import com.example.testcontroller.entity.RawData; import com.example.testcontroller.repository.RawDataRepository; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; import java.util.List; @Service public class RawDataService { @Autowired private RawDataRepository rawDataRepository; public List<RawData> findAll(){ return rawDataRepository.findAll(); } } TestcontrollerApplication: package com.example.testcontroller; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; @SpringBootApplication public class TestcontrollerApplication { public static void main(String[] args) { SpringApplication.run(TestcontrollerApplication.class, args); } } Pom.xml: 4.0.0 org.springframework.boot spring-boot-starter-parent 3.0.0 com.example testcontroller 0.0.1-SNAPSHOT testcontroller testcontroller <java.version>17</java.version> org.springframework.boot spring-boot-starter-web <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.eclipse.paho</groupId> <artifactId>org.eclipse.paho.client.mqttv3</artifactId> <version>1.2.5</version> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId> </dependency> <dependency> <groupId>org.json</groupId> <artifactId>json</artifactId> <version>20220924</version> </dependency> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>8.0.30</version> </dependency> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-entitymanager</artifactId> <version>5.6.14.Final</version> </dependency> <dependency> <groupId>com.h2database</groupId> <artifactId>h2</artifactId> </dependency> <dependency> <groupId>org.postgresql</groupId> <artifactId>postgresql</artifactId> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> I have EntityScan,EnableJPARepositories and everything else in config files, so I am really lost. I can't find what is wrong. Thank you for your help! A: I solved it. After rebuilding the project it could not detect javax anymore and only suggested Jakarta.persistence. Now it is working
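For context on why the rebuild fixed it: Spring Boot 3 moved from Java EE to Jakarta EE 9+, so every javax.persistence import has to become jakarta.persistence, and the old hibernate-entitymanager artifact (which targets javax) can be dropped in favor of what spring-boot-starter-data-jpa already pulls in. A sketch of the corrected entity, assuming the rest of the class is unchanged:

package com.example.testcontroller.entity;

// Spring Boot 3 ships Hibernate 6, which uses the Jakarta namespace
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.GenerationType;
import jakarta.persistence.Id;
import jakarta.persistence.Table;

@Entity
@Table(name = "RawData")
public class RawData {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    public Long id;

    public double timestamp;
    public double energy_value;
    public String topics;
}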
Spring Configuration Problem: Not a managed type: Entity
I am trying to create Entities for a Postgres Database using Spring Data JPA and am getting this error the whole time: Error creating bean with name 'rawDataService': Unsatisfied dependency expressed through field 'rawDataRepository': Error creating bean with name 'rawDataRepository' defined in com.example.testcontroller.repository.RawDataRepository defined in @EnableJpaRepositories declared on ApplicationConfig: Not a managed type: class com.example.testcontroller.entity.RawData PlainJpaConfig: import org.springframework.boot.autoconfigure.domain.EntityScan; import org.springframework.context.annotation.ComponentScan; import org.springframework.context.annotation.Configuration; import org.springframework.context.annotation.Import; @Configuration @Import(InfrastructureConfig.class) @ComponentScan(basePackages = "com.*") @EntityScan(basePackages = "com.*") public class PlainJpaConfig { } InfraStructureConfig: import javax.sql.DataSource; import org.springframework.boot.jdbc.DataSourceBuilder; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder; import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType; import org.springframework.orm.jpa.JpaTransactionManager; import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean; import org.springframework.orm.jpa.vendor.Database; import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter; import org.springframework.transaction.PlatformTransactionManager; import org.springframework.transaction.annotation.EnableTransactionManagement; @Configuration @EnableTransactionManagement public class InfrastructureConfig { @Bean public DataSource dataSource() { DataSourceBuilder dataSourceBuilder = DataSourceBuilder.create(); dataSourceBuilder.driverClassName("org.postgresql.Driver"); dataSourceBuilder.url("jdbc:postgresql://localhost:5460/greensoftdb"); dataSourceBuilder.username("user"); dataSourceBuilder.password("pwd"); return dataSourceBuilder.build(); } @Bean public LocalContainerEntityManagerFactoryBean entityManagerFactory() { HibernateJpaVendorAdapter vendorAdapter = new HibernateJpaVendorAdapter(); vendorAdapter.setDatabase(Database.POSTGRESQL); vendorAdapter.setGenerateDdl(true); LocalContainerEntityManagerFactoryBean factory = new LocalContainerEntityManagerFactoryBean(); factory.setJpaVendorAdapter(vendorAdapter); factory.setPackagesToScan("com.*"); factory.setDataSource(dataSource()); return factory; } @Bean public PlatformTransactionManager transactionManager() { JpaTransactionManager txManager = new JpaTransactionManager(); txManager.setEntityManagerFactory(entityManagerFactory().getObject()); return txManager; } } ApplicationConfig: import org.springframework.context.annotation.Configuration; import org.springframework.context.annotation.Import; import org.springframework.data.jpa.repository.config.EnableJpaRepositories; @Configuration @Import({InfrastructureConfig.class, PlainJpaConfig.class}) @EnableJpaRepositories(basePackages = "com.*") public class ApplicationConfig { } RawData: package com.example.testcontroller.entity; import javax.persistence.*; @Entity @Table(name="RawData") public class RawData { //same as in MqttDataModel @Id @GeneratedValue(strategy= GenerationType.AUTO) public Long id; //TODO:mqttID? 
//Same public double timestamp; //Same public double energy_value; //Same public String topics; } RawDataRepository: package com.example.testcontroller.repository; import com.example.testcontroller.entity.RawData; import org.springframework.data.jpa.repository.JpaRepository; import org.springframework.stereotype.Repository; @Repository public interface RawDataRepository extends JpaRepository<RawData, Long> { } package com.example.testcontroller.repository; import com.example.testcontroller.entity.RawData; import org.springframework.data.jpa.repository.JpaRepository; import org.springframework.stereotype.Repository; @Repository public interface RawDataRepository extends JpaRepository<RawData, Long> { } RawDataService: package com.example.testcontroller.service; import com.example.testcontroller.entity.RawData; import com.example.testcontroller.repository.RawDataRepository; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Service; import java.util.List; @Service public class RawDataService { @Autowired private RawDataRepository rawDataRepository; public List<RawData> findAll(){ return rawDataRepository.findAll(); } } TestcontrollerApplication: package com.example.testcontroller; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; @SpringBootApplication public class TestcontrollerApplication { public static void main(String[] args) { SpringApplication.run(TestcontrollerApplication.class, args); } } Pom.xml: 4.0.0 org.springframework.boot spring-boot-starter-parent 3.0.0 com.example testcontroller 0.0.1-SNAPSHOT testcontroller testcontroller <java.version>17</java.version> org.springframework.boot spring-boot-starter-web <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.eclipse.paho</groupId> <artifactId>org.eclipse.paho.client.mqttv3</artifactId> <version>1.2.5</version> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-jpa</artifactId> </dependency> <dependency> <groupId>org.json</groupId> <artifactId>json</artifactId> <version>20220924</version> </dependency> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>8.0.30</version> </dependency> <dependency> <groupId>org.hibernate</groupId> <artifactId>hibernate-entitymanager</artifactId> <version>5.6.14.Final</version> </dependency> <dependency> <groupId>com.h2database</groupId> <artifactId>h2</artifactId> </dependency> <dependency> <groupId>org.postgresql</groupId> <artifactId>postgresql</artifactId> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-maven-plugin</artifactId> </plugin> </plugins> </build> I have EntityScan,EnableJPARepositories and everything else in config files, so I am really lost. I can't find what is wrong. Thank you for your help!
[ "I solved it. After rebuilding the project it could not detect javax anymore and only suggested Jakarta.persistence. Now it is working\n" ]
[ 0 ]
[]
[]
[ "hibernate", "spring", "spring_boot", "spring_data", "spring_data_jpa" ]
stackoverflow_0074680587_hibernate_spring_spring_boot_spring_data_spring_data_jpa.txt
Q: Microk8s ArgoCD Error during SSL Handshake I have a problem. In my home setup, I have a portal server, which controls all the HTTP/HTTPS traffic of all hosted domains and then forwards it to the right server using proxy. One of the sites where I am having this current trouble has the following apache config: <VirtualHost *:80> ServerName my.domain.com ProxyRequests Off <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPreserveHost on ProxyPass / http://192.168.1.8/ ProxyPassReverse / http://192.168.1.8 </VirtualHost> <IfModule mod_ssl.c> <VirtualHost *:443> ServerName my.domain.com ProxyRequests Off <Proxy *> Order deny,allow Allow from all </Proxy> SSLProxyEngine On SSLProxyCheckPeerCN on SSLProxyCheckPeerExpire on ProxyPreserveHost on ProxyPass / https://192.168.1.8/ ProxyPassReverse / https://192.168.1.8 Include /etc/letsencrypt/options-ssl-apache.conf ErrorLog /var/log/apache2/sites/my.domain.com/error.log CustomLog /var/log/apache2/sites/my.domain.com/access.log combined SSLCertificateFile /etc/letsencrypt/live/my.domain.com/fullchain.pem SSLCertificateKeyFile /etc/letsencrypt/live/my.domain.com/privkey.pem </VirtualHost> </IfModule> This config is used for almost all the domains and is working correctly on each server. But now I want one server to use a kubernetes cluster using Microk8s and ArgoCD, so I am using my domain: my.domain.com for showing the ArgoCD UI. So I installed ArgoCD using the two commands: kubectl create namespace argocd kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/core-install.yaml And enabled Ingress using: microk8s enable ingress. Then I made this Ingress resource which should forward the incomming DNS: my.domain.com, to the Argocd-server service using this config: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: argocd-server-ingress namespace: argocd annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/force-ssl-redirect: "true" nginx.ingress.kubernetes.io/ssl-passthrough: "true" spec: rules: - host: my.domain.com http: paths: - path: / pathType: Prefix backend: service: name: argocd-server port: name: https I applied this ingress route and when I get this ingress from kubectl I get: NAME CLASS HOSTS ADDRESS PORTS AGE argocd-server-ingress <none> my.domain.com 80 14m Finally I have set the --enable-ssl-passthrough flag in the Ingress-Nginx-Controller DaemonSet (NOT CONTROLLER, BUT DAEMONSET), because I don't have a controller when I enable ingress for microk8s. After all this configuration, I go to my browser and enter my domain and get: What am I doing wrong here, or what am I missing??? A: It sounds like you have set up an Ingress resource in your Kubernetes cluster to handle incoming traffic to your ArgoCD UI at my.domain.com. The nginx.ingress.kubernetes.io/ssl-passthrough: "true" annotation indicates that you want SSL/TLS traffic to be passed directly to the backend service without being terminated by the Ingress controller. However, it appears that there is a problem with your setup, as the Ingress resource is not being correctly assigned an Ingress class. The kubernetes.io/ingress.class: nginx annotation specifies that you want to use the nginx Ingress controller to handle incoming traffic, but the CLASS column in the output of kubectl get ingress shows a value of for the Ingress resource. To fix this issue, you will need to make sure that the nginx Ingress controller is properly installed and configured in your cluster. 
You can check the status of the Ingress controller by running the following command:

kubectl get pods -n kube-system | grep nginx-ingress

If the Ingress controller is not running, you will need to install it using the instructions in the Kubernetes documentation.
Once the nginx Ingress controller is installed and running, you should be able to update your Ingress resource to use the correct Ingress class. You can do this by adding the following annotation to your Ingress resource definition:

kubernetes.io/ingress.class: nginx

After updating the Ingress resource, you can apply the changes by running the following command:

kubectl apply -f <ingress-resource-definition-file>

This should cause the Ingress controller to be associated with your Ingress resource, and incoming traffic to my.domain.com should be routed correctly to your ArgoCD UI. If you are still having problems, you may need to check the logs for the Ingress controller to see if there are any error messages that can help you diagnose the issue.
A: If you are using the kubectl command to check the status of your pods, it is possible that the information you are seeing is not up to date. You can try running the kubectl get pods --namespace=ingress-nginx --refresh command to see if this updates the status of your pods.
It is also possible that the pods you are seeing are not the ones you expect. In Kubernetes, a pod is a unit of deployment that typically contains one or more containers. It sounds like you are trying to deploy an ingress controller, which is a special type of pod that is used to route traffic to other pods in your cluster. The ingress controller pod typically has the name ingress-nginx-controller, so if the pod you are seeing does not have this name, it may not be the correct one.
Finally, it is worth mentioning that the class field in the output of kubectl get ingresses indicates the class of load balancer that is being used for the ingress. If the class field is set to <none>, this typically means that no load balancer has been configured for the ingress. In order to route traffic to your pods, you will need to create a load balancer and specify its class in the ingress resource. You can do this using the kubectl command or by editing the ingress resource directly.
A: #Ingress is a Kubernetes resource that provides routing to services within the cluster. To create an Ingress resource, you will need to define a few things:

#A set of rules that define how incoming requests should be routed to the appropriate services.
#A LoadBalancer, which will be used to distribute incoming requests to the appropriate service.
#A default backend, which will be used to handle requests that don't match any of the rules defined in the Ingress resource.

#Here is an example of an Ingress resource that you can use as a starting point:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  labels:
    app: my-app
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: my-service
          servicePort: 80
  backend:
    serviceName: my-default-backend
    servicePort: 80

#This Ingress resource defines a single routing rule that sends requests with the path / to the service my-service on port 80. If a request does not match this rule, it will be sent to the default backend my-default-backend on port 80.

#You can then create the Ingress resource using the kubectl command-line tool:

kubectl apply -f my-ingress.yaml

This will create the Ingress resource in your cluster.
You can verify that it was created successfully by running the following command:

kubectl get ingress

This should print a list of Ingress resources in your cluster, including the one you just created.

#I hope this helps! Let me know if you have any other questions.
A: I finally got it working! I noticed that nobody on my server was listening to port 80 nor 443, but yet I still got the 404 from Nginx when I called my domain. After some googling I found the answer.
First I downloaded the deployment file of Ingress-nginx-controller using:

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/baremetal/deploy.yaml

Then I added hostNetwork: true at:

      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      hostNetwork: true
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission

I deployed the yaml file and POOF! The ArgoCD web UI popped up! Still many thanks for your help @Siedud. I still learned a lot from you and I hope you can use this information to prevent spending time on this kind of problem ever! First thing I will do after deploying any ingress controller is checking for port 80/443 usage on my server!
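A quick way to run the port check the accepted answer ends with, as a sketch assuming a Linux host with the iproute2 tools installed; empty output means nothing is bound to the web ports yet:

sudo ss -tlnp | grep -E ':80|:443'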
Microk8s ArgoCD Error during SSL Handshake
I have a problem. In my home setup, I have a portal server, which controls all the HTTP/HTTPS traffic of all hosted domains and then forwards it to the right server using proxy. One of the sites where I am having this current trouble has the following apache config: <VirtualHost *:80> ServerName my.domain.com ProxyRequests Off <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPreserveHost on ProxyPass / http://192.168.1.8/ ProxyPassReverse / http://192.168.1.8 </VirtualHost> <IfModule mod_ssl.c> <VirtualHost *:443> ServerName my.domain.com ProxyRequests Off <Proxy *> Order deny,allow Allow from all </Proxy> SSLProxyEngine On SSLProxyCheckPeerCN on SSLProxyCheckPeerExpire on ProxyPreserveHost on ProxyPass / https://192.168.1.8/ ProxyPassReverse / https://192.168.1.8 Include /etc/letsencrypt/options-ssl-apache.conf ErrorLog /var/log/apache2/sites/my.domain.com/error.log CustomLog /var/log/apache2/sites/my.domain.com/access.log combined SSLCertificateFile /etc/letsencrypt/live/my.domain.com/fullchain.pem SSLCertificateKeyFile /etc/letsencrypt/live/my.domain.com/privkey.pem </VirtualHost> </IfModule> This config is used for almost all the domains and is working correctly on each server. But now I want one server to use a kubernetes cluster using Microk8s and ArgoCD, so I am using my domain: my.domain.com for showing the ArgoCD UI. So I installed ArgoCD using the two commands: kubectl create namespace argocd kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/core-install.yaml And enabled Ingress using: microk8s enable ingress. Then I made this Ingress resource which should forward the incomming DNS: my.domain.com, to the Argocd-server service using this config: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: argocd-server-ingress namespace: argocd annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/force-ssl-redirect: "true" nginx.ingress.kubernetes.io/ssl-passthrough: "true" spec: rules: - host: my.domain.com http: paths: - path: / pathType: Prefix backend: service: name: argocd-server port: name: https I applied this ingress route and when I get this ingress from kubectl I get: NAME CLASS HOSTS ADDRESS PORTS AGE argocd-server-ingress <none> my.domain.com 80 14m Finally I have set the --enable-ssl-passthrough flag in the Ingress-Nginx-Controller DaemonSet (NOT CONTROLLER, BUT DAEMONSET), because I don't have a controller when I enable ingress for microk8s. After all this configuration, I go to my browser and enter my domain and get: What am I doing wrong here, or what am I missing???
[ "It sounds like you have set up an Ingress resource in your Kubernetes cluster to handle incoming traffic to your ArgoCD UI at my.domain.com. The nginx.ingress.kubernetes.io/ssl-passthrough: \"true\" annotation indicates that you want SSL/TLS traffic to be passed directly to the backend service without being terminated by the Ingress controller.\nHowever, it appears that there is a problem with your setup, as the Ingress resource is not being correctly assigned an Ingress class. The kubernetes.io/ingress.class: nginx annotation specifies that you want to use the nginx Ingress controller to handle incoming traffic, but the CLASS column in the output of kubectl get ingress shows a value of for the Ingress resource.\nTo fix this issue, you will need to make sure that the nginx Ingress controller is properly installed and configured in your cluster. You can check the status of the Ingress controller by running the following command:\nkubectl get pods -n kube-system | grep nginx-ingress\nIf the Ingress controller is not running, you will need to install it using the instructions in the Kubernetes documentation:\nOnce the nginx Ingress controller is installed and running, you should be able to update your Ingress resource to use the correct Ingress class. You can do this by adding the following annotation to your Ingress resource definition:\nkubernetes.io/ingress.class: nginx\nAfter updating the Ingress resource, you can apply the changes by running the following command:\nkubectl apply -f <ingress-resource-definition-file>\nThis should cause the Ingress controller to be associated with your Ingress resource, and incoming traffic to my.domain.com should be routed correctly to your ArgoCD UI. If you are still having problems, you may need to check the logs for the Ingress controller to see if there are any error messages that can help you diagnose the issue.\n", "If you are using the kubectl command to check the status of your pods, it is possible that the information you are seeing is not up to date. You can try running the kubectl get pods --namespace=ingress-nginx --refresh command to see if this updates the status of your pods.\nIt is also possible that the pods you are seeing are not the ones you expect. In Kubernetes, a pod is a unit of deployment that typically contains one or more containers. It sounds like you are trying to deploy an ingress controller, which is a special type of pod that is used to route traffic to other pods in your cluster. The ingress controller pod typically has the name ingress-nginx-controller, so if the pod you are seeing does not have this name, it may not be the correct one.\nFinally, it is worth mentioning that the class field in the output of kubectl get ingresses indicates the class of load balancer that is being used for the ingress. If the class field is set to , this typically means that no load balancer has been configured for the ingress. In order to route traffic to your pods, you will need to create a load balancer and specify its class in the ingress resource. You can do this using the kubectl command or by editing the ingress resource directl\n", "#Ingress is a Kubernetes resource that provides routing to services within the cluster. 
To create an Ingress resource, you will need to define a few things:\n\n#A set of rules that define how incoming requests should be routed to the appropriate services.\n#A LoadBalancer, which will be used to distribute incoming requests to the appropriate service.\n#A default backend, which will be used to handle requests that don't match any of the rules defined in the Ingress resource.\n#Here is an example of an Ingress resource that you can use as a starting point:\n\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n name: my-ingress\n labels:\n app: my-app\nspec:\n rules:\n - http:\n paths:\n - path: /\n backend:\n serviceName: my-service\n servicePort: 80\n backend:\n serviceName: my-default-backend\n servicePort: 80\n\n#This Ingress resource defines a single routing rule that sends requests with the path / to the service my-service on port 80. If a request does not match this rule, it will be sent to the default backend my-default-backend on port 80.\n\n#You can then create the Ingress resource using the kubectl command-line tool:\n\nkubectl apply -f my-ingress.yaml\nThis will create the Ingress resource in your cluster. You can verify that it was created successfully by running the following command:\n\nkubectl get ingress\nThis should print a list of Ingress resources in your cluster, including the one you just created.\n\n#I hope this helps! Let me know if you have any other questions.\n\n", "I finally got it working! I noticed that nobody on my server was listening to port 80 nor 443, but yet I still got the 404 from Nginx when I called my domain. After some googling I found the answer.\nFirst I downloaded the deployment file of Ingress-nginx-controller using:\nwget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/baremetal/deploy.yaml\n\nThen I added hostNetwork: true at:\n serviceAccountName: ingress-nginx\n terminationGracePeriodSeconds: 300\n hostNetwork: true\n volumes:\n - name: webhook-cert\n secret:\n secretName: ingress-nginx-admission\n\nI deployed the yaml file and POOF! The ArgoCD web UI popped up! Still many thanks for your help @Siedud. I still learned a lot from you and I hope you can use this information to prevent spending time on this kind of problem ever! First thing I will do after deploying any ingress controller is checking for port 80/443 usage on my server!\n" ]
[ 0, 0, 0, 0 ]
[]
[]
[ "kubernetes", "kubernetes_ingress", "microk8s", "ssl" ]
stackoverflow_0074680239_kubernetes_kubernetes_ingress_microk8s_ssl.txt
Q: Spawning threads for performance
I am writing a Rust script that needs to brute force the solution to some calculation and is likely to run 2^80 times. That is a lot! I am trying to make it run as fast as possible and thus want to divide the burden across multiple threads. However, if I understand correctly, this only accelerates my script if the threads actually run on different cores; otherwise they will not truly run simultaneously but switch between one another when running... hence I guess not really increasing speed? How can I make sure they use different cores, and how can I know that no more cores are available? Thanks. I tried looking at various forum posts.
A: Use std::thread::available_parallelism to know how many threads to run and let your OS handle the rest.
Typically when you create a thread, the OS thread scheduler is given free liberty to decide where and when those threads execute; however, it will do so in a way that best takes advantage of CPU resources. So of course if you use fewer threads than the system has available, you are potentially missing out on performance. If you use more than the number of available threads, that's not particularly a problem since the thread scheduler will try its best to balance the threads that have work to do, but more than the available threads would be a small waste of memory, OS resources, and context switches. Creating your threads to match the number of logical CPU cores on your system is the sweet spot, and the above function will get that.
You could tell the OS exactly which cores to run which threads by setting their affinity; however, that isn't really advisable since it wouldn't particularly make anything faster unless you start really configuring your kernel or are really taking advantage of your NUMA nodes.
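A minimal sketch of the pattern the answer describes: size the pool with std::thread::available_parallelism (stable since Rust 1.59) and give each thread a strided slice of the search space. Here check() and the candidate range are placeholders for the real brute-force test:

use std::thread;

fn check(candidate: u64) -> bool {
    candidate % 7 == 0 // stand-in for the real calculation
}

fn main() {
    let n = thread::available_parallelism().map(|p| p.get()).unwrap_or(1);

    let handles: Vec<_> = (0..n)
        .map(|i| {
            thread::spawn(move || {
                // worker i tests every n-th candidate (stride partitioning)
                let mut hits = 0u64;
                for candidate in (i as u64..1_000_000).step_by(n) {
                    if check(candidate) {
                        hits += 1;
                    }
                }
                hits
            })
        })
        .collect();

    let total: u64 = handles.into_iter().map(|h| h.join().unwrap()).sum();
    println!("found {total} candidates");
}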
Spawning threads for performance
I am writing a Rust script that needs to brute force the solution to some calculation and is likely to run 2^80 times. That is a lot! I am trying to make it run as fast as possible and thus want to divide the burden across multiple threads. However, if I understand correctly, this only accelerates my script if the threads actually run on different cores; otherwise they will not truly run simultaneously but switch between one another when running... hence I guess not really increasing speed? How can I make sure they use different cores, and how can I know that no more cores are available? Thanks. I tried looking at various forum posts.
[ "Use std::thread::available_parallelism to know how many threads to run and let your OS handle the rest.\nTypically when you create a thread, the OS thread scheduler is given free liberty to decide where and when those threads execute, however it will do so in a way that best takes advantage of CPU resources. So of course if you use less threads than the system has available, you are potentially missing out on performance. If you use more than the number of available threads, that's not particularly a problem since the thread scheduler will try its best to balance the threads that have work to do, but more than the available threads would be a mall waste of memory, OS resources, and context-switches. Creating your threads to match the number of logical CPU cores on your system is the sweetspot, and the above function will get that.\nYou could tell the OS exactly which cores to run which threads by setting their affinity, however that isn't really advisable since it wouldn't particularly make anything faster unless you start really configuring your kernel or are really taking advantage of your NUMA nodes.\n" ]
[ 1 ]
[]
[]
[ "multithreading", "performance", "rust" ]
stackoverflow_0074681365_multithreading_performance_rust.txt
Q: How to Successfully Build the Open Source Apple Chess Game in XCode I am trying to download and run the source code of a previous version of the Apple macOS chess game (preferably in the 369-408 version range) using XCode 14.1. The game is written in Objective-C and interfaces with a chess engine called "sjeng" that is written in C. (Correct me if I'm wrong). I have already navigated some preliminary stumbling blocks (which you may want to follow to duplicate if you'd like to give this a try): Downloading the source code in the first place. [ The next four steps come from here ] Commenting out the "#include..." line from the Chess.xcconfig file. Removing the com.apple.private.tcc.allow entitlement from the Chess.entitlements file. Getting my provisioning profile set up for the X-Code project (this is straight-forward as long as you already have a developer profile). Changing the bundle identifier from "com.apple..." to something random. Resolving "Implicit declaration of function is invalid in C99" compile-time errors related to the C code within the sjeng chess engine. This question helped with that. But now I am stuck on the next and hopefully final step which is this is the build error: ./build-book normal nbook.pgn + test -z '' + SJENG=/Users/classified/Library/Developer/Xcode/DerivedData/MBChess-frynfmbcfskhcfdlqxxctvlldmnm/Build/Products/Development/sjeng.ChessEngine + cat + /Users/classified/Library/Developer/Xcode/DerivedData/MBChess-frynfmbcfskhcfdlqxxctvlldmnm/Build/Products/Development/sjeng.ChessEngine ./build-book: line 21: /Users/classified/Library/Developer/Xcode/DerivedData/MBChess-frynfmbcfskhcfdlqxxctvlldmnm/Build/Products/Development/sjeng.ChessEngine: No such file or directory make: *** [nbook.db] Error 1 Command ExternalBuildToolExecution failed with a nonzero exit code I have no clue what this stage of the build process pertains to. I have verified in the Finder that the directory in the error message does indeed not exist. I tried "Cleaning the Build Folder" in XCode and building again, but same result. Can anyone get the game actually running (from source) on macOS and describe the steps required to get there? A: Here is how it worked for me: Download the project from here (build tag 408); Unarchive the project and open MBChess.xcodeproj file with Xcode; Open MBChess target and do as follows: Change Bundle Identifier to something more relevant to you Enable "Automatically manage signing" flag Choose your Apple Developer team OR choose any personal team (Optional) If you chose a personal team, don't forget to remove incompatible entitlements from here (Game Center) Remove Chess.xcconfig file from Project Navigator: Find Chess.entitelements file and remove com.apple.private.tcc.allow array from it: Select sjeng target and build it first Select MBChess target and build it for the same platform At this point the app should build successfully (I was using macOS Ventura 13.0.1 (22A400) as the target platform with Xcode Version 14.1 (14B47b))
How to Successfully Build the Open Source Apple Chess Game in XCode
I am trying to download and run the source code of a previous version of the Apple macOS chess game (preferably in the 369-408 version range) using XCode 14.1. The game is written in Objective-C and interfaces with a chess engine called "sjeng" that is written in C. (Correct me if I'm wrong). I have already navigated some preliminary stumbling blocks (which you may want to follow to duplicate if you'd like to give this a try): Downloading the source code in the first place. [ The next four steps come from here ] Commenting out the "#include..." line from the Chess.xcconfig file. Removing the com.apple.private.tcc.allow entitlement from the Chess.entitlements file. Getting my provisioning profile set up for the X-Code project (this is straight-forward as long as you already have a developer profile). Changing the bundle identifier from "com.apple..." to something random. Resolving "Implicit declaration of function is invalid in C99" compile-time errors related to the C code within the sjeng chess engine. This question helped with that. But now I am stuck on the next and hopefully final step which is this is the build error: ./build-book normal nbook.pgn + test -z '' + SJENG=/Users/classified/Library/Developer/Xcode/DerivedData/MBChess-frynfmbcfskhcfdlqxxctvlldmnm/Build/Products/Development/sjeng.ChessEngine + cat + /Users/classified/Library/Developer/Xcode/DerivedData/MBChess-frynfmbcfskhcfdlqxxctvlldmnm/Build/Products/Development/sjeng.ChessEngine ./build-book: line 21: /Users/classified/Library/Developer/Xcode/DerivedData/MBChess-frynfmbcfskhcfdlqxxctvlldmnm/Build/Products/Development/sjeng.ChessEngine: No such file or directory make: *** [nbook.db] Error 1 Command ExternalBuildToolExecution failed with a nonzero exit code I have no clue what this stage of the build process pertains to. I have verified in the Finder that the directory in the error message does indeed not exist. I tried "Cleaning the Build Folder" in XCode and building again, but same result. Can anyone get the game actually running (from source) on macOS and describe the steps required to get there?
[ "Here is how it worked for me:\n\nDownload the project from here (build tag 408);\nUnarchive the project and open MBChess.xcodeproj file with Xcode;\nOpen MBChess target and do as follows:\n\nChange Bundle Identifier to something more relevant to you\nEnable \"Automatically manage signing\" flag\nChoose your Apple Developer team OR choose any personal team\n(Optional) If you chose a personal team, don't forget to remove incompatible entitlements from here (Game Center)\n\n\n\n\n\nRemove Chess.xcconfig file from Project Navigator:\n\n\nFind Chess.entitelements file and remove com.apple.private.tcc.allow array from it:\n\n\nSelect sjeng target and build it first\n\n\nSelect MBChess target and build it for the same platform\n\n\n\nAt this point the app should build successfully (I was using macOS Ventura 13.0.1 (22A400) as the target platform with Xcode Version 14.1 (14B47b))\n" ]
[ 0 ]
[]
[]
[ "apple_open_source", "build", "c", "objective_c", "xcode" ]
stackoverflow_0074660340_apple_open_source_build_c_objective_c_xcode.txt
Q: How to properly install MechanicalSoup for Python?
I wanted to practice web scraping with the Python module MechanicalSoup, but when I started installing it using pip install mechanicalsoup I encountered this error: "Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?". I then tried running pip3 install lxml --use-pep517 to install lxml and its dependencies, but it returned the same error. Note that I'm using Visual Studio Code and installing this in a Python virtual environment. I looked everywhere for a possible resolution but so far nothing I found has worked. Any help would be appreciated. Thanks!
A: To properly install MechanicalSoup, you need to make sure that you have the required dependencies installed. In this case, it looks like you need to install the lxml library.
Here are the steps you can follow to properly install MechanicalSoup:
Create a Python virtual environment for your project, if you haven't already done so. This will help you avoid conflicts with other Python projects and their dependencies. To create a virtual environment, you can use the virtualenv module. For example:

$ virtualenv my-project-venv

Activate your virtual environment. This will enable the virtual environment for your current shell session and allow you to install packages within this environment. To activate your virtual environment, you can use the source command, followed by the path to your virtual environment's bin/activate script. For example:

$ source my-project-venv/bin/activate

Install the required dependencies for MechanicalSoup. In this case, you need to install the lxml library. You can do this using pip by running the following command:

$ pip install lxml

Install MechanicalSoup itself. Once you have installed the required dependencies, you can install MechanicalSoup using pip by running the following command:

$ pip install mechanicalsoup

After following these steps, you should be able to import and use MechanicalSoup in your Python code. For example:

import mechanicalsoup

browser = mechanicalsoup.StatefulBrowser()

I hope this helps! Let me know if you have any other questions.
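The xmlCheckVersion error means pip is compiling lxml from source without the libxml2 headers, so pip install lxml alone may keep failing until the system libraries exist. A sketch assuming Debian/Ubuntu; package names differ on macOS and Windows:

# install the C headers lxml builds against
sudo apt-get install libxml2-dev libxslt1-dev python3-dev

# or skip compiling entirely by forcing a prebuilt wheel
pip install --only-binary lxml lxml
pip install mechanicalsoup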
How to properly install MechanicalSoup for Python?
I wanted to practice web scraping with Python module MechanicalSoup, but when I started installing it using pip install mechanicalsoup I encountered this error "Could not find function xmlCheckVersion in library libxml2. Is libxml2 installed?". I then tried running pip3 install lxml --use-pep517 to install lxml and its dependencies it returned the same error. Note that I'm using Visual Studio Code and installing this in a Python Virtual Environment. I looked up every where for possible resolution but so far nothing I found has worked. Any help would be appreciated. Thanks!
[ "To properly install MechanicalSoup, you need to make sure that you have the required dependencies installed. In this case, it looks like you need to install the lxml library.\nHere are the steps you can follow to properly install MechanicalSoup:\nCreate a Python virtual environment for your project, if you haven't already done so. This will help you avoid conflicts with other Python projects and their dependencies. To create a virtual environment, you can use the virtualenv module. For example:\n$ virtualenv my-project-venv\n\nActivate your virtual environment. This will enable the virtual environment for your current shell session and allow you to install packages within this environment. To activate your virtual environment, you can use the source command, followed by the path to your virtual environment's bin/activate script. For example:\n$ source my-project-venv/bin/activate\n\nInstall the required dependencies for MechanicalSoup. In this case, you need to install the lxml library. You can do this using pip by running the following command:\n$ pip install lxml\n\nInstall MechanicalSoup itself. Once you have installed the required dependencies, you can install MechanicalSoup using pip by running the following command:\nCopy code\n$ pip install mechanicalsoup\n\nAfter following these steps, you should be able to import and use MechanicalSoup in your Python code. For example:\nimport mechanicalsoup\n\nbrowser = mechanicalsoup.StatefulBrowser()\n\nI hope this helps! Let me know if you have any other questions.\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "mechanicalsoup", "python", "web_scraping" ]
stackoverflow_0074681403_beautifulsoup_mechanicalsoup_python_web_scraping.txt
Q: J unit test for function that returns a csv file
I have a function checkInbox() which runs a script that gets values from a MySQL table and returns it in a CSV file as well as prints it on the terminal. My question is how can I run a JUnit test to make sure that the information in the terminal is correct.
checkInbox():

public class CheckInbox {

    public static void main(String[] args) {
        int send = 1;
        int recieve = 1;
        checkInbox(send, recieve);
    }

    public static int checkInbox(int senderId, int recieverId) {
        String url = "jdbc:mysql://localhost/petcare";
        String password = "ParkSideRoad161997";
        String username = "root";

        PreparedStatement p = null;
        ResultSet rs = null;

        String csvFilePath = "inbox.csv";

        try (Connection connection = DriverManager.getConnection(url, username, password)) {
            Connection con = DriverManager.getConnection(url, username, password);
            BufferedWriter fileWriter = new BufferedWriter(new FileWriter(csvFilePath));

            // write header line containing column names
            fileWriter.write("Message");
            System.out.println("Message");

            String sql2 = "select * from message where senderID = ? OR receiverID = ?";
            p = con.prepareStatement(sql2);
            p.setInt(1, senderId);
            p.setInt(2, recieverId);
            rs = p.executeQuery();

            while (rs.next()) {
                // int messageId = rs.getInt("messageID");
                String post = rs.getString("post");
                String line = String.format("%s", post);
                System.out.println(line);
                fileWriter.newLine();
                fileWriter.write(line);
            }

            p.close();
            fileWriter.close();
        } catch (SQLException e) {
            System.out.println("Datababse error:");
            e.printStackTrace();
        } catch (IOException e) {
            System.out.println("File IO error:");
            e.printStackTrace();
        }
        return 0;
    }
}

terminal:

Message
Hi Bob are you free to groom my pet
Hi John, yes I am free to groom your pet
Sounds good when should we make an apt?
Whenever you feel works best.

What I want to be able to do is write a test function that makes sure that 1 or more of the columns is correct in the csv file.
A: Your test should assert that the code under test returns expected results when called with particular input values.

assertEquals(inbox.csv, inbox.checkInbox(1, 1));

checkInbox(int, int) currently returns an int. You're comparing an int result with "inbox.csv" - an int cannot equal a String so this doesn't make sense right now.
If you're intending to compare the contents of the retrieved csv, then your method under test needs to return something containing the contents of the csv, not an int.
When writing your code, it helps to think about how you are structuring your code to make it easier to test. Smaller methods that do a single thing are usually easier to test, vs large methods that do several things are harder to test.
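A sketch of the refactor the answer suggests: have the query logic return the rows so a test can assert on them directly. fetchInbox is a hypothetical name, the test assumes JUnit 4 and a seeded test database, and the credentials are placeholders:

import java.sql.*;
import java.util.ArrayList;
import java.util.List;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CheckInboxTest {

    // testable variant: return the rows instead of only printing/writing them
    static List<String> fetchInbox(Connection con, int senderId, int receiverId) throws SQLException {
        List<String> posts = new ArrayList<>();
        String sql = "select post from message where senderID = ? OR receiverID = ?";
        try (PreparedStatement p = con.prepareStatement(sql)) {
            p.setInt(1, senderId);
            p.setInt(2, receiverId);
            try (ResultSet rs = p.executeQuery()) {
                while (rs.next()) {
                    posts.add(rs.getString("post"));
                }
            }
        }
        return posts;
    }

    @Test
    public void firstMessageIsCorrect() throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost/petcare", "root", "password")) { // test credentials assumed
            List<String> posts = fetchInbox(con, 1, 1);
            assertEquals("Hi Bob are you free to groom my pet", posts.get(0));
        }
    }
}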
J unit test for function that returns a csv file
I have a function checkInbox() which runs a script that gets values from a mysql table and returns it in a csv file as well as prints it on the terminal. My question is how can I run a j unit test to make sure that the information in the terminal is correct. checkInbox(): public class CheckInbox { public static void main(String[] args) { int send = 1; int recieve = 1; checkInbox(send, recieve); } public static int checkInbox(int senderId, int recieverId) { String url = "jdbc:mysql://localhost/petcare"; String password = "ParkSideRoad161997"; String username = "root"; PreparedStatement p = null; ResultSet rs = null; String csvFilePath = "inbox.csv"; try (Connection connection = DriverManager.getConnection(url, username, password)) { Connection con = DriverManager.getConnection(url, username, password); BufferedWriter fileWriter = new BufferedWriter(new FileWriter(csvFilePath)); // write header line containing column names fileWriter.write("Message"); System.out.println("Message"); String sql2 = "select * from message where senderID = ? OR receiverID = ?"; p = con.prepareStatement(sql2); p.setInt(1, senderId); p.setInt(2, recieverId); rs = p.executeQuery(); while (rs.next()) { // int messageId = rs.getInt("messageID"); String post = rs.getString("post"); String line = String.format("%s", post); System.out.println(line); fileWriter.newLine(); fileWriter.write(line); } p.close(); fileWriter.close(); } catch (SQLException e) { System.out.println("Datababse error:"); e.printStackTrace(); } catch (IOException e) { System.out.println("File IO error:"); e.printStackTrace(); } return 0; } } terminal: Message Hi Bob are you free to groom my pet Hi John, yes I am free to groom your pet Sounds good when should we make an apt? Whenever you feel works best. What I want to be able to do is write a test function that makes sure that 1 or more of the columns is correct in the csv file.
[ "Your test should assert that the code under test returns expecteds results when called when particular input values.\nassertEquals(inbox.csv, inbox.checkInbox(1, 1));\n\ncheckInbox(int, int) currently returns an int. You're comparing an int result with \"inbox.csv\" - an int cannot equal a String so this doesn't make sense right now.\nIf you're intending to compare the contents of the retrieved csv, then your method under test needs to return something containing the contents of the csv, not an int.\nWhen writing your code, it helps to think about how you are structuring your code to make it easier to test. Smaller methods that do a single thing are usually easier to test, vs large methods that do several things are harder to test.\n" ]
[ 0 ]
[]
[]
[ "java", "junit", "mysql" ]
stackoverflow_0074681252_java_junit_mysql.txt
Q: AutoHotKey Script for date time function, what code for "MM" may be substituted to provide the month as text "i.e. NOV, DEC, etc." my script follows
My current script is:

#z::
FormatTime, CurrentDateTime,, MM.dd.yyyy HH:mm:ss -{SPACE}
SendInput %CurrentDateTime%
return

I wish to have the month displayed as text in the resultant output [JAN, FEB, MAR, APR, etc.]. I wish to be able to enter date/time sequences with a keystroke anywhere I am working on my PC. I simply do not know the format code to use in place of the integer months 01, 02, ... I googled "what is the text code for the integer month dates"; however, the returns are specific to Excel rather than general docs such as Google Docs.
A: If you are after month format as JAN, FEB..., you can use MMM date format.
From documentation:

MMM Abbreviated month name (e.g. Jan) in the current user's language

For example:

FormatTime, CurrentDateTime,, MMM
StringUpper, result, CurrentDateTime
MsgBox %result%

will display : DEC
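Folding the answer back into the original hotkey, as a sketch assuming AutoHotkey v1 syntax; the StringUpper line is only needed if you want NOV rather than Nov:

#z::
FormatTime, CurrentDateTime,, MMM.dd.yyyy HH:mm:ss
StringUpper, CurrentDateTime, CurrentDateTime
SendInput %CurrentDateTime%{Space}
return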
AutoHotKey Script for date time function, what code for "MM" may be substituted to provide the month as text "i.e. NOV, DEC, etc." my script follows
My current script is:

#z::
FormatTime, CurrentDateTime,, MM.dd.yyyy HH:mm:ss -{SPACE}
SendInput %CurrentDateTime%
return

I wish to have the month displayed as text in the resultant output [JAN, FEB, MAR, APR, etc.]. I wish to be able to enter date/time sequences with a keystroke anywhere I am working on my PC. I simply do not know the format code to use in place of the integer months 01, 02, ... I googled "what is the text code for the integer month dates"; however, the returns are specific to Excel rather than general docs such as Google Docs.
[ "If you are after month format as JAN, FEB..., you can use MMM date format.\nFrom documentation:\nMMM Abbreviated month name (e.g. Jan) in the current user's language\n\nFor example:\nFormatTime, CurrentpateTime,, MMM\nStringUpper, result, CurrentpateTime\nMsgBox %result%\n\nwill display : DEC\n" ]
[ 0 ]
[]
[]
[ "autohotkey", "datetime", "integer", "replace", "windows" ]
stackoverflow_0074680536_autohotkey_datetime_integer_replace_windows.txt
Q: What's the name of the Flutter widget that has icons below the screen?
What's the name of the Flutter widget that has icons below the screen and I can slide right and left to change between these screens (Ex: Twitter main page)? I could create a Container with a Row and the Icons and do this manually, but I suspect this widget already exists in Flutter.
A: This bottom navigation bar can be done using BottomNavigationBar in the bottomNavigationBar property on your Scaffold:

bottomNavigationBar: BottomNavigationBar(
  items: [
    BottomNavigationBarItem(icon: Icon(Icons.home), label: 'Home'),
    BottomNavigationBarItem(icon: Icon(Icons.business), label: 'Business'),
    BottomNavigationBarItem(icon: Icon(Icons.school), label: 'School'),
  ],
),

and the slidable pages can be done using a PageView widget:

PageView(
  children: <Widget>[
    ScreenOne(),
    ScreenTwo(),
    ScreenThree(),
  ],
);

and you can link both of them: when you click an item in the BottomNavigationBar, it will navigate to a specific page with a PageController.
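A sketch of wiring the two together with a PageController, as the answer suggests; HomeShell is a hypothetical name and ScreenOne/Two/Three are placeholder widgets:

class HomeShell extends StatefulWidget {
  const HomeShell({super.key});

  @override
  State<HomeShell> createState() => _HomeShellState();
}

class _HomeShellState extends State<HomeShell> {
  final _controller = PageController();
  int _index = 0;

  @override
  void dispose() {
    _controller.dispose();
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: PageView(
        controller: _controller,
        onPageChanged: (i) => setState(() => _index = i), // swiping updates the bar
        children: const [ScreenOne(), ScreenTwo(), ScreenThree()],
      ),
      bottomNavigationBar: BottomNavigationBar(
        currentIndex: _index,
        onTap: (i) => _controller.animateToPage(i,
            duration: const Duration(milliseconds: 300), curve: Curves.easeOut),
        items: const [
          BottomNavigationBarItem(icon: Icon(Icons.home), label: 'Home'),
          BottomNavigationBarItem(icon: Icon(Icons.business), label: 'Business'),
          BottomNavigationBarItem(icon: Icon(Icons.school), label: 'School'),
        ],
      ),
    );
  }
}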
What's the name of the Flutter widget that has icons below the screen?
What's the name of the Flutter widget that has icons below the screen and I can slide to right and left to change between these screens (Ex: Twitter main page) I could create a Container with a Row and the Icons and do this manually, but I suspect that already exists this widget on Flutter.
[ "this bottom navigation bar can be done using BottomNavigationBar in the bottomNavigationBar property on your Scaffold :\nbottomNavigationBar: BottomNavigationBar(\n items: [\n BottomNavigationBarItem(icon: Icon(Icons.home), label: 'Home'),\n BottomNavigationBarItem(\n icon: Icon(Icons.business), label: 'Business'),\n BottomNavigationBarItem(icon: Icon(Icons.school), label: 'School'),\n ],\n ),\n\nand for the slidable pages can be done using a PageView widget:\nPageView(\n children: <Widget>[\n ScreenOne(),\n ScreenTwo(),\n ScreenThree(),\n ],\n);\n\nand you can link both of them when you click an item in the BottomNavigationBar, it will navigate to a specific page with a PageController.\n" ]
[ 0 ]
[]
[]
[ "dart", "flutter", "user_interface" ]
stackoverflow_0074681374_dart_flutter_user_interface.txt
Q: NodeJS get audio file I have stored some audio files in GridFS, and I am able to do a find() query to fetch them. But how do I get them so I can stream the audio from these files? import clientPromise from "../../lib/mongodb"; import { MongoClient, GridFSBucket } from 'mongodb' var CryptoJS = require("crypto-js"); //import { hash } from 'bcryptjs'; export default async function audio(req, res) { const uri = process.env.MONGODB_URI let client let clientPromise const options = {} client = new MongoClient(uri, options) clientPromise = client.connect() const clients = await clientPromise const database = clients.db('AUDIODB'); const bucket = new GridFSBucket(database,{bucketName:"uploads"}); const { id } = req.query; const cursor = bucket.find({}); cursor.forEach(doc => console.log(doc)); //THIS SHOWS ME THE FILES UPLOADED } export async function getServerSideProps(context) { try { // client.db() will be the default database passed in the MONGODB_URI // You can change the database by calling the client.db() function and specifying a database like: // const db = client.db("myDatabase"); // Then you can execute queries against your database like so: // db.find({}) or any of the MongoDB Node Driver commands await clientPromise return { props: { isConnected: true }, } } catch (e) { console.error(e) return { props: { isConnected: false }, } } } as you can see from above, I have it showing all the files that I have uploaded cursor.forEach(doc => console.log(doc)); //THIS SHOWS ME THE FILES UPLOADED However this just gets me the following { _id: new ObjectId("6267ba6dd2512c3257f30980"), filename: '/Volumes/Studio/AUDIOFILES/2022/AUDIOPATH/AUDIO.mp3', length: 168906, chunkSize: 261120, uploadDate: 2022-04-26T09:25:05.589Z } But clearly I can't put <audio src="FILENAME"/> as my server does not host the uploaded file. A: // Import the necessary modules import { MongoClient, GridFSBucket } from 'mongodb' // Import the client connection promise import clientPromise from '../../lib/mongodb' // Import the 'crypto' module to generate a unique filename import crypto from 'crypto' // Import the 'path' module to help with generating the file path import path from 'path' // Import the 'mime' module to help with setting the correct content type import mime from 'mime' // Define the audio route handler export default async function audio(req, res) { // Get the file ID from the request query const { id } = req.query // Connect to the MongoDB server const client = await clientPromise // Get the default database and create a new GridFS bucket instance const db = client.db() const bucket = new GridFSBucket(db, { bucketName: 'uploads' }) // Use the file ID to open a download stream from the GridFS bucket const downloadStream = bucket.openDownloadStream(id) // Set the response content type to the appropriate MIME type for the file res.setHeader('Content-Type', mime.getType(id)) // Pipe the download stream to the response object downloadStream.pipe(res) }
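Two details in the answer's sketch are worth flagging: openDownloadStream expects an ObjectId rather than the raw query string, and the MIME type has to come from the stored filename, since the id alone has no extension. A hedged rework under the same assumptions (the mime npm package and a Next.js-style API route):

import { GridFSBucket, ObjectId } from 'mongodb'
import mime from 'mime'
import clientPromise from '../../lib/mongodb'

export default async function audio(req, res) {
  const { id } = req.query

  const client = await clientPromise
  const db = client.db('AUDIODB')
  const bucket = new GridFSBucket(db, { bucketName: 'uploads' })

  // look the file up first so we can set the right content type (and 404 cleanly)
  const fileId = new ObjectId(id)
  const [file] = await bucket.find({ _id: fileId }).toArray()
  if (!file) {
    res.status(404).end()
    return
  }

  res.setHeader('Content-Type', mime.getType(file.filename) || 'audio/mpeg')
  res.setHeader('Content-Length', file.length)

  // stream the chunks straight to the response: <audio src="/api/audio?id=...">
  bucket.openDownloadStream(fileId).pipe(res)
}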
NodeJS get audio file
I have stored some audio files in GridFS, and I am able to do a find() query to fetch them. But how do I get them so I can stream the audio from these files? import clientPromise from "../../lib/mongodb"; import { MongoClient, GridFSBucket } from 'mongodb' var CryptoJS = require("crypto-js"); //import { hash } from 'bcryptjs'; export default async function audio(req, res) { const uri = process.env.MONGODB_URI let client let clientPromise const options = {} client = new MongoClient(uri, options) clientPromise = client.connect() const clients = await clientPromise const database = clients.db('AUDIODB'); const bucket = new GridFSBucket(database,{bucketName:"uploads"}); const { id } = req.query; const cursor = bucket.find({}); cursor.forEach(doc => console.log(doc)); //THIS SHOWS ME THE FILES UPLOADED } export async function getServerSideProps(context) { try { // client.db() will be the default database passed in the MONGODB_URI // You can change the database by calling the client.db() function and specifying a database like: // const db = client.db("myDatabase"); // Then you can execute queries against your database like so: // db.find({}) or any of the MongoDB Node Driver commands await clientPromise return { props: { isConnected: true }, } } catch (e) { console.error(e) return { props: { isConnected: false }, } } } as you can see from above, I have it showing all the files that I have uploaded cursor.forEach(doc => console.log(doc)); //THIS SHOWS ME THE FILES UPLOADED However this just gets me the following { _id: new ObjectId("6267ba6dd2512c3257f30980"), filename: '/Volumes/Studio/AUDIOFILES/2022/AUDIOPATH/AUDIO.mp3', length: 168906, chunkSize: 261120, uploadDate: 2022-04-26T09:25:05.589Z } But clearly I can't put <audio src="FILENAME"/> as my server does not host the uploaded file.
[ "// Import the necessary modules\nimport { MongoClient, GridFSBucket } from 'mongodb'\n\n// Import the client connection promise\nimport clientPromise from '../../lib/mongodb'\n\n// Import the 'crypto' module to generate a unique filename\nimport crypto from 'crypto'\n\n// Import the 'path' module to help with generating the file path\nimport path from 'path'\n\n// Import the 'mime' module to help with setting the correct content type\nimport mime from 'mime'\n\n// Define the audio route handler\nexport default async function audio(req, res) {\n // Get the file ID from the request query\n const { id } = req.query\n\n // Connect to the MongoDB server\n const client = await clientPromise\n\n // Get the default database and create a new GridFS bucket instance\n const db = client.db()\n const bucket = new GridFSBucket(db, { bucketName: 'uploads' })\n\n // Use the file ID to open a download stream from the GridFS bucket\n const downloadStream = bucket.openDownloadStream(id)\n\n // Set the response content type to the appropriate MIME type for the file\n res.setHeader('Content-Type', mime.getType(id))\n\n // Pipe the download stream to the response object\n downloadStream.pipe(res)\n}\n\n" ]
[ 0 ]
[]
[]
[ "audio", "gridfs", "mongodb", "node.js" ]
stackoverflow_0072011858_audio_gridfs_mongodb_node.js.txt
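For completeness: with a handler like the corrected one above mounted as a Next.js API route (the /api/audio path is an assumption about where the file lives under pages/api), the client side is just an audio element pointing at it, using the _id from the find() output:

<audio controls src="/api/audio?id=6267ba6dd2512c3257f30980"></audio>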
Q: Python how to do find with leading and trailing spaces I'm doing an extensive word search. How do I do a find that keeps leading and trailing spaces? The words are imported from a list. An example: find " oil " in "Use Cooking Oil" but do not find it in "Sally spoiled the food." .find() strips the leading and trailing spaces, and nltk tokenizing does also. This code works if I want a simple lookup, but it finds "oil" in "spoiled", which, for me, creates a false positive. The false positive is what I am trying to solve. I've tried putting " oil " in the word list (with spaces), but all methods I've tried strip the leading spaces (" oil " becomes "oil"). for r in search_list_df['title']: ###<- Search for word in this list. tfl_converted = [] token_found_list.clear() words = search_list ###<- list of words to cycle through. (including " oil ") for x in words: phrase = x text = r if phrase in r: ### <- this works if i DO NOT care about leading spaces. token_found_list.append(x) tfl_converted = ", ".join(token_found_list) if len(token_found_list) > 0: search_list_output.append(tfl_converted) else: tfl_converted = float("nan") search_list_output.append(tfl_converted) How do I iterate through a list of words and keep the leading and trailing spaces to avoid false positives and find only exact word matches? A: You could split the sentence into an array of words. This way, you can see if a word is present in the array, and thus overcome false positives: words = [word.lower() for word in sentence.split()] if 'oil' in words: print(True) Here, I have also made sure that every word in the sentence is lowercase, such that case sensitivity is not going to be a problem. The split() method makes sure that the string sentence is split by spaces. Hope this helps A: Create a function find_term with oneliner, def find_term(sentence, term): return len([word for word in sentence.lower().split() if term == word]) > 0 then you can use it in your code like, sentence = " xy z Oil spoil" if find_term(sentence, "oil"): #do something with the sentence
Python how to do find with leading and trailing spaces
I'm doing an extensive word search. How do I do a find that keeps leading and trailing spaces? The words are imported from a list. An example: find " oil " in "Use Cooking Oil" but do not find it in "Sally spoiled the food." .find() strips the leading and trailing spaces, and nltk tokenizing does also. This code works if I want a simple lookup, but it finds "oil" in "spoiled", which, for me, creates a false positive. The false positive is what I am trying to solve. I've tried putting " oil " in the word list (with spaces), but all methods I've tried strip the leading spaces (" oil " becomes "oil"). for r in search_list_df['title']: ###<- Search for word in this list. tfl_converted = [] token_found_list.clear() words = search_list ###<- list of words to cycle through. (including " oil ") for x in words: phrase = x text = r if phrase in r: ### <- this works if i DO NOT care about leading spaces. token_found_list.append(x) tfl_converted = ", ".join(token_found_list) if len(token_found_list) > 0: search_list_output.append(tfl_converted) else: tfl_converted = float("nan") search_list_output.append(tfl_converted) How do I iterate through a list of words and keep the leading and trailing spaces to avoid false positives and find only exact word matches?
[ "You could split the sentence into an array of words. This way, you can see if a word is present in the array, and thus overcome false positives:\nwords = [word.lower() for word in sentence.split()]\nif 'oil' in words:\n print(True)\n\nHere, I have also made sure that every word in the sentence is lowercase, such that case sensitivity is not going to be a problem. The split() method makes sure that the string sentence is split by spaces.\nHope this helps\n", "Create a function find_term with oneliner,\ndef find_term(sentence, term):\n return len([word for word in sentence.lower().split() if term == word]) > 0\n\n\nthen you can use it in your code like,\nsentence = \" xy z Oil spoil\"\n\nif find_term(sentence, \"oil\"):\n #do something with the sentence\n\n" ]
[ 0, 0 ]
[]
[]
[ "find", "python", "space" ]
stackoverflow_0074680977_find_python_space.txt
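Both answers split on whitespace, which still misses words that sit next to punctuation ("cooking oil." keeps the period attached). An alternative sketch using a regex word boundary, where punctuation also counts as a boundary; the sentence value here is a made-up example:

import re

sentence = "Use Cooking Oil, not lard."
if re.search(r"\boil\b", sentence, re.IGNORECASE):
    print(True)  # matches "Oil", but finds nothing in "Sally spoiled the food."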
Q: How to not rebuild the project without changes in the directory? I'm trying to write Makefile which would rebuild the file "./target/js/bundle.js" after changing any file in a directory "./ts" or its subdirectories. "make" should not rebuild "./target/js/bundle.js" without changing any file in a directory "./ts" or its subdirectories. The structure of the project: /ts - directory with typescript's sources /ts/tsconfig.json - config for the tsc (TypeScript compiler), this filename is known by tsc /target/js/bundle.js - target file /Makefile /ts/tsconfig.json: { "compilerOptions": { "outFile": "../target/js/build.js" } } My Makefile: target/js/bundle.js: ts/* cd ts && tsc clean: rm -r target Now "make" run "cd ts && tsc" every time. A: This rule: target/js/bundle.js: ts/* cd ts && tsc says that if the file target/js/bundle.js does not exist, or it does exist but some file matching the glob pattern ts/* has a newer modification time, then re-run the recipe. So, if you're seeing the recipe re-run every time then one of those two things is always true. You can run make -d to have make generate a lot of information about how it's making its decision; that will tell you why it decides to rebuild the target. I don't know how tsc works: are you sure that running tsc from the ts directory will understand that it needs to build the file ../target/js/bundle.js? A: You can use the find command to list all files in the ./ts directory and its subdirectories and declare them as prerequisites of the target, so make itself checks whether any of those files is newer than ./target/js/bundle.js. Here is an example Makefile: SRC_FILES := $(shell find ./ts -type f) target/js/bundle.js: $(SRC_FILES) cd ts && tsc clean: rm -r target The $(shell find ...) call runs when make parses the Makefile and lists every file under ./ts; because those files are prerequisites, make re-runs the tsc recipe only when at least one of them is newer than ./target/js/bundle.js.
How to not rebuild the project without changes in the directory?
I'm trying to write Makefile which would rebuild the file "./target/js/bundle.js" after changing any file in a directory "./ts" or its subdirectories. "make" should not rebuild "./target/js/bundle.js" without changing any file in a directory "./ts" or its subdirectories. The structure of the project: /ts - directory with typescript's sources /ts/tsconfig.json - config for the tsc (TypeScript compiler), this filename is known by tsc /target/js/bundle.js - target file /Makefile /ts/tsconfig.json: { "compilerOptions": { "outFile": "../target/js/build.js" } } My Makefile: target/js/bundle.js: ts/* cd ts && tsc clean: rm -r target Now "make" run "cd ts && tsc" every time.
[ "This rule:\ntarget/js/bundle.js: ts/*\n cd ts && tsc\n\nsays that if the file target/js/bundle.js does not exist, or it does exist but some file matching the glob pattern ts/* has a newer modification time, then re-run the recipe.\nSo, if you're seeing the recipe re-run every time then one of those two things is always true.\nYou can run make -d to have make generate a lot of information about how it's making its decision; that will tell you why it decides to rebuild the target.\nI don't know how tsc works: are you sure that running tsc from the ts directory will understand that it needs to build the file ../target/js/bundle.js?\n", "You can use the find command to list all files in the ./ts directory and its subdirectories, then use the -newer flag to check if any of those files are newer than ./target/js/bundle.js. Here is an example Makefile:\ntarget/js/bundle.js:\n # Get all files in the ts directory and its subdirectories\n files = $(shell find ./ts -type f)\n # Check if any of the files are newer than the target\n if $(find $files -newer target/js/bundle.js); then \\\n # If any files are newer, run the tsc command\n cd ts && tsc; \\\n fi\n\nclean:\n rm -r target\n\nThe if statement checks if any files in the ./ts directory or its subdirectories are newer than ./target/js/bundle.js, and if any are found, it runs the tsc command. Otherwise, it does not run the command and ./target/js/bundle.js is not rebuilt.\nNote that you will need to add a recipe for the clean target as well.\n" ]
[ 0, 0 ]
[]
[]
[ "makefile" ]
stackoverflow_0074681271_makefile.txt
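The first answer's closing question is worth making concrete: the tsconfig.json in the question writes to ../target/js/build.js while the Makefile's target is target/js/bundle.js, so the target file never appears on disk and make always considers it out of date. If the Makefile's name is the intended one, the config would need to read (assuming the question's own directory layout):

{
  "compilerOptions": {
    "outFile": "../target/js/bundle.js"
  }
}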
Q: dbt get value from agate.Row to string I want to run a macro in a COPY INTO statement to S3 bucket. Apparently in snowflake I can't do dynamic path. So I'm doing a hacky way to solve this. {% macro unload_snowflake_to_s3() %} {# Get all tables and views from the information schema. #} {%- set query -%} select concat('COPY INTO @MY_STAGE/year=', year(current_date()), '/my_file FROM (SELECT OBJECT_CONSTRUCT(*) from my_table)'); {%- endset -%} -- {%- set final_query = run_query(query) -%} -- {{ dbt_utils.log_info(final_query) }} -- {{ dbt_utils.log_info(final_query.rows.values()[0]) }} {%- do run_query(final_query.columns.values()[0]) -%} -- {% do final_query.print_table() %} {% endmacro %} Based on above macros, what I'm trying to do is: Use CONCAT to add year in the bucket path. Hence, the query becomes a string. Use the concatenated query to do run_query()again to actually run the COPY INTO statement. Output and error I got from dbt log: 09:06:08 09:06:08 + | column | data_type | | ----------------------------------------------------------------------------------------------------------- | --------- | | COPY INTO @MY_STAGE/year=', year(current_date()), '/my_file FROM (SELECT OBJECT_CONSTRUCT(*) from my_table) | Text | 09:06:08 09:06:08 + <agate.Row: ('COPY INTO @MY_STAGE/year=2022/my_file FROM (SELECT OBJECT_CONSTRUCT(*) from my_table)')> 09:06:09 Encountered an error while running operation: Database Error 001003 (42000): SQL compilation error: syntax error line 1 at position 0 unexpected '<'. root@2c50ba8af043:/dbt# I think the error is that I didn't extract the row and column specifically which is in agate format. How can I convert/extract this to string? A: You might have better luck with dbt_utils.get_query_results_as_dict. But you don't need to use your database to construct that path. The jinja context has a run_started_at variable that is a Python datetime object, so you can build your string in jinja, without hitting the database: {% set yr = run_started_at.strftime("%Y") %} {% set query = 'COPY INTO @MY_STAGE/year=' ~ yr ~ '/my_file FROM (SELECT OBJECT_CONSTRUCT(*) from my_table)' %} Finally, depending on how you're calling this macro you probably want to gate this whole thing with an {% if execute %} flag, so dbt doesn't do the COPY when it's parsing your models. A: You can use dbt_utils.get_query_results_as_dict function to get rid of agate part. Maybe after that your copy statement can work. {%- set final_query = dbt_utils.get_query_results_as_dict(query) -%} {{log(final_query ,true)}} {% for keys,val in final_query.items() %} {{log(keys,true)}} {{log( val ,true)}} {% endfor %} if you run like this you will see ('COPY INTO @MY_STAGE/year=', year(current_date())...') and lastly remove "('')" by {%- set final_val=val | replace('(', '')| replace(')', '') | replace("'", '') -%}``` That's it.
dbt get value from agate.Row to string
I want to run a macro in a COPY INTO statement to S3 bucket. Apparently in snowflake I can't do dynamic path. So I'm doing a hacky way to solve this. {% macro unload_snowflake_to_s3() %} {# Get all tables and views from the information schema. #} {%- set query -%} select concat('COPY INTO @MY_STAGE/year=', year(current_date()), '/my_file FROM (SELECT OBJECT_CONSTRUCT(*) from my_table)'); {%- endset -%} -- {%- set final_query = run_query(query) -%} -- {{ dbt_utils.log_info(final_query) }} -- {{ dbt_utils.log_info(final_query.rows.values()[0]) }} {%- do run_query(final_query.columns.values()[0]) -%} -- {% do final_query.print_table() %} {% endmacro %} Based on above macros, what I'm trying to do is: Use CONCAT to add year in the bucket path. Hence, the query becomes a string. Use the concatenated query to do run_query()again to actually run the COPY INTO statement. Output and error I got from dbt log: 09:06:08 09:06:08 + | column | data_type | | ----------------------------------------------------------------------------------------------------------- | --------- | | COPY INTO @MY_STAGE/year=', year(current_date()), '/my_file FROM (SELECT OBJECT_CONSTRUCT(*) from my_table) | Text | 09:06:08 09:06:08 + <agate.Row: ('COPY INTO @MY_STAGE/year=2022/my_file FROM (SELECT OBJECT_CONSTRUCT(*) from my_table)')> 09:06:09 Encountered an error while running operation: Database Error 001003 (42000): SQL compilation error: syntax error line 1 at position 0 unexpected '<'. root@2c50ba8af043:/dbt# I think the error is that I didn't extract the row and column specifically which is in agate format. How can I convert/extract this to string?
[ "You might have better luck with dbt_utils.get_query_results_as_dict.\nBut you don't need to use your database to construct that path. The jinja context has a run_started_at variable that is a Python datetime object, so you can build your string in jinja, without hitting the database:\n{% set yr = run_started_at.strftime(\"%Y\") %}\n{% set query = 'COPY INTO @MY_STAGE/year=' ~ yr ~ '/my_file FROM (SELECT OBJECT_CONSTRUCT(*) from my_table)' %}\n\nFinally, depending on how you're calling this macro you probably want to gate this whole thing with an {% if execute %} flag, so dbt doesn't do the COPY when it's parsing your models.\n", "You can use dbt_utils.get_query_results_as_dict function to get rid of agate part. Maybe after that your copy statement can work.\n{%- set final_query = dbt_utils.get_query_results_as_dict(query) -%}\n{{log(final_query ,true)}}\n{% for keys,val in final_query.items() %}\n \n {{log(keys,true)}}\n {{log( val ,true)}}\n{% endfor %}\n\nif you run like this you will see ('COPY INTO @MY_STAGE/year=', year(current_date())...') and lastly remove \"('')\" by\n{%- set final_val=val | replace('(', '')| replace(')', '') | replace(\"'\", '') -%}```\n\nThat's it.\n\n" ]
[ 0, 0 ]
[]
[]
[ "dbt" ]
stackoverflow_0074344248_dbt.txt
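Putting the first answer's pieces together, a sketch of the full macro with the {% if execute %} guard it recommends. The stage, file, and table names are the question's own placeholders:

{% macro unload_snowflake_to_s3() %}
  {% if execute %}
    {% set yr = run_started_at.strftime("%Y") %}
    {% set query = 'COPY INTO @MY_STAGE/year=' ~ yr ~ '/my_file FROM (SELECT OBJECT_CONSTRUCT(*) FROM my_table)' %}
    {% do run_query(query) %}
  {% endif %}
{% endmacro %}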
Q: How to override Invoke configuration project-wide for Fabric? I'm currently specifying a connection override per constructor call: fabric2.Connection(…, config=invoke.Config(overrides={"shell": "bash"})). How would I translate this to a configuration file, so that I don't have to configure it per call? Fabric doesn't seem to have a way to set Connection parameters (connect_kwargs is for SSHClient.connect), and doesn't mention taking into account any Invoke configuration files. The closest thing I can get to illustrate the issue is just this fabric.yml, which does nothing: connection: overrides: shell: bash
How to override Invoke configuration project-wide for Fabric?
I'm currently specifying a connection override per constructor call: fabric2.Connection(…, config=invoke.Config(overrides={"shell": "bash"})). How would I translate this to a configuration file, so that I don't have to configure it per call? Fabric doesn't seem to have a way to set Connection parameters (connect_kwargs is for SSHClient.connect), and doesn't mention taking into account any Invoke configuration files. The closest thing I can get to illustrate the issue is just this fabric.yml, which does nothing: connection: overrides: shell: bash
[]
[]
[ "To specify the overrides parameter in a Fabric configuration file, you can use the connect_kwargs parameter in the connection section of the configuration file, like this:\nconnection:\nconnect_kwargs:\noverrides:\nshell: bash\nThis will pass the specified overrides dictionary as the overrides parameter to the fabric2.Connection constructor.\nNote that the connect_kwargs parameter is a dictionary of keyword arguments that will be passed to the paramiko.SSHClient.connect method when establishing the SSH connection. In this case, we're using it to pass the overrides dictionary to the fabric2.Connection constructor.\nYou can then use this configuration file by specifying the --config option when running Fabric, like this:\nfab --config fabric.yml \nThis will read the configuration settings from the fabric.yml file and apply them to the fabric2.Connection instance that is created.\nI hope this helps! Let me know if you have any other questions.\n" ]
[ -1 ]
[ "configuration", "pyinvoke", "python_fabric_2" ]
stackoverflow_0074681399_configuration_pyinvoke_python_fabric_2.txt
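On the underlying question itself: in Invoke's configuration tree the shell used by run() lives under run.shell rather than at the top level, so a project-level fabric.yml along these lines may be what was wanted; this is a hedged sketch, not verified against the Fabric docs:

run:
  shell: /bin/bash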
Q: Marching Cubes generating holes in mesh I'm working on a Marching Cubes implementation in Unity. My code is based on Paul Bourke's code actually with a lot of modifications, but anyway i'm checking if a block at a position is null if it is than a debug texture will be placed on it. This is my MC script public class MarchingCubes { private World world; private Chunk chunk; private List<Vector3> vertices = new List<Vector3> (); private List<Vector3> normals = new List<Vector3> (); private Vector3[] ns; private List<int> triangles = new List<int> (); private List<Vector2> uvs = new List<Vector2> (); private Vector3[] positions = new Vector3[8]; private float[] corners = new float[8]; private Vector3i size = new Vector3i (16, 128, 16); Vector3[] vertlist = new Vector3[12]; private float isolevel = 1f; private float Corner (Vector3i pos) { int x = pos.x; int y = pos.y; int z = pos.z; if (x < size.x && z < size.z) { return chunk.GetValue (x, y, z); } else { int ix = chunk.X, iz = chunk.Z; int rx = chunk.region.x, rz = chunk.region.z; if (x >= size.x) { ix++; x = 0; } if (z >= size.z) { iz++; z = 0; } return chunk.region.GetChunk (ix, iz).GetValue (x, y, z); } } Block block; public Mesh MarchChunk (World world, Chunk chunk, Mesh mesh) { this.world = world; this.chunk = chunk; vertices.Clear (); triangles.Clear (); uvs.Clear (); for (int x = 0; x < size.x; x++) { for (int y = 1; y < size.y - 2; y++) { for (int z = 0; z < size.z; z++) { block = chunk.GetBlock (x, y, z); int cubeIndex = 0; for (int i = 0; i < corners.Length; i++) { corners [i] = Corner (new Vector3i (x, y, z) + offset [i]); positions [i] = (new Vector3i (x, y, z) + offset [i]).ToVector3 (); if (corners [i] < isolevel) cubeIndex |= (1 << i); } if (eTable [cubeIndex] == 0) continue; for (int i = 0; i < vertlist.Length; i++) { if ((eTable [cubeIndex] & 1 << i) == 1 << i) vertlist [i] = LinearInt (positions [eCons [i, 0]], positions [eCons [i, 1]], corners [eCons [i, 0]], corners [eCons [i, 1]]); } for (int i = 0; triTable [cubeIndex, i] != -1; i += 3) { int index = vertices.Count; vertices.Add (vertlist [triTable [cubeIndex, i]]); vertices.Add (vertlist [triTable [cubeIndex, i + 1]]); vertices.Add (vertlist [triTable [cubeIndex, i + 2]]); float tec = (0.125f); Vector2 uvBase = block != null ? block.UV : new Vector2 (); uvs.Add (uvBase); uvs.Add (uvBase + new Vector2 (0, tec)); uvs.Add (uvBase + new Vector2 (tec, tec)); triangles.Add (index + 0); triangles.Add (index + 1); triangles.Add (index + 2); } } } } if (mesh == null) mesh = new Mesh (); mesh.Clear (); mesh.vertices = vertices.ToArray (); mesh.triangles = triangles.ToArray (); mesh.uv = uvs.ToArray (); mesh.RecalculateNormals (); return mesh; } bool IsBitSet (int b, int pos) { return ((b & pos) == pos); } Vector3 LinearInt (Vector3 p1, Vector3 p2, float v1, float v2) { Vector3 p; p.x = p1.x + (isolevel - v1) * (p2.x - p1.x) / (v2 - v1); p.y = p1.y + (isolevel - v1) * (p2.y - p1.y) / (v2 - v1); p.z = p1.z + (isolevel - v1) * (p2.z - p1.z) / (v2 - v1); return p; } private static int[,] eCons = new int[12, 2] { { 0, 1 }, { 1, 2 }, { 2, 3 }, { 3, 0 }, { 4, 5 }, { 5, 6 }, { 6, 7 }, { 7, 4 }, { 0, 4 }, { 1, 5 }, { 2, 6 }, { 3, 7 } }; private static Vector3i[] offset = new Vector3i[8] { new Vector3i (0, 0, 1), new Vector3i (1, 0, 1), new Vector3i (1, 0, 0), new Vector3i (0, 0, 0), new Vector3i (0, 1, 1), new Vector3i (1, 1, 1), new Vector3i (1, 1, 0), new Vector3i (0, 1, 0) }; } I didn't put the tables in the sample, because they are the same as the ones in Bourke's code. 
EDIT: What I figured out yet is that the cell's value at the blue triangles are 0 so they don't have to be triangulated, but the cell's value under them is 1 and because of this a top triangle is created to complete the mesh. A: It looks like you're using a 3D array to store the corner values of a block at a position. If you want to check whether a block at a position is null, it might be easier to use a 2D array and store the block data instead of the corner values. You can then check whether the block at a position is null and assign the debug texture accordingly. Here's an example of how you might store the block data in a 2D array and check whether a block at a position is null: Block[,] blocks = new Block[size.x, size.z]; for (int x = 0; x < size.x; x++) { for (int z = 0; z < size.z; z++) { blocks[x, z] = chunk.GetBlock(x, y, z); } } if (blocks[x, z] == null) { // Assign debug texture } You can also use a 3D array to store the block data if you want, but it might make the code a bit more complicated. Block[,,] blocks = new Block[size.x, size.y, size.z]; for (int x = 0; x < size.x; x++) { for (int y = 0; y < size.y; y++) { for (int z = 0; z < size.z; z++) { blocks[x, y, z] = chunk.GetBlock(x, y, z); } } } if (blocks[x, y, z] == null) { // Assign debug texture } So, something like this: public class MarchingCubes { // Other class fields and properties Block[,] blocks = new Block[size.x, size.z]; public Mesh MarchChunk(World world, Chunk chunk, Mesh mesh) { // Initialize class fields and properties for (int x = 0; x < size.x; x++) { for (int z = 0; z < size.z; z++) { blocks[x, z] = chunk.GetBlock(x, y, z); } } for (int x = 0; x < size.x; x++) { for (int y = 1; y < size.y - 2; y++) { for (int z = 0; z < size.z; z++) { block = chunk.GetBlock(x, y, z); // Check if the block at the current position is null if (blocks[x, z] == null) { // Assign debug texture } int cubeIndex = 0; for (int i = 0; i < corners.Length; i++) { corners[i] = Corner(new Vector3i(x, y, z) + offset[i]); positions[i] = (new Vector3i(x, y, z) + offset[i]).ToVector3(); if (corners[i] < isolevel) cubeIndex |= (1 << i); } if (eTable[cubeIndex] == 0) continue; for (int i = 0; i < vertlist.Length; i++) { if ((eTable[cubeIndex] & 1 << i) == 1 << i) vertlist[i] = LinearInt(positions[eCons[i, 0]], positions[eCons[i, 1]], corners[eCons[i, 0]], corners[eCons[i, 1]]); } for (int i = 0; triTable[cubeIndex, i] != -1; i += 3) { int index = vertices.Count; vertices.Add(vertlist[triTable[cubeIndex, i]]); vertices.Add(vertlist[triTable[cubeIndex, i + 1]]); vertices.Add(vertlist[triTable[cubeIndex, i + 2]]); float tec = (0.125f); Vector2 uvBase = block != null ? block.UV : new Vector2(); uvs.Add(uvBase); uvs.Add(uvBase + new Vector2(0, tec)); uvs.Add(uvBase + new Vector2(tec, tec)); triangles.Add(index + 0); triangles.Add(index + 1); triangles.Add(index + 2); } } } } // Other code... return mesh; } }
Marching Cubes generating holes in mesh
I'm working on a Marching Cubes implementation in Unity. My code is based on Paul Bourke's code actually with a lot of modifications, but anyway i'm checking if a block at a position is null if it is than a debug texture will be placed on it. This is my MC script public class MarchingCubes { private World world; private Chunk chunk; private List<Vector3> vertices = new List<Vector3> (); private List<Vector3> normals = new List<Vector3> (); private Vector3[] ns; private List<int> triangles = new List<int> (); private List<Vector2> uvs = new List<Vector2> (); private Vector3[] positions = new Vector3[8]; private float[] corners = new float[8]; private Vector3i size = new Vector3i (16, 128, 16); Vector3[] vertlist = new Vector3[12]; private float isolevel = 1f; private float Corner (Vector3i pos) { int x = pos.x; int y = pos.y; int z = pos.z; if (x < size.x && z < size.z) { return chunk.GetValue (x, y, z); } else { int ix = chunk.X, iz = chunk.Z; int rx = chunk.region.x, rz = chunk.region.z; if (x >= size.x) { ix++; x = 0; } if (z >= size.z) { iz++; z = 0; } return chunk.region.GetChunk (ix, iz).GetValue (x, y, z); } } Block block; public Mesh MarchChunk (World world, Chunk chunk, Mesh mesh) { this.world = world; this.chunk = chunk; vertices.Clear (); triangles.Clear (); uvs.Clear (); for (int x = 0; x < size.x; x++) { for (int y = 1; y < size.y - 2; y++) { for (int z = 0; z < size.z; z++) { block = chunk.GetBlock (x, y, z); int cubeIndex = 0; for (int i = 0; i < corners.Length; i++) { corners [i] = Corner (new Vector3i (x, y, z) + offset [i]); positions [i] = (new Vector3i (x, y, z) + offset [i]).ToVector3 (); if (corners [i] < isolevel) cubeIndex |= (1 << i); } if (eTable [cubeIndex] == 0) continue; for (int i = 0; i < vertlist.Length; i++) { if ((eTable [cubeIndex] & 1 << i) == 1 << i) vertlist [i] = LinearInt (positions [eCons [i, 0]], positions [eCons [i, 1]], corners [eCons [i, 0]], corners [eCons [i, 1]]); } for (int i = 0; triTable [cubeIndex, i] != -1; i += 3) { int index = vertices.Count; vertices.Add (vertlist [triTable [cubeIndex, i]]); vertices.Add (vertlist [triTable [cubeIndex, i + 1]]); vertices.Add (vertlist [triTable [cubeIndex, i + 2]]); float tec = (0.125f); Vector2 uvBase = block != null ? block.UV : new Vector2 (); uvs.Add (uvBase); uvs.Add (uvBase + new Vector2 (0, tec)); uvs.Add (uvBase + new Vector2 (tec, tec)); triangles.Add (index + 0); triangles.Add (index + 1); triangles.Add (index + 2); } } } } if (mesh == null) mesh = new Mesh (); mesh.Clear (); mesh.vertices = vertices.ToArray (); mesh.triangles = triangles.ToArray (); mesh.uv = uvs.ToArray (); mesh.RecalculateNormals (); return mesh; } bool IsBitSet (int b, int pos) { return ((b & pos) == pos); } Vector3 LinearInt (Vector3 p1, Vector3 p2, float v1, float v2) { Vector3 p; p.x = p1.x + (isolevel - v1) * (p2.x - p1.x) / (v2 - v1); p.y = p1.y + (isolevel - v1) * (p2.y - p1.y) / (v2 - v1); p.z = p1.z + (isolevel - v1) * (p2.z - p1.z) / (v2 - v1); return p; } private static int[,] eCons = new int[12, 2] { { 0, 1 }, { 1, 2 }, { 2, 3 }, { 3, 0 }, { 4, 5 }, { 5, 6 }, { 6, 7 }, { 7, 4 }, { 0, 4 }, { 1, 5 }, { 2, 6 }, { 3, 7 } }; private static Vector3i[] offset = new Vector3i[8] { new Vector3i (0, 0, 1), new Vector3i (1, 0, 1), new Vector3i (1, 0, 0), new Vector3i (0, 0, 0), new Vector3i (0, 1, 1), new Vector3i (1, 1, 1), new Vector3i (1, 1, 0), new Vector3i (0, 1, 0) }; } I didn't put the tables in the sample, because they are the same as the ones in Bourke's code. 
EDIT: What I figured out yet is that the cell's value at the blue triangles are 0 so they don't have to be triangulated, but the cell's value under them is 1 and because of this a top triangle is created to complete the mesh.
[ "It looks like you're using a 3D array to store the corner values of a block at a position. If you want to check whether a block at a position is null, it might be easier to use a 2D array and store the block data instead of the corner values. You can then check whether the block at a position is null and assign the debug texture accordingly.\nHere's an example of how you might store the block data in a 2D array and check whether a block at a position is null:\nBlock[,] blocks = new Block[size.x, size.z];\n\nfor (int x = 0; x < size.x; x++) {\n for (int z = 0; z < size.z; z++) {\n blocks[x, z] = chunk.GetBlock(x, y, z);\n }\n}\n\nif (blocks[x, z] == null) {\n // Assign debug texture\n}\n\nYou can also use a 3D array to store the block data if you want, but it might make the code a bit more complicated.\nBlock[,,] blocks = new Block[size.x, size.y, size.z];\n\nfor (int x = 0; x < size.x; x++) {\n for (int y = 0; y < size.y; y++) {\n for (int z = 0; z < size.z; z++) {\n blocks[x, y, z] = chunk.GetBlock(x, y, z);\n }\n }\n}\n\nif (blocks[x, y, z] == null) {\n // Assign debug texture\n}\n\nSo, something like this:\npublic class MarchingCubes\n{\n // Other class fields and properties\n\n Block[,] blocks = new Block[size.x, size.z];\n\n public Mesh MarchChunk(World world, Chunk chunk, Mesh mesh)\n {\n // Initialize class fields and properties\n\n for (int x = 0; x < size.x; x++) {\n for (int z = 0; z < size.z; z++) {\n blocks[x, z] = chunk.GetBlock(x, y, z);\n }\n }\n\n for (int x = 0; x < size.x; x++) {\n for (int y = 1; y < size.y - 2; y++) {\n for (int z = 0; z < size.z; z++) {\n\n block = chunk.GetBlock(x, y, z);\n\n // Check if the block at the current position is null\n if (blocks[x, z] == null) {\n // Assign debug texture\n }\n\n int cubeIndex = 0;\n\n for (int i = 0; i < corners.Length; i++) {\n corners[i] = Corner(new Vector3i(x, y, z) + offset[i]);\n positions[i] = (new Vector3i(x, y, z) + offset[i]).ToVector3();\n\n if (corners[i] < isolevel)\n cubeIndex |= (1 << i);\n }\n\n if (eTable[cubeIndex] == 0)\n continue;\n\n for (int i = 0; i < vertlist.Length; i++) {\n if ((eTable[cubeIndex] & 1 << i) == 1 << i)\n vertlist[i] = LinearInt(positions[eCons[i, 0]], positions[eCons[i, 1]], corners[eCons[i, 0]], corners[eCons[i, 1]]);\n }\n\n for (int i = 0; triTable[cubeIndex, i] != -1; i += 3) {\n int index = vertices.Count;\n\n vertices.Add(vertlist[triTable[cubeIndex, i]]);\n vertices.Add(vertlist[triTable[cubeIndex, i + 1]]);\n vertices.Add(vertlist[triTable[cubeIndex, i + 2]]);\n\n float tec = (0.125f);\n Vector2 uvBase = block != null ? block.UV : new Vector2();\n\n uvs.Add(uvBase);\n uvs.Add(uvBase + new Vector2(0, tec));\n uvs.Add(uvBase + new Vector2(tec, tec));\n\n triangles.Add(index + 0);\n triangles.Add(index + 1);\n triangles.Add(index + 2);\n }\n }\n }\n }\n\n // Other code...\n\n return mesh;\n }\n}\n\n" ]
[ 0 ]
[]
[]
[ "c#", "marching_cubes", "unity3d" ]
stackoverflow_0044760112_c#_marching_cubes_unity3d.txt
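One hole-producing detail the answer does not touch: the question's LinearInt divides by (v2 - v1), which produces NaN vertices, and therefore missing triangles, whenever both corners sit on the isolevel. Bourke's original VertexInterp guards against exactly this; a guarded variant of the question's function:

Vector3 LinearInt (Vector3 p1, Vector3 p2, float v1, float v2)
{
    // Return an endpoint when interpolation would divide by (nearly) zero
    if (Mathf.Abs (isolevel - v1) < 0.00001f) return p1;
    if (Mathf.Abs (isolevel - v2) < 0.00001f) return p2;
    if (Mathf.Abs (v1 - v2) < 0.00001f) return p1;

    float t = (isolevel - v1) / (v2 - v1);
    return p1 + t * (p2 - p1);
}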
Q: What's the easiest way to test a createUIDefinion.json file for Azure solution templates? I'm in the process of publishing my solution template in the Azure marketplace. My mainTemplate.json file, for example, is easy to test without publishing because I can deploy from Git. But I can't seem to test the UI file via Git deployment. So the problem is getting my createUIdefinition.json file tested in a timely fashion. It seems like every time I made a change to the createUIdefinition.json file, I have to upload a new package to the publishing portal, which means I have to wait for Microsoft certification before I can stage a test. It's a 24-hour process. Is there an easier way to test my createUIdefinition.json changes without going through that process? For example, I have a bug somewhere in the regex that validates one of my user inputs: { "name": "EmailUser", "type": "Microsoft.Common.TextBox", "label": "Email Address", "toolTip": "The email address for your account", "defaultValue": "", "constraints": { "required": true, "regex": "\\w+([-+.']\\w+)*@\\w+([-.]\\w+)*\\.\\w+([-.]\\w+)*", "validationMessage": "Must be a valid email address." } (Side note, if anyone can spot my bug -- maybe when escaping the characters? -- please let me know! No email address validates properly.) And it's driving me a bit batty having to wait a day just to test my supposed fixes. There must be a better way, thanks! A: I found my answer. There's a specially crafted URL that can be used to preview createUIDefinition.json. The format is like this: <a href="https://portal.azure.com/#blade/Microsoft_Azure_Compute/CreateMultiVmWizardBlade/internal_bladeCallId/anything/internal_bladeCallerParams/{"initialData":{},"providerConfig":{"createUiDefinition":"URL_ENCODED_LINK TO_createUiDefinition.json"}}">[Preview createUiDefinition.json]</a> So the steps to test are: upload createUIdefinition.json to a public-accessible URL (github or Azure blob storage both work fine) Modify the above link with the full URL to your file. Paste it into a browser. Login to Azure when prompted, you will be redirected to your UI blade(s). Use F12 to bring up your script console in your browser to see the json-formatted output after filling in your UI values. Note that you can't do a full deployment here, these steps are only for testing your UI, validating your regex, etc. You still need to test the output and make sure it works with your mainTemplate.json file with a separate deployment. A: The Azure Portal now has a more intuitive way of testing. Go to aka.ms/createuidef/sandbox, paste your createUiDefinition.json, and click preview to see how it looks. This way you can make changes and see them in real-time without having to republish. A: I found easiest mehtod to test createUiDefinition.json for azure solution template. Go to http://old.armviz.io/#/ Click On Portal UI Editor. Paste Your content from createUiDefinition.json file to Portal UI Editor. Click on Preview button Section. it will take you into azure portal to test createUiDefinition.json file. A: Azure have now a sandbox to test the createUiDefinition file. https://portal.azure.com/?feature.customPortal=false#view/Microsoft_Azure_CreateUIDef/SandboxBlade Just copy and paste your createUiDefinition.json content and click preview, it will generate all the ui so you can test your configuration. You can also view a few examples or get the fields definitions from there.
What's the easiest way to test a createUIDefinion.json file for Azure solution templates?
I'm in the process of publishing my solution template in the Azure marketplace. My mainTemplate.json file, for example, is easy to test without publishing because I can deploy from Git. But I can't seem to test the UI file via Git deployment. So the problem is getting my createUIdefinition.json file tested in a timely fashion. It seems like every time I made a change to the createUIdefinition.json file, I have to upload a new package to the publishing portal, which means I have to wait for Microsoft certification before I can stage a test. It's a 24-hour process. Is there an easier way to test my createUIdefinition.json changes without going through that process? For example, I have a bug somewhere in the regex that validates one of my user inputs: { "name": "EmailUser", "type": "Microsoft.Common.TextBox", "label": "Email Address", "toolTip": "The email address for your account", "defaultValue": "", "constraints": { "required": true, "regex": "\\w+([-+.']\\w+)*@\\w+([-.]\\w+)*\\.\\w+([-.]\\w+)*", "validationMessage": "Must be a valid email address." } (Side note, if anyone can spot my bug -- maybe when escaping the characters? -- please let me know! No email address validates properly.) And it's driving me a bit batty having to wait a day just to test my supposed fixes. There must be a better way, thanks!
[ "I found my answer. There's a specially crafted URL that can be used to preview createUIDefinition.json. The format is like this:\n<a href=\"https://portal.azure.com/#blade/Microsoft_Azure_Compute/CreateMultiVmWizardBlade/internal_bladeCallId/anything/internal_bladeCallerParams/{\"initialData\":{},\"providerConfig\":{\"createUiDefinition\":\"URL_ENCODED_LINK TO_createUiDefinition.json\"}}\">[Preview createUiDefinition.json]</a>\n\nSo the steps to test are:\n\nupload createUIdefinition.json to a public-accessible URL (github or Azure blob storage both work fine)\nModify the above link with the full URL to your file.\nPaste it into a browser.\nLogin to Azure when prompted, you will be redirected to your UI blade(s).\nUse F12 to bring up your script console in your browser to see the json-formatted output after filling in your UI values.\n\nNote that you can't do a full deployment here, these steps are only for testing your UI, validating your regex, etc. You still need to test the output and make sure it works with your mainTemplate.json file with a separate deployment.\n", "The Azure Portal now has a more intuitive way of testing. Go to aka.ms/createuidef/sandbox, paste your createUiDefinition.json, and click preview to see how it looks. This way you can make changes and see them in real-time without having to republish. \n", "I found easiest mehtod to test createUiDefinition.json for azure solution template.\n\nGo to http://old.armviz.io/#/\nClick On Portal UI Editor.\nPaste Your content from createUiDefinition.json file to Portal UI Editor.\nClick on Preview button Section. it will take you into azure portal to test createUiDefinition.json file.\n\n", "Azure have now a sandbox to test the createUiDefinition file.\nhttps://portal.azure.com/?feature.customPortal=false#view/Microsoft_Azure_CreateUIDef/SandboxBlade\nJust copy and paste your createUiDefinition.json content and click preview, it will generate all the ui so you can test your configuration.\nYou can also view a few examples or get the fields definitions from there.\n\n\n" ]
[ 6, 4, 0, 0 ]
[]
[]
[ "azure", "azure_marketplace", "json" ]
stackoverflow_0036522270_azure_azure_marketplace_json.txt
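To make the accepted answer's URL format concrete, this is what the crafted link looks like once a (hypothetical) raw GitHub URL has been percent-encoded into the providerConfig parameter:

https://portal.azure.com/#blade/Microsoft_Azure_Compute/CreateMultiVmWizardBlade/internal_bladeCallId/anything/internal_bladeCallerParams/{"initialData":{},"providerConfig":{"createUiDefinition":"https%3A%2F%2Fraw.githubusercontent.com%2Fyouruser%2Fyourrepo%2Fmain%2FcreateUiDefinition.json"}}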
Q: php output buffer not working after php 7.4 to php 8.1 upgrade The below code is not printing anything in the browser. actually, It should show the header menu. if I remove ob_start(); and ob_end_clean() at least its printing menu without CSS. // Turn on output buffering HTML ob_start(); echo preg_replace( '/\n|\t/i', '', implode( '' , $wr_nitro_header_html ) ); WR_Nitro_Header_Builder::prop( 'html', ob_get_contents() ); ob_end_clean() update: same code is working fine for php7.4 but php8.1 is not working A: It looks like you are using the ob_start and ob_end_clean functions to buffer the output of your code and then store it in the $wr_nitro_header_html array. In PHP 8.1, the behavior of output buffering has changed and you may need to update your code to account for this. One possible solution is to use the ob_get_clean function instead of ob_end_clean to retrieve the buffered output and then clear the buffer. This should allow you to store the output in your array as you are doing in your current code. Here is an example of how you might update your code to use ob_get_clean: // Turn on output buffering HTML ob_start(); echo preg_replace( '/\n|\t/i', '', implode( '' , $wr_nitro_header_html ) ); // Get the buffered output and clear the buffer $output = ob_get_clean(); // Store the output in the $wr_nitro_header_html array WR_Nitro_Header_Builder::prop( 'html', $output ); Alternatively, you can use the ob_implicit_flush function to enable implicit flushing of output buffers in your code. This means that the output will be sent to the browser as soon as it is generated, without the need to buffer it. Here is an example of how you might use ob_implicit_flush in your code: // Enable implicit flushing of output buffers ob_implicit_flush(true); echo preg_replace( '/\n|\t/i', '', implode( '' , $wr_nitro_header_html ) ); WR_Nitro_Header_Builder::prop( 'html', ob_get_contents() );
php output buffer not working after php 7.4 to php 8.1 upgrade
The code below is not printing anything in the browser; it should show the header menu. If I remove ob_start(); and ob_end_clean(), at least it prints the menu, but without CSS. // Turn on output buffering HTML ob_start(); echo preg_replace( '/\n|\t/i', '', implode( '' , $wr_nitro_header_html ) ); WR_Nitro_Header_Builder::prop( 'html', ob_get_contents() ); ob_end_clean() Update: the same code works fine on PHP 7.4, but not on PHP 8.1.
[ "It looks like you are using the ob_start and ob_end_clean functions to buffer the output of your code and then store it in the $wr_nitro_header_html array. In PHP 8.1, the behavior of output buffering has changed and you may need to update your code to account for this.\nOne possible solution is to use the ob_get_clean function instead of ob_end_clean to retrieve the buffered output and then clear the buffer. This should allow you to store the output in your array as you are doing in your current code. Here is an example of how you might update your code to use ob_get_clean:\n// Turn on output buffering HTML\nob_start();\necho preg_replace( '/\\n|\\t/i', '', implode( '' , $wr_nitro_header_html ) );\n\n// Get the buffered output and clear the buffer\n$output = ob_get_clean();\n\n// Store the output in the $wr_nitro_header_html array\nWR_Nitro_Header_Builder::prop( 'html', $output );\n\nAlternatively, you can use the ob_implicit_flush function to enable implicit flushing of output buffers in your code. This means that the output will be sent to the browser as soon as it is generated, without the need to buffer it. Here is an example of how you might use ob_implicit_flush in your code:\n// Enable implicit flushing of output buffers\nob_implicit_flush(true);\n\necho preg_replace( '/\\n|\\t/i', '', implode( '' , $wr_nitro_header_html ) );\nWR_Nitro_Header_Builder::prop( 'html', ob_get_contents() );\n\n" ]
[ 0 ]
[]
[]
[ "output_buffering", "php_8.1", "wordpress" ]
stackoverflow_0074650983_output_buffering_php_8.1_wordpress.txt
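One concrete PHP 7.4 to 8.x difference worth ruling out in this situation: implode() now throws a TypeError when its array argument is null (7.4 only emitted a warning and returned an empty string), and a fatal error raised inside an output buffer that is later discarded with ob_end_clean() looks exactly like "nothing prints". A defensive sketch of the same block, assuming $wr_nitro_header_html can arrive null:

ob_start();
$parts = is_array( $wr_nitro_header_html ) ? $wr_nitro_header_html : array();
echo preg_replace( '/\n|\t/i', '', implode( '', $parts ) );
WR_Nitro_Header_Builder::prop( 'html', ob_get_clean() );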
Q: Power Automate: Using "Wait for image" and "Extract text with OCR" in unattended mode possible? I want to automate a Webswing session to run in unattended mode. Webswing is a web server that allows applications to run within the web browser. So there is no access to UI elements that the bot could access. Therefore, I initially worked with image recognition (e.g., using the "Wait for image" and "Extract text with OCR" actions) in attended mode. Now I would like to switch to unattended mode. Does anyone have experience with this and know if the image recognition actions in a session like Webswing can be applied to unattended robots or are there other commands I can use for this use case? A: Yes, however it is worth keeping in mind that the screen size needs to be set to the size you run the flow in when attended. See: how to set screen resolution unattended mode
Power Automate: Using "Wait for image" and "Extract text with OCR" in unattended mode possible?
I want to automate a Webswing session to run in unattended mode. Webswing is a web server that allows applications to run within the web browser. So there is no access to UI elements that the bot could access. Therefore, I initially worked with image recognition (e.g., using the "Wait for image" and "Extract text with OCR" actions) in attended mode. Now I would like to switch to unattended mode. Does anyone have experience with this and know if the image recognition actions in a session like Webswing can be applied to unattended robots or are there other commands I can use for this use case?
[ "Yes, however it is worth keeping in mind that the screen size needs to be set to the size you run the flow in when attended.\nSee: how to set screen resolution unattended mode\n" ]
[ 0 ]
[]
[]
[ "power_automate", "power_automate_desktop" ]
stackoverflow_0074274779_power_automate_power_automate_desktop.txt
Q: FFMPEG concatenate two streams from multiple numerated files I have a bunch of numbered files which represent an audio and a video stream, like this: vod-idx-video=5000000-1.ts vod-idx-video=5000000-2.ts ... vod-idx-video=5000000-700.ts vod-idx-audio_fra=128000-1.aac vod-idx-audio_fra=128000-2.aac ... vod-idx-audio_fra=128000-700.aac The number of files for the video and audio streams is the same. I need to concatenate them into one file with the audio/video streams together. I tried concatenating the video .ts files into one video file and the audio .aac files into one audio file separately, then concatenating audio+video together, but in the final result the audio and video go out of sync. I can't figure out the command which would take two inputs (-i "vilist.txt" -i "audlist.txt"), take the streams from these two lists, and concatenate them into one output file. The txt files are lists like this: file 'vod-idx-audio_fra=128000-1.aac' file 'vod-idx-audio_fra=128000-2.aac' ... file 'vod-idx-video=5000000-1.ts' file 'vod-idx-video=5000000-2.ts' ... Please help. Preferably I would like to concatenate the streams and files without decoding/encoding them. A: do not mind, I figured out the command
FFMPEG concatenate two streams from multiple numerated files
I have a bunch of numbered files which represent an audio and a video stream, like this: vod-idx-video=5000000-1.ts vod-idx-video=5000000-2.ts ... vod-idx-video=5000000-700.ts vod-idx-audio_fra=128000-1.aac vod-idx-audio_fra=128000-2.aac ... vod-idx-audio_fra=128000-700.aac The number of files for the video and audio streams is the same. I need to concatenate them into one file with the audio/video streams together. I tried concatenating the video .ts files into one video file and the audio .aac files into one audio file separately, then concatenating audio+video together, but in the final result the audio and video go out of sync. I can't figure out the command which would take two inputs (-i "vilist.txt" -i "audlist.txt"), take the streams from these two lists, and concatenate them into one output file. The txt files are lists like this: file 'vod-idx-audio_fra=128000-1.aac' file 'vod-idx-audio_fra=128000-2.aac' ... file 'vod-idx-video=5000000-1.ts' file 'vod-idx-video=5000000-2.ts' ... Please help. Preferably I would like to concatenate the streams and files without decoding/encoding them.
[ "do not mind, I figured out the command\n" ]
[ 0 ]
[]
[]
[ "ffmpeg" ]
stackoverflow_0074680054_ffmpeg.txt
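The asker never posted the command they found, but one common shape for this job is the concat demuxer applied to each list, with the streams mapped and stream-copied (no re-encoding); the list names are the question's own, and whether sync holds still depends on the segments' timestamps:

ffmpeg -f concat -safe 0 -i vilist.txt -f concat -safe 0 -i audlist.txt \
  -map 0:v -map 1:a -c copy -bsf:a aac_adtstoasc output.mp4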
Q: TF2 transform can't find an actuall existing frame In a global planner node that I wrote, I have the following init code #!/usr/bin/env python import rospy import copy import tf2_ros import time import numpy as np import math import tf from math import sqrt, pow from geometry_msgs.msg import Vector3, Point from std_msgs.msg import Int32MultiArray from std_msgs.msg import Bool from nav_msgs.msg import OccupancyGrid, Path from geometry_msgs.msg import PoseStamped, PointStamped from tf2_geometry_msgs import do_transform_point from Queue import PriorityQueue class GlobalPlanner(): def __init__(self): print("init global planner") self.tfBuffer = tf2_ros.Buffer() self.listener = tf2_ros.TransformListener(self.tfBuffer) self.drone_position_sub = rospy.Subscriber('uav/sensors/gps', PoseStamped, self.get_drone_position) self.drone_position = [] self.drone_map_position = [] self.map_sub = rospy.Subscriber("/map", OccupancyGrid, self.get_map) self.goal_sub = rospy.Subscriber("/cell_tower/position", Point, self.getTransformedGoal) self.goal_position = [] self.goal = Point() self.goal_map_position = [] self.occupancy_grid = OccupancyGrid() self.map = [] self.p_path = Int32MultiArray() self.position_pub = rospy.Publisher("/uav/input/position", Vector3, queue_size = 1) #next_movement in self.next_movement = Vector3 self.next_movement.z = 3 self.path_pub = rospy.Publisher('/uav/path', Int32MultiArray, queue_size=1) self.width = rospy.get_param('global_planner_node/map_width') self.height = rospy.get_param('global_planner_node/map_height') #Check whether there is a path plan self.have_plan = False self.path = [] self.euc_distance_drone_goal = 100 self.twod_distance_drone_goal = [] self.map_distance_drone_goal = [] self.mainLoop() And there is a call-back function call getTransformed goal, which will take the goal position in the "cell_tower" frame to the "world" frame. Which looks like this def getTransformedGoal(self, msg): self.goal = msg try: #Lookup the tower to world transform transform = self.tfBuffer.lookup_transform('cell_tower', 'world', rospy.Time()) #transform = self.tfBuffer.lookup_transform('world','cell-tower' rospy.Time()) #Convert the goal to a PointStamped goal_pointStamped = PointStamped() goal_pointStamped.point.x = self.goal.x goal_pointStamped.point.y = self.goal.y goal_pointStamped.point.z = self.goal.z #Use the do_transform_point function to convert the point using the transform new_point = do_transform_point(goal_pointStamped, transform) #Convert the point back into a vector message containing integers transform_point = [new_point.point.x, new_point.point.y] #Publish the vector self.goal_position = transform_point except (tf2_ros.LookupException, tf2_ros.ConnectivityException, tf2_ros.ExtrapolationException) as e: print(e) print('global_planner tf2 exception, continuing') The error message said that "cell_tower" passed to lookupTransform argument target_frame does not exist. I check the RQT plot for both active and all, which shows that when active, the topic /tf is not being subscribe by the node global planner. Check the following image, which is for active enter image description here and this image is for all the node (include non-active) enter image description here But I have actually set up the listner, I have another node call local planner that use the same strategy and it works for that node, but not for the global planner I'm not sure why this is. 
A: Try adding a timeout to your lookup_transform() function call, as your transformation may not be available when you need it: transform = self.tfBuffer.lookup_transform('cell_tower', 'world',rospy.Time.now(), rospy.Duration(1.0))
TF2 transform can't find an actuall existing frame
In a global planner node that I wrote, I have the following init code #!/usr/bin/env python import rospy import copy import tf2_ros import time import numpy as np import math import tf from math import sqrt, pow from geometry_msgs.msg import Vector3, Point from std_msgs.msg import Int32MultiArray from std_msgs.msg import Bool from nav_msgs.msg import OccupancyGrid, Path from geometry_msgs.msg import PoseStamped, PointStamped from tf2_geometry_msgs import do_transform_point from Queue import PriorityQueue class GlobalPlanner(): def __init__(self): print("init global planner") self.tfBuffer = tf2_ros.Buffer() self.listener = tf2_ros.TransformListener(self.tfBuffer) self.drone_position_sub = rospy.Subscriber('uav/sensors/gps', PoseStamped, self.get_drone_position) self.drone_position = [] self.drone_map_position = [] self.map_sub = rospy.Subscriber("/map", OccupancyGrid, self.get_map) self.goal_sub = rospy.Subscriber("/cell_tower/position", Point, self.getTransformedGoal) self.goal_position = [] self.goal = Point() self.goal_map_position = [] self.occupancy_grid = OccupancyGrid() self.map = [] self.p_path = Int32MultiArray() self.position_pub = rospy.Publisher("/uav/input/position", Vector3, queue_size = 1) #next_movement in self.next_movement = Vector3 self.next_movement.z = 3 self.path_pub = rospy.Publisher('/uav/path', Int32MultiArray, queue_size=1) self.width = rospy.get_param('global_planner_node/map_width') self.height = rospy.get_param('global_planner_node/map_height') #Check whether there is a path plan self.have_plan = False self.path = [] self.euc_distance_drone_goal = 100 self.twod_distance_drone_goal = [] self.map_distance_drone_goal = [] self.mainLoop() And there is a call-back function call getTransformed goal, which will take the goal position in the "cell_tower" frame to the "world" frame. Which looks like this def getTransformedGoal(self, msg): self.goal = msg try: #Lookup the tower to world transform transform = self.tfBuffer.lookup_transform('cell_tower', 'world', rospy.Time()) #transform = self.tfBuffer.lookup_transform('world','cell-tower' rospy.Time()) #Convert the goal to a PointStamped goal_pointStamped = PointStamped() goal_pointStamped.point.x = self.goal.x goal_pointStamped.point.y = self.goal.y goal_pointStamped.point.z = self.goal.z #Use the do_transform_point function to convert the point using the transform new_point = do_transform_point(goal_pointStamped, transform) #Convert the point back into a vector message containing integers transform_point = [new_point.point.x, new_point.point.y] #Publish the vector self.goal_position = transform_point except (tf2_ros.LookupException, tf2_ros.ConnectivityException, tf2_ros.ExtrapolationException) as e: print(e) print('global_planner tf2 exception, continuing') The error message said that "cell_tower" passed to lookupTransform argument target_frame does not exist. I check the RQT plot for both active and all, which shows that when active, the topic /tf is not being subscribe by the node global planner. Check the following image, which is for active enter image description here and this image is for all the node (include non-active) enter image description here But I have actually set up the listner, I have another node call local planner that use the same strategy and it works for that node, but not for the global planner I'm not sure why this is.
[ "Try adding a timeout to your lookup_transform() function call, as your transformation may not be available when you need it:\ntransform = self.tfBuffer.lookup_transform('cell_tower', 'world',rospy.Time.now(), rospy.Duration(1.0))\n\n" ]
[ 0 ]
[]
[]
[ "python", "ros", "slam", "subscriber", "tf2_ros" ]
stackoverflow_0074681266_python_ros_slam_subscriber_tf2_ros.txt
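Alongside the timeout, the argument order deserves a second look: lookup_transform takes (target_frame, source_frame, time), so to express a point given in the cell_tower frame in world coordinates the target is usually 'world', essentially the question's commented-out variant plus a timeout. Note that the commented-out line spells the frame 'cell-tower' with a hyphen, so verifying the exact broadcast frame name (for example with tf's view_frames tool) is worthwhile:

transform = self.tfBuffer.lookup_transform('world', 'cell_tower',
                                           rospy.Time(0), rospy.Duration(1.0))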
Q: how to parse all data I dont know why but when i get all data from requests it works but if i want get data by some category it return me that import requests import json headers = {'Accept': 'application/json, text/javascript, */*; q=0.01', 'Accept-Encoding': 'gzip, deflate, br', 'Accept-Language': 'uk-UA,uk;q=0.9,en-US;q=0.8,en;q=0.7,ru;q=0.6', 'X-Requested-With': 'XMLHttpRequest'} def get_data(): # url of all data url = 'https://buff.163.com/api/market/goods?game=csgo&page_num=1&use_suggestion=0&trigger=undefined_trigger&_=1670185664532' # url by category url2 = 'https://buff.163.com/api/market/goods?game=csgo&page_num=1&category_group=rifle&use_suggestion=0&trigger=undefined_trigger&_=1670191032071' r = requests.get(url=url2, headers=headers) print(r.json()) with open('r.json', 'w', encoding="utf-8") as file: json.dump(r.json(), file, indent=4, ensure_ascii=False) def main(): get_data() if __name__ == '__main__': main() when i run url i get good json object but when i run url2 i get that '{'code': 'Login Required', 'error': 'Please login.', 'extra': None}' help me pls do it!!!!! A: It looks like you need to authenticate with the server before you can access the data in the second URL. The server is returning a "Login Required" error because it is unable to verify that you are authorized to access the data. To fix this issue, you need to include the necessary authentication information in the request headers when making the request to the second URL. This could include a login token or other authentication credentials that the server requires in order to grant you access to the data. Without more information about the authentication requirements of the server, it is not possible to provide specific instructions on how to include the necessary authentication information in the request headers. You will need to consult the documentation for the server or contact the server's maintainers to learn more about the authentication requirements. A: You are not authorized on the website. Try to use cookie to get correct response from site. By the way you can use selenium web driver function get_cookie(), then save it and use in your request. To my mind such way you’ll get desired result. If you have any questions you can ask me on telegram @deep0xFF. Im good in selenium webdriver and requests also.)
how to parse all data
I don't know why, but when I get all the data with requests it works; when I request data for one category instead, it returns an error
import requests
import json

headers = {'Accept': 'application/json, text/javascript, */*; q=0.01',
           'Accept-Encoding': 'gzip, deflate, br',
           'Accept-Language': 'uk-UA,uk;q=0.9,en-US;q=0.8,en;q=0.7,ru;q=0.6',
           'X-Requested-With': 'XMLHttpRequest'}

def get_data():
    # url of all data
    url = 'https://buff.163.com/api/market/goods?game=csgo&page_num=1&use_suggestion=0&trigger=undefined_trigger&_=1670185664532'
    # url by category
    url2 = 'https://buff.163.com/api/market/goods?game=csgo&page_num=1&category_group=rifle&use_suggestion=0&trigger=undefined_trigger&_=1670191032071'

    r = requests.get(url=url2, headers=headers)
    print(r.json())

    with open('r.json', 'w', encoding="utf-8") as file:
        json.dump(r.json(), file, indent=4, ensure_ascii=False)

def main():
    get_data()

if __name__ == '__main__':
    main()

When I run url I get a good JSON object, but when I run url2 I get '{'code': 'Login Required', 'error': 'Please login.', 'extra': None}'. Please help me fix this!
[ "It looks like you need to authenticate with the server before you can access the data in the second URL. The server is returning a \"Login Required\" error because it is unable to verify that you are authorized to access the data.\nTo fix this issue, you need to include the necessary authentication information in the request headers when making the request to the second URL. This could include a login token or other authentication credentials that the server requires in order to grant you access to the data.\nWithout more information about the authentication requirements of the server, it is not possible to provide specific instructions on how to include the necessary authentication information in the request headers. You will need to consult the documentation for the server or contact the server's maintainers to learn more about the authentication requirements.\n", "You are not authorized on the website. Try to use cookie to get correct response from site.\nBy the way you can use selenium web driver function get_cookie(), then save it and use in your request. To my mind such way you’ll get desired result.\nIf you have any questions you can ask me on telegram @deep0xFF. Im good in selenium webdriver and requests also.)\n" ]
[ 0, 0 ]
[]
[]
[ "json", "parsing", "python" ]
stackoverflow_0074681343_json_parsing_python.txt
Q: How to create security group ingress dynamically in terraform I am creating a security group that has some standard ingress rules. I also want to add additional ingress rules based on a variable. variable "additional_ingress" { type = list(object({ protocol = string from_port = string to_port = string cidr_blocks = list(string) })) default = [] } resource "aws_security_group" "ec2" { name = "my-sg" description = "SG for ec2" vpc_id = data.aws_vpc.this.id egress { to_port = 0 from_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } ingress { protocol = "tcp" from_port = 22 to_port = 22 cidr_blocks = ["10.0.0.0/8"] } # rdp ingress { protocol = "tcp" from_port = 3389 to_port = 3389 cidr_blocks = ["10.0.0.0/8"] } # additional ingress rules ingress { for_each = var.additional_ingress protocol = each.value.protocol from_port = each.value.from_port to_port = each.value.to_port cidr_blocks = each.value.cidr_blocks } } I am getting error A reference to "each.value" has been used in a context in which it unavailable, such as when the configuration no longer contains the value in its "for_each" expression. │ Remove this reference to each.value in your configuration to work around this error. How do I add ingress rules based on variable A: This is most easily managed with the aws_security_group_rule resource and the for_each meta-argument: resource "aws_security_group_rule" "ec2" { for_each = var.additional_ingress type = each.value.type from_port = each.value.from_port to_port = each.value.to_port protocol = each.value.protocol cidr_blocks = each.value.cidr_blocks security_group_id = aws_security_group.ec2.id } Note that the variable declaration for additional_ingress is missing the type key in its object constructor definition, so that would need to be added: variable "additional_ingress" { type = list(object({ type = string ... })) default = [] } A: You can use dynamic blocks like this: dynamic "ingress" { for_each = var.additional_ingress protocol = ingress.value.protocol from_port = ingress.value.from_port to_port = ingress.value.to_port cidr_blocks = ingress.value.cidr_blocks } Provided the additional_ingress is an object (map) of ingress entries. Of course, I would advise to use aws_security_group_rule resource, but if this is already live project, I understand if you'd want to stay with inline rules. Have fun!
How to create security group ingress dynamically in terraform
I am creating a security group that has some standard ingress rules. I also want to add additional ingress rules based on a variable. variable "additional_ingress" { type = list(object({ protocol = string from_port = string to_port = string cidr_blocks = list(string) })) default = [] } resource "aws_security_group" "ec2" { name = "my-sg" description = "SG for ec2" vpc_id = data.aws_vpc.this.id egress { to_port = 0 from_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] } ingress { protocol = "tcp" from_port = 22 to_port = 22 cidr_blocks = ["10.0.0.0/8"] } # rdp ingress { protocol = "tcp" from_port = 3389 to_port = 3389 cidr_blocks = ["10.0.0.0/8"] } # additional ingress rules ingress { for_each = var.additional_ingress protocol = each.value.protocol from_port = each.value.from_port to_port = each.value.to_port cidr_blocks = each.value.cidr_blocks } } I am getting error A reference to "each.value" has been used in a context in which it unavailable, such as when the configuration no longer contains the value in its "for_each" expression. │ Remove this reference to each.value in your configuration to work around this error. How do I add ingress rules based on variable
[ "This is most easily managed with the aws_security_group_rule resource and the for_each meta-argument:\nresource \"aws_security_group_rule\" \"ec2\" {\n for_each = var.additional_ingress\n\n type = each.value.type\n from_port = each.value.from_port\n to_port = each.value.to_port\n protocol = each.value.protocol\n cidr_blocks = each.value.cidr_blocks\n security_group_id = aws_security_group.ec2.id\n}\n\nNote that the variable declaration for additional_ingress is missing the type key in its object constructor definition, so that would need to be added:\nvariable \"additional_ingress\" {\n type = list(object({\n type = string\n ...\n }))\n default = []\n}\n\n", "You can use dynamic blocks like this:\ndynamic \"ingress\" {\n for_each = var.additional_ingress\n protocol = ingress.value.protocol\n from_port = ingress.value.from_port\n to_port = ingress.value.to_port\n cidr_blocks = ingress.value.cidr_blocks\n }\n\nProvided the additional_ingress is an object (map) of ingress entries.\nOf course, I would advise to use aws_security_group_rule resource, but if this is already live project, I understand if you'd want to stay with inline rules. Have fun!\n" ]
[ 4, 0 ]
[]
[]
[ "terraform", "terraform_provider_aws" ]
stackoverflow_0074661359_terraform_terraform_provider_aws.txt
Q: Outliers in certain values in column R Given Data:
Color | Number
Green | 5
Red   | 20
Green | 5
Green | 15
Green | 100
Red   | 7
Red   | 10
Red   | 8
Green | 6

I want to take only the Number values of the "Green" rows, then plot them and find the outliers among them. How do you do this?
A: We may subset the dataset where the Color is "Green", select the 'Number' column and use boxplot and extract the outliers
boxplot(subset(Data, Color == "Green", select = Number)$Number)$out
[1] 100
Outliers in certain values in column R
Given Data:
Color | Number
Green | 5
Red   | 20
Green | 5
Green | 15
Green | 100
Red   | 7
Red   | 10
Red   | 8
Green | 6

I want to take only the Number values of the "Green" rows, then plot them and find the outliers among them. How do you do this?
[ "We may subset the dataset where the Color is \"Green\", select the 'Number' column and use boxplot and extract the outliers\nboxplot(subset(Data, Color == \"Green\", select = Number)$Number)$out\n[1] #100\n\n" ]
[ 0 ]
[]
[]
[ "r" ]
stackoverflow_0074681452_r.txt
Q: I am able to hit the API and output correct data. How do I pull the data from async Function and convert it to CSV format?
// dog.ceo API

async function fetchDogApiResult (apiPath) {
  const response = await fetch(`https://dog.ceo/api/${apiPath}`);
  if (!response.ok) throw new Error(`Response not OK (${response.status})`);
  const data = await response.json();
  if (data.status !== 'success') throw new Error('Response not successful');
  return data.message;
}

async function fetchBreeds () {
  return fetchDogApiResult('breeds/list/all');
}

async function fetchSubBreeds (breed) {
  return fetchDogApiResult(`breed/${breed}/list`);
}

async function fetchImages (breed, subBreed) {
  return fetchDogApiResult(`breed/${breed}${subBreed ? `/${subBreed}` : ''}/images`);
}

async function fetchDogData () {
  const breeds = await fetchBreeds();
  return Promise.all(Object.entries(breeds).map(async ([breed, subBreeds]) => ({
    breed,
    subBreeds,
    images: (await fetchImages(breed)).map(url => ({url})),
  })));
}

(async () => {
  const dogData = await fetchDogData();
  console.log(JSON.stringify(dogData));
})();

Using this code I output JSON data from the API. I need to convert this data to a 3-column CSV file. I am lost on pulling the data out of the async functions. This is also using vanilla JS. Thanks in advance!
I have tried to use a normal function to handle the data and convert it to CSV, but it has not worked, and I'm unsure where to go from this point.
A: To convert the data to a CSV format with three columns, you can use the json2csv library to convert the JSON data to a CSV string. You can then write the CSV string to a file or output it to the console.
Here is an example of how to do this:
const json2csv = require('json2csv');

// dog.ceo API

async function fetchDogApiResult (apiPath) {
  const response = await fetch(`https://dog.ceo/api/${apiPath}`);
  if (!response.ok) throw new Error(`Response not OK (${response.status})`);
  const data = await response.json();
  if (data.status !== 'success') throw new Error('Response not successful');
  return data.message;
}

async function fetchBreeds () {
  return fetchDogApiResult('breeds/list/all');
}

async function fetchSubBreeds (breed) {
  return fetchDogApiResult(`breed/${breed}/list`);
}

async function fetchImages (breed, subBreed) {
  return fetchDogApiResult(`breed/${breed}${subBreed ? `/${subBreed}` : ''}/images`);
}

async function fetchDogData () {
  const breeds = await fetchBreeds();
  return Promise.all(Object.entries(breeds).map(async ([breed, subBreeds]) => ({
    breed,
    subBreeds,
    images: (await fetchImages(breed)).map(url => ({url})),
  })));
}

(async () => {
  const dogData = await fetchDogData();

  // Convert the JSON data to a CSV string
  const csvData = json2csv.parse(dogData, {
    fields: ['breed', 'subBreeds', 'images'],
  });

  // Output the CSV data to the console
  console.log(csvData);
})();
I am able to hit the API and output correct data. How do I pull the data from async Function and convert it to CSV format?
// dog.ceo API

async function fetchDogApiResult (apiPath) {
  const response = await fetch(`https://dog.ceo/api/${apiPath}`);
  if (!response.ok) throw new Error(`Response not OK (${response.status})`);
  const data = await response.json();
  if (data.status !== 'success') throw new Error('Response not successful');
  return data.message;
}

async function fetchBreeds () {
  return fetchDogApiResult('breeds/list/all');
}

async function fetchSubBreeds (breed) {
  return fetchDogApiResult(`breed/${breed}/list`);
}

async function fetchImages (breed, subBreed) {
  return fetchDogApiResult(`breed/${breed}${subBreed ? `/${subBreed}` : ''}/images`);
}

async function fetchDogData () {
  const breeds = await fetchBreeds();
  return Promise.all(Object.entries(breeds).map(async ([breed, subBreeds]) => ({
    breed,
    subBreeds,
    images: (await fetchImages(breed)).map(url => ({url})),
  })));
}

(async () => {
  const dogData = await fetchDogData();
  console.log(JSON.stringify(dogData));
})();

Using this code I output JSON data from the API. I need to convert this data to a 3-column CSV file. I am lost on pulling the data out of the async functions. This is also using vanilla JS. Thanks in advance!
I have tried to use a normal function to handle the data and convert it to CSV, but it has not worked, and I'm unsure where to go from this point.
[ "To convert the data to a CSV format with three columns, you can use the json2csv library to convert the JSON data to a CSV string. You can then write the CSV string to a file or output it to the console.\nHere is an example of how to do this:\nconst json2csv = require('json2csv');\n\n// dog.ceo API\n\nasync function fetchDogApiResult (apiPath) {\n const response = await fetch(`https://dog.ceo/api/${apiPath}`);\n if (!response.ok) throw new Error(`Response not OK (${response.status})`);\n const data = await response.json();\n if (data.status !== 'success') throw new Error('Response not successful');\n return data.message;\n}\n\nasync function fetchBreeds () {\n return fetchDogApiResult('breeds/list/all');\n}\n\nasync function fetchSubBreeds (breed) {\n return fetchDogApiResult(`breed/${breed}/list`);\n}\n\nasync function fetchImages (breed, subBreed) {\n return fetchDogApiResult(`breed/${breed}${subBreed ? `/${subBreed}` : ''}/images`);\n}\n\nasync function fetchDogData () {\n const breeds = await fetchBreeds();\n return Promise.all(Object.entries(breeds).map(async ([breed, subBreeds]) => ({\n breed,\n subBreeds,\n images: (await fetchImages(breed)).map(url => ({url})),\n })));\n}\n\n(async () => {\n const dogData = await fetchDogData();\n\n // Convert the JSON data to a CSV string\n const csvData = json2csv.parse(dogData, {\n fields: ['breed', 'subBreeds', 'images'],\n });\n\n // Output the CSV data to the console\n console.log(csvData);\n})();\n\n" ]
[ 0 ]
[]
[]
[ "api", "javascript" ]
stackoverflow_0074681431_api_javascript.txt
Q: PLSQL Error PLS-00457: expressions have to be of SQL types I am really unable to figure out why I am unable to make the below code work. I tried to replicate the scenario explained in the below answer
Trying to use a FORALL to insert data dynamically to a table specified to the procedure
CREATE TABLE VISION.TEMP_TEST_TABLE
(
  A NUMBER(10),
  B NUMBER(10)
)

CREATE OR REPLACE PROCEDURE VISION.PR_TEST_FORALL
AUTHID CURRENT_USER
Is
    v_SQL1 varchar2(1000) := 'select rownum, rownum from dual connect by rownum <= 11000';
    v_SQL varchar2(1000) := 'INSERT /*+ APPEND */ INTO TEMP_TEST_TABLE VALUES :1';
    TYPE generic_Looper_CurType IS REF CURSOR;
    generic_Looper_Cursor generic_Looper_CurType;
    TYPE TEST_FS_ARRAY_TYPE IS TABLE OF VISION.TEMP_TEST_TABLE%ROWTYPE
        INDEX BY BINARY_INTEGER;
    TEST_FS_ARRAY_OBJ TEST_FS_ARRAY_TYPE;
    FETCH_SIZE NUMBER := 10000;
BEGIN
    open generic_Looper_Cursor for v_SQL1;
    loop
    FETCH generic_Looper_Cursor BULK COLLECT INTO TEST_FS_ARRAY_OBJ LIMIT fetch_size;
    execute immediate
    'insert into TEMP_TEST_TABLE select * from table(:TEST_FS_ARRAY_OBJ)'
    using TEST_FS_ARRAY_OBJ;
    commit;
    COMMIT;
    EXIT WHEN generic_Looper_Cursor%NOTFOUND;
    END LOOP;
End;
/

[Warning] ORA-24344: success with compilation error
23/11    PLS-00457: expressions have to be of SQL types
21/5     PL/SQL: Statement ignored
 (2: 0): Warning: compiled but with compilation errors
A: There are two problems with this line of code:
execute immediate
'insert into TEMP_TEST_TABLE select * from table(:TEST_FS_ARRAY_OBJ)'
using TEST_FS_ARRAY_OBJ;

The USING clause binds TEST_FS_ARRAY_OBJ, which is an associative array (INDEX BY BINARY_INTEGER), a PL/SQL-only type. Only SQL types can be bound into a dynamic SQL statement, which is exactly what PLS-00457 is telling you.
Even a nested table cannot be used in a SQL statement while it is declared locally in the procedure. The collection type has to be defined at schema level (CREATE TYPE ... AS TABLE OF ...) to be usable in SQL.
PLSQL Error PLS-00457: expressions have to be of SQL types
I am really unable to figure out why I am unable to make the below code work. I tried to replicate the scenario explained in the below answer Trying to use a FORALL to insert data dynamically to a table specified to the procedure CREATE TABLE VISION.TEMP_TEST_TABLE ( A NUMBER(10), B NUMBER(10) ) CREATE OR REPLACE PROCEDURE VISION.PR_TEST_FORALL AUTHID CURRENT_USER Is v_SQL1 varchar2(1000) := 'select rownum, rownum from dual connect by rownum <= 11000'; v_SQL varchar2(1000) := 'INSERT /*+ APPEND */ INTO TEMP_TEST_TABLE VALUES :1'; TYPE generic_Looper_CurType IS REF CURSOR; generic_Looper_Cursor generic_Looper_CurType; TYPE TEST_FS_ARRAY_TYPE IS TABLE OF VISION.TEMP_TEST_TABLE%ROWTYPE INDEX BY BINARY_INTEGER; TEST_FS_ARRAY_OBJ TEST_FS_ARRAY_TYPE; FETCH_SIZE NUMBER := 10000; BEGIN open generic_Looper_Cursor for v_SQL1; loop FETCH generic_Looper_Cursor BULK COLLECT INTO TEST_FS_ARRAY_OBJ LIMIT fetch_size; execute immediate 'insert into TEMP_TEST_TABLE select * from table(:TEST_FS_ARRAY_OBJ)' using TEST_FS_ARRAY_OBJ; commit; COMMIT; EXIT WHEN generic_Looper_Cursor%NOTFOUND; END LOOP; End; / [Warning] ORA-24344: success with compilation error 23/11 PLS-00457: expressions have to be of SQL types 21/5 PL/SQL: Statement ignored (2: 0): Warning: compiled but with compilation errors
[ "There are two problems with this line of code:\nexecute immediate\n'insert into TEMP_TEST_TABLE select * from table(:TEST_FS_ARRAY_OBJ)'\nusing TEST_FS_ARRAY_OBJ;\n\n\nYou cannot pass table name as bind variable.\nYou cannot use locally defined nested table in SQL statement. It has to be defined in schema level to use in SQL statement.\n\n" ]
[ 0 ]
[]
[]
[ "plsql" ]
stackoverflow_0074678675_plsql.txt
Q: Why my batch script its creating standard users and not admin? I'm trying to run the following command to generate an admin user using Batch. But every time I run it, it just creates standard users. Can someone please give me a hint?
@echo off

rem Prompt the user for the username of the admin user
set /p username=Enter the username of the admin user: 

rem Prompt the user for the password of the admin user
set /p password=Enter the password of the admin user: 

rem Create the admin user
net user %username% %password% /add

rem Add the admin user to the local Administrators group
net localgroup Administrators %username% /add

A: You are already using the right commands: net user %username% %password% /add creates the account, and net localgroup Administrators %username% /add is what actually makes it an administrator. There is no flag on net user that grants administrator rights by itself.
The most likely reason you end up with standard users is that the net localgroup command is failing silently. Two common causes:
the script is not running elevated, so net localgroup fails with "Access is denied" (right-click the script and choose "Run as administrator"), or
the local group is not called Administrators on non-English versions of Windows, where the group name is localized.
Run the script from an elevated command prompt and check the result of each command, for example:
net user %username% %password% /add
if errorlevel 1 echo Could not create user & exit /b 1

net localgroup Administrators %username% /add
if errorlevel 1 echo Could not add user to Administrators & exit /b 1

Once both commands report success, the account will show up as an administrator in the Windows Control Panel.
Why my batch script its creating standard users and not admin?
I'm trying to run the following command to generate an admin user using Batch. But every time I run it, it just creates standard users. Can someone please give me a hint?
@echo off

rem Prompt the user for the username of the admin user
set /p username=Enter the username of the admin user: 

rem Prompt the user for the password of the admin user
set /p password=Enter the password of the admin user: 

rem Create the admin user
net user %username% %password% /add

rem Add the admin user to the local Administrators group
net localgroup Administrators %username% /add
[ "It looks like you are using the correct commands to create a user and add them to the local Administrators group. However, the issue may be with the net user command.\nThe net user command has a parameter called /add which is used to create a new user account. However, this parameter does not specify that the user is an administrator.\nTo create an administrator user, you need to use the /add parameter along with the /active:yes and /passwordchg:no parameters. These parameters specify that the user account is active and that the user cannot change their password.\nHere is an updated version of the commands that you can use to create an administrator user:\nnet user %username% %password% /add /active:yes /passwordchg:no\nnet localgroup Administrators %username% /add\n\nThese commands should create the admin user and add them to the local Administrators group. You can verify this by checking the user accounts and group memberships in the Windows Control Panel.\n" ]
[ 0 ]
[]
[]
[ "batch_file", "windows" ]
stackoverflow_0074681410_batch_file_windows.txt
Q: Create new input for local storage Incrementing and storing the scores on local storage are working fine. Score gets incremented depending if game is won or lost.
When I refresh the page (scores go to 0) and start playing again, it starts updating the same local storage from the beginning. Is it possible to leave the last local storage as it is and set/start with the new one when page is refreshed?
js
function gameWon() {
  if (animalStatus === answer) {
    let oldScore = parseInt(document.getElementById("win").innerText);
    document.getElementById("win").innerText = ++oldScore;
    document.getElementById("key-container").innerHTML = `Well done! That was correct!<br>
    <button id="play-again" onclick="playAgain()">Play again!</button>`;
    localStorage.setItem("wins", document.getElementById("win").innerText);
  }
};

function gameLost() {
  if (wrongGuess === 6) {
    let oldScore = parseInt(document.getElementById("loss").innerText);
    document.getElementById("loss").innerText = ++oldScore;
    document.getElementById("key-container").innerHTML = `Unfortunately you ran out of possible guesses.<br>
    Correct answer was: ${answer}!<br><button id="play-again" onclick="playAgain()">Play again!</button>`;
    localStorage.setItem("losses", document.getElementById("loss").innerText);
  }
};

A: If I'm understanding the question correctly, you can initialize a key to be used for local storage on page load, i.e. maybe the current time stamp in ms.
const storageKey = +new Date();

Then later to access localStorage, you can do
localStorage.setItem(storageKey, JSON.stringify({ 
  wins: document.getElementById("win").innerText,
  losses: document.getElementById("loss").innerText 
}));

So to be able to have new localStorage values per refresh session, you'd need to structure your data so that the access key is whatever storage key you'd want to use, e.g. the time in ms, and the value should now be an object containing wins and losses
Alternatively you can keep the wins and losses as separate keys if you prefix those with your storage key
E.g.
localStorage.setItem(
  storageKey + "_wins", 
  document.getElementById("win").innerText
);

localStorage.setItem(
  storageKey + "_losses", 
  document.getElementById("loss").innerText
);

The idea is that every time the page loads, the time in ms will be different. So next time the page is reloaded, you'll get a new storage key and be able to access/update fresh wins and losses in localStorage.
Create new input for local storage
Incrementing and storing the scores on local storage are working fine. Score gets incremented depending if game is won or lost. When I refresh the page (scores go to 0) and start playing again, it starts updating the same local storage from the beginning. Is it possible to leave the last local storage as it is and set/start with the new one when page is refreshed? js function gameWon() { if (animalStatus === answer) { let oldScore = parseInt(document.getElementById("win").innerText); document.getElementById("win").innerText = ++oldScore; document.getElementById("key-container").innerHTML = `Well done! That was correct!<br> <button id="play-again" onclick="playAgain()">Play again!</button>`; localStorage.setItem("wins", document.getElementById("win").innerText); } }; function gameLost() { if (wrongGuess === 6) { let oldScore = parseInt(document.getElementById("loss").innerText); document.getElementById("loss").innerText = ++oldScore; document.getElementById("key-container").innerHTML = `Unfortunately you ran out of possible guesses.<br> Correct answer was: ${answer}!<br><button id="play-again" onclick="playAgain()">Play again!</button>`; localStorage.setItem("losses", document.getElementById("loss").innerText); } };
[ "If I’m understanding the question correctly, you can initialize a key to be used for local storage on page load. Ie maybe the current time stamp in ms.\nconst storageKey = +new Date();\n\nThen later to access localStorage, you can do\nlocalStorage.setItem(storageKey, JSON.stringify({ \n wins: document.getElementById(\"win\").innerText,\n losses: document.getElementById(\"loss\").innerText \n}));\n\nSo to be able to have new localStorage values per refresh session, you’d need to structure your data so that the access key is whatever storage key you’d want to use E.g. time in ms, and the value should now be an object containing wins and losses\nAlternatively you can keep the wins and losses as separate keys if you prefix those with your storage key\nE.g.\nlocalStorage.setItem(\n storageKey + “_wins”, \n document.getElementById(\"win\").innerText\n);\n\nlocalStorage.setItem(\n storageKey + “_losses”, \n document.getElementById(\"loss\").innerText\n);\n\nThe idea is that every time the page loads, the time in ms will be different. So next time the page is reloaded, you’ll get a new storage key and be able to access/ update fresh wins and losses in localStorage.\n" ]
[ 0 ]
[]
[]
[ "local_storage" ]
stackoverflow_0074681358_local_storage.txt
Q: Coin Toss game for fun How do I create a coin toss using def and return and using random int 0 and 1. I have never used python before. So I'm wondering how to make a function.
from random import randint

num = input('Number of times to flip coin: ')
flips = [randint(0,1) for r in range(num)]
results = []
for object in flips:
    if object == 0:
        results.append('Heads')
    elif object == 1:
        results.append('Tails')
print results

A: Like this?
from random import randint

def flipcoin(num_of_times):
    results = []
    for i in range(num_of_times):
        results.append(randint(0,1))
    return results

num = int(input('Number of times to flip coin: '))
results = flipcoin(num)

print(results)

EDIT: Dealing with coin faces, also using a function.
from random import randint

def coin_face(x):
    if (x == 0):
        return "Heads"
    if (x == 1):
        return "Tails"

def flipcoin(num_of_times):
    results = []
    for i in range(num_of_times):
        results.append(coin_face(randint(0,1)))
    return results

num = int(input('Number of times to flip coin: '))
results = flipcoin(num)

print(results)

Thanks.
A: Using a function for each coin flip:
from random import randint
def toss():
    flip = randint(0,1)
    if flip == 0:
        return 'Heads'
    return 'Tails'

results = []

num = int(input('Number of times to flip coin: '))
for i in range(num):
    results.append(toss())

print(results)
Coin Toss game for fun
How do I create a coin toss using def and return and using random int 0 and 1. I have never used python before. So I'm wondering how to make a function. from random import randint num = input('Number of times to flip coin: ') flips = [randint(0,1) for r in range(num)] results = [] for object in flips: if object == 0: results.append('Heads') elif object == 1: results.append('Tails') print results
[ "Like this?\nfrom random import randint\n\ndef flipcoin(num_of_times):\n results = []\n for i in range(num_of_times):\n results.append(randint(0,1))\n return results\n\nnum = int(input('Number of times to flip coin: '))\nresults = flipcoin(num)\n\nprint(results)\n\nEDIT: Dealing with coin faces, also using a function.\nfrom random import randint\n\ndef coin_face(x):\n if (x == 0):\n return \"Heads\"\n if (x == 1):\n return \"Tails\"\n\ndef flipcoin(num_of_times):\n results = []\n for i in range(num_of_times):\n results.append(coin_face(randint(0,1)))\n return results\n\nnum = int(input('Number of times to flip coin: '))\nresults = flipcoin(num)\n\nprint(results)\n\nThanks.\n", "Using a function for each coin flip:\nfrom random import randint\ndef toss():\n flip = randint(0,1)\n if flip == 0:\n return 'Heads'\n return 'Tails'\n\nresults = []\n\nnum = input('Number of times to flip coin: ')\nfor i in range(num):\n results.append(toss())\n\nprint(results)\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074681448_python_python_3.x.txt
Q: Python: how to instantiate a class "like a data class"? Data classes have this nice property of a much shorter / more readable "init function". Example:
from dataclasses import dataclass, field

@dataclass
class MyClass1:
    x: int = field(default=1)
    y: int = field(default=2)

As opposed to:
class MyClass2:
    def __init__(self, x : int = 1, y : int = 2):
        self.x = x
        self.y = y

Here, the code for MyClass1 is shorter because it doesn't need the explicit "def __init__(...)" function. Further, the ability to use fields allows for even more control while maintaining readability.
How does that work under the hood, and how can one implement this (and only this) particular syntactic sugar without actually using/importing dataclass?
A: In Python, classes are defined using the class keyword, and the @dataclass decorator is used to make a class a data class. The field function is used to specify the default value for a field in the class.
To define a class without using the @dataclass decorator, you can simply use the class keyword followed by the class name, and then specify the __init__ method, which is the constructor for the class. The __init__ method takes in the necessary parameters and assigns them to the corresponding fields in the class. Here's an example of how you could define the MyClass2 class without using the @dataclass decorator:
class MyClass2:
    def __init__(self, x : int = 1, y : int = 2):
        self.x = x
        self.y = y

As you can see, this is a bit more verbose than using the @dataclass decorator, but it achieves the same result.
If you want to define a class that has the same concise syntax as a data class, but without using the @dataclass decorator, you can use a metaclass. A metaclass is a class that is used to create a class. In Python, a metaclass is specified using the metaclass keyword argument in the class definition.
Here's an example of how you could define a metaclass that has the same behavior as the @dataclass decorator, together with a minimal field helper class so that the isinstance check works:
class field:
    def __init__(self, default=None):
        self.default = default

class DataClassMeta(type):
    def __new__(cls, name, bases, dct):
        # This code is executed when the class is defined
        # It creates the __init__ method for the class
        # using the fields and their default values
        fields = {}
        for key, value in dct.items():
            if isinstance(value, field):
                fields[key] = value.default
        dct["__init__"] = lambda self, **kwargs: self.__dict__.update({**fields, **kwargs})
        return super().__new__(cls, name, bases, dct)

class MyClass3(metaclass=DataClassMeta):
    x = field(default=1)
    y = field(default=2)

In this code, the DataClassMeta metaclass is defined, and it has a __new__ method that creates the __init__ method for the class using the fields and their default values. The MyClass3 class is then defined using the DataClassMeta metaclass. This allows it to have the same concise syntax as a data class, without using the @dataclass decorator.
You can use this approach to define a class that has the same concise syntax as a data class, but without using the @dataclass decorator. However, keep in mind that the @dataclass decorator provides additional functionality, such as automatically generating methods for comparing instances of the class and for representing them as strings. If you want to include this functionality in your class, you'll need to implement it yourself.
Python: how to instantiate a class "like a data class"?
Data classes have this nice property of a much shorter / more readable "init function". Example:
from dataclasses import dataclass, field

@dataclass
class MyClass1:
    x: int = field(default=1)
    y: int = field(default=2)

As opposed to:
class MyClass2:
    def __init__(self, x : int = 1, y : int = 2):
        self.x = x
        self.y = y

Here, the code for MyClass1 is shorter because it doesn't need the explicit "def __init__(...)" function. Further, the ability to use fields allows for even more control while maintaining readability.
How does that work under the hood, and how can one implement this (and only this) particular syntactic sugar without actually using/importing dataclass?
[ "In Python, classes are defined using the class keyword, and the @dataclass decorator is used to make a class a data class. The field function is used to specify the default value for a field in the class.\nTo define a class without using the @dataclass decorator, you can simply use the class keyword followed by the class name, and then specify the init method, which is the constructor for the class. The init method takes in the necessary parameters and assigns them to the corresponding fields in the class. Here's an example of how you could define the MyClass2 class without using the @dataclass decorator:\nclass MyClass2:\n def __init__(self, x : int = 1, y : int = 2):\n self.x = x\n self.y = y\n\nAs you can see, this is a bit more verbose than using the @dataclass decorator, but it achieves the same result.\nIf you want to define a class that has the same concise syntax as a data class, but without using the @dataclass decorator, you can use a metaclass. A metaclass is a class that is used to create a class. In Python, a metaclass is specified using the metaclass keyword argument in the class definition.\nHere's an example of how you could define a metaclass that has the same behavior as the @dataclass decorator:\nclass DataClassMeta(type):\n def __new__(cls, name, bases, dct):\n # This code is executed when the class is defined\n # It creates the __init__ method for the class\n # using the fields and their default values\n fields = {}\n for key, value in dct.items():\n if isinstance(value, field):\n fields[key] = value.default\n dct[\"__init__\"] = lambda self, **kwargs: \n\nself.__dict__.update({**fields, **kwargs})\n return super().__new__(cls, name, bases, dct)\n \n class MyClass3(metaclass=DataClassMeta):\n x = field(default=1)\n y = field(default=2)\n\nIn this code, the DataClassMeta metaclass is defined, and it has a new method that creates the init method for the class using the fields and their default values. The MyClass3 class is then defined using the DataClassMeta metaclass. This allows it to have the same concise syntax as a data class, without using the @dataclass decorator.\nYou can use this approach to define a class that has the same concise syntax as a data class, but without using the @dataclass decorator. However, keep in mind that the @dataclass decorator provides additional functionality, such as automatically generating methods for comparing instances of the class and for representing them as strings. If you want to include this functionality in your class, you'll need to implement it yourself.\n" ]
[ 0 ]
[]
[]
[ "python", "python_dataclasses" ]
stackoverflow_0074681453_python_python_dataclasses.txt
Q: Detect use-after-move for global variables After some effort, I convinced both the clang compiler and clang-tidy (static analyzer) to warn of a use-after-move situation. (see https://stackoverflow.com/a/74250567/225186)
int main(int, char**) {
    a_class a;
    auto b = std::move(a);
    a.f();  // warns here, for example "invalid invocation of method 'f' on object 'a' while it is in the 'consumed' state [-Werror,-Wconsumed]"
}

However, if I make the variable global (or static or lazily static), there is no more warning.
a_class a;

int main(int, char**) {
    auto b = std::move(a);
    a.f();  // no warning here!
}

See here: https://godbolt.org/z/3zW61qYfY
Is it possible to generalize some sort of use-after-move detection at compile-time for global variables? Or is it impossible, even in principle?
Note: please don't make this discussion about global objects (I know they are a bad idea) or about the legality of using moved objects (I know some classes are designed for that to be ok). The question is technical, about the compiler and tools to detect a certain bug-prone pattern in the program.
Full working code, compile with clang ... -Wconsumed -Werror -std=c++11 or use clang-tidy. The clang annotations (extensions) help the compiler detect the patterns.
#include<cassert>
#include<memory>

class [[clang::consumable(unconsumed)]] a_class {
    std::unique_ptr<int> p_;

public:
    [[clang::callable_when(unconsumed)]]
    void f() {}

// private:
    [[clang::set_typestate(consumed)]]
    void invalidate() {}  // not needed but good to know
};

a_class a;

int main(int, char**) {
    // a_class a;
    auto b = std::move(a);
    a.f();  // global doesn't warn here
}

Most of the information I could find about this clang extension is from here: Andreas Kling's blog https://awesomekling.github.io/Catching-use-after-move-bugs-with-Clang-consumed-annotations/

A: It is possible to use clang's consumed annotations to detect use-after-move situations in global variables, but it requires additional steps and may not be practical in all cases.
When a global variable is moved, its type is changed to the "consumed" state, indicating that it is no longer valid for use. However, because global variables are not destroyed when they go out of scope, their consumed state is not automatically reset. This means that if a global variable is moved and then accessed again later, it will be in the consumed state and clang's consumed annotations will not generate a warning.
To enable clang's consumed annotations to generate a warning for use-after-move situations involving global variables, it is necessary to manually reset the variable's type state to "unconsumed" when it is no longer being used. This can be done using the clang::reset_typestate annotation, which can be applied to a function that resets the variable's type state. For example:
#include <clang/Analysis/Analyses/Consumed.h>

void reset_a_class() {
    clang::reset_typestate<a_class>(clang::ConsumedState::Unconsumed);
}

int main(int, char**) {
    // a_class a;
    auto b = std::move(a);
    reset_a_class();
    a.f(); // generates warning
}

In this case, the call to reset_a_class() resets the consumed state of the global variable a, allowing it to be used again without generating a warning from clang's consumed annotations.
Here is an updated version of the reset_a_class function that properly declares the a variable and uses the clang::release_memory_on_return annotation to release the memory when the function returns.
// This function resets the `a_class` object by moving it to a temporary
// variable, which releases the memory when it goes out of scope.
void reset_a_class() {
    // Declare the `a` variable.
    a_class a;

    // Use the `clang::release_memory_on_return` annotation to release the
    // memory for `a` when the function returns.
    [[clang::release_memory_on_return]] auto b = std::move(a);
}

To use this function in the online compiler at godbolt.org, you will need to add the -fconsumed and -frelease-memory-on-return flags to the compiler options. This will enable the Clang annotations used in the reset_a_class function. Here is an example of how you can use this function in godbolt.org:
#include<memory>

class [[clang::consumable(unconsumed)]] a_class {
    std::unique_ptr<int> p
Detect use-after-move for global variables
After some effort, I convinced both the clang compiler and clang-tidy (static analyzer) to warn of a use-after-move situation. (see https://stackoverflow.com/a/74250567/225186)
int main(int, char**) {
    a_class a;
    auto b = std::move(a);
    a.f();  // warns here, for example "invalid invocation of method 'f' on object 'a' while it is in the 'consumed' state [-Werror,-Wconsumed]"
}

However, if I make the variable global (or static or lazily static), there is no more warning.
a_class a;

int main(int, char**) {
    auto b = std::move(a);
    a.f();  // no warning here!
}

See here: https://godbolt.org/z/3zW61qYfY
Is it possible to generalize some sort of use-after-move detection at compile-time for global variables? Or is it impossible, even in principle?
Note: please don't make this discussion about global objects (I know they are a bad idea) or about the legality of using moved objects (I know some classes are designed for that to be ok). The question is technical, about the compiler and tools to detect a certain bug-prone pattern in the program.
Full working code, compile with clang ... -Wconsumed -Werror -std=c++11 or use clang-tidy. The clang annotations (extensions) help the compiler detect the patterns.
#include<cassert>
#include<memory>

class [[clang::consumable(unconsumed)]] a_class {
    std::unique_ptr<int> p_;

public:
    [[clang::callable_when(unconsumed)]]
    void f() {}

// private:
    [[clang::set_typestate(consumed)]]
    void invalidate() {}  // not needed but good to know
};

a_class a;

int main(int, char**) {
    // a_class a;
    auto b = std::move(a);
    a.f();  // global doesn't warn here
}

Most of the information I could find about this clang extension is from here: Andreas Kling's blog https://awesomekling.github.io/Catching-use-after-move-bugs-with-Clang-consumed-annotations/
[ "It is possible to use clang's consumed annotations to detect use-after-move situations in global variables, but it requires additional steps and may not be practical in all cases.\nWhen a global variable is moved, its type is changed to the \"consumed\" state, indicating that it is no longer valid for use. However, because global variables are not destroyed when they go out of scope, their consumed state is not automatically reset. This means that if a global variable is moved and then accessed again later, it will be in the consumed state and clang's consumed annotations will not generate a warning.\nTo enable clang's consumed annotations to generate a warning for use-after-move situations involving global variables, it is necessary to manually reset the variable's type state to \"unconsumed\" when it is no longer being used. This can be done using the clang::reset_typestate annotation, which can be applied to a function that resets the variable's type state. For example:\n#include <clang/Analysis/Analyses/Consumed.h>\n\nvoid reset_a_class() {\n clang::reset_typestate<a_class>(clang::ConsumedState::Unconsumed);\n}\n\nint main(int, char**) {\n // a_class a;\n auto b = std::move(a);\n reset_a_class();\n a.f(); // generates warning\n}\n\nIn this case, the call to reset_a_class() resets the consumed state of the global variable a, allowing it to be used again without generating a warning from clang's consumed annotations.\n\nHere is an updated version of the reset_a_class function that properly declares the a variable and uses the clang::release_memory_on_return annotation to release the memory when the function returns.\n// This function resets the `a_class` object by moving it to a temporary\n// variable, which releases the memory when it goes out of scope.\nvoid reset_a_class() {\n // Declare the `a` variable.\n a_class a;\n\n // Use the `clang::release_memory_on_return` annotation to release the\n // memory for `a` when the function returns.\n [[clang::release_memory_on_return]] auto b = std::move(a);\n}\n\nTo use this function in the online compiler at godbolt.org, you will need to add the -fconsumed and -frelease-memory-on-return flags to the compiler options. This will enable the Clang annotations used in the reset_a_class function. Here is an example of how you can use this function in godbolt.org:\n#include<memory>\n\nclass [[clang::consumable(unconsumed)]] a_class {\n std::unique_ptr<int> p\n\n" ]
[ 0 ]
[ "Yes, it is possible to generalize use-after-move detection for global variables in C++ using clang's \"consumable\" and \"callable_when\" annotations. These annotations allow you to specify the \"typestate\" of an object - whether it is in the \"consumed\" or \"unconsumed\" state - and whether a certain method can be called depending on the typestate.\nHere is an example of how you can use these annotations to prevent use-after-move situations with global variables:\n#include<cassert>\n#include<memory>\n\nclass [[clang::consumable(unconsumed)]] a_class {\nstd::unique_ptr<int> p_;\n\npublic:\n[[clang::callable_when(unconsumed)]]\nvoid f() {}\n};\n\na_class a;\n\nint main(int, char**) {\nauto b = std::move(a);\na.f(); // this line should now produce a warning\n}\n\nWhen you compile this code with clang, using the flags \"-Wconsumed -Werror -std=c++11\", you should see a warning like this:\nwarning: invalid invocation of method 'f' on object 'a' while it is in the 'consumed' state [-Werror,-Wconsumed]\n\nThis warning indicates that the method \"f\" is being called on an object \"a\" that is in the \"consumed\" state, which is not allowed according to the annotations.\nIt's worth noting that using global variables is generally considered to be bad practice, as it can lead to unpredictable behavior and hard-to-debug errors. Instead, you should try to avoid using global variables and use other more explicitly managed mechanisms, such as function arguments or object fields, to pass data between different parts of your code.\n", "It is not possible to use the Clang -Wconsumed warning or the clang::consumable annotation to detect use-after-move situations for global variables. This is because these warnings and annotations are only applied to local variables and function parameters.\nThe -Wconsumed warning is designed to help catch situations where a local variable or function parameter is used after it has been moved from. This warning is triggered when a callable method is invoked on an object that is in the consumed state, which indicates that the object has been moved from.\nThe clang::consumable annotation is used to mark a class as being \"consumable\", which means that its objects can be moved from. This annotation is used in conjunction with the -Wconsumed warning to help the compiler detect use-after-move situations.\nHowever, neither the -Wconsumed warning nor the clang::consumable annotation can be used to detect use-after-move situations for global variables. This is because global variables are not local variables or function parameters, so they are not subject to the same rules and warnings.\nIn general, it is not possible to detect use-after-move situations for global variables at compile-time, even in principle. This is because global variables are not associated with any particular scope or function, so it is not possible for the compiler to track their lifecycle or state.\nInstead, it is generally recommended to avoid using global variables, as they can cause various issues, including use-after-move situations. If you need to share state between multiple functions or modules, you can use other mechanisms, such as function parameters or static local variables, which are subject to the same rules and warnings as local variables.\nI hope this helps! Let me know if you have any other questions.\n", "It is not possible to perform use-after-move detection for global variables at compile-time in C++. 
This is because global variables are allocated at compile time, before the program is run, and their lifetimes are not tied to any specific scope in the code. As a result, the compiler cannot track the lifetime of global variables and cannot determine when they are moved from or otherwise invalidated.\nFurthermore, even if the compiler could track the lifetime of global variables, it is not clear how it could determine whether a given use of a global variable is safe or not. For example, in the code you provided, the call to a.f() after a has been moved from is not necessarily invalid. It may be that a_class has a move constructor that leaves the object in a valid state, or that f() is implemented in such a way that it can be called on an object even after it has been moved from.\nTherefore, in general, it is not possible to perform use-after-move detection for global variables at compile-time in C++. It is up to you to ensure that global variables are used safely and correctly, and to avoid using them after they have been moved from.\n@Midas approach of using clang's consumable and callable_when annotations unfortunately doesn't work for all cases - here are a few examples to when you're not able to detect use-after-move situations with global variables:\n1. If the global variable is assigned to another variable after it has been moved from, the annotations will not be able to detect the use-after-move situation. See:\na_class a;\n\nint main(int, char**) {\n auto b = std::move(a);\n a_class c = a; // use-after-move situation, but not detected\n c.f(); // this line should produce a warning, but does not\n}\n\nIn this example, the global variable a is moved from and then assigned to another variable c. The annotations will not be able to detect this use-after-move situation, and the call to c.f() will not produce a warning.\n2. If the global variable is used indirectly, through a pointer or reference, the annotations will not be able to detect the use-after-move situation. Refer to the following snippet:\na_class a;\n\nint main(int, char**) {\n auto b = std::move(a);\n a_class* c = &a; // use-after-move situation, but not detected\n c->f(); // this line should produce a warning, but does not\n}\n\nHere the global variable a is moved from and then accessed through a pointer c. The annotations will not be able to detect this use-after-move situation, and the call to c->f() will not produce a warning.\n3. The last scenario that came to mind is that if the global variable is used in a different translation unit (i.e., a different source file), the annotations will not be able to detect the use-after-move situation. For example:\n// a_class.h\nclass a_class {\n // ...\n};\n\n// a_class.cpp\na_class a;\n\n// main.cpp\n#include \"a_class.h\"\n\nint main(int, char**) {\n auto b = std::move(a);\n a.f(); // use-after-move situation, but not detected\n}\n\nIn this scenario, the global variable a is defined in a_class.cpp, but is used in main.cpp. 
The annotations in a_class.h will not be able to detect this use-after-move situation, and the call to a.f() in main.cpp will not produce a warning.\nNOTE: There are probably dozens of other scenarios / usage cases that will not work with consumables - but these were just the few that came to my mind when writing this post!\nSo overall, while using clang's consumable and callable_when annotations can help detect some use-after-move situations with global variables, it is not a complete solution and may not be able to detect all such situations.\nEDIT 1:\nOP replied that he doesn't need a canonical reply on whether every use-after-move is being detected. In general, you could std::unique_ptr from the C++ standard library. Basically using smart pointer (that only can be moved from and not copied) -> It has get() method that can be used to access the object that it points to, but if the std::unique_ptr has been moved from, calling this method will trigger an exception. This allows you to detect use-after-move situations at runtime, but it is not a compile-time check.\nHere is a small example on how to use std::unique_ptr to detect a basic use-after-move situation:\n#include <memory>\n#include <iostream>\n\nint main()\n{\n // Create a std::unique_ptr that points to an int\n std::unique_ptr<int> ptr = std::make_unique<int>(42);\n\n // Move the std::unique_ptr to another variable\n std::unique_ptr<int> other_ptr = std::move(ptr);\n\n try\n {\n // Attempt to access the int that the moved-from std::unique_ptr points to\n std::cout << *ptr.get() << std::endl;\n }\n catch (const std::exception& e)\n {\n // If the .get() method is called on a moved-from std::unique_ptr, it will throw an exception\n std::cout << \"Exception: \" << e.what() << std::endl;\n }\n\n return 0;\n}\n\nIn this example, the code attempts to access the int that the ptr variable points to, but since ptr has been moved from, calling the .get() method on it will throw an exception. This allows you to detect the use-after-move situation at runtime.\nNOTE: This is not a compile-time check, so it will not produce a warning when you compile the code. It is simply a way to detect use-after-move situations at runtime.\n" ]
[ -1, -1, -2 ]
[ "c++", "clang", "clang_tidy", "move", "static_analysis" ]
stackoverflow_0074266113_c++_clang_clang_tidy_move_static_analysis.txt
Q: Migrate from template_file to templatefile Apparently, template_file was deprecated, and I need to migrate to templatefile I have the following YAML that needs to be populated with two variables data "template_file" "user_data" { template = file("cloud-init.yaml") vars = { user = var.USER tskey = var.TAILSCALE_AUTHKEY } } Used below user_data = data.template_file.user_data.rendered How to do this in a new way, using templatefile? EDIT: Full source code https://github.com/skhaz/my-cloud-workspace A: Create a separate .yml file Then call the template .yml file through: user_data = file("filename.yml") A: These days you should use templatefile, not template_file. In your case you could do: locals { user_data = templatefile("cloud-init.yaml", { user = var.USER tskey = var.TAILSCALE_AUTHKEY }) } then user_data = local.user_data
Migrate from template_file to templatefile
Apparently, template_file was deprecated, and I need to migrate to templatefile I have the following YAML that needs to be populated with two variables data "template_file" "user_data" { template = file("cloud-init.yaml") vars = { user = var.USER tskey = var.TAILSCALE_AUTHKEY } } Used below user_data = data.template_file.user_data.rendered How to do this in a new way, using templatefile? EDIT: Full source code https://github.com/skhaz/my-cloud-workspace
[ "Create a separate .yml file\nThen call the template .yml file through:\n\nuser_data = file(\"filename.yml\")\n\n", "These days you should use templatefile, not template_file. In your case you could do:\nlocals {\n user_data = templatefile(\"cloud-init.yaml\", {\n user = var.USER\n tskey = var.TAILSCALE_AUTHKEY\n })\n}\n\nthen\nuser_data = local.user_data\n\n" ]
[ 0, 0 ]
[]
[]
[ "terraform" ]
stackoverflow_0074508452_terraform.txt
Q: How to locate a specific var type inside many others arrays in python? I'd like to know how I can locate a variable of a specific type in a set of arrays whose length and structure can change, i.e:
[[[[11.0, 16.0], [113.0, 16.0], [113.0, 41.0], [11.0, 41.0]], ("I WANNA BE LOCATED", 548967)]]

I just need to extract the variable whose type is str, in this case: "I WANNA BE LOCATED"
I tried using a "for" loop, but it doesn't help, because in my case the string might be at other indices. Is there another way? Maybe with numpy or some lambda?
A: Here is an example of how you could use these functions to extract the string from the nested array:
# Define the nested array
arr = [[[[11.0, 16.0], [113.0, 16.0], [113.0, 41.0], [11.0, 41.0]], (1, "I WANNA BE LOCATED",)]]

# Define a function to extract the string from the nested array
def extract_string(arr):
    print(arr)
    # Iterate over the elements in the array
    for i, elem in enumerate(arr):
        # Check if the element is a string
        if isinstance(elem, str):
            # Return the string if it is a string
            return elem
        # Check if the element is a nested array
        elif isinstance(elem, list) or isinstance(elem, tuple):
            # Recursively call the function to search for the string in the nested array
            result = extract_string(elem)
            if result:
                return result

# Extract the string from the nested array
string = extract_string(arr)
print(string)

In this code, the extract_string() function recursively searches the nested array for a string. If it finds a string, it returns the string. If it finds a nested array, it recursively calls itself to search for the string in the nested array. This allows the function to search for the string in any level of the nested array.
A: I'd do it recursively; this, for example, will work (provided you only have tuples and lists):
collection = [[[[11.0, 16.0], [113.0, 16.0], [113.0, 41.0], [11.0, 41.0]], ("I WANNA BE LOCATED", 548967)]]

def get_string(thing):
    if type(thing) == str:
        return thing
    if type(thing) in [list, tuple]:
        for i in thing:
            if (a := get_string(i)):
                return a
    return None

get_string(collection)
# Out[456]: 'I WANNA BE LOCATED'

A: Flatten the arbitrarily nested list;
Filter the strings (and perhaps bytes).
Example:
from collections.abc import Iterable

li=[[[[11.0, 16.0], [113.0, 16.0], [113.0, 41.0], [11.0, 41.0]], ("I WANNA BE LOCATED", 548967)]]

def flatten(xs):
    for x in xs:
        if isinstance(x, Iterable) and not isinstance(x, (str, bytes)):
            yield from flatten(x)
        else:
            yield x

>>> [item for item in flatten(li) if isinstance(item,(str, bytes))]
['I WANNA BE LOCATED']
How to locate a specific var type inside many other arrays in python?
I'd like to know how I can locate a variable of a specific type in a set of nested arrays whose length and structure can vary, e.g.:
[[[[11.0, 16.0], [113.0, 16.0], [113.0, 41.0], [11.0, 41.0]], ("I WANNA BE LOCATED", 548967)]]

I just need to extract the variable that is a str, in this case:
"I WANNA BE LOCATED"

I tried using a "for" loop, but it doesn't help, because in my case the string might appear at other indices. Is there another way? Maybe with numpy or some lambda?
[ "Here is an example of how you could use these functions to extract the string from the nested array:\n# Define the nested array\narr = [[[[11.0, 16.0], [113.0, 16.0], [113.0, 41.0], [11.0, 41.0]], (1, \"I WANNA BE LOCATED\",)]]\n\n# Define a function to extract the string from the nested array\ndef extract_string(arr):\n print(arr)\n # Iterate over the elements in the array\n for i, elem in enumerate(arr):\n # Check if the element is a string\n if isinstance(elem, str):\n # Return the string if it is a string\n return elem\n # Check if the element is a nested array\n elif isinstance(elem, list) or isinstance(elem, tuple):\n # Recursively call the function to search for the string in the nested array\n result = extract_string(elem)\n if result:\n return result\n\n# Extract the string from the nested array\nstring = extract_string(arr)\nprint(string)\n\nIn this code, the extract_string() function recursively searches the nested array for a string. If it finds a string, it returns the string. If it finds a nested array, it recursively calls itself to search for the string in the nested array. This allows the function to search for the string in any level of the nested array.\n", "I'd do it recursively; this, for example, will work (provided you only have tuples and lists):\ncollection = [[[[11.0, 16.0], [113.0, 16.0], [113.0, 41.0], [11.0, 41.0]], (\"I WANNA BE LOCATED\", 548967)]]\n\ndef get_string(thing):\n if type(thing) == str:\n return thing\n if type(thing) in [list, tuple]:\n for i in thing:\n if (a := get_string(i)):\n return a\n return None\n\nget_string(collection)\n# Out[456]: 'I WANNA BE LOCATED'\n\n", "\nFlatten the arbitrarily nested list;\nFilter the strings (and perhaps bytes).\n\nExample:\nfrom collections.abc import Iterable\n\nli=[[[[11.0, 16.0], [113.0, 16.0], [113.0, 41.0], [11.0, 41.0]], (\"I WANNA BE LOCATED\", 548967)]]\n\ndef flatten(xs):\n for x in xs:\n if isinstance(x, Iterable) and not isinstance(x, (str, bytes)):\n yield from flatten(x)\n else:\n yield x\n\n>>> [item for item in flatten(li) if isinstance(item,(str, bytes))]\n['I WANNA BE LOCATED']\n\n" ]
[ 1, 1, 1 ]
[]
[]
[ "filter", "indexing", "list", "numpy", "python" ]
stackoverflow_0074681279_filter_indexing_list_numpy_python.txt
Q: How do I get the unix timestamp as a variable in C++?
I am trying to make an accurate program that tells you the time, but I can't get the current Unix timestamp. Is there any way I can get the timestamp?
I tried using
int time = std::chrono::steady_clock::now();

but that gives me an error, saying that 'std::chrono' has not been declared.
By the way, I'm new to C++. Let me know if you have the answer.
A: Try using std::time, it should be available in Dev C++ 5.11, but let me know if it also throws an error:
#include <iostream>
#include <ctime>
#include <cstddef> // Include the NULL macro

int main() {
    // Get the current time in seconds
    time_t now = std::time(NULL);

    // Convert the Unix timestamp to a tm struct
    tm *time = std::localtime(&now);

    // Print the current time and date
    std::cout << "The current time and date is " << time->tm_year + 1900
              << "-" << time->tm_mon + 1 << "-" << time->tm_mday
              << " " << time->tm_hour << ":" << time->tm_min
              << ":" << time->tm_sec << std::endl;

    return 0;
}

In this example, the std::time() function is called with a NULL argument to get the current time in seconds. The result is then converted and printed to the console using the std::cout object.
You can use the std::localtime() function to convert the Unix timestamp to a more human-readable format. This function returns a tm struct that contains the local time broken down into its component parts (year, month, day, hour, minute, etc.).
A: Not sure why you need to use a Unix timestamp, since C++11 and higher have more convenient types and functions for working with time. But anyway, here is an example of how it can be done in both the new and the old way, copied from here: https://en.cppreference.com/w/cpp/chrono/system_clock/to_time_t
#include <iostream>
#include <ctime>
#include <chrono>
#include <thread>
using namespace std::chrono_literals;

int main()
{
    // The old way
    std::time_t oldt = std::time(nullptr);

    std::this_thread::sleep_for(2700ms);

    // The new way
    std::time_t newt = std::chrono::system_clock::to_time_t(
        std::chrono::system_clock::now());

    std::cout << "oldt-newt == " << oldt-newt << " s\n";
}

Note that system_clock is used here. It is the only clock that gives "wall clock" time. Other clocks, like steady_clock, are undefined in terms of what their values mean; i.e., the value can be the time since the last reboot, the time since program start, or anything else. The only guarantee steady_clock gives is that it is steadily increasing. At the same time, system_clock can go back, e.g. when the user changes it or when adjusting for daylight saving time.
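Since the question literally asks for the timestamp as a variable, here is a minimal sketch that stores the Unix time as an integer using <chrono> (C++11 or later). One caveat: system_clock is only guaranteed to use the Unix epoch from C++20 on, although all major implementations already do:

#include <chrono>
#include <cstdint>
#include <iostream>

int main() {
    // Seconds elapsed since the Unix epoch (1970-01-01 00:00:00 UTC).
    std::int64_t unix_ts = std::chrono::duration_cast<std::chrono::seconds>(
        std::chrono::system_clock::now().time_since_epoch()).count();
    std::cout << "Unix timestamp: " << unix_ts << '\n';
    return 0;
}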
How do I get the unix timestamp as a variable in C++?
I am trying to make an accurate program that tells you the time, but I can't get the current Unix timestamp. Is there any way I can get the timestamp?
I tried using
int time = std::chrono::steady_clock::now();

but that gives me an error, saying that 'std::chrono' has not been declared.
By the way, I'm new to C++. Let me know if you have the answer.
[ "Try using std::time, it should be available in Dev C++ 5.11, but let me know if it also throws an error:\n#include <iostream>\n#include <ctime>\n#include <cstddef> // Include the NULL macro\n\nint main() {\n // Get the current time in seconds\n time_t now = std::time(NULL);\n\n // Convert the Unix timestamp to a tm struct\n tm *time = std::localtime(&now);\n\n // Print the current time and date\n std::cout << \"The current time and date is \" << time->tm_year + 1900\n << \"-\" << time->tm_mon + 1 << \"-\" << time->tm_mday\n << \" \" << time->tm_hour << \":\" << time->tm_min\n << \":\" << time->tm_sec << std::endl;\n\n return 0;\n}\n\nIn this example, the std::time() function is called with a NULL argument to get the current time in seconds. The value returned by the std::time() function is then printed to the console using the std::cout object.\nYou can use the std::localtime() function to convert the Unix timestamp to a more human-readable format. This function returns a tm struct that contains the local time broken down into its component parts (year, month, day, hour, minute, etc.).\n", "Not sure why you need to use Unix timestamp, since C++11 and higher has more convinient types and functions to work with time. But anyway, here is example of how it can be done in both new and old way, copied from here: https://en.cppreference.com/w/cpp/chrono/system_clock/to_time_t\n#include <iostream>\n#include <ctime>\n#include <chrono>\n#include <thread>\nusing namespace std::chrono_literals;\n \nint main()\n{\n // The old way\n std::time_t oldt = std::time(nullptr);\n \n std::this_thread::sleep_for(2700ms);\n \n // The new way\n std::time_t newt = std::chrono::system_clock::to_time_t(\n std::chrono::system_clock::now());\n \n std::cout << \"oldt-newt == \" << oldt-newt << \" s\\n\";\n}\n\nPay attention, that system_clock is used here. It is the only clock that gets \"wall clock\" time. Other clocks, like steady_clock is undefined in terms of what it values means. I.e. it can be time from last reboot or time from program start or anything else, the only guarantee steady_clock has is that it is steadely increasing. At the same time system_clock can go back, e.g. when user chages it or when adjusting for day light saving.\n" ]
[ 0, 0 ]
[]
[]
[ "c++", "dev_c++", "unix_timestamp" ]
stackoverflow_0074680489_c++_dev_c++_unix_timestamp.txt
Q: how do I fix this key signing error in google play console?
So I just built the new version of my app, but when I upload the apk to the google play console I get this error: Image of Error
I have no idea what's causing it. I checked, and within Unity the key is the same one I used the first time I built the app. The ID is the same from the keystore.
I have no idea what's causing this or how to fix it, so I'd appreciate any help!
A: You must use the same signing key that was used to sign the app bundle when it was initially uploaded to the Google Play Store. This signing key is typically stored in a Keystore file, which you can use to sign the app bundle using the jarsigner tool.
A: You should choose the same Project Key Alias of the Keystore.
how do I fix this key signing error in google play console?
So I just built the new version of my app, but when I upload the apk to the google play console I get this error: Image of Error
I have no idea what's causing it. I checked, and within Unity the key is the same one I used the first time I built the app. The ID is the same from the keystore.
I have no idea what's causing this or how to fix it, so I'd appreciate any help!
[ "You must use the same signing key that was used to sign the app bundle when it was initially uploaded to the Google Play Store. This signing key is typically stored in a Keystore file, which you can use to sign the app bundle using the jarsigner tool.\n", "\nYou should choose the same Project Key Alias of the Keystore.\n" ]
[ 0, 0 ]
[]
[]
[ "google_play_console", "unity3d" ]
stackoverflow_0074681241_google_play_console_unity3d.txt
Q: How to make this REACT APP display some text?
I'm trying to display some text in my react course, but the app won't display it. I've started the react app called "myapp" and created a file "TodoList.js", which is rendered in my "App.js", but when I write some text in TodoList.js it doesn't appear in the app.
This is the App.js code:
import React from "react";
import TodoList from "./TodoList";

function App() {
  return (
    <TodoList />
  )
}

export default App;

This is the TodoList.js code:
import React from 'react'

export default function TodoList() {
  return (
    <div>
      hello
    </div>
  )
}

It's supposed to display "hello" in the browser, but it does not work and I can't understand why.
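If nothing renders at all even though App.js and TodoList.js look correct, the entry point is the usual suspect. With React 18 the mounting API changed to createRoot, so, assuming the default Create React App layout (an assumption), index.js would look roughly like this sketch:

import React from "react";
import ReactDOM from "react-dom/client";
import App from "./App";

// Mounts <App /> into the element with id "root"; public/index.html
// must therefore contain <div id="root"></div>.
const root = ReactDOM.createRoot(document.getElementById("root"));
root.render(<App />);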
How to make this REACT APP display some text?
I'm trying to display some text in my react course, but the app won't display it. I've started the react app called "myapp" and created a file "TodoList.js", which is rendered in my "App.js", but when I write some text in TodoList.js it doesn't appear in the app.
This is the App.js code:
import React from "react";
import TodoList from "./TodoList";

function App() {
  return (
    <TodoList />
  )
}

export default App;

This is the TodoList.js code:
import React from 'react'

export default function TodoList() {
  return (
    <div>
      hello
    </div>
  )
}

It's supposed to display "hello" in the browser, but it does not work and I can't understand why.
[]
[]
[ "In React 18, the render method has changed. App.js and TodoList.js should be no problem; the problem will be in index.js.\nindex.js should look like this.\nimport React from 'react';\nimport ReactDOM from 'react-dom/client';\nimport App from './App';\n\nconst root = ReactDOM.createRoot(document.getElementById('root'));\nroot.render(\n  <React.StrictMode>\n    <App />\n  </React.StrictMode>\n);\n\n" ]
[ -1 ]
[ "javascript", "reactjs" ]
stackoverflow_0074680580_javascript_reactjs.txt
Q: R package caret: How to access the results of both the training and testing data?
I am using the following code:
tc <- trainControl(method = "cv", number = 20)

lm1_cv <- train(y~., data = data, method = "lm",
                preProcess = c("center", "scale"),
                trControl = tc)
lm1_cv

Which has the following output:
Linear Regression

1338 samples
   6 predictor

Pre-processing: centered (8), scaled (8)
Resampling: Cross-Validated (20 fold)
Summary of sample sizes: 1272, 1272, 1270, 1271, 1270, 1272, ...
Resampling results:

  RMSE      Rsquared   MAE
  6048.516  0.7443666  4203.653

I have two questions:
1.) Caret is performing 20-fold cross validation. Is the average of all the testing data results stored in lm1_cv$results?
2.) If so, how do I access the average results (RMSE, etc) of all the training data?
Overall: My goal is to compare the performance of the model on training data vs testing data. But I'm not sure how to access both.
A: Yes, the average results of all the testing data are stored in the lm1_cv$results object. The train() function in the caret package automatically performs cross-validation and stores the results in the results element of the returned object.
To access the average results (RMSE, etc.) of all the training data, you can use the resamples() function from the caret package. This function allows you to extract the training and testing data results from a cross-validation model. Here is an example of how you could do this:
# Extract the training and testing data results
results <- resamples(lm1_cv)

# Calculate the average RMSE for the training data
mean(results$RMSE[, "Training"])

# Calculate the average RMSE for the testing data
mean(results$RMSE[, "Testing"])

In this example, the resamples() function is used to extract the training and testing data results from the lm1_cv object. The mean() function is then used to calculate the average RMSE for the training and testing data. The results$RMSE[, "Training"] and results$RMSE[, "Testing"] arguments select the RMSE values for the training and testing data, respectively. You can use this approach to calculate the average performance of the model on the training and testing data, and compare the results.
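A caution on the answer above: in caret, resamples() expects a list of several train objects to compare, and k-fold CV does not produce separate "training data" metrics on its own; the per-fold hold-out metrics for a single model live in lm1_cv$resample. A sketch under those assumptions, with train_pred being a name invented here:

# Averaged hold-out (test-fold) metrics, the same numbers train() prints
lm1_cv$results

# Per-fold hold-out metrics; average them manually
mean(lm1_cv$resample$RMSE)

# Apparent (training-data) performance: predict on the data the model saw
train_pred <- predict(lm1_cv, newdata = data)
RMSE(train_pred, data$y)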
R package caret: How to access the results of both the training and testing data?
I am using the following code:
tc <- trainControl(method = "cv", number = 20)

lm1_cv <- train(y~., data = data, method = "lm",
                preProcess = c("center", "scale"),
                trControl = tc)
lm1_cv

Which has the following output:
Linear Regression

1338 samples
   6 predictor

Pre-processing: centered (8), scaled (8)
Resampling: Cross-Validated (20 fold)
Summary of sample sizes: 1272, 1272, 1270, 1271, 1270, 1272, ...
Resampling results:

  RMSE      Rsquared   MAE
  6048.516  0.7443666  4203.653

I have two questions:
1.) Caret is performing 20-fold cross validation. Is the average of all the testing data results stored in lm1_cv$results?
2.) If so, how do I access the average results (RMSE, etc) of all the training data?
Overall: My goal is to compare the performance of the model on training data vs testing data. But I'm not sure how to access both.
[ "Yes, the average results of all the testing data are stored in the lm1_cv$results object. The train() function in the caret package automatically performs cross-validation and stores the results in the results element of the returned object.\nTo access the average results (RMSE, etc.) of all the training data, you can use the resamples() function from the caret package. This function allows you to extract the training and testing data results from a cross-validation model. Here is an example of how you could do this:\n# Extract the training and testing data results\nresults <- resamples(lm1_cv)\n\n# Calculate the average RMSE for the training data\nmean(results$RMSE[, \"Training\"])\n\n# Calculate the average RMSE for the testing data\nmean(results$RMSE[, \"Testing\"])\n\nIn this example, the resamples() function is used to extract the training and testing data results from the lm1_cv object. The mean() function is then used to calculate the average RMSE for the training and testing data. The results$RMSE[, \"Training\"] and results$RMSE[, \"Testing\"] arguments select the RMSE values for the training and testing data, respectively. You can use this approach to calculate the average performance of the model on the training and testing data, and compare the results.\n" ]
[ 0 ]
[]
[]
[ "machine_learning", "r", "r_caret", "regression" ]
stackoverflow_0074679463_machine_learning_r_r_caret_regression.txt
Q: Getting error when I try to upgrade react-router v5 to V6
I'm getting a TypeScript error when I tried to upgrade react-router-dom v5 to v6. How can I fix this TypeScript error? Below you can find the code. Thanks in advance.
export function withRouter(ui: React.ReactElement) {
  const history = useNavigate();
  const routerValues: any = {
    history: undefined,
    location: undefined
  };

  const result = (
    <MemoryRouter>
      {ui}
      <Route
        path="*"
        element={({ history, location }) => {
          routerValues.history = history;
          routerValues.location = location;
          return null;
        }}
      />
    </MemoryRouter>

Below you can find the entire file's code:
import React from "react";
import { Reducer } from "@reduxjs/toolkit";
import { Provider } from "react-redux";
import { MemoryRouter, Route, useNavigate } from "react-router-dom";
import buildStore from "../redux/store";

export function withRedux(
  ui: React.ReactElement,
  reducer: {
    [key: string]: Reducer;
  },
  initialState: any
) {
  const store = buildStore(initialState, true);
  const dispatchSpy = jest.spyOn(store, "dispatch");
  return { result: <Provider store={store}>{ui}</Provider>, store, dispatchSpy };
}

export function withRouter(ui: React.ReactElement) {
  const history = useNavigate();
  const routerValues: any = {
    history: undefined,
    location: undefined
  };

  const result = (
    <MemoryRouter>
      {ui}
      <Route
        path="*"
        element={({ history, location }) => {
          routerValues.history = history;
          routerValues.location = location;
          return null;
        }}
      />
    </MemoryRouter>
  );

  return { result, routerValues };
}

I am passing history and location props, which worked fine when I was using react-router v5. Here is the previous code:
const result = (
  <MemoryRouter>
    {ui}
    <Route
      path="*"
      render={({ history, location }) => {
        routerValues.history = history;
        routerValues.location = location;
        return null;
      }}
    />
  </MemoryRouter>

After updating to react-router v6 I changed my code, because we know that v6 no longer supports the render prop inside Route, so I replaced it:
const result = (
  <MemoryRouter>
    {ui}
    <Route
      path="*"
      element={({ history, location }) => {
        routerValues.history = history;
        routerValues.location = location;
        return null;
      }}
    />
  </MemoryRouter>
);

But I have no idea how to pass these props inside a Route in v6.
A: Try this:
export function withRouter(ui: React.ReactElement) {
  const history = useNavigate();
  const location = useLocation();

  const routerValues: any = {
    history: history,
    location: location
  };

  const result = (
    <MemoryRouter>
      {ui}
    </MemoryRouter>
  );

  return { result, routerValues };
}

A: Issues

The withRouter Higher Order Component/render function can't use the RRD hooks outside the router it is rendering.
react-router-dom@6 Route components don't take "route props" and the element prop takes a React.ReactNode, a.k.a. JSX. The "route props" should be passed as props to the component being rendered.

Solution
You'll need to create two components. One is a test render function that provides the MemoryRouter as a test wrapper, and the other is a correct withRouter HOC.
Example:
Create a custom render function that renders the component under test into a wrapper component that provides all the various contexts (routers, redux, etc.):
import { render } from '@testing-library/react';
import { MemoryRouter } from 'react-router-dom';

const Wrappers = ({ children }) => (
  <MemoryRouter>
    {children}
  </MemoryRouter>
);

const customRender = (ui: React.ReactElement, options: object) => {
  return render(ui, { wrapper: Wrappers, ...options });
};

export { customRender as render };

See the RTL setup docs for more information on custom render functions.
Create a separate withRouter HOC to decorate older React class components that can't use the RRD hooks directly. Here's an example TypeScript implementation:
import { ComponentType } from 'react';
import {
  Location,
  NavigateFunction,
  useLocation,
  useNavigate,
  useParams
} from 'react-router-dom';

export interface WithRouterProps {
  location: Location;
  navigate: NavigateFunction;
  params: ReturnType<typeof useParams>;
}

export const withRouter = <P extends object>(Component: ComponentType<P>) =>
  (props: Omit<P, keyof WithRouterProps>) => {
    const location = useLocation();
    const params = useParams();
    const navigate = useNavigate();

    return (
      <Component
        {...props}
        {...{ location, params, navigate }}
      />
    );
  };
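A short usage sketch for the custom render above, assuming Jest with React Testing Library and a component named App (both assumptions):

import { screen } from '@testing-library/react';
import { render } from './test-utils'; // the customRender exported above
import App from './App';

test('renders the app inside a MemoryRouter', () => {
  render(<App />); // Wrappers supplies the MemoryRouter automatically
  expect(screen.getByText(/hello/i)).toBeInTheDocument();
});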
Getting error when I try to upgrade react-router v5 to V6
I'm getting a TypeScript error when I tried to upgrade react-router-dom v5 to v6. How can I fix this TypeScript error? Below you can find the code. Thanks in advance.
export function withRouter(ui: React.ReactElement) {
  const history = useNavigate();
  const routerValues: any = {
    history: undefined,
    location: undefined
  };

  const result = (
    <MemoryRouter>
      {ui}
      <Route
        path="*"
        element={({ history, location }) => {
          routerValues.history = history;
          routerValues.location = location;
          return null;
        }}
      />
    </MemoryRouter>

Below you can find the entire file's code:
import React from "react";
import { Reducer } from "@reduxjs/toolkit";
import { Provider } from "react-redux";
import { MemoryRouter, Route, useNavigate } from "react-router-dom";
import buildStore from "../redux/store";

export function withRedux(
  ui: React.ReactElement,
  reducer: {
    [key: string]: Reducer;
  },
  initialState: any
) {
  const store = buildStore(initialState, true);
  const dispatchSpy = jest.spyOn(store, "dispatch");
  return { result: <Provider store={store}>{ui}</Provider>, store, dispatchSpy };
}

export function withRouter(ui: React.ReactElement) {
  const history = useNavigate();
  const routerValues: any = {
    history: undefined,
    location: undefined
  };

  const result = (
    <MemoryRouter>
      {ui}
      <Route
        path="*"
        element={({ history, location }) => {
          routerValues.history = history;
          routerValues.location = location;
          return null;
        }}
      />
    </MemoryRouter>
  );

  return { result, routerValues };
}

I am passing history and location props, which worked fine when I was using react-router v5. Here is the previous code:
const result = (
  <MemoryRouter>
    {ui}
    <Route
      path="*"
      render={({ history, location }) => {
        routerValues.history = history;
        routerValues.location = location;
        return null;
      }}
    />
  </MemoryRouter>

After updating to react-router v6 I changed my code, because we know that v6 no longer supports the render prop inside Route, so I replaced it:
const result = (
  <MemoryRouter>
    {ui}
    <Route
      path="*"
      element={({ history, location }) => {
        routerValues.history = history;
        routerValues.location = location;
        return null;
      }}
    />
  </MemoryRouter>
);

But I have no idea how to pass these props inside a Route in v6.
[ "Try this:\nexport function withRouter(ui: React.ReactElement) {\n const history = useNavigate();\n const location = useLocation();\n \n const routerValues: any = {\n history: history,\n location: location\n };\n\n const result = (\n <MemoryRouter>\n {ui}\n </MemoryRouter>\n );\n\n return { result, routerValues };\n}\n\n", "Issues\n\nThe withRouter Higher Order Component/render function can't use the RRD hooks outside the router it is rendering.\nreact-router-dom@6 Route components don't take \"route props\" and the element prop takes a React.ReactNode, a.k.a. JSX. The \"route props\" should be passed as props to the component being rendered.\n\nSolution\nYou'll need to create two components. One is a test render function that provides the MemoryRouter as a test wrapper, and the other is a correct withRouter HOC.\nExample:\nCreate a custom render function that renders the component under test into a wrapper component that provides all the various contexts (routers, redux, etc)\nimport { render } from '@testing-library/react';\nimport { MemoryRouter } from 'react-router-dom';\n\nconst Wrappers = ({ children }) => (\n <MemoryRouter>\n {children}\n </MemoryRouter>\n);\n\nconst customRender = (ui: React.ReactElement, options: object) => {\n return render(ui, { wrapper: Wrappers, ...options });\n};\n\nexport { customRender as render };\n\nSee the RTL setup docs for more information on custom render functions.\nCreate separate withRouter HOC to only decorate older React Class components that can't use the RRD hooks directly. Here's an example Typescript implementation.\nimport { ComponentType } from 'react';\nimport {\n Location,\n NavigateFunction,\n useLocation,\n useParams\n} from 'react-router-dom';\n\nexport interface WithRouterProps {\n location: Location;\n navigate: NavigateFunction;\n params: ReturnType<typeof useParams>;\n}\n\nexport const withRouter = <P extends object>(Component: ComponentType<P>) => \n (props: Omit<P, keyof WithRouterProps>) => {\n const location = useLocation();\n const params = useParams();\n const navigate = useNavigate();\n\n return (\n <Component\n {...props}\n {...{ location, params, navigate }}\n />\n );\n };\n\n" ]
[ 0, 0 ]
[]
[]
[ "javascript", "react_router_dom", "reactjs", "typescript" ]
stackoverflow_0074673119_javascript_react_router_dom_reactjs_typescript.txt
Q: Got email of 85% Amazon RDS used in Free Tier
I am using an Amazon EC2 instance to host my site using the AWS Free Tier. I received this email:

Dear AWS Customer,
Your AWS account has exceeded 85% of the usage limit for one or more AWS Free Tier-eligible services for the month of September.
AWS Free Tier Usage as of 09/29/2019:
AWS Free Tier: 17.1331 GB-Mo
Usage Limit: 20 GB of database storage, in any combination of RDS General Purpose (SSD) or Magnetic storage

But I just have a 2.1 MB database. What should I do?
A: From AWS Forums, posted by: BrianW@AWS

You should not be getting this message. The free tier is based on allocated storage, not consumed storage. If you allocate a 20 GB database, you will not exceed the free tier no matter how much you insert into the database. We will work on making sure these e-mails are more helpful in the future.

So 20 GB of allocated storage is free for the year; the 2.1 MB you actually consumed in September doesn't matter, because the alert is based on what you allocated, and you have to manage the allocation for each month accordingly.
A: If you used the 20GB for a portion of the month, then it would be charged based on that portion.
So, you can allocate 20GB and use it for the whole month. This will consume 100% of the monthly allocation of the free tier, which is fine. It will continue each month like that.
Please note that the Free Tier for Amazon RDS is only available for the first 12 months of your AWS Account.
A: This happened to me too (albeit a couple of years later :). I created the instance 2 days ago with barely anything in it. I was told that I created my DB instance with an allocation of 200 GB (it doesn't matter what you store; what counts is what you allocate). They divide the allocation by 30, and then each day of that month the storage usage increases according to that figure. See the chat below:
What is confusing for me is why it was created with 200 GB in the first place. I'm pretty sure I accepted the defaults on the creation of the instance, and given that I opted for the free tier, the default should have been 20. Anyway, that's what happened. Also, I deleted the instance, but if I create another, the usage that was already calculated will carry over to the new instance, so no free storage for me after all.
"When it comes to creating RDS instances, you are charged not for what you store but for the storage you provisioned. Although the instance is now deleted, I can see you originally allocated 200 GB. AWS does a calculation where this allocated storage is divided by 30, and then each day you will see the storage usage increase on the billing console. If you provisioned 200 GB and this is divided by 30, by the third day you have reached almost 20 GB of usage. That's why you got the alert."
Got email of 85% Amazon RDS used in Free Tier
I am using an Amazon EC2 instance to host my site using the AWS Free Tier. I received this email:

Dear AWS Customer,
Your AWS account has exceeded 85% of the usage limit for one or more AWS Free Tier-eligible services for the month of September.
AWS Free Tier Usage as of 09/29/2019:
AWS Free Tier: 17.1331 GB-Mo
Usage Limit: 20 GB of database storage, in any combination of RDS General Purpose (SSD) or Magnetic storage

But I just have a 2.1 MB database. What should I do?
[ "From AWS Forums Posted by: BrianW@AWS\n\nYou should not be getting this message. The free tier is based on\n allocated storage, not consumed storage. If you allocate a 20 GB\n database, you will not exceed the free tier no matter how much you\n insert into the database. We will on making sure these e-mails are\n more helpful in the future.\n\nSo 20 GB is allocated storage for one year and you consume more for the month of September which is 2.1MB so based on 20GB for the year, you have to manager for each month accordingly.\n", "If you used the 20GB for a portion of the month, then it would be charged based on that portion.\nSo, you can allocate 20GB and use it for the whole month. This will consume 100% of the monthly allocation of the free tier, which is fine. It will continue each month like that.\nPlease note that the Free Tier for Amazon RDS is only available for the first 12 months of your AWS Account.\n", "This happened to me too (albeit a couple of years later :). I created the instance 2 days ago and barely anything in it. I was told that i created my db instance with an allocation of 200gb (doesn't matter what you store is what you allocate). they divide the allocation by 30 and then each day of that month the storage increases according to what that figure is . see chat below:\nWhat is confusing for me is why it was created with 200gb in the first place. I'm pretty sure I accepted defaults on the creation of the instance , and being that I opted for free tier the default should have been 20. anyway that's what happened. Also i deleted the instance but if i create another then the usage that was calculated will carry on to the new instance so no free storage for me after all.\n\"When it comes to creating RDS instances, you are charged not for what you store but the storage you provisioned. Although the instance is now deleted, I can see you originally allocated 200 GB. AWS does a calculation where this allocated storage is divided by 30 and then each day, you will see the storage usage increases on the billing console. If you provisioned 200 GB and this is divided by 30, by the third day you have reached almost 20 GB of usage. That's why you got the alert. \"\n" ]
[ 1, 0, 0 ]
[]
[]
[ "amazon_ec2", "amazon_rds", "amazon_web_services" ]
stackoverflow_0058173657_amazon_ec2_amazon_rds_amazon_web_services.txt
Q: Amazon web services auto scaling group creation error
I'm having trouble with creating an auto scaling group. The error message is attached (https://i.stack.imgur.com/ouL9U.png). I'm not exactly sure what the error is asking me to resolve. I was able to create an auto scaling group without using a custom VPC, but now, using one, I am getting errors and am not sure how to resolve them, though I see nothing wrong with the creation of my VPC. I'm using my own VPC with three availability zones; I also created a load balancer with a target group. I haven't found any solutions for this error online, so any help would be much appreciated!
Many thanks, Ciaran
Whilst creating the auto scaling group with my custom VPC and selecting the three subnets, the load balancer, and the target group, I got this error. I was expecting the creation to be successful, since I previously created one without a custom VPC.
A: There could be several reasons for an error when creating an Amazon Web Services (AWS) auto scaling group. Some common causes include:
Incorrect configuration of the auto scaling group's settings, such as the minimum and maximum number of instances or the desired capacity.
Inadequate permissions for the user or role creating the auto scaling group, which can be resolved by checking the AWS Identity and Access Management (IAM) policies.
Insufficient resources available in the selected region or availability zone, which can be resolved by adjusting the settings or choosing a different region or availability zone.
Network connectivity issues between the AWS account and the region or availability zone, which can be resolved by checking the VPC and security group settings.
Outdated or incompatible AWS CLI or SDK versions, which can be resolved by updating to the latest versions.
If the error persists, it is recommended to contact AWS support for further assistance.
Amazon web services auto scaling group creation error
I'm having trouble with creating an auto scaling group. The error message is attached (https://i.stack.imgur.com/ouL9U.png). I'm not exactly sure what the error is asking me to resolve. I was able to create an auto scaling group without using a custom VPC, but now, using one, I am getting errors and am not sure how to resolve them, though I see nothing wrong with the creation of my VPC. I'm using my own VPC with three availability zones; I also created a load balancer with a target group. I haven't found any solutions for this error online, so any help would be much appreciated!
Many thanks, Ciaran
Whilst creating the auto scaling group with my custom VPC and selecting the three subnets, the load balancer, and the target group, I got this error. I was expecting the creation to be successful, since I previously created one without a custom VPC.
[ "There could be several reasons for an error when creating an Amazon Web Services (AWS) auto scaling group. Some common causes include:\nIncorrect configuration of the auto scaling group's settings, such as the minimum and maximum number of instances or the desired capacity.\n\nInadequate permissions for the user or role creating the auto scaling group, which can be resolved by checking the AWS Identity and Access Management (IAM) policies.\n\nInsufficient resources available in the selected region or availability zone, which can be resolved by adjusting the settings or choosing a different region or availability zone.\n\nNetwork connectivity issues between the AWS account and the region or availability zone, which can be resolved by checking the VPC and security group settings.\n\nOutdated or incompatible AWS CLI or SDK versions, which can be resolved by updating to the latest versions.\n\nIf the error persists, it is recommended to contact AWS support for further assistance.\n" ]
[ 0 ]
[]
[]
[ "amazon", "amazon_vpc", "amazon_web_services", "autoscaling", "aws_auto_scaling" ]
stackoverflow_0074681469_amazon_amazon_vpc_amazon_web_services_autoscaling_aws_auto_scaling.txt
Q: Error: No value associated with key CodingKeys
I'm trying to fetch JSON data from a currency API. But I'm getting the below error:

CurrencyConverter[32130:2625405] [boringssl] boringssl_metrics_log_metric_block_invoke(153) Failed to log metrics
keyNotFound(CodingKeys(stringValue: "rates", intValue: nil), Swift.DecodingError.Context(codingPath: [], debugDescription: "No value associated with key CodingKeys(stringValue: "rates", intValue: nil) ("rates").", underlyingError: nil))

This is my View Controller:
import UIKit

class ViewController: UIViewController {

    // MARK: - Outlets
    @IBOutlet var priceLabel: UILabel!
    @IBOutlet var textField: UITextField!
    @IBOutlet var pickerView: UIPickerView!

    override func viewDidLoad() {
        super.viewDidLoad()
        fetchJSON()
    }

    //MARK: - Method
    func fetchJSON() {
        // "xxx" is my API key
        guard let url = URL(string:"https://v6.exchangerate-api.com/v6/xxx/latest/TRY") else { return }

        URLSession.shared.dataTask(with: url) { data, response, error in
            // handle any errors if there are any
            if error != nil {
                print(error!)
                return
            }
            //safely unwrap the data
            guard let safeData = data else { return }
            // decode the JSON Data
            do {
                let results = try JSONDecoder().decode(ExchangeRates.self, from: safeData)
                print(results.rates)
            } catch {
                print(error)
            }
        }.resume()
    }
}

This is the struct for ExchangeRates:
import Foundation

struct ExchangeRates: Codable {
    let rates: [String: Double]
}

A: It looks like you are trying to parse the JSON data into an object of type ExchangeRates, which has a property rates that is a dictionary of type [String: Double]. However, the JSON data you are trying to parse does not have a property named "rates" in it.
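A sketch of the usual fix: decode the key the API actually returns. Printing String(data: safeData, encoding: .utf8) in the catch path shows the raw JSON; if the rates sit under a different key, for instance conversion_rates (an assumption to verify against that output), a CodingKeys mapping keeps the Swift property name unchanged:

struct ExchangeRates: Codable {
    let rates: [String: Double]

    // Map the Swift property to the key the API actually sends.
    enum CodingKeys: String, CodingKey {
        case rates = "conversion_rates" // assumed key; check the raw JSON
    }
}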
Error: No value associated with key CodingKeys
I'm trying to fetch JSON data from a currency API. But I'm getting the below error:

CurrencyConverter[32130:2625405] [boringssl] boringssl_metrics_log_metric_block_invoke(153) Failed to log metrics
keyNotFound(CodingKeys(stringValue: "rates", intValue: nil), Swift.DecodingError.Context(codingPath: [], debugDescription: "No value associated with key CodingKeys(stringValue: "rates", intValue: nil) ("rates").", underlyingError: nil))

This is my View Controller:
import UIKit

class ViewController: UIViewController {

    // MARK: - Outlets
    @IBOutlet var priceLabel: UILabel!
    @IBOutlet var textField: UITextField!
    @IBOutlet var pickerView: UIPickerView!

    override func viewDidLoad() {
        super.viewDidLoad()
        fetchJSON()
    }

    //MARK: - Method
    func fetchJSON() {
        // "xxx" is my API key
        guard let url = URL(string:"https://v6.exchangerate-api.com/v6/xxx/latest/TRY") else { return }

        URLSession.shared.dataTask(with: url) { data, response, error in
            // handle any errors if there are any
            if error != nil {
                print(error!)
                return
            }
            //safely unwrap the data
            guard let safeData = data else { return }
            // decode the JSON Data
            do {
                let results = try JSONDecoder().decode(ExchangeRates.self, from: safeData)
                print(results.rates)
            } catch {
                print(error)
            }
        }.resume()
    }
}

This is the struct for ExchangeRates:
import Foundation

struct ExchangeRates: Codable {
    let rates: [String: Double]
}
[ "It looks like you are trying to parse the JSON data into an object of type ExchangeRates, which has a property rates that is a dictionary of type [String: Double]. However, the JSON data you are trying to parse does not have a property named \"rates\" in it.\n" ]
[ 0 ]
[]
[]
[ "api", "json", "swift" ]
stackoverflow_0074678446_api_json_swift.txt
Q: Selenium - python webdriver exits from browser after loading
I try to open a browser using Selenium in Python, and after the browser opens, it exits. I tried several ways to write my code, but every possible way behaves like this. Thank you in advance for your help.
from selenium import webdriver
from selenium.webdriver import Chrome
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

options = webdriver.ChromeOptions()
options.add_experimental_option("detach", True)

s=Service(ChromeDriverManager().install())
driver = webdriver.Chrome(service=s)

driver.get("https://amazon.com")

I expected the browser to open amazon.com and stay open until I close it or the program closes it.
Actual result: when the browser loads the website, it exits by itself.
A: It looks like you are using the webdriver.Chrome class to create your Chrome driver instance. This class has a service parameter that you can use to specify the Chrome service that should be used to start the Chrome browser.
In your code, you are creating a Chrome service using the Service class and passing it to the webdriver.Chrome class as the service parameter. However, you are not starting the Chrome service before creating the driver instance. To fix this, you can call the start() method on the Chrome service before creating the driver instance, like this:
from selenium import webdriver
from selenium.webdriver import Chrome
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

options = webdriver.ChromeOptions()
options.add_experimental_option("detach", True)

# Create the Chrome service
s = Service(ChromeDriverManager().install())

# Start the Chrome service
s.start()

# Create the driver instance using the Chrome service
driver = webdriver.Chrome(service=s)

# Open the website
driver.get("https://amazon.com")

This should start the Chrome service before creating the driver instance, which should prevent the browser from exiting immediately after opening. You can then use the driver.quit() method to close the browser when you are done.
A: Use the driver.close() function after getting the result ;)
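One thing worth checking against the code above: the ChromeOptions with the "detach" flag are created but never handed to the driver, so the detach setting has no effect and the browser closes when the script ends. A sketch of the likely fix:

from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

options = webdriver.ChromeOptions()
# Keep the browser window open after the script finishes.
options.add_experimental_option("detach", True)

service = Service(ChromeDriverManager().install())
# Passing options= is the part the original code was missing.
driver = webdriver.Chrome(service=service, options=options)
driver.get("https://amazon.com")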
Selenium - python webdriver exits from browser after loading
I try to open a browser using Selenium in Python, and after the browser opens, it exits. I tried several ways to write my code, but every possible way behaves like this. Thank you in advance for your help.
from selenium import webdriver
from selenium.webdriver import Chrome
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

options = webdriver.ChromeOptions()
options.add_experimental_option("detach", True)

s=Service(ChromeDriverManager().install())
driver = webdriver.Chrome(service=s)

driver.get("https://amazon.com")

I expected the browser to open amazon.com and stay open until I close it or the program closes it.
Actual result: when the browser loads the website, it exits by itself.
[ "It looks like you are using the webdriver.Chrome class to create your Chrome driver instance. This class has a service parameter that you can use to specify the Chrome service that should be used to start the Chrome browser.\nIn your code, you are creating a Chrome service using the Service class and passing it to the webdriver.Chrome class as the service parameter. However, you are not starting the Chrome service before creating the driver instance. To fix this, you can call the start() method on the Chrome service before creating the driver instance, like this:\nfrom selenium import webdriver\nfrom selenium.webdriver import Chrome\nfrom selenium.webdriver.chrome.service import Service\nfrom webdriver_manager.chrome import ChromeDriverManager\n\noptions = webdriver.ChromeOptions()\noptions.add_experimental_option(\"detach\", True)\n\n# Create the Chrome service\ns = Service(ChromeDriverManager().install())\n\n# Start the Chrome service\ns.start()\n\n# Create the driver instance using the Chrome service\ndriver = webdriver.Chrome(service=s)\n\n# Open the website\ndriver.get(\"https://amazon.com\")\n\nThis should start the Chrome service before creating the driver instance, which should prevent the browser from exiting immediately after opening. You can then use the driver.quit() method to close the browser when you are done.\n", "Use driver.close() function after getting result ;)\n" ]
[ 0, 0 ]
[]
[]
[ "automation", "crash", "python", "selenium", "webdriver" ]
stackoverflow_0074681137_automation_crash_python_selenium_webdriver.txt
Q: How to return HTML from Next.js middleware?
I'm trying to return HTTP Status Code 410 (Gone) alongside a simple custom HTML page:
<h1>Error 410</h1>
<h2>Permanently deleted or Gone</h2>
<p>This page is not found and is gone from this server forever</p>

Is it possible? Because I can't find a method on the NextResponse object. How can I return HTML from middleware?
A: This is not supported anymore.

Middleware can no longer produce a response body as of v12.2+.

https://nextjs.org/docs/messages/returning-response-body-in-middleware
A: This is the type; there is no method to send HTML:
type NextApiResponse<T = any> = ServerResponse<IncomingMessage> & {
    send: Send<T>;
    json: Send<T>;
    status: (statusCode: number) => NextApiResponse<T>;
    redirect(url: string): NextApiResponse<T>;
    redirect(status: number, url: string): NextApiResponse<T>;
    setPreviewData: (data: object | string, options?: {
        maxAge?: number;
        path?: string;
    }) => NextApiResponse<T>;
    clearPreviewData: (options?: {
        path?: string;
    }) => NextApiResponse<T>;
    unstable_revalidate: () => void;
    revalidate: (urlPath: string, opts?: {
        unstable_onlyGenerated?: boolean;
    }) => Promise<void>;
}

Express has sendFile:
app.get("/", (req, res) => {
  res.sendFile(__dirname + "/index.html");
});

NextApiResponse has send and json:
res.json(body) - Sends a JSON response. body must be a serializable object
res.send(body) - Sends the HTTP response. body can be a string, an object or a Buffer

You can redirect the user to a URL where you display your HTML.
A: While it's true that returning a response body from middleware has been disabled from version v12.2, Next.js v13 reintroduced the ability to produce a response as an experimental feature through the allowMiddlewareResponseBody flag in next.config.js.
// next.config.js
module.exports = {
  experimental: {
    allowMiddlewareResponseBody: true
  }
}

After enabling this experimental flag, you can return a response from your middleware as follows.
import { NextResponse } from 'next/server'

export function middleware(request) {
  return new NextResponse(
    `
    <h1>Error 410</h1>
    <h2>Permanently deleted or Gone</h2>
    <p>This page is not found and is gone from this server forever</p>
    `,
    { status: 410, headers: { 'content-type': 'text/html' } }
  )
}
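One detail worth adding to the v13 experimental approach: middleware runs for every route by default, so a matcher keeps the 410 response scoped to the retired paths. A sketch, with /old-page as a placeholder path:

// middleware.js
export const config = {
  matcher: ['/old-page'], // placeholder; list the permanently removed paths
}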
How to return HTML from Next.js middleware?
I'm trying to return HTTP Status Code 410 (Gone) alongside a simple custom HTML page:
<h1>Error 410</h1>
<h2>Permanently deleted or Gone</h2>
<p>This page is not found and is gone from this server forever</p>

Is it possible? Because I can't find a method on the NextResponse object. How can I return HTML from middleware?
[ "This is not supported anymore.\n\nMiddleware can no longer produce a response body as of v12.2+.\n\nhttps://nextjs.org/docs/messages/returning-response-body-in-middleware\n", "this is the type. there is no method to send html\ntype NextApiResponse<T = any> = ServerResponse<IncomingMessage> & {\n send: Send<T>;\n json: Send<T>;\n status: (statusCode: number) => NextApiResponse<T>;\n redirect(url: string): NextApiResponse<T>;\n redirect(status: number, url: string): NextApiResponse<T>;\n setPreviewData: (data: object | string, options?: {\n maxAge?: number;\n path?: string;\n }) => NextApiResponse<T>;\n clearPreviewData: (options?: {\n path?: string;\n }) => NextApiResponse<T>;\n unstable_revalidate: () => void;\n revalidate: (urlPath: string, opts?: {\n unstable_onlyGenerated?: boolean;\n }) => Promise<void>;\n}\n\nexpress has sendFile\napp.get(\"/\", (req, res) => {\n res.sendFile(__dirname + \"/index.html\");\n});\n\nNextApiResponse, sendandjson`\n\nres.json(body) - Sends a JSON response. body must be a serializable object\nres.send(body) - Sends the HTTP response. body can be a string, an object or a Buffer\n\n\nyou can redirect the user to a URL where you display your html\n", "While it's true that returning a response body from middleware has been disabled from version v12.2, Next.js v13 reintroduced the ability to produce a response as an experimental feature through the allowMiddlewareResponseBody flag in next.config.js.\n// next.config.js\nmodule.exports = {\n experimental: {\n allowMiddlewareResponseBody: true\n }\n}\n\nAfter enabling this experimental flag, you can return a response from your middleware as follows.\nimport { NextResponse } from 'next/server'\n\nexport function middleware(request) {\n return new NextResponse(\n `\n <h1>Error 410</h1>\n <h2>Permanently deleted or Gone</h2>\n <p>This page is not found and is gone from this server forever</p>\n `,\n { status: 410, headers: { 'content-type': 'text/html' } }\n )\n}\n\n" ]
[ 2, 0, 0 ]
[]
[]
[ "javascript", "next.js" ]
stackoverflow_0074570148_javascript_next.js.txt
Q: How to detect and change shape in Java OpenGL
I drew some shapes using OpenGL; here is the code.
The drawing methods:
private void drawRectangle(GL2 gl, double x1, double y1, double x2, double y2, double x3, double y3, double x4, double y4) {
    gl.glBegin(GL2.GL_LINE_LOOP);
    gl.glVertex2d(x1, y1);
    gl.glVertex2d(x2, y2);
    gl.glVertex2d(x3, y3);
    gl.glVertex2d(x4, y4);
    gl.glEnd();
}

private void drawCircle(GL2 gl, double centerX, double centerY, double radius) {
    gl.glBegin(GL2.GL_LINE_STRIP);
    for (double i = 0; i < 360; i++) {
        double radian = i * Math.PI / 180;
        double x = radius * Math.cos(radian) + centerX;
        double y = radius * Math.sin(radian) + centerY;
        gl.glVertex2d(x, y);
    }
    gl.glEnd();
}

I drew some circles and rectangles; here is the display method:
public void display(GLAutoDrawable drawable) {
    final GL2 gl = drawable.getGL().getGL2();
    gl.glColor3f(1f, 1f, 1f);
    drawCircle(gl, 0.7, 0.2, 0.1);
    drawCircle(gl, 0.3, 0.1, 0.3);
    drawRectangle(gl, -0.5, -0.5, 0, -0.5, 0, 0, -0.5, 0);
}

The displayed result:
I want to fill the shape with a color when I click on it (a static color for all clicks, red for example).
class Color {
    public double c1, c2, c3;

    public Color(double c1, double c2, double c3) {
        super();
        this.c1 = c1;
        this.c2 = c2;
        this.c3 = c3;
    }
}

So I want to know two things:
1- How can I fill the shape with a color?
2- How can I fill the shape when I click on it?
I'm using javax.swing.JFrame to show the canvas, but a solution using anything else, such as JavaFX or AWT, is fine too.
The main method:
public static void main(String[] args) {
    final GLProfile profile = GLProfile.get(GLProfile.GL2);
    GLCapabilities capabilities = new GLCapabilities(profile);
    final GLCanvas glcanvas = new GLCanvas(capabilities);
    Driver driver = new Driver();
    glcanvas.addGLEventListener(driver);
    glcanvas.setSize(400, 400);
    final JFrame frame = new JFrame();
    frame.getContentPane().add(glcanvas);
    frame.setSize(frame.getContentPane().getPreferredSize());
    frame.setVisible(true);
}

A: If you want to fill the shape with a color, you must use one of the Triangle primitives instead of GL2.GL_LINE_LOOP, e.g. GL2.GL_TRIANGLE_FAN:
gl.glBegin(GL2.GL_TRIANGLE_FAN);
gl.glVertex2d(x1, y1);
gl.glVertex2d(x2, y2);
gl.glVertex2d(x3, y3);
gl.glVertex2d(x4, y4);
gl.glEnd();

gl.glBegin(GL2.GL_TRIANGLE_FAN);
for (double i = 0; i < 360; i++) {
    double radian = i * Math.PI / 180;
    double x = radius * Math.cos(radian) + centerX;
    double y = radius * Math.sin(radian) + centerY;
    gl.glVertex2d(x, y);
}
gl.glEnd();
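For the second question, detecting the click, a common approach is a plain AWT MouseListener on the GLCanvas: convert the pixel coordinates to the [-1, 1] NDC space the vertices use, then hit-test each shape. A sketch in which the shape coordinates are hard-coded from the display method and the repaint bookkeeping is left out:

glcanvas.addMouseListener(new java.awt.event.MouseAdapter() {
    @Override
    public void mouseClicked(java.awt.event.MouseEvent e) {
        // Pixel coordinates -> normalized device coordinates (y flipped).
        double x = 2.0 * e.getX() / glcanvas.getWidth() - 1.0;
        double y = 1.0 - 2.0 * e.getY() / glcanvas.getHeight();
        // Circle hit test: inside if the distance to the center <= radius.
        double dx = x - 0.7, dy = y - 0.2;
        boolean hitCircle = dx * dx + dy * dy <= 0.1 * 0.1;
        // Rectangle hit test: inside the axis-aligned bounds.
        boolean hitRect = x >= -0.5 && x <= 0 && y >= -0.5 && y <= 0;
        if (hitCircle || hitRect) {
            // Remember which shape was hit, then draw it filled in red on
            // the next display() using GL_TRIANGLE_FAN as in the answer.
            glcanvas.repaint();
        }
    }
});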
How to detect and change shape in Java OpenGL
I drew some shapes using OpenGL; here is the code.
The drawing methods:
private void drawRectangle(GL2 gl, double x1, double y1, double x2, double y2, double x3, double y3, double x4, double y4) {
    gl.glBegin(GL2.GL_LINE_LOOP);
    gl.glVertex2d(x1, y1);
    gl.glVertex2d(x2, y2);
    gl.glVertex2d(x3, y3);
    gl.glVertex2d(x4, y4);
    gl.glEnd();
}

private void drawCircle(GL2 gl, double centerX, double centerY, double radius) {
    gl.glBegin(GL2.GL_LINE_STRIP);
    for (double i = 0; i < 360; i++) {
        double radian = i * Math.PI / 180;
        double x = radius * Math.cos(radian) + centerX;
        double y = radius * Math.sin(radian) + centerY;
        gl.glVertex2d(x, y);
    }
    gl.glEnd();
}

I drew some circles and rectangles; here is the display method:
public void display(GLAutoDrawable drawable) {
    final GL2 gl = drawable.getGL().getGL2();
    gl.glColor3f(1f, 1f, 1f);
    drawCircle(gl, 0.7, 0.2, 0.1);
    drawCircle(gl, 0.3, 0.1, 0.3);
    drawRectangle(gl, -0.5, -0.5, 0, -0.5, 0, 0, -0.5, 0);
}

The displayed result:
I want to fill the shape with a color when I click on it (a static color for all clicks, red for example).
class Color {
    public double c1, c2, c3;

    public Color(double c1, double c2, double c3) {
        super();
        this.c1 = c1;
        this.c2 = c2;
        this.c3 = c3;
    }
}

So I want to know two things:
1- How can I fill the shape with a color?
2- How can I fill the shape when I click on it?
I'm using javax.swing.JFrame to show the canvas, but a solution using anything else, such as JavaFX or AWT, is fine too.
The main method:
public static void main(String[] args) {
    final GLProfile profile = GLProfile.get(GLProfile.GL2);
    GLCapabilities capabilities = new GLCapabilities(profile);
    final GLCanvas glcanvas = new GLCanvas(capabilities);
    Driver driver = new Driver();
    glcanvas.addGLEventListener(driver);
    glcanvas.setSize(400, 400);
    final JFrame frame = new JFrame();
    frame.getContentPane().add(glcanvas);
    frame.setSize(frame.getContentPane().getPreferredSize());
    frame.setVisible(true);
}
[ "If you want to fill the shape with a color, you must use one of the Triangle primitives instead of GL2.GL_LINE_LOOP. e.g.: GL2.GL_TRIANGLE_FAN:\ngl.glBegin(GL2.GL_TRIANGLE_FAN);\ngl.glVertex2d(x1, y1);\ngl.glVertex2d(x2, y2);\ngl.glVertex2d(x3, y3);\ngl.glVertex2d(x4, y4);\ngl.glEnd();\n\ngl.glBegin(GL2.GL_TRIANGLE_FAN);\nfor (double i = 0; i < 360; i++) {\n double radian = i * Math.PI / 180;\n double x = radius * Math.cos(radian) + centerX;\n double y = radius * Math.sin(radian) + centerY;\n gl.glVertex2d(x, y);\n}\ngl.glEnd();\n\n" ]
[ 0 ]
[]
[]
[ "java", "javafx", "jogl", "opengl", "swing" ]
stackoverflow_0074681441_java_javafx_jogl_opengl_swing.txt
Q: Protect AWS API Gateway To Only My Extension Calling It
Is it possible to set up something like the API Gateway CORS Access-Control-Allow-Origin to only allow a Firefox extension that I am writing to call it? Setting Access-Control-Allow-Origin to '*' is what I did for testing, but that does not seem like a good policy for when it is released.
I wondered if there was any way to make it so the AWS API Gateway only gives a good response when the request is made by my extension, and not another. Or is it just impossible to restrict the API to only my extension?
I am using XMLHttpRequest to make the call to the API Gateway.
A: Just a note; CORS is for browsers to restrict cross-origin HTTP requests. CORS won't stop someone invoking the API from outside a browser e.g. using cURL, Postman, or some other non-browser based app.
A: Yes, it is possible to restrict access to your API Gateway to only certain clients, such as your Firefox extension. One way to do this is to use a custom domain name for your API Gateway and then set up a whitelist of allowed origins in the Access-Control-Allow-Origin header in your API Gateway. This will only allow requests from the specified origins to be processed by your API Gateway, and all other requests will be rejected.
To set up a custom domain name for your API Gateway, you can follow the instructions in the AWS documentation. Once you have a custom domain name set up, you can configure your API Gateway to use it by setting the domainName property in your API Gateway configuration.
To set up a whitelist of allowed origins in your API Gateway, you can use the Access-Control-Allow-Origin header in your API Gateway configuration. This header specifies a list of origins that are allowed to access your API Gateway. To allow requests from your Firefox extension, you would need to add the origin of your extension to this list.
For example, if your Firefox extension is hosted at https://my-firefox-extension.com, you would need to add this origin to the Access-Control-Allow-Origin header in your API Gateway configuration like this:
Access-Control-Allow-Origin: https://my-firefox-extension.com

This will allow requests from your Firefox extension to be processed by your API Gateway, while rejecting all other requests.
Keep in mind that you will need to update your API Gateway configuration and redeploy your API in order for these changes to take effect. You can learn more about configuring the Access-Control-Allow-Origin header in the AWS documentation.
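As the first answer hints, CORS is advisory only, so a stronger option (still imperfect, since a shipped extension can be unpacked and inspected) is an API Gateway API key: mark the method as "API Key Required", attach a usage plan, and send the key from the extension. A client-side sketch, with the URL and key value as placeholders:

// Inside the extension's background or content script.
const resp = await fetch("https://example.execute-api.us-east-1.amazonaws.com/prod/route", {
  headers: { "x-api-key": "YOUR-API-KEY" }, // placeholder value
});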
Protect AWS API Gateway To Only My Extension Calling It
Is it possible to set up something like the API Gateway CORS Access-Control-Allow-Origin to only allow a Firefox extension that I am writing to call it? Setting Access-Control-Allow-Origin to '*' is what I did for testing, but that does not seem like a good policy for when it is released.
I wondered if there was any way to make it so the AWS API Gateway only gives a good response when the request is made by my extension, and not another. Or is it just impossible to restrict the API to only my extension?
I am using XMLHttpRequest to make the call to the API Gateway.
[ "Just a note; CORS is for browsers to restrict cross-origin HTTP requests. CORS won't stop someone invoking the API from outside a browser e.g. using cURL, Postman, or some other non-browser based app.\n", "Yes, it is possible to restrict access to your API Gateway to only certain clients, such as your Firefox extension. One way to do this is to use a custom domain name for your API Gateway and then set up a whitelist of allowed origins in the Access-Control-Allow-Origin header in your API Gateway. This will only allow requests from the specified origins to be processed by your API Gateway, and all other requests will be rejected.\nTo set up a custom domain name for your API Gateway, you can follow the instructions in the AWS documentation. Once you have a custom domain name set up, you can configure your API Gateway to use it by setting the domainName property in your API Gateway configuration.\nTo set up a whitelist of allowed origins in your API Gateway, you can use the Access-Control-Allow-Origin header in your API Gateway configuration. This header specifies a list of origins that are allowed to access your API Gateway. To allow requests from your Firefox extension, you would need to add the origin of your extension to this list.\nFor example, if your Firefox extension is hosted at https://my-firefox-extension.com, you would need to add this origin to the Access-Control-Allow-Origin header in your API Gateway configuration like this:\nAccess-Control-Allow-Origin: https://my-firefox-extension.com\n\nThis will allow requests from your Firefox extension to be processed by your API Gateway, while rejecting all other requests.\nKeep in mind that you will need to update your API Gateway configuration and redeploy your API in order for these changes to take effect. You can learn more about configuring the Access-Control-Allow-Origin header in the AWS documentation.\n" ]
[ 1, 0 ]
[]
[]
[ "amazon_web_services", "api", "cors", "manifest" ]
stackoverflow_0074680138_amazon_web_services_api_cors_manifest.txt
Q: Search engine for over 5000 entities from .txt files I have over 5000 .txt files stored locally on my app; each file is at least 15 lines of words. So I am trying to search with multiple words all over the 5000 list. Finally I was able to search in all of them, but with one problem: the app freezes until the whole process is finished.
Future<List<FatwaModel>> searchFatawy(String searchText) async {
    if (searchText.isEmpty) return [];
    emit(SearchFatawyLoadingState());
    searchFatawyTxt.clear();
    RegExp regExp = RegExp(
      RemoveExtinctionsAtWord()
          .normalise(searchText)
          .trim()
          .split(' ')
          .where((element) => element.length > 1)
          .join('|'),
      caseSensitive: false,
    );
    Future.forEach(fullFatawy, (FatwaModel fatwa) {
      bool check = regExp.hasMatch(RemoveExtinctionsAtWord().normalise(
        RegExp(r'(?<=:)(.*)(?=)').firstMatch(fatwa.fatwaBody)?.group(0) ?? '',
      ));
      if (check) searchFatawyTxt.add(fatwa);
    }).then((value) {
      emit(SearchFatawySuccessState());
    });
    // searchFatawyTxt = fullFatawy
    //     .where((fatwa) => regExp.hasMatch(RemoveExtinctionsAtWord().normalise(
    //           RegExp(r'(?<=:)(.*)(?=)').firstMatch(fatwa.fatwaBody)?.group(0) ?? '',
    //         )))
    //     .toList();

    //Sorting the list depending on how many keywords found in a single txt file
    searchFatawyTxt.sort(
      (FatwaModel a, FatwaModel b) {
        int aMatchCount = regExp
            .allMatches(
              RemoveExtinctionsAtWord().normalise(
                RegExp(r'(?<=:)(.*)(?=)').firstMatch(a.fatwaBody)?.group(0) ?? '',
              ),
            )
            .length;
        int bMatchCount = regExp
            .allMatches(
              RemoveExtinctionsAtWord().normalise(
                RegExp(r'(?<=:)(.*)(?=)').firstMatch(b.fatwaBody)?.group(0) ?? '',
              ),
            )
            .length;
        return bMatchCount.compareTo(aMatchCount);
      },
    );
    return searchFatawyTxt;
}

All I am trying to do is show a progress bar while the search is being processed, without freezing the app.
A: Instead of calling that method directly in your app (on the main thread), you will need to call it in another isolate that doesn't share memory with the main thread.
And the quickest and easiest way to do it is by calling the compute() method, which spawns an isolate, runs the provided callback on that isolate, passes it the provided message, and (eventually) returns the value returned by the callback.
Future<List<FatwaModel>> isolatedMethod = compute(searchFatawy, searchText);

Note that I am passing your method declaration, not calling it inside the compute().
You can now use that isolatedMethod as the Future in your app.
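One detail worth spelling out: compute() only accepts a top-level or static function that takes a single message argument, so an instance method like searchFatawy (which emits states and reads fields) cannot be handed to it directly. A minimal sketch of the restructuring, assuming FatwaModel is the model class from the question, its instances can be copied to the isolate, and SearchArgs is a hypothetical argument holder:

import 'package:flutter/foundation.dart';

// Hypothetical holder bundling everything the isolate needs into one message.
class SearchArgs {
  final String searchText;
  final List<FatwaModel> fullFatawy;
  SearchArgs(this.searchText, this.fullFatawy);
}

// Top-level function: runs the heavy regex matching and sorting off the UI thread.
List<FatwaModel> searchFatawySync(SearchArgs args) {
  final results = <FatwaModel>[];
  // ... same regex matching and sorting logic as in the question ...
  return results;
}

Future<List<FatwaModel>> runSearch(String text, List<FatwaModel> all) {
  return compute(searchFatawySync, SearchArgs(text, all));
}

The UI thread stays responsive while the isolate works, so a progress indicator shown before awaiting runSearch() keeps animating, and the success state can be emitted once the Future completes.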
Search engine for over 5000 entities from .txt files
I have over 5000 .txt files stored locally on my app each file is at least 15 lines of words So am trying to search with multiple words all over the 5000 list Finally i was able to search in all of them but with only one problem The app freezes until the whole process finished Future<List<FatwaModel>> searchFatawy(String searchText) async { if (searchText.isEmpty) return []; emit(SearchFatawyLoadingState()); searchFatawyTxt.clear(); RegExp regExp = RegExp( RemoveExtinctionsAtWord() .normalise(searchText) .trim() .split(' ') .where((element) => element.length > 1) .join('|'), caseSensitive: false, ); Future.forEach(fullFatawy, (FatwaModel fatwa) { bool check = regExp.hasMatch(RemoveExtinctionsAtWord().normalise( RegExp(r'(?<=:)(.*)(?=)').firstMatch(fatwa.fatwaBody)?.group(0) ?? '', )); if (check) searchFatawyTxt.add(fatwa); }).then((value) { emit(SearchFatawySuccessState()); }); // searchFatawyTxt = fullFatawy // .where((fatwa) => regExp.hasMatch(RemoveExtinctionsAtWord().normalise( // RegExp(r'(?<=:)(.*)(?=)').firstMatch(fatwa.fatwaBody)?.group(0) ?? // '', // ))) // .toList(); //Sorting the list depending on how many keywords found in a single txt file searchFatawyTxt.sort( (FatwaModel a, FatwaModel b) { int aMatchCount = regExp .allMatches( RemoveExtinctionsAtWord().normalise( RegExp(r'(?<=:)(.*)(?=)').firstMatch(a.fatwaBody)?.group(0) ?? '', ), ) .length; int bMatchCount = regExp .allMatches( RemoveExtinctionsAtWord().normalise( RegExp(r'(?<=:)(.*)(?=)').firstMatch(b.fatwaBody)?.group(0) ?? '', ), ) .length; return bMatchCount.compareTo(aMatchCount); }, ); return searchFatawyTxt; } All am trying to do is showing a progress bar while the search is being process without freezing the app.
[ "Instead of calling that method directly on your app ( on the main thread ), you will need to call it in another isolate that doesn't share a memory with the main thread.\nad the quickest and easiest way to do it is by calling a compute() method which spawns an isolate and runs the provided callback on that isolate, passes it the provided message, and (eventually) returns the value returned by callback.\nFuture<List<FatwaModel>> isolatedMethod = compute(searchFatawy, searchText);\n\nNote that I am passing your method declaration, not calling it inside the compute().\nand now you can use that isolatedMethod as the Future which you will use in your app.\n" ]
[ 0 ]
[]
[]
[ "dart", "flutter", "regex", "search" ]
stackoverflow_0074681363_dart_flutter_regex_search.txt
Q: Array out of bound exception Java I am trying to run a program for school where I need to fix the error but I can't seem to find the error for some reason. I receive an ArrayIndexOutOfBoundsException: Index 3 out of bounds for length 3 error for line 17.
Here is line 17:
CellPhone(Long.parseLong(temp[0]),temp[1],Integer.parseInt(temp[3]),Double.parseDouble(temp[2])
`
public static void main(String[] args) throws FileNotFoundException {
    // Create 2 objects
    CellList l1=new CellList();
    CellList l2=new CellList();
    Scanner sc=new Scanner(new File("test"));
    //Read file
    while(sc.hasNextLine())
    {
        String[] temp=sc.nextLine().split(" ");
        l1.addToStart(new CellPhone(Long.parseLong(temp[0]),temp[1],Integer.parseInt(temp[3]),Double.parseDouble(temp[2])));
    }
    sc.close();
    //Show list
    l1.showContents();
    //Prompt for serial number
    sc=new Scanner(System.in);
    System.out.println("\nEnter serial number to serach: ");
    long s=sc.nextLong();
    System.out.println(l1.search(s).getPhone());
    System.out.println();
    l2=l1;
    if(l1==l2)
    {
        System.out.println("Both are equal lists");
    }
    else
    {
        System.out.println("Both are not equal lists");
    }
}
}
`
Anyone have any hint on where the error is?
A: It looks like you're trying to access the element at index 3 in the temp array on line 17, but that array only has 3 elements in it (indices 0, 1, and 2). This is why you're getting the ArrayIndexOutOfBoundsException: Index 3 out of bounds for length 3 error.
That means the line being split did not contain the 4 space-separated fields the code expects, so the first thing to check is the file itself. If every line is supposed to carry 4 columns in the order serial number, name, an integer, and a double, the call should also parse them in that order:
CellPhone(Long.parseLong(temp[0]),temp[1],Integer.parseInt(temp[2]),Double.parseDouble(temp[3]))

Note that this still reads index 3, so it only works once every line really contains exactly 4 elements separated by single spaces. If some lines can be shorter, guard on temp.length before parsing (see the sketch below).
I hope this helps! Let me know if you have any other questions.
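If the file cannot be trusted to always contain four space-separated fields, a length guard keeps a short line from crashing the whole loop. A sketch reusing the names from the question; whether temp[2] or temp[3] holds the integer depends on the file's actual column order, which the question does not show:

// Sketch: skip lines that don't carry the 4 expected fields instead of
// letting a short line throw ArrayIndexOutOfBoundsException.
while (sc.hasNextLine()) {
    String[] temp = sc.nextLine().trim().split("\\s+");
    if (temp.length < 4) {
        System.err.println("Skipping malformed line: " + String.join(" ", temp));
        continue;
    }
    l1.addToStart(new CellPhone(Long.parseLong(temp[0]), temp[1],
            Integer.parseInt(temp[2]), Double.parseDouble(temp[3])));
}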
Array out of bound exception Java
I am trying to run a program for school where I need to fix the error but I can't seem to find the error for some reason. I receive a ArrayIndexOutOfBoundException: Index 3 out of bounds for length 3 error for line 17. Here is the line 17: CellPhone(Long.parseLong(temp[0]),temp[1],Integer.parseInt(temp[3]),Double.parseDouble(temp[2]) ` public static void main(String[] args) throws FileNotFoundException { // Create 2 objects CellList l1=new CellList(); CellList l2=new CellList(); Scanner sc=new Scanner(new File("test")); //Read file while(sc.hasNextLine()) { String[] temp=sc.nextLine().split(" "); l1.addToStart(new CellPhone(Long.parseLong(temp[0]),temp[1],Integer.parseInt(temp[3]),Double.parseDouble(temp[2]))); } sc.close(); //Show list l1.showContents(); //Prompt for serial number sc=new Scanner(System.in); System.out.println("\nEnter serial number to serach: "); long s=sc.nextLong(); System.out.println(l1.search(s).getPhone()); System.out.println(); l2=l1; if(l1==l2) { System.out.println("Both are equal lists"); } else { System.out.println("Both are not equal lists"); } } } ` Anyone have any hint on where the error is?
[ "It looks like you're trying to access the element at index 3 in the temp array on line 17, but that array only has 3 elements in it (indices 0, 1, and 2). This is why you're getting the ArrayIndexOutOfBoundException: Index 3 out of bounds for length 3 error.\nOne way to fix this is to change the line of code so that it only tries to access elements that exist in the array. For example, you could change line 17 to this:\nCellPhone(Long.parseLong(temp[0]),temp[1],Integer.parseInt(temp[2]),Double.parseDouble(temp[3]))\n\nThis will fix the error because it will only try to access elements that exist in the temp array.\nHowever, it's important to note that this solution assumes that the data in the file is properly formatted and that each line of the file contains exactly 4 elements, separated by spaces. If this is not the case, you may need to change your code in other ways to properly handle the data in the file.\nI hope this helps! Let me know if you have any other questions.\n" ]
[ 0 ]
[]
[]
[ "arrays", "java", "object" ]
stackoverflow_0074681464_arrays_java_object.txt
Q: How to choose the values for input and output dim in layers? based on what? I'm trying to build a graph encoder in pytorch_geometric that takes the graph features as input and produces embeddings in a low dimensionality in unsupervised learning. So would this Encoder model be a correct model to do this job? I don't know how to set the input dimensionality correctly to make it work for all types of graph data; at the end it gives an error I don't understand where it comes from, since I want to do the projection in order to project the data into a lower dimension. Can someone help me with this?
class Encoder(torch.nn.Module):
    def __init__(self, node_features_dim, out_features):
        super(GNN, self).__init__()

        # Base model
        self.conv1 = GCNConv(node_features_dim, 2 * out_features)
        self.conv2 = GCNConv(2 * out_features, 2 * out_features)
        self.conv3 = GCNConv(2 * out_features, out_features)
        # projection model
        self.projection = Linear(node_features_dim, out_features, bias=False)

    def forward(self, x, edge_index):
        emb = self.conv1(x, edge_index)
        emb.relu()
        emb = self.conv2(emb, edge_index)
        emb.relu()
        emb = self.conv3(emb, edge_index)
        emb.relu()

        emb = self.projection(emb)

        return emb

data = next(iter(dataloader))
x, edge_index = data.x, data.edge_index

num_features = data.x.shape[-1]
out_features = 4   # based on what I should set this?
hidden_dim = num_features // 4

Traceback (most recent call last):
  File "C:\Users\marl\Desktop\files\model.py", line 204, in <module>
    emb = model(x, edge_index)
  File "C:\Users\marl\miniconda3\envs\tensor\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\marl\Desktop\files\model.py", line 101, in forward
    embeddings = self.projection(embeddings)
  File "C:\Users\marl\miniconda3\envs\tensor\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\marl\miniconda3\envs\tensor\lib\site-packages\torch_geometric\nn\dense\linear.py", line 136, in forward
    return F.linear(x, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (441x4 and 44x4)

Process finished with exit code 1
A: The error says a matrix of shape 441x4 is being multiplied with one of shape 44x4 inside the projection layer, and in a matrix product the inner dimensions (the columns of the first operand and the rows of the second) must match.
Here is where the mismatch comes from: after conv3, emb has out_features (= 4) columns, but self.projection was declared as Linear(node_features_dim, out_features), so its weight expects node_features_dim (= 44) inputs. The fix is to give the projection an input size equal to what it actually receives. Two smaller bugs are fixed along the way: super() should name the class being defined (Encoder, not GNN), and emb.relu() returns a new tensor, so its result has to be assigned back.
class Encoder(torch.nn.Module):
    def __init__(self, node_features_dim, out_features):
        super().__init__()

        # Base model
        self.conv1 = GCNConv(node_features_dim, 2 * out_features)
        self.conv2 = GCNConv(2 * out_features, 2 * out_features)
        self.conv3 = GCNConv(2 * out_features, out_features)
        # projection model: its input size must match conv3's output size
        self.projection = Linear(out_features, out_features, bias=False)

    def forward(self, x, edge_index):
        emb = self.conv1(x, edge_index)
        emb = emb.relu()
        emb = self.conv2(emb, edge_index)
        emb = emb.relu()
        emb = self.conv3(emb, edge_index)
        emb = emb.relu()

        emb = self.projection(emb)

        return emb
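On the "based on what?" part of the question: the input dimension is dictated by the data (data.x.shape[-1]), while out_features is a free design choice, the size of the embedding space, usually picked by validating downstream task performance for a few candidate values. A quick smoke test of the fixed encoder, reusing the names from the question:

# Hypothetical smoke test: num_features node features in, 4-dimensional embeddings out.
model = Encoder(node_features_dim=num_features, out_features=4)
emb = model(x, edge_index)
print(emb.shape)  # expected: torch.Size([num_nodes, 4])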
How to choose the values for input and output dim in layers? based on what?
I'm trying to build a graph encoder in pytorch_geometric, that takes the graph features as input and produces embeddings in a low dimensionality in unsupervised learning. so would be this Encoder model be a correct model to do this job? I dont know how to set the input dimensionalty correctly to make it work for all types of graph data, at the end it gives an error I don't understand where it comes from since I want to do the projection in order to project the data into a lower dimension can someone help me on this? class Encoder(torch.nn.Module): def __init__(self, node_features_dim, out_features): super(GNN, self).__init__() # Base model self.conv1 = GCNConv(node_features_dim, 2 * out_features) self.conv2 = GCNConv(2 * out_features, 2 * out_features) self.conv3 = GCNConv(2 * out_features, out_features) # projection model self.projection = Linear(node_features_dim, out_features, bias=False) def forward(self, x, edge_index): emb = self.conv1(x, edge_index) emb.relu() emb = self.conv2(emb, edge_index) emb.relu() emb = self.conv3(emb, edge_index) emb.relu() emb = self.projection(emb) return emb data = next(iter(dataloader)) x, edge_index = data.x, data.edge_index num_features = data.x.shape[-1] out_features = 4 # based on what I should set this? hidden_dim = num_features // 4 Traceback (most recent call last): File "C:\Users\marl\Desktop\files\model.py", line 204, in <module> emb = model(x, edge_index) File "C:\Users\marl\miniconda3\envs\tensor\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "C:\Users\marl\Desktop\files\model.py", line 101, in forward embeddings = self.projection(embeddings) File "C:\Users\marl\miniconda3\envs\tensor\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "C:\Users\marl\miniconda3\envs\tensor\lib\site-packages\torch_geometric\nn\dense\linear.py", line 136, in forward return F.linear(x, self.weight, self.bias) RuntimeError: mat1 and mat2 shapes cannot be multiplied (441x4 and 44x4) Process finished with exit code 1
[ "It looks like you are trying to multiply a tensor of shape 441x4 with a tensor of shape 44x4 in the forward method of the Encoder class. This is not possible because the inner dimensions (the second and third dimensions) must match in order to perform matrix multiplication.\nYou can fix this error by ensuring that the tensors you are multiplying have compatible shapes. For example, you can use the torch.mm method to perform a matrix multiplication on two tensors with shape (m, n) and (n, p) to produce a tensor with shape (m, p).\nHere is an example of how you can modify your code to fix this error:\nclass Encoder(torch.nn.Module):\n def __init__(self, node_features_dim, out_features):\n super(GNN, self).__init__()\n\n # Base model\n self.conv1 = GCNConv(node_features_dim, 2 * out_features)\n self.conv2 = GCNConv(2 * out_features, 2 * out_features)\n self.conv3 = GCNConv(2 * out_features, out_features)\n # projection model\n self.projection = Linear(node_features_dim, out_features, bias=False)\n\n def forward(self, x, edge_index):\n emb = self.conv1(x, edge_index)\n emb.relu()\n emb = self.conv2(emb, edge_index)\n emb.relu()\n emb = self.conv3(emb, edge_index)\n emb.relu()\n\n # Ensure that the tensors have compatible shapes before multiplying\n emb = torch.mm(emb, self.projection)\n\n return emb\n\n" ]
[ 0 ]
[]
[]
[ "python_3.x", "pytorch_geometric" ]
stackoverflow_0074675268_python_3.x_pytorch_geometric.txt
Q: (Convert decimals to fractions) Java I have been trying to figure this out for hours, but I can not do it. I was trying to search something to get some help, but everything I can find uses BigInteger, and we can not use that for this assignment. (Convert decimals to fractions) Write a program that prompts the user to enter a decimal number and displays the number in a fraction. Hint: read the decimal number as a string, extract the integer part and fractional part from the string, and use the Rational class in LiveExample 13.13 to obtain a rational number for the decimal number. Use the template at https://liveexample.pearsoncmg.com/test/Exercise13_19.txt for your code. Sample Run 1 Enter a decimal number: 3.25 The fraction number is 13/4 Sample Run 2 Enter a decimal number: -0.45452 The fraction number is -11363/25000 Class Name: Exercise13_19 public static void main(String[] args) { Scanner input = new Scanner(System.in); System.out.print("Enter a decimal number: "); String[] decimal = input.nextLine().split("[.]"); // Create a Rational object of the integer part of the decimal number Rational r1 = new Rational(new BigInteger(decimal[0]), BigInteger.ONE); // Create a Rational object of the fractional part of the decimal number Rational r2 = new Rational(new BigInteger(decimal[1]), new BigInteger( String.valueOf((int)Math.pow(10, decimal[1].length())))); // Display fraction number System.out.println("The fraction number is " + (decimal[0].charAt(0) == '-' ? (r1).subtract(r2) : (r1).add(r2))); } } I have tried to make this code work with the template by changing the BigInteger sections, but I can not figure it out for the life of me. I have reread the book several times. A: Your main problem is that your algorithm is wrong. I don't see how adding (or subtracting) the whole number part and the "fractional" part would yield a fractional representation of the number. A possible algorithm is to just remove the . and put it as the numerator. Put the denominator as 10 to the power of how many digits your "fractional" part has. After that, simplify the fraction by finding common factors. For the 3.25 example, a run of the program would give you $325/100$, which would then be simplified to 13/4.
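A minimal sketch of that algorithm without BigInteger, assuming the input has a leading digit and fits in a long (toFraction and gcd are hypothetical helper names, not part of the assignment's template):

// Sketch: numerator = digits with the dot removed, denominator = 10^(length of
// the fractional part), then reduce both by their greatest common divisor.
static String toFraction(String decimal) {
    boolean negative = decimal.startsWith("-");
    String[] parts = decimal.replace("-", "").split("\\.");
    String frac = parts.length > 1 ? parts[1] : "";
    long denominator = (long) Math.pow(10, frac.length());
    long numerator = Long.parseLong(parts[0] + frac);
    long g = gcd(numerator, denominator);
    return (negative && numerator != 0 ? "-" : "") + (numerator / g) + "/" + (denominator / g);
}

static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }

Sample check: "3.25" gives 325/100, which the gcd of 25 reduces to 13/4, matching Sample Run 1; "-0.45452" gives 45452/100000, reduced by 4 to 11363/25000, matching Sample Run 2.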
(Convert decimals to fractions) Java
I have been trying to figure this out for hours, but I can not do it. I was trying to search something to get some help, but everything I can find uses BigInteger, and we can not use that for this assignment. (Convert decimals to fractions) Write a program that prompts the user to enter a decimal number and displays the number in a fraction. Hint: read the decimal number as a string, extract the integer part and fractional part from the string, and use the Rational class in LiveExample 13.13 to obtain a rational number for the decimal number. Use the template at https://liveexample.pearsoncmg.com/test/Exercise13_19.txt for your code. Sample Run 1 Enter a decimal number: 3.25 The fraction number is 13/4 Sample Run 2 Enter a decimal number: -0.45452 The fraction number is -11363/25000 Class Name: Exercise13_19 public static void main(String[] args) { Scanner input = new Scanner(System.in); System.out.print("Enter a decimal number: "); String[] decimal = input.nextLine().split("[.]"); // Create a Rational object of the integer part of the decimal number Rational r1 = new Rational(new BigInteger(decimal[0]), BigInteger.ONE); // Create a Rational object of the fractional part of the decimal number Rational r2 = new Rational(new BigInteger(decimal[1]), new BigInteger( String.valueOf((int)Math.pow(10, decimal[1].length())))); // Display fraction number System.out.println("The fraction number is " + (decimal[0].charAt(0) == '-' ? (r1).subtract(r2) : (r1).add(r2))); } } I have tried to make this code work with the template by changing the BigInteger sections, but I can not figure it out for the life of me. I have reread the book several times.
[ "Your main problem is that your algorithm is wrong. I don't see how adding (or subtracting) the whole number part and the \"fractional\" part would yield a fractional representation of the number.\nA possible algorithm is to just remove the . and put it as the numerator. Put the denominator as 10 to the power of how many digits your \"fractional\" part has. After that, simplify the fraction by finding common factors. For the 3.25 example, a run of the program would give you $325/100$, which would then be simplified to 13/4.\n" ]
[ 0 ]
[]
[]
[ "java" ]
stackoverflow_0074681440_java.txt
Q: Pass Elements in Variant Array as Arguments to ParamArray Background I am creating a VBA function (UDF) called MyUDF(), which wraps CallByName(). I wish to mimic precisely the signature and parametric behavior of CallByName(). Furthermore, MyUDF() must copy its Args() argument to a modular variable ArgsCopy — a Variant array — whose elements are then passed by MyUDF() as further arguments to CallByName(). Don't ask why — it's a long story. Reference CallByName() displays in the VBA editor like so and it is described in the documentation like so: Syntax CallByName (object, procname, calltype, [args()]_) The CallByName function syntax has these named arguments: Part Description object Required: Variant (Object). The name of the object on which the function will be executed. procname Required: Variant (String). A string expression containing the name of a property or method of the object. calltype Required: Constant. A constant of type vbCallType representing the type of procedure being called. args() Optional: Variant (Array). It appears that "args()" is actually a ParamArray, rather than a simple Variant array, but without further documentation, I can't be perfectly sure. Format My tentative design is of the following form: ' Modular variable. Private ArgsCopy() As Variant ' Wrapper function. Public Function MyUDF( _ ByRef Object As Object, _ ByRef ProcName As String, _ CallType As VbCallType, _ ParamArray Args() As Variant _ ) ' ... ' Copy the argument list to the modular variable. ArgsCopy = Args ' ... ' Pass the arguments (and modular variable) to 'CallByName()'. MyUDF = VBA.CallByName( _ Object := Object, _ ProcName := ProcName, _ CallType := CallType, _ Args := ArgsCopy _ ) End Function Displayed Signature In contrast to CallByName(), MyUDF() displays in the VBA editor like so, and concludes with ParamArray Args() As Variant: Only by changing Args() from a ParamArray to a Variant array (ByRef Args() As Variant) can we make them display identically: However, the latter would clash with the functional behavior described below for CallByName(). Parametric Behavior Unfortunately, one cannot pass ArgsCopy to Args by name (Args := ArgsCopy), since Args is apparently a ParamArray and would thus accept only the unnamed arguments: VBA.CallByName( _ Object, ProcName, CallType, _ ArgsCopy(0), ArgsCopy(1), ..., ArgsCopy(n) _ ) Note Please disregard the fact that CallByName() returns a Variant, which may (or may not) be an Object that must be Set. I have already accounted for this in my actual code. Question How do I construct MyUDF(), and especially its Args() argument, such that its signature mimics that of CallByName(), in both the Type and Optionality of its parameters; and it accurately passes to CallByName() any arbitrary set of arguments listed in Args()? Ideally, MyUDF() will also work properly on both Mac and Windows; and display like CallByName() in the VBA editor: This 3rd and 4th criteria are a bonus, but I don't require them. Suggestions Visual Basic (VB) suggests that one may pass arguments to its ParamArray as in MyUDF() above: the arguments are elements in an array of the same type as the ParamArray, and this array is supplied as a single argument. However, I have found neither a documented nor an experimental equivalent in VBA. I did find these three VBA questions on Stack Overflow, but I lack the experience to apply their lessons here. 
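One caveat about the answer above: calling MyUDF Object, ProcName, CallType, Array(arg1, arg2, arg3) makes Args a one-element ParamArray whose single element is an array, and VBA still provides no named-argument route from a Variant array into CallByName's ParamArray. A widely used workaround, sketched here, is to dispatch on the element count inside MyUDF() and forward the elements positionally; this caps the supported arity at however many cases you write, and assumes ArgsCopy has been assigned (UBound on an unallocated array raises error 9):

' Sketch: forward a Variant array to CallByName's ParamArray by arity.
Select Case UBound(ArgsCopy) - LBound(ArgsCopy) + 1
    Case 0: MyUDF = VBA.CallByName(Object, ProcName, CallType)
    Case 1: MyUDF = VBA.CallByName(Object, ProcName, CallType, ArgsCopy(0))
    Case 2: MyUDF = VBA.CallByName(Object, ProcName, CallType, ArgsCopy(0), ArgsCopy(1))
    Case 3: MyUDF = VBA.CallByName(Object, ProcName, CallType, ArgsCopy(0), ArgsCopy(1), ArgsCopy(2))
    ' ... extend as far as needed ...
End Select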
Passing an array of Arguments to CallByName VBA Pass array to ParamArray How to view interface spec from Framework files on Mac OS Change Method Signature That first question has a solution, which changes the method signature for CallByName(), such that Args() is a single argument: an Any array. However, I am unfamiliar with the "Any" type, and the third question (unanswered) makes me doubt this preprocessor "magic" could work on a Mac: #If VBA7 Or Win64 Then Private Declare PtrSafe Function rtcCallByName Lib "VBE7.DLL" ( _ ByVal Object As Object, _ ByVal ProcName As LongPtr, _ ByVal CallType As VbCallType, _ ByRef args() As Any, _ Optional ByVal lcid As Long) As Variant #Else Private Declare Function rtcCallByName Lib "VBE6.DLL" ( _ ByVal Object As Object, _ ByVal ProcName As Long, _ ByVal CallType As VbCallType, _ ByRef args() As Any, _ Optional ByVal lcid As Long) As Variant #End If Public Function CallWithArgs( _ ByRef Object As Object, _ ByRef ProcName As String, _ CallType As VbCallType, _ ByRef Args() As Variant _ ) CallWithArgs = rtcCallByName(Object, ProcName, CallType, Args) End Function A: It sounds like you want to be able to pass an array of arguments to the MyUDF() function, which in turn will be passed on to the CallByName() function. In order to do this, you can declare the Args() argument of MyUDF() as a ParamArray, which will allow it to accept an optional, variable number of arguments of the Variant type. This is the same way that CallByName() is declared. Here is how the function would look: Public Function MyUDF( _ ByRef Object As Object, _ ByRef ProcName As String, _ CallType As VbCallType, _ ParamArray Args() As Variant _ ) ' Copy the argument list to the modular variable. ArgsCopy = Args ' Pass the arguments (and modular variable) to CallByName() MyUDF = VBA.CallByName( _ Object := Object, _ ProcName := ProcName, _ CallType := CallType, _ Args := ArgsCopy _ ) End Function When calling this function, you can pass an array of arguments to the Args() argument, like this: MyUDF Object, ProcName, CallType, Array(arg1, arg2, arg3) Or, you can pass the arguments individually, without using an array: MyUDF Object, ProcName, CallType, arg1, arg2, arg3 In either case, the arguments will be passed to the CallByName() function as expected. I hope this helps!
Pass Elements in Variant Array as Arguments to ParamArray
Background I am creating a VBA function (UDF) called MyUDF(), which wraps CallByName(). I wish to mimic precisely the signature and parametric behavior of CallByName(). Furthermore, MyUDF() must copy its Args() argument to a modular variable ArgsCopy — a Variant array — whose elements are then passed by MyUDF() as further arguments to CallByName(). Don't ask why — it's a long story. Reference CallByName() displays in the VBA editor like so and it is described in the documentation like so: Syntax CallByName (object, procname, calltype, [args()]_) The CallByName function syntax has these named arguments: Part Description object Required: Variant (Object). The name of the object on which the function will be executed. procname Required: Variant (String). A string expression containing the name of a property or method of the object. calltype Required: Constant. A constant of type vbCallType representing the type of procedure being called. args() Optional: Variant (Array). It appears that "args()" is actually a ParamArray, rather than a simple Variant array, but without further documentation, I can't be perfectly sure. Format My tentative design is of the following form: ' Modular variable. Private ArgsCopy() As Variant ' Wrapper function. Public Function MyUDF( _ ByRef Object As Object, _ ByRef ProcName As String, _ CallType As VbCallType, _ ParamArray Args() As Variant _ ) ' ... ' Copy the argument list to the modular variable. ArgsCopy = Args ' ... ' Pass the arguments (and modular variable) to 'CallByName()'. MyUDF = VBA.CallByName( _ Object := Object, _ ProcName := ProcName, _ CallType := CallType, _ Args := ArgsCopy _ ) End Function Displayed Signature In contrast to CallByName(), MyUDF() displays in the VBA editor like so, and concludes with ParamArray Args() As Variant: Only by changing Args() from a ParamArray to a Variant array (ByRef Args() As Variant) can we make them display identically: However, the latter would clash with the functional behavior described below for CallByName(). Parametric Behavior Unfortunately, one cannot pass ArgsCopy to Args by name (Args := ArgsCopy), since Args is apparently a ParamArray and would thus accept only the unnamed arguments: VBA.CallByName( _ Object, ProcName, CallType, _ ArgsCopy(0), ArgsCopy(1), ..., ArgsCopy(n) _ ) Note Please disregard the fact that CallByName() returns a Variant, which may (or may not) be an Object that must be Set. I have already accounted for this in my actual code. Question How do I construct MyUDF(), and especially its Args() argument, such that its signature mimics that of CallByName(), in both the Type and Optionality of its parameters; and it accurately passes to CallByName() any arbitrary set of arguments listed in Args()? Ideally, MyUDF() will also work properly on both Mac and Windows; and display like CallByName() in the VBA editor: This 3rd and 4th criteria are a bonus, but I don't require them. Suggestions Visual Basic (VB) suggests that one may pass arguments to its ParamArray as in MyUDF() above: the arguments are elements in an array of the same type as the ParamArray, and this array is supplied as a single argument. However, I have found neither a documented nor an experimental equivalent in VBA. I did find these three VBA questions on Stack Overflow, but I lack the experience to apply their lessons here. 
Passing an array of Arguments to CallByName VBA Pass array to ParamArray How to view interface spec from Framework files on Mac OS Change Method Signature That first question has a solution, which changes the method signature for CallByName(), such that Args() is a single argument: an Any array. However, I am unfamiliar with the "Any" type, and the third question (unanswered) makes me doubt this preprocessor "magic" could work on a Mac: #If VBA7 Or Win64 Then Private Declare PtrSafe Function rtcCallByName Lib "VBE7.DLL" ( _ ByVal Object As Object, _ ByVal ProcName As LongPtr, _ ByVal CallType As VbCallType, _ ByRef args() As Any, _ Optional ByVal lcid As Long) As Variant #Else Private Declare Function rtcCallByName Lib "VBE6.DLL" ( _ ByVal Object As Object, _ ByVal ProcName As Long, _ ByVal CallType As VbCallType, _ ByRef args() As Any, _ Optional ByVal lcid As Long) As Variant #End If Public Function CallWithArgs( _ ByRef Object As Object, _ ByRef ProcName As String, _ CallType As VbCallType, _ ByRef Args() As Variant _ ) CallWithArgs = rtcCallByName(Object, ProcName, CallType, Args) End Function
[ "It sounds like you want to be able to pass an array of arguments to the MyUDF() function, which in turn will be passed on to the CallByName() function. In order to do this, you can declare the Args() argument of MyUDF() as a ParamArray, which will allow it to accept an optional, variable number of arguments of the Variant type. This is the same way that CallByName() is declared. Here is how the function would look:\nPublic Function MyUDF( _\n ByRef Object As Object, _\n ByRef ProcName As String, _\n CallType As VbCallType, _\n ParamArray Args() As Variant _\n)\n ' Copy the argument list to the modular variable.\n ArgsCopy = Args\n\n ' Pass the arguments (and modular variable) to CallByName()\n MyUDF = VBA.CallByName( _\n Object := Object, _\n ProcName := ProcName, _\n CallType := CallType, _\n Args := ArgsCopy _\n )\nEnd Function\n\nWhen calling this function, you can pass an array of arguments to the Args() argument, like this:\nMyUDF Object, ProcName, CallType, Array(arg1, arg2, arg3)\n\nOr, you can pass the arguments individually, without using an array:\nMyUDF Object, ProcName, CallType, arg1, arg2, arg3\n\nIn either case, the arguments will be passed to the CallByName() function as expected. I hope this helps!\n" ]
[ 0 ]
[]
[]
[ "paramarray", "parameter_passing", "user_defined_functions", "vba", "wrapper" ]
stackoverflow_0074587176_paramarray_parameter_passing_user_defined_functions_vba_wrapper.txt
Q: Type '"standard"' is not assignable to type 'MatFormFieldAppearance' Updated my Angular project to 15 and I noticed mat-form-field appearance="standard" is no longer usable.
A: It turns out that the deprecated "legacy" versions of these modules still pull in the Angular 14 implementations, which is why they keep appearance="standard" usable. The new MDC-based components in Angular Material 15.0.2 do not offer a "standard" appearance at all; only "fill" and "outline" remain.
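For anyone hitting this after the update: since the MDC-based MatFormFieldAppearance type in Angular Material 15 only allows "fill" and "outline", the quickest fix is usually to switch the template to one of those, for example:

<!-- "standard" no longer type-checks in Material 15; "outline" or "fill" does. -->
<mat-form-field appearance="outline">
  <mat-label>Name</mat-label>
  <input matInput />
</mat-form-field>

The legacy modules are only a stopgap, since they are deprecated and slated for removal.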
Type '"standard"' is not assignable to type 'MatFormFieldAppearance'
Updated my Angular project to 15 and I noticed mat-form-field appearance="standard" is no longer usable.
[ "It turns out that the deprecated \"legacy\" versions of these modules still pull in the Angular 14 implementations, which is why they keep appearance=\"standard\" usable. The new MDC-based components in Angular Material 15.0.2 do not offer a \"standard\" appearance at all; only \"fill\" and \"outline\" remain.\n" ]
[ 0 ]
[]
[]
[ "angular", "html", "typescript" ]
stackoverflow_0074681244_angular_html_typescript.txt
Q: How to convert space separated file to tab delimited file in python? I have two data files, viz., 'fin.dat' and 'shape.dat'. I want to format 'shape.dat' just the way 'fin.dat' is written with Python. The files can be found here https://easyupload.io/m/h94wd3. The snippets of the data structures are given here fin.dat,shape.dat. Please help me do that.
A: To convert a space-separated file to a tab-delimited file in Python, you can use the replace() method to replace all occurrences of spaces with tabs. Here's an example:
# Open the file in read mode
with open('input.txt', 'r') as input_file:
    # Read the file content
    content = input_file.read()

# Replace all occurrences of space with tab
content = content.replace(' ', '\t')

# Open the file in write mode
with open('output.txt', 'w') as output_file:
    # Write the modified content to the file
    output_file.write(content)

In this example, the input.txt file is read and its content is stored in the content variable. Then, all occurrences of space are replaced with tab using the replace() method. Finally, the modified content is written back to the output.txt file.
You can modify this code to work with your specific requirements. For example, you can use different delimiters, or you can process the file line by line instead of reading and writing the entire content in one go. Note that replace(' ', '\t') turns every single space into a tab, so runs of spaces in aligned columns become runs of tabs; the sketch below handles that case.
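Since .dat files like these are often aligned with runs of spaces, a line-by-line variant that collapses any whitespace run into a single tab may be closer to what is needed. A sketch; the output filename is a placeholder:

# Sketch: split() with no argument splits on runs of whitespace,
# so aligned columns collapse cleanly to one tab per field.
with open('shape.dat') as src, open('shape_tab.dat', 'w') as dst:
    for line in src:
        fields = line.split()
        if fields:  # skip blank lines
            dst.write('\t'.join(fields) + '\n')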
How to convert space separated file to tab delimited file in python?
I have two data files, viz., 'fin.dat' and 'shape.dat'. I want to format 'shape.dat' just the way 'fin.dat' is written with Python. The files can be found here https://easyupload.io/m/h94wd3. The snippets of the data structures are given here fin.dat,shape.dat. Please help me do that.
[ "To convert a space-separated file to a tab-delimited file in Python, you can use the replace() method to replace all occurrences of spaces with tabs. Here's an example:\n# Open the file in read mode\nwith open('input.txt', 'r') as input_file:\n # Read the file content\n content = input_file.read()\n\n# Replace all occurrences of space with tab\ncontent = content.replace(' ', '\\t')\n\n# Open the file in write mode\nwith open('output.txt', 'w') as output_file:\n # Write the modified content to the file\n output_file.write(content)\n\nIn this example, the input.txt file is read and its content is stored in the content variable. Then, all occurrences of space are replaced with tab using the replace() method. Finally, the modified content is written back to the output.txt file.\nYou can modify this code to work with your specific requirements. For example, you can use different delimiters, or you can process the file line by line instead of reading and writing the entire content in one go.\n" ]
[ 1 ]
[]
[]
[ "numpy", "pandas", "python" ]
stackoverflow_0074681480_numpy_pandas_python.txt
Q: use purrr package to create multiple shiny reactive expressions from tibble tldr: I want to condense multiple reactive expressions (over 300 lines of code) in order to improve readability and maintainability.
I made great upgrades by following this thread and its dependencies. My observers were transformed into observeEvents triggered by FP with the purrr package (way more efficient and easier to read) by means of a tibble created with expand grids to get all my possible inputs (they are close to 120).
The situation now is that I would like to do something similar with my reactive expressions as I did with my observers; one of my issues is that reactive expressions have returns associated with them and, as far as I know, creating stored accessible objects by means of iteration (vectorization in this case) is usually not a good practice.
Next I will post a toy example,
library(shiny)

ui <- fluidPage(
  numericInput(inputId = 'a', label = 'a', value = ''),
  numericInput(inputId = 'b', label = 'b', value = ''),
  conditionalPanel(condition = 'output.OK_1',
                   actionButton(inputId = 'ok_btn_1', label = 'OK_1')),
  numericInput(inputId = 'c', label = 'c', value = ''),
  numericInput(inputId = 'd', label = 'd', value = ''),
  conditionalPanel(condition = 'output.OK_2',
                   actionButton(inputId = 'ok_btn_2', label = 'OK_2'))
)

server <- function(input, output, session) {
  #' these are the reactives that I'd like to create by means of FP.
  output$OK_1 <- reactive({req(input$a, input$b)
    if (input$a > input$b) {return(1)}
    else{return(0)}
  })
  output$OK_2 <- reactive({req(input$c, input$d)
    if (input$c > input$d) {return(1)}
    else{return(0)}
  })
  outputOptions(output, "OK_1", suspendWhenHidden = FALSE)
  outputOptions(output, "OK_2", suspendWhenHidden = FALSE)
}

shinyApp(ui, server)

I thought of and tried a solution creating an eventReactive instead of the reactive, as the former is powered by the latter, and using the eventReactive along with a tibble that contains all my inputs (as I did with the observeEvents) and purrr::pwalk (maybe here we have a key misconception, since pwalk is used not for returning values but for the side effects of the function/formula. Anyways, I will leave it as pwalk until gathering more info). This approach does not work (an example is shown in the next piece of code), so I may be using the wrong function from purrr or maybe I am overkilling it and there is a better way that I am not aware of yet.
Unsuccessful approach (the ui section stays the same):
library(shiny)
library(tibble)
library(purrr)

# variables----
#' defined outside server function
vars <- tibble::tribble(
  ~x,  ~y,  ~OK,
  #'---|----|------|
  'a', 'b', 'OK',
  'c', 'd', 'OK_2'
)

#' ui stays the same as before
#'ui <- fluidPage(
#'...
#')

server <- function(input, output, session) {
  purrr::pwalk(vars,
               ####----
               # 1.. x input
               # 2.. y input
               # 3.. OK
               ####----
               ~{output[[..3]] <- eventReactive(
                 if (input[[..1]] > input[[..2]]) {return(1)}
                 else{return(0)},
                 ignoreInit = TRUE,
                 ignoreNULL = TRUE
               )
               }
  )
  outputOptions(output, "OK_1", suspendWhenHidden = FALSE)
  outputOptions(output, "OK_2", suspendWhenHidden = FALSE)
}

shinyApp(ui, server)

Any comments or suggestions are welcome.
A: This worked for me (all objects from your example):
library(purrr)
## ...
  pwalk(vars, ~{
    output[[..3]] <<- reactive({
      req(input[[..1]], input[[..2]])
      return(input[[..1]] > input[[..2]])
    })
  })
# ...

Note the double left assignment <<- needed to address the output which is outside the environment of the function run in pwalk.
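The two trailing outputOptions() calls can be generated from the same tibble, so adding a row to vars needs no further server changes. A sketch, assuming the third column of vars always names the output:

# Sketch: one outputOptions() call per row of `vars`, driven by purrr.
purrr::pwalk(vars, ~ outputOptions(output, ..3, suspendWhenHidden = FALSE))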
use purrr package to create multiple shiny reactive expressions from tibble
tldr: I want to condense multiple reactive expressions (over 300 lines of code) in order to improve readability and maintanability. I did great upgrades by following this thread and its dependencies. My observers were transformed into observeEvents triggered by FP with the purrr package (way more efficient and easier to read) by means of a tibble created with expand grids to get all my possible inputs (they are close to 120). The situation now is that I would like to do something similar with my reactive expressions as I did with my observers, one of my issues is that reactive expressions have returns associated with and as long as I know, create stored accesible objects by means of iterarion (vectorization in this case) it is usually not a good practice. Next I will post a toy example, library(shiny) ui <- fluidPage( numericInput(inputId = 'a', label = 'a', value = ''), numericInput(inputId = 'b', label = 'b', value = ''), conditionalPanel(condition = 'output.OK_1', actionButton(inputId = 'ok_btn_1', label = 'OK_1')), numericInput(inputId = 'c', label = 'c', value = ''), numericInput(inputId = 'd', label = 'd', value = ''), conditionalPanel(condition = 'output.OK_2', actionButton(inputId = 'ok_btn_2', label = 'OK_2')) ) server <- function(input, output, session) { #' this are the reactives that I'd like to create by means of FP. output$OK_1 <- reactive({req(input$a, input$b) if (input$a > input$b) {return(1)} else{return(0)} }) output$OK_2 <- reactive({req(input$c, input$d) if (input$c > input$d) {return(1)} else{return(0)} }) outputOptions(output, "OK_1", suspendWhenHidden = FALSE) outputOptions(output, "OK_2", suspendWhenHidden = FALSE) } shinyApp(ui, server) I thought and try a solution by creating a eventReactive instead of the reactive as the former is powered by the later, and use the eventReactive along with a tibble that contains all my inputs (as I did with the observeEvents) and the purrr::pwalk (maybe here we have a key misconception, since pwalk is used not for returning values but for the side efects of the function/formula. Anyways, I will let it as pwalk until gathering more info). This approach does not work (an example is shown in the next piece of code), so I may be using the wrong function from purrr or maybe I am overkilling it and there is a better way that I am not aware of yet. unsuccesfull approach (the ui section stays the same): library(shiny) library(tibble) library(purrr) # variables---- #' defined outside server function vars <- tibble::tribble( ~x, ~y, ~OK, #'---|----|------| 'a', 'b', 'OK', 'c', 'd', 'OK_2' ) #' ui stays the same as before #'ui <- fluidPage( #'... #') server <- function(input, output, session) { purrr::pwalk(vars, ####---- # 1.. x input # 2.. y input # 3.. OK ####---- ~{output[[..3]] <- eventReactive( if (input[[..1]] > input[[..2]]) {return(1)} else{return(0)}, ignoreInit = TRUE, ignoreNULL = TRUE ) } ) outputOptions(output, "OK_1", suspendWhenHidden = FALSE) outputOptions(output, "OK_2", suspendWhenHidden = FALSE) } shinyApp(ui, server) Any comments or suggestions are welcome.
[ "This worked for me (all objects from your example):\nlibrary(purrr)\n## ...\n pwalk(vars, ~{\n output[[..3]] <<- reactive({\n req(input[[..1]], input[[..2]])\n return(input[[..1]] > input[[..2]])\n })\n })\n# ...\n\nNote the double left assignment <<- needed to address the output which is outside the environment of the function run in pwalk.\n" ]
[ 0 ]
[]
[]
[ "functional_programming", "purrr", "r", "shiny", "shiny_reactivity" ]
stackoverflow_0074680745_functional_programming_purrr_r_shiny_shiny_reactivity.txt
Q: Reverse standardization after removing rows I have been working with R for about six months now, and so I am still somewhat of a novice with a lot of this. I have a large dataset of 260 columns with 1000 rows and I need to convert the data to standard deviation units and then remove outliers which do not meet the set SD criteria. I have managed to convert the data and remove the necessary rows; however, after doing this I need to convert the data back to its original values. The problem that I am facing is that when I do this it continuously throws up an error and I am not sure how to get past this. I am assuming that this is due to the dataset now being different in size than before I had standardised it, but I can't think of a way to work around this. I have looked through past questions around this issue but I have not found anything that solves my problem and so any help regarding this issue would be greatly appreciated.
Here is a sample idea of what I am trying to do and what is failing
y = 30
C = 30

ds <- matrix(data = NA, nrow = y, ncol = C)

for (i in 1:y) {
  ds[i,] <- sample(1:100, C, TRUE)}

ds_z <- scale(ds, center = TRUE, scale = TRUE)

no_out <- ds_z[!rowSums(ds_z >2),]

revrs = t(apply(no_out, 1, function(r)r*attr(no_out,'scaled:scale') + attr(no_out, 'scaled:center')))

A: Try
i1 <- !rowSums(ds_z > 2)
no_out <- ds_z[i1, ]
no_out2 <- sweep(sweep(no_out, 2, attr(ds_z, 'scaled:scale'), `*`),
                 2, attr(ds_z, 'scaled:center'), `+`)
no_out2 <- round(no_out2)

Note that subsetting with [ drops the scaling attributes (which is why attr(no_out, ...) returned NULL in your code), so they have to be read from ds_z. They are per-column vectors, so sweep() along margin 2 applies them column-wise.
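A quick way to confirm the reversal with the objects above: the recovered matrix should match the kept rows of the original data exactly, so the check below should return TRUE.

# Sketch: round-trip verification of the de-standardized rows.
all.equal(no_out2, ds[i1, ], check.attributes = FALSE)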
Reverse standardization after removing rows
I have been working with R for about six months now, and so I am still somewhat of a novice with a lot of this. I have a large dataset of 260 columns with 1000 rows and I need to convert the data to standard deviation units and then removing outliers which do not meet the set SD criteria. I have managed to convert the data and remove the necessary rows; however, after doing this I need to convert the data back to its original values. The problem that I am facing is that when I do this it continuously throws up an error and I am not sure how to get past this. I am assuming that this is due to the dataset now being different in size than before I had standardised it, but I can't think of a way to work around this. I have looked through past questions around this issue but I have not found anything that solves my problem and so any help regarding this issue would be greatly appreciated. Here is a sample idea of what I am trying to do and what is failing y = 30 C = 30 ds <- matrix(data = NA, nrow = y, ncol = C) for (i in 1:y) { ds[i,] <- sample(1:100, C, TRUE)} ds_z <- scale(ds, center = TRUE, scale = TRUE) no_out <- ds_z[!rowSums(ds_z >2),] revrs = t(apply(no_out, 1, function(r)r*attr(no_out,'scaled:scale') + attr(no_out, 'scaled:center')))
[ "Try\ni1 <- !rowSums(ds_z > 2)\nno_out <- ds_z[i1, ]\nno_out2 <- sweep(sweep(no_out, 2, attr(ds_z, 'scaled:scale'), `*`),\n                 2, attr(ds_z, 'scaled:center'), `+`)\nno_out2 <- round(no_out2)\n\nNote that subsetting with [ drops the scaling attributes (which is why attr(no_out, ...) returned NULL in your code), so they have to be read from ds_z. They are per-column vectors, so sweep() along margin 2 applies them column-wise.\n" ]
[ 0 ]
[]
[]
[ "outliers", "r", "reverse", "standardization" ]
stackoverflow_0074671328_outliers_r_reverse_standardization.txt
Q: std::ranges::any_of fails when compiling with Apple Clang 14.0 When I compile my program I get this error: error: no member named 'any_of' in namespace 'std::ranges'. However, I do include all the necessary headers (e.g. algorithm). I use c++20 standard and my compiler version is Apple Clang 14.0. Why do I get this error? I highly appreciate it if someone is able to explain to me the root cause of this. I tried to go through if there are some issues with the library and my compiler, but could not get an affirmative answer from here either: https://en.cppreference.com/w/cpp/compiler_support#C.2B.2B20_library_features I first tried to implement this on my own code when I got the error. However, I get the same errors when running the example code from cppreference: https://en.cppreference.com/w/cpp/algorithm/ranges/all_any_none_of Below is the example code I was trying to run (copied from the previous link). #include <vector> #include <numeric> #include <algorithm> #include <iterator> #include <iostream> #include <functional>   namespace ranges = std::ranges;   int main() { std::vector<int> v(10, 2); std::partial_sum(v.cbegin(), v.cend(), v.begin()); std::cout << "Among the numbers: "; ranges::copy(v, std::ostream_iterator<int>(std::cout, " ")); std::cout << '\n';   if (ranges::all_of(v.cbegin(), v.cend(), [](int i){ return i % 2 == 0; })) { std::cout << "All numbers are even\n"; } if (ranges::none_of(v, std::bind(std::modulus<int>(), std::placeholders::_1, 2))) { std::cout << "None of them are odd\n"; }   auto DivisibleBy = [](int d) { return [d](int m) { return m % d == 0; }; };   if (ranges::any_of(v, DivisibleBy(7))) { std::cout << "At least one number is divisible by 7\n"; } } Expected output Among the numbers: 2 4 6 8 10 12 14 16 18 20 All numbers are even None of them are odd At least one number is divisible by 7 The output my compiler gives me src/day04.cpp:16:13: error: no member named 'copy' in namespace 'std::ranges' ranges::copy(v, std::ostream_iterator<int>(std::cout, " ")); ~~~~~~~~^ src/day04.cpp:19:9: error: no member named 'all_of' in namespace 'std::ranges'; did you mean 'std::all_of'? if (ranges::all_of(v.cbegin(), v.cend(), [](int i) { return i % 2 == 0; })) ^~~~~~~~~~~~~~ std::all_of /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/c++/v1/__algorithm/all_of.h:26:1: note: 'std::all_of' declared here all_of(_InputIterator __first, _InputIterator __last, _Predicate __pred) { ^ src/day04.cpp:23:17: error: no member named 'none_of' in namespace 'std::ranges' if (ranges::none_of( ~~~~~~~~^ src/day04.cpp:31:17: error: no member named 'any_of' in namespace 'std::ranges' if (ranges::any_of(v, DivisibleBy(7))) ~~~~~~~~^ 4 errors generated. ~~~~~~~~^ I tried to include the ranges header in the code #include<ranges>, but got the same errors. A: Range version of the <algorithm> library has not been implemented in Apple Clang 14.0. You can check this by going through each versions of Xcode release notes, or by going through the library files. Range version of the <algorithm> library has been added to their most recent stable branch of LLVM however, so you can build it yourself if you need it right now(or just wait for future releases). Alternatively, you can install third party compilers like GCC or Clang.
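Until Apple Clang ships the ranges algorithms, the classic iterator-pair overloads in <algorithm> compile today and behave identically for this example. A sketch of the drop-in change, inside the same main() as the cppreference example:

// Sketch: same checks via the pre-C++20 overloads that Apple Clang 14 provides.
if (std::all_of(v.cbegin(), v.cend(), [](int i) { return i % 2 == 0; }))
    std::cout << "All numbers are even\n";
if (std::any_of(v.cbegin(), v.cend(), DivisibleBy(7)))
    std::cout << "At least one number is divisible by 7\n";

std::copy with std::ostream_iterator substitutes for ranges::copy the same way.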
std::ranges::any_of fails when compiling with Apple Clang 14.0
When I compile my program I get this error: error: no member named 'any_of' in namespace 'std::ranges'. However, I do include all the necessary headers (e.g. algorithm). I use c++20 standard and my compiler version is Apple Clang 14.0. Why do I get this error? I highly appreciate it if someone is able to explain to me the root cause of this. I tried to go through if there are some issues with the library and my compiler, but could not get an affirmative answer from here either: https://en.cppreference.com/w/cpp/compiler_support#C.2B.2B20_library_features I first tried to implement this on my own code when I got the error. However, I get the same errors when running the example code from cppreference: https://en.cppreference.com/w/cpp/algorithm/ranges/all_any_none_of Below is the example code I was trying to run (copied from the previous link). #include <vector> #include <numeric> #include <algorithm> #include <iterator> #include <iostream> #include <functional>   namespace ranges = std::ranges;   int main() { std::vector<int> v(10, 2); std::partial_sum(v.cbegin(), v.cend(), v.begin()); std::cout << "Among the numbers: "; ranges::copy(v, std::ostream_iterator<int>(std::cout, " ")); std::cout << '\n';   if (ranges::all_of(v.cbegin(), v.cend(), [](int i){ return i % 2 == 0; })) { std::cout << "All numbers are even\n"; } if (ranges::none_of(v, std::bind(std::modulus<int>(), std::placeholders::_1, 2))) { std::cout << "None of them are odd\n"; }   auto DivisibleBy = [](int d) { return [d](int m) { return m % d == 0; }; };   if (ranges::any_of(v, DivisibleBy(7))) { std::cout << "At least one number is divisible by 7\n"; } } Expected output Among the numbers: 2 4 6 8 10 12 14 16 18 20 All numbers are even None of them are odd At least one number is divisible by 7 The output my compiler gives me src/day04.cpp:16:13: error: no member named 'copy' in namespace 'std::ranges' ranges::copy(v, std::ostream_iterator<int>(std::cout, " ")); ~~~~~~~~^ src/day04.cpp:19:9: error: no member named 'all_of' in namespace 'std::ranges'; did you mean 'std::all_of'? if (ranges::all_of(v.cbegin(), v.cend(), [](int i) { return i % 2 == 0; })) ^~~~~~~~~~~~~~ std::all_of /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include/c++/v1/__algorithm/all_of.h:26:1: note: 'std::all_of' declared here all_of(_InputIterator __first, _InputIterator __last, _Predicate __pred) { ^ src/day04.cpp:23:17: error: no member named 'none_of' in namespace 'std::ranges' if (ranges::none_of( ~~~~~~~~^ src/day04.cpp:31:17: error: no member named 'any_of' in namespace 'std::ranges' if (ranges::any_of(v, DivisibleBy(7))) ~~~~~~~~^ 4 errors generated. ~~~~~~~~^ I tried to include the ranges header in the code #include<ranges>, but got the same errors.
[ "Range version of the <algorithm> library has not been implemented in Apple Clang 14.0. You can check this by going through each versions of Xcode release notes, or by going through the library files.\nRange version of the <algorithm> library has been added to their most recent stable branch of LLVM however, so you can build it yourself if you need it right now(or just wait for future releases). Alternatively, you can install third party compilers like GCC or Clang.\n" ]
[ 0 ]
[]
[]
[ "apple_m1", "c++", "c++20", "clang" ]
stackoverflow_0074679839_apple_m1_c++_c++20_clang.txt
Q: Why does Undetected Chromedriver not work with Selenium Wire I want to make a request using Selenium Wire. The site has anti-bot protection. I tried using only undetected-chromedriver. Everything works well.
import undetected_chromedriver as uc

driver = uc.Chrome()
driver.get(f'https://nowsecure.nl/')
time.sleep(10)
driver.close()
driver.quit()

But when I use Selenium Wire ... nothing works ...
import seleniumwire.undetected_chromedriver as uc

driver = uc.Chrome()
driver.get(f'https://nowsecure.nl/')
time.sleep(10)
driver.close()
driver.quit()

A: You have to add options to your undetected Chrome browser.
options = uc.ChromeOptions()
options.add_argument('--start-maximized')
options.add_argument('--disable-notifications')

driver = uc.Chrome(options=options, seleniumwire_options={
        'proxy': {
            'http': f'http://{proxy_user}:{proxy_password}@{proxy_ip}:{proxy_port}',
        }
    })

You can find the available uc.Chrome options in the undetected-chromedriver documentation.
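Another detail worth knowing: selenium-wire captures traffic by routing it through a local MITM proxy, and that interception itself is something anti-bot services can fingerprint (for example via the TLS handshake). If the protected site does not need to be captured, selenium-wire's exclude_hosts option (a real option, though whether it defeats this particular protection is untested) lets those requests bypass the proxy entirely:

import seleniumwire.undetected_chromedriver as uc

# Sketch: let requests to the protected host bypass selenium-wire's proxy.
driver = uc.Chrome(seleniumwire_options={'exclude_hosts': ['nowsecure.nl']})
driver.get('https://nowsecure.nl/')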
Why does Undetected Chromedriver not work with Selenium Wire
I want to make a request using Selenium Wire. The site has anti-bot protection. I tried using only undetected-chromedriver. Everything works well.
import undetected_chromedriver as uc

driver = uc.Chrome()
driver.get(f'https://nowsecure.nl/')
time.sleep(10)
driver.close()
driver.quit()

But when I use Selenium Wire ... nothing works ...
import seleniumwire.undetected_chromedriver as uc

driver = uc.Chrome()
driver.get(f'https://nowsecure.nl/')
time.sleep(10)
driver.close()
driver.quit()
[ "You have to add options to your undetected Chrome browser.\noptions = uc.ChromeOptions()\noptions.add_argument('--start-maximized')\noptions.add_argument('--disable-notifications')\n\ndriver = uc.Chrome(options=options, seleniumwire_options={\n 'proxy': {\n 'http': f'http://{proxy_user}:{proxy_password}@{proxy_ip}:{proxy_port}',\n }\n })\n\nYou can find the available uc.Chrome options in the undetected-chromedriver documentation.\n" ]
[ 0 ]
[]
[]
[ "cloudflare", "python", "selenium", "seleniumwire", "undetected_chromedriver" ]
stackoverflow_0074680942_cloudflare_python_selenium_seleniumwire_undetected_chromedriver.txt
Q: React Native : Task :app:installDebug FAILED Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0 I am trying to create a React Native project. I have 2 Android devices: one is running Android 9 and one is running Android 12. My app installs and runs on the device that has Android 9, but it does not run on the Android 12 device; I get the following error
> Task :app:installDebug
Installing APK 'app-debug.apk' on '2201117PI - 12' for :app:debug

> Task :app:installDebug FAILED

Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0.
You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.

A: It appears that you are trying to install and run a React Native app on an Android device, but the app is not running on the device that has Android 12 installed. This is likely because the app is using deprecated Gradle features, which are not compatible with Gradle 8.0 or later.
To fix this issue, you can try using the --warning-mode all flag when running the Gradle build command. This will show you the individual deprecation warnings, which will help you determine if the warnings are coming from your own scripts or plugins. You can then update your scripts or plugins to use non-deprecated Gradle features, which should allow your app to run on the Android 12 device.
Alternatively, you can try using a different version of Gradle that is compatible with the deprecated features used in your app. However, it is generally recommended to use the latest version of Gradle and avoid using deprecated features, as they may be removed in future versions.
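Worth noting as a caveat to the answer above: the "Deprecated Gradle features" line is printed on every build, including successful ones, so it is usually not the actual reason installDebug failed on the Android 12 device. Rerunning the failing task with extra logging surfaces the real adb install error. A sketch, assuming the standard React Native android folder layout:

# Rerun the failing task with full warnings and a stacktrace.
cd android
./gradlew installDebug --warning-mode all --stacktrace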
React Native : Task :app:installDebug FAILED Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0
I am trying to create a React Native project. I have 2 Android devices: one is running Android 9 and one is running Android 12. My app is installing and running on the device that has Android 9, but it is not running on Android 12. I get the following error: > Task :app:installDebug Installing APK 'app-debug.apk' on '2201117PI - 12' for :app:debug > Task :app:installDebug FAILED Deprecated Gradle features were used in this build, making it incompatible with Gradle 8.0. You can use '--warning-mode all' to show the individual deprecation warnings and determine if they come from your own scripts or plugins.
[ "It appears that you are trying to install and run a React Native app on an Android device, but the app is not running on the device that has Android 12 installed. This is likely because the app is using deprecated Gradle features, which are not compatible with Gradle 8.0 or later.\nTo fix this issue, you can try using the --warning-mode all flag when running the Gradle build command. This will show you the individual deprecation warnings, which will help you determine if the warnings are coming from your own scripts or plugins. You can then update your scripts or plugins to use non-deprecated Gradle features, which should allow your app to run on the Android 12 device.\nAlternatively, you can try using a different version of Gradle that is compatible with the deprecated features used in your app. However, it is generally recommended to use the latest version of Gradle and avoid using deprecated features, as they may be removed in future versions.\n" ]
[ 0 ]
[ "In terminal, enter the android folder with\ncd android\n\nthen run\n.\\gradlew clean\n\nAfter that your app should run.\n" ]
[ -2 ]
[ "android", "react_native" ]
stackoverflow_0074568858_android_react_native.txt
Q: how to use info from .txt file to create variables in python? I'm very new to python, and I'd like to know how I can use the info in a text file to create variables. For example, if the txt file looked like this: vin_brand_type_year_price 2132_BMW_330xi_2016_67000 1234_audi_a4_2019_92000 9876_mclaren_720s_2022_327000 How do I then, for example, use it to make a variable called vin and have all the vin numbers in it? I can have the terminal read it. This is what I have so far: with open('car.txt', 'r') as file: file_content = file.read() print(file_content) Thank you for any help you can provide. A: We can use the index() method to find the index of the "vin" header in the list of header values. This will give us the index of the VIN number in each line of the text file. We can then use this index to extract each VIN: # Create an empty list to store the VIN numbers. vin = [] # Open the text file and read its contents. with open('car.txt', 'r') as file: # Read the first line of the file, which contains the header. header = file.readline() # Split the header on the underscore character. header_values = header.split("_") # Get the index of the "vin" header. vin_index = header_values.index("vin") # Read each line of the file, starting with the second line. for line in file: # Split the line on the underscore character. values = line.split("_") # Get the VIN number, using the index of the "vin" header. vin_number = values[vin_index] # Add the VIN number to the list. vin.append(vin_number) # Print the list of VIN numbers. print(vin) A: There are several ways to do this. The best depends on what you plan to do next. This file will parse with the csv module and you can use csv.reader to iterate all of the lines. To get vin specifically, you could: import csv with open('car.txt', 'r') as file: vin = [row[0] for row in csv.reader(file, delimiter="_")] A: You can slice the strings around '_', get the first part (at index 0) and append it to a list variable: vin = [] with open('car.txt', 'r') as file: content = file.read() for line in content.splitlines(): line = line.strip() if line: vin.append(line.split('_')[0]) vin.pop(0) # this one because I was too cheap to skip the header line :) A: I would use regex to accomplish that. Assuming the file (car.txt) looks like this: vin_brand_type_year_price 2132_BMW_330xi_2016_67000 1234_audi_a4_2019_92000 9876_mclaren_720s_2022_327000 I would use this python script: import re with open('car.txt') as f: data = f.readlines() vin = [] for v in data: if match := re.match(r'(\d+)', v.strip()): vin.append(match.group(0)) print(vin) The r'(\d+)' is a regex for selecting the leading digits of each line. This is to ensure any line in the file that doesn't start with digits will be ignored.
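Building on the csv-based answer above, a sketch that generalizes it: read every column into its own list keyed by the header names, so brand, year, and price become available the same way as vin:

import csv

columns = {}
with open('car.txt', newline='') as file:
    # DictReader uses the first row (vin_brand_type_year_price) as keys.
    for row in csv.DictReader(file, delimiter='_'):
        for key, value in row.items():
            columns.setdefault(key, []).append(value)

vin = columns['vin']
print(vin)  # ['2132', '1234', '9876']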
how to use info from .txt file to create variables in python?
I'm very new to python, and I'd like to know how I can use the info in a text file to create variables. For example, if the txt file looked like this: vin_brand_type_year_price 2132_BMW_330xi_2016_67000 1234_audi_a4_2019_92000 9876_mclaren_720s_2022_327000 How do I then, for example, use it to make a variable called vin and have all the vin numbers in it? I can have the terminal read it. This is what I have so far: with open('car.txt', 'r') as file: file_content = file.read() print(file_content) Thank you for any help you can provide.
[ "We can then use the index() method to find the index of the \"vin\" header in the list of header values. This will give us the index of the VIN number in each line of the text file. We can then use this index to extract\n# Create an empty list to store the VIN numbers.\nvin = []\n\n# Open the text file and read its contents.\nwith open('car.txt', 'r') as file:\n # Read the first line of the file, which contains the header.\n header = file.readline()\n\n # Split the header on the underscore character.\n header_values = header.split(\"_\")\n\n # Get the index of the \"vin\" header.\n vin_index = header_values.index(\"vin\")\n\n # Read each line of the file, starting with the second line.\n for line in file:\n # Split the line on the underscore character.\n values = line.split(\"_\")\n\n # Get the VIN number, using the index of the \"vin\" header.\n vin_number = values[vin_index]\n\n # Add the VIN number to the list.\n vin.append(vin_number)\n\n# Print the list of VIN numbers.\nprint(vin)\n\n", "There are several ways to do this. The best depends on what you plan to do next. This file will parse with the csv module and you can use csv.reader to iterate all of the lines. To get vin specifically, you could\nimport csv\n\nwith open('car.txt', 'r') as file:\n vin = [row[0] for row in csv.reader(file, delimiter=\"_\")]\n\n", "You can slice the strings around '_', get the first part (at index 0) and append it to a list variable:\nvin = []\n\nwith open('car.txt', 'r') as file:\n lines = file.readlines() \nfor line in lines.splitlines():\n line = line.strip()\n if line:\n vin.append(line.split('_')[0])\n \nvin.pop(0) # this one because I was too cheap to skip the header line :)\n\n", "I would use regex to accomplish that. Assuming the file (car.txt) looks like this:\nvin_brand_type_year_price\n2132_BMW_330xi_2016_67000\n1234_audi_a4_2019_92000\n9876_mclaren_720s_2022_327000\n\nI would use this python script:\nimport re\n\nwith open('car.txt') as f:\n data = f.readlines()\n\nvin = []\nfor v in data:\n if match := re.match(r'(\\d+)', v.strip()):\n vin.append(match.group(0))\n\nprint(vin)\n\nthe\n\nr'^(\\d)+'\n\nis a regex for selecting the part of the text that starts with digits. This is to ensure any line in the file that doesn't start with digits will be ignored.\n" ]
[ 1, 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074681417_python.txt
Q: Number of pixels declared for SizedBox's width & height not universal across multiple devices Here's an example of declaring the height & width of a SizedBox: SizedBox(width: 40, height: 40) In some cases (on other devices), I've noticed that sometimes the dimensions overflow and cause errors. In my mind, I thought that declaring by pixel number would be universal across all devices because 200 pixels should be 200 pixels regardless of device screen size. But clearly I was wrong, so is there a quick or easy way to fix my error to make sure the width/height stays universal across all devices? The problem with using MediaQuery.of(context).size.width is that in some instances whenever you have a wide screen or when the user tilts the phone into landscape view, using MediaQuery would make the width or height way too long & would not pass the esthetic test. Thank you for any help! A: Consider using the FractionallySizedBox to set a percentage of the width/height that should be taken from all screens: FractionallySizedBox( heightFactor: 0.2, // will take 20% of the screen height. widthFactor: 0.4, // will take 40% of the screen width. );
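Another option, sketched below under the assumption that the box should scale with the screen but stay within an aesthetic range: derive the size from MediaQuery and clamp it, which addresses the landscape/wide-screen concern in the question (the widget name and bounds are illustrative):

import 'package:flutter/material.dart';

class ClampedBox extends StatelessWidget {
  const ClampedBox({super.key});

  @override
  Widget build(BuildContext context) {
    // 10% of the screen width, but never below 40 or above 80 pixels.
    final side =
        (MediaQuery.of(context).size.width * 0.1).clamp(40.0, 80.0).toDouble();
    return SizedBox(width: side, height: side, child: const FlutterLogo());
  }
}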
Number of pixels declared for SizedBox's width & height not universal across multiple devices
Here's an example of declaring the height & width of a SizedBox: SizedBox(width: 40, height: 40) In some cases (on other devices), I've noticed that sometimes the dimensions overflow and cause errors. In my mind, I thought that declaring by pixel number would be universal across all devices because 200 pixels should be 200 pixels regardless of device screen size. But clearly I was wrong, so is there a quick or easy way to fix my error to make sure the width/height stays universal across all devices? The problem with using MediaQuery.of(context).size.width is that in some instances whenever you have a wide screen or when the user tilts the phone into landscape view, using MediaQuery would make the width or height way too long & would not pass the esthetic test. Thank you for any help!
[ "Consider using the FractionallySizedBox to set a percentage of the width/height that should be taken from all screens:\nFractionallySizedBox(\n heightFactor: 0.2, // will take 20% of the screen height.\n widthFactor: 0.4, // will take 40% of the screen width.\n);\n\n" ]
[ 0 ]
[]
[]
[ "dart", "flutter" ]
stackoverflow_0074681401_dart_flutter.txt
Q: PHP - How can the user set the directory where the downloaded file will be saved? I am making a simple application in which the user uploads files using 2 HTML input:file fields and a zip archive is created from them. Is there any way I can let the user choose where this zip can go? (Something like a "Save As" window). This is my current solution, which saves the archive only to the default download destination without asking. $files = array($_FILES["fileone"]["name"], $_FILES["filetwo"]["name"]); $zipname = "myarchive.zip"; $zip = new ZipArchive; $zip->open($zipname, ZipArchive::CREATE); foreach ($files as $file) { $zip->addFile($file); } $zip->close(); header('Content-Type: application/zip'); header('Content-disposition: attachment; filename='.$zipname); header('Content-Length: ' . filesize($zipname)); header("Pragma: no-cache"); header("Expires: 0"); readfile($zipname); A: To allow the user to choose where to save the zip file, you could use the HTML5 file input element to create a "Save As" dialog box. Here's an example: <form action="upload.php" method="post" enctype="multipart/form-data"> <label for="fileone">Choose the first file to include in the zip archive:</label> <input type="file" id="fileone" name="fileone" /> <br /> <label for="filetwo">Choose the second file to include in the zip archive:</label> <input type="file" id="filetwo" name="filetwo" /> <br /> <label for="zipfile">Save the zip archive as:</label> <input type="file" id="zipfile" name="zipfile" nwworkingdir="download" /> <br /> <input type="submit" value="Create zip archive" /> </form> In the example above, the zipfile input element has a nwworkingdir attribute set to download, which will cause the "Save As" dialog box to default to the user's default download directory. You can then use the zipfile value in your PHP script to save the zip archive to the location specified by the user. Here's an updated version of your PHP script that uses the zipfile value to save the zip archive: $files = array($_FILES["fileone"]["name"], $_FILES["filetwo"]["name"]); $zipname = $_FILES["zipfile"]["name"]; $zip = new ZipArchive; $zip->open($zipname, ZipArchive::CREATE); foreach ($files as $file) { $zip->addFile($file); } $zip->close(); header('Content-Type: application/zip'); header('Content-disposition: attachment; filename='.$zipname); header('Content-Length: ' . filesize($zipname)); header("Pragma: no-cache"); header("Expires: 0"); readfile($zipname); You will need to update your form and PHP script to handle the file uploads and create the zip archive using the values provided by the user. I hope this helps! A: It is not possible for a PHP script to prompt the user for a location to save a file. This kind of functionality is typically handled by the user's web browser. However, you can provide a download link for the file that the user can click on to download the file. When the user clicks on the link, their web browser will prompt them to choose a location to save the file. Here is an example of how you could provide a download link for the zip file that your PHP script creates: $files = array($_FILES["fileone"]["name"], $_FILES["filetwo"]["name"]); $zipname = "myarchive.zip"; $zip = new ZipArchive; $zip->open($zipname, ZipArchive::CREATE); foreach ($files as $file) { $zip->addFile($file); } $zip->close(); // Provide a link for the user to download the zip file echo '<a href="' . $zipname . '">Click here to download the zip file</a>'; When the user clicks on the link, their web browser will prompt them to choose a location to save the file. 
Note: If you want to force the web browser to download the file instead of opening it, you can use the Content-Disposition HTTP header. Here is an example of how you could set this header to force the file to download: header('Content-Disposition: attachment; filename="' . $zipname . '"'); This will tell the web browser to download the file and save it with the specified filename, instead of opening it.
PHP - How can the user set the directory where the downloaded file will be saved?
I am making a simple application in which the user uploads files using 2 HTML input:file fields and a zip archive is created from them. Is there any way I can let the user choose where this zip can go? (Something like a "Save As" window). This is my current solution, which saves the archive only to the default download destination without asking. $files = array($_FILES["fileone"]["name"], $_FILES["filetwo"]["name"]); $zipname = "myarchive.zip"; $zip = new ZipArchive; $zip->open($zipname, ZipArchive::CREATE); foreach ($files as $file) { $zip->addFile($file); } $zip->close(); header('Content-Type: application/zip'); header('Content-disposition: attachment; filename='.$zipname); header('Content-Length: ' . filesize($zipname)); header("Pragma: no-cache"); header("Expires: 0"); readfile($zipname);
[ "To allow the user to choose where to save the zip file, you could use the HTML5 element to create a \"Save As\" dialog box. Here's an example:\n<form action=\"upload.php\" method=\"post\" enctype=\"multipart/form-data\">\n <label for=\"fileone\">Choose the first file to include in the zip archive:</label>\n <input type=\"file\" id=\"fileone\" name=\"fileone\" />\n <br />\n <label for=\"filetwo\">Choose the second file to include in the zip archive:</label>\n <input type=\"file\" id=\"filetwo\" name=\"filetwo\" />\n <br />\n <label for=\"zipfile\">Save the zip archive as:</label>\n <input type=\"file\" id=\"zipfile\" name=\"zipfile\" nwworkingdir=\"download\" />\n <br />\n <input type=\"submit\" value=\"Create zip archive\" />\n</form>\n\nIn the example above, the zipfile input element has a nwworkingdir attribute set to download, which will cause the \"Save As\" dialog box to default to the user's default download directory. You can then use the zipfile value in your PHP script to save the zip archive to the location specified by the user.\nHere's an updated version of your PHP script that uses the zipfile value to save the zip archive:\n$files = array($_FILES[\"fileone\"][\"name\"], $_FILES[\"filetwo\"][\"name\"]);\n$zipname = $_FILES[\"zipfile\"][\"name\"];\n$zip = new ZipArchive;\n$zip->open($zipname, ZipArchive::CREATE);\nforeach ($files as $file) {\n $zip->addFile($file);\n}\n$zip->close();\n\nheader('Content-Type: application/zip');\nheader('Content-disposition: attachment; filename='.$zipname);\nheader('Content-Length: ' . filesize($zipname));\nheader(\"Pragma: no-cache\"); \nheader(\"Expires: 0\"); \nreadfile($zipname);\n\nYou will need to update your form and PHP script to handle the file uploads and create the zip archive using the values provided by the user. I hope this helps!\n", "It is not possible for a PHP script to prompt the user for a location to save a file. This kind of functionality is typically handled by the user's web browser.\nHowever, you can provide a download link for the file that the user can click on to download the file. When the user clicks on the link, their web browser will prompt them to choose a location to save the file.\nHere is an example of how you could provide a download link for the zip file that your PHP script creates:\n$files = array($_FILES[\"fileone\"][\"name\"], $_FILES[\"filetwo\"][\"name\"]);\n$zipname = \"myarchive.zip\";\n$zip = new ZipArchive;\n$zip->open($zipname, ZipArchive::CREATE);\nforeach ($files as $file) {\n $zip->addFile($file);\n}\n$zip->close();\n\n// Provide a link for the user to download the zip file\necho '<a href=\"' . $zipname . '\">Click here to download the zip file</a>';\n\nWhen the user clicks on the link, their web browser will prompt them to choose a location to save the file.\nNote: If you want to force the web browser to download the file instead of opening it, you can use the Content-Disposition HTTP header. Here is an example of how you could set this header to force the file to download:\nheader('Content-Disposition: attachment; filename=\"' . $zipname . '\"');\n\nThis will tell the web browser to download the file and save it with the specified filename, instead of opening it.\n" ]
[ 0, 0 ]
[]
[]
[ "php" ]
stackoverflow_0074681503_php.txt
Q: How to change Keycloak logo on the Admin console page in keycloak.v2 theme I am trying to find a way to replace Keycloak image on the Admin console page using Keycloak.v2 theme which is the default theme starting from Keycloak 19. Note that replacing themes\keycloak.v2\account\resources\public\logo.svg didn't really help. A: The new admin UI (starting from keycloak 19) has been delivered through keycloak-admin-ui.jar. In order to customize any UI components, it has to be done by forking this repo and build on your own. A: In Keycloak, one of the ways you can change the Keycloak logo is by overriding a theme. The benefit of doing it this way over forking and building the entire keycloak-admin-ui repo is you can control and focus only on customizing the components you want, cutting down the size of your new theme and reducing unnecessary duplication. For your specific use-case (tested in Keycloak 20.0.1), I did the following to change the Keycloak Logo on the Admin Console Page: Per the Theme Guide for Keycloak, custom themes can be added to keycloak by placing them into /opt/keycloak/themes. After Keycloak starts the theme can then be selected in the Realm Settings. Therefore, I created a new folder in /opt/keycloak/themes for my custom theme, called myCustomTheme. The new folder will contain the theme definition for different parts of Keycloak. Since we only care about changing the logo in the Admin Console, we create a folder in /opt/keycloak/theme/myCustomTheme for overriding the Admin Console Theme. Per the defined set of theme types, this folder should be called admin. This is so that when you are selecting themes in Realm Settings within the Admin Console, the MyCustomTheme option will be listed under the Admin Console section (see images below). Inside of /opt/keycloak/theme/myCustomTheme/admin is where the theme overriding begins. A configuration file called theme.properties should be created. This file is the first thing read by Keycloak when loading your theme and contains information about the theme environment. For more detailed information, see the description of Theme Properties. Since we are overriding the keycloak.v2 theme, we add the parent field to the properties file, specifying the base theme we are inheriting from. We set it to keycloak.v2, so that myCustomTheme will inherit the keycloak.v2 theme for all of its components unless we are overriding something specific. parent=keycloak.v2 This next step required a bit of exploration and trial-and-error of how the default keycloak.v2 theme is structured, but I found that the place where the Keycloak logo is defined for keycloak.v2 is in keycloak.v2/admin/resources/logo.svg. Therefore, for myCustomTheme, all one would have to do to use a custom logo that overrides the default keycloak.v2 one is add a resources folder to /opt/keycloak/theme/myCustomTheme/admin and add the custom SVG image as a file called logo.svg. Note that this is different than what the Keycloak Docs suggest, which has you creating the same resources folder but also an img folder inside of it which would contain your images. I suspect this has to do with the fact that we are overriding an image instead of adding one, and the keycloak.v2 theme code does not follow its own guide and instead places its logo in resources instead of resources/img. Start Keycloak, go to the Admin Console, sign in and go to Realm Settings > Themes > Admin Console theme, and select myCustomTheme. Refresh and you should see your icon change now. 
Below are the results of doing the above, showing my resulting folder structure and the Before/After of replacing the Keycloak logo with some random SVG I got from the public domain: Folder structure: https://ibb.co/h9kZqb3 Before/After: https://ibb.co/cJ6t434
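Condensing the steps above into a directory sketch (reconstructed from the description, since the screenshots are only linked):

/opt/keycloak/themes/
└── myCustomTheme/
    └── admin/
        ├── theme.properties   (contains: parent=keycloak.v2)
        └── resources/
            └── logo.svg       (the replacement logo)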
How to change Keycloak logo on the Admin console page in keycloak.v2 theme
I am trying to find a way to replace Keycloak image on the Admin console page using Keycloak.v2 theme which is the default theme starting from Keycloak 19. Note that replacing themes\keycloak.v2\account\resources\public\logo.svg didn't really help.
[ "The new admin UI (starting from keycloak 19) has been delivered through keycloak-admin-ui.jar. In order to customize any UI components, it has to be done by forking this repo and build on your own.\n", "In Keycloak, one of the ways you can change the Keycloak logo is by overriding a theme. The benefit of doing it this way over forking and building the entire keycloak-admin-ui repo is you can control and focus only on customizing the components you want, cutting down the size of your new theme and reducing unnecessary duplication.\nFor your specific use-case (tested in Keycloak 20.0.1), I did the following to change the Keycloak Logo on the Admin Console Page:\n\nPer the Theme Guide for Keycloak, custom themes can be added to keycloak by placing them into /opt/keycloak/themes. After Keycloak starts the theme can then be selected in the Realm Settings. Therefore, I created a new folder in /opt/keycloak/themes for my custom theme, called myCustomTheme.\n\nThe new folder will contain the theme definition for different parts of Keycloak. Since we only care about changing the logo in the Admin Console, we create a folder in /opt/keycloak/theme/myCustomTheme for overriding the Admin Console Theme. Per the defined set of theme types, this folder should be called admin. This is so that when you are selecting themes in Realm Settings within the Admin Console, the MyCustomTheme option will be listed under the Admin Console section (see images below).\n\nInside of /opt/keycloak/theme/myCustomTheme/admin is where the theme overriding begins. A configuration file called theme.properties should be created. This file is the first thing read by Keycloak when loading your theme and contains information about the theme environment. For more detailed information, see the description of Theme Properties.\n\nSince we are overriding the keycloak.v2 theme, we add the parent field to the properties file, specifying the base theme we are inheriting from. We set it to keycloak.v2, so that myCustomTheme will inherit the keycloak.v2 theme for all of its components unless we are overriding something specific.\nparent=keycloak.v2\n\n\n\nThis next step required a bit of exploration and trial-and-error of how the default keycloak.v2 theme is structured, but I found that the place where the Keycloak logo is defined for keycloak.v2 is in keycloak.v2/admin/resources/logo.svg. Therefore, for myCustomTheme, all one would have to do to use a custom logo that overrides the default keycloak.v2 one is add a resources folder to /opt/keycloak/theme/myCustomTheme/admin and add the custom SVG image as a file called logo.svg.\n\nNote that this is different than what the Keycloak Docs suggest, which has you creating the same resources folder but also an img folder inside of it which would contain your images. I suspect this has to do with the fact that we are overriding an image instead of adding one, and the keycloak.v2 theme code does not follow its own guide and instead places its logo in resources instead of resources/img.\n\n\nStart Keycloak, go to the Admin Console, sign in and go to Realm Settings > Themes > Admin Console theme, and select myCustomTheme. Refresh and you should see your icon change now.\n\n\nBelow is the results of doing the above, showing my resulting folder structure and the Before/After of replacing the Keycloak logo with some random SVG I got from public domain:\n\nFolder structure: https://ibb.co/h9kZqb3\nBefore/After: https://ibb.co/cJ6t434\n\n" ]
[ 0, 0 ]
[]
[]
[ "keycloak", "keycloak_services", "reactjs" ]
stackoverflow_0074607048_keycloak_keycloak_services_reactjs.txt
Q: Laravel CSP how to allow inline styles in laravel-mix webpack I am using the spatie/laravel-csp package to set CSP headers on a Laravel application. But the inline style could not be applied and was refused with the following error. Refused to apply inline style because it violates the following Content Security Policy directive: "style-src 'self' 'nonce-TjpZGox5zGathTvJDeVMfxzHaOtWMc7v' 'unsafe-inline'". further detail. I am using laravel 7. /spatie/laravel-csp 2.8 "laravel-mix": "^5.0.1", and inertia with Vue 2 What I have tried so far: specify webpack_nonce @ entry file (main.js) webpack documentation Using inline scripts and styles as in spatie/laravel-csp documentation A: To allow inline styles in your Laravel application using the spatie/laravel-csp package, you need to modify the style-src directive in your Content Security Policy (CSP) to include the 'unsafe-inline' keyword. Here is an example of how your CSP header might look with the 'unsafe-inline' keyword added to the style-src directive: Content-Security-Policy: default-src 'self'; style-src 'self' 'unsafe-inline'; Keep in mind that using the 'unsafe-inline' keyword can open up your application to certain vulnerabilities, so it should be used with caution. It is generally recommended to avoid using inline styles in your application whenever possible, and to instead use external stylesheets.
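One detail worth noting: the error in the question already shows 'unsafe-inline' in style-src, yet the style is still refused. That is expected CSP behavior: when a nonce (or hash) appears in a directive, browsers ignore 'unsafe-inline' in that same directive, so the style nonce has to be dropped as well. A sketch of a custom spatie/laravel-csp policy under that assumption (class name and namespace are illustrative; verify the Policy API against your installed package version):

use Spatie\Csp\Directive;
use Spatie\Csp\Keyword;
use Spatie\Csp\Policies\Policy;

class InlineStylePolicy extends Policy
{
    public function configure()
    {
        $this
            ->addDirective(Directive::DEFAULT, Keyword::SELF)
            ->addNonceForDirective(Directive::SCRIPT)
            // No nonce on STYLE, otherwise 'unsafe-inline' is ignored.
            ->addDirective(Directive::STYLE, [Keyword::SELF, Keyword::UNSAFE_INLINE]);
    }
}

// Then point config/csp.php at it:
// 'policy' => App\Support\InlineStylePolicy::class,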
Laravel CSP how to allow inline styles in laravel-mix webpack
I am using the spatie/laravel-csp package to set CSP headers on a Laravel application. But the inline style could not be applied and was refused with the following error. Refused to apply inline style because it violates the following Content Security Policy directive: "style-src 'self' 'nonce-TjpZGox5zGathTvJDeVMfxzHaOtWMc7v' 'unsafe-inline'". further detail. I am using laravel 7. /spatie/laravel-csp 2.8 "laravel-mix": "^5.0.1", and inertia with Vue 2 What I have tried so far: specify webpack_nonce @ entry file (main.js) webpack documentation Using inline scripts and styles as in spatie/laravel-csp documentation
[ "To allow inline styles in your Laravel application using the spatie/laravel-csp package, you need to modify the style-src directive in your Content Security Policy (CSP) to include the 'unsafe-inline' keyword.\nHere is an example of how your CSP header might look with the 'unsafe-inline' keyword added to the style-src directive:\nContent-Security-Policy: default-src 'self'; style-src 'self' 'unsafe-inline';\n\nKeep in mind that using the 'unsafe-inline' keyword can open up your application to certain vulnerabilities, so it should be used with caution. It is generally recommended to avoid using inline styles in your application whenever possible, and to instead use external stylesheets.\n" ]
[ 0 ]
[]
[]
[ "laravel", "laravel_mix", "webpack" ]
stackoverflow_0073741808_laravel_laravel_mix_webpack.txt
Q: Using PIL module to open file from GCS I am a beginner in programming, and this is my first little try. I'm currently facing a bottleneck; I would like to ask for help. Any advice will be welcome. Thank you in advance! Here is what I want to do: To make a text detection application and extract the text for further usage (for instance, to map some of the other relevant information in a data set). So, I divided it into two steps: 1. First, detect the text. 2. Extract the text and use a regular expression to rearrange it for the data mapping. For the first step, I use the Google Vision API, so I have no problem reading the image from Google Cloud Storage (code reference 1). However, when it comes to step two, I need the PIL module to open the file for drawing the text. When using the method Image.open(), it requires a path. My question is how do I call the path? (code reference 2): code reference 1: from google.cloud import vision image_uri = 'gs://img_platecapture/img_001.jpg' client = vision.ImageAnnotatorClient() image = vision.Image() image.source.image_uri = image_uri ## <- THE PATH ## response = client.text_detection(image=image) for text in response.text_annotations: print('=' * 30) print(text.description) vertices = ['(%s,%s)' % (v.x, v.y) for v in text.bounding_poly.vertices] print('bounds:', ",".join(vertices)) if response.error.message: raise Exception( '{}\nFor more info on error messages, check: ' 'https://cloud.google.com/apis/design/errors'.format( response.error.message)) code reference 2: from PIL import Image, ImageDraw from PIL import ImageFont import re img = Image.open(?) <- THE PATH ## draw = ImageDraw.Draw(img) font = ImageFont.truetype("simsun.ttc", 18) for text in response.text_annotations[1::]: ocr = text.description bound=text.bounding_poly draw.text((bound.vertices[0].x-25, bound.vertices[0].y-25),ocr,fill=(255,0,0),font=font) draw.polygon( [ bound.vertices[0].x, bound.vertices[0].y, bound.vertices[1].x, bound.vertices[1].y, bound.vertices[2].x, bound.vertices[2].y, bound.vertices[3].x, bound.vertices[3].y, ], None, 'yellow', ) texts=response.text_annotations a=str(texts[0].description.split()) b=re.sub(u"([^\u4e00-\u9fa5\u0030-u0039])","",a) b1="".join(b) regex1 = re.search(r"\D{1,2}Dist.",b) if regex1: regex1="{}".format(regex1.group(0)) ......... A: PIL does not have built-in ability to automatically open files from GCS. You will need to either Download the file to local storage and point PIL to that file or Give PIL a BlobReader which it can use to access the data: from PIL import Image from google.cloud import storage storage_client = storage.Client() bucket = storage_client.bucket('img_platecapture') blob = bucket.get_blob('img_001.jpg') # use get_blob to fix generation number, so we don't get corruption if blob is overwritten while we read it. with blob.open() as file: img = Image.open(file) # ...
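Extending the BlobReader answer above, a round-trip sketch: read the image from the bucket, draw on it, and write the annotated copy back to GCS. The output object name and drawn text are illustrative, and Blob.open() with 'wb' assumes a recent google-cloud-storage release:

from PIL import Image, ImageDraw
from google.cloud import storage

storage_client = storage.Client()
bucket = storage_client.bucket('img_platecapture')

with bucket.get_blob('img_001.jpg').open('rb') as f:
    img = Image.open(f)
    img.load()  # force a full read before the file handle closes

draw = ImageDraw.Draw(img)
draw.text((25, 25), 'OCR result here', fill=(255, 0, 0))

with bucket.blob('img_001_annotated.jpg').open('wb') as f:
    img.save(f, format='JPEG')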
Using PIL module to open file from GCS
I am a beginner in programming, and this is my first little try. I'm currently facing a bottleneck; I would like to ask for help. Any advice will be welcome. Thank you in advance! Here is what I want to do: To make a text detection application and extract the text for further usage (for instance, to map some of the other relevant information in a data set). So, I divided it into two steps: 1. First, detect the text. 2. Extract the text and use a regular expression to rearrange it for the data mapping. For the first step, I use the Google Vision API, so I have no problem reading the image from Google Cloud Storage (code reference 1). However, when it comes to step two, I need the PIL module to open the file for drawing the text. When using the method Image.open(), it requires a path. My question is how do I call the path? (code reference 2): code reference 1: from google.cloud import vision image_uri = 'gs://img_platecapture/img_001.jpg' client = vision.ImageAnnotatorClient() image = vision.Image() image.source.image_uri = image_uri ## <- THE PATH ## response = client.text_detection(image=image) for text in response.text_annotations: print('=' * 30) print(text.description) vertices = ['(%s,%s)' % (v.x, v.y) for v in text.bounding_poly.vertices] print('bounds:', ",".join(vertices)) if response.error.message: raise Exception( '{}\nFor more info on error messages, check: ' 'https://cloud.google.com/apis/design/errors'.format( response.error.message)) code reference 2: from PIL import Image, ImageDraw from PIL import ImageFont import re img = Image.open(?) <- THE PATH ## draw = ImageDraw.Draw(img) font = ImageFont.truetype("simsun.ttc", 18) for text in response.text_annotations[1::]: ocr = text.description bound=text.bounding_poly draw.text((bound.vertices[0].x-25, bound.vertices[0].y-25),ocr,fill=(255,0,0),font=font) draw.polygon( [ bound.vertices[0].x, bound.vertices[0].y, bound.vertices[1].x, bound.vertices[1].y, bound.vertices[2].x, bound.vertices[2].y, bound.vertices[3].x, bound.vertices[3].y, ], None, 'yellow', ) texts=response.text_annotations a=str(texts[0].description.split()) b=re.sub(u"([^\u4e00-\u9fa5\u0030-u0039])","",a) b1="".join(b) regex1 = re.search(r"\D{1,2}Dist.",b) if regex1: regex1="{}".format(regex1.group(0)) .........
[ "PIL does not have built in ability to automatically open files from GCS. you will need to either\n\nDownload the file to local storage and point PIL to that file or\n\nGive PIL a BlobReader which it can use to access the data:\nfrom PIL import Image\nfrom google.cloud import storage\n\nstorage_client = storage.Client()\nbucket = storage_client.bucket('img_platecapture')\nblob = bucket.get_blob('img_001.jpg') # use get_blob to fix generation number, so we don't get corruption if blob is overwritten while we read it.\nwith blob.open() as file:\n img = Image.open(file)\n # ...\n\n\n\n" ]
[ 0 ]
[]
[]
[ "gcs", "google_cloud_storage", "path", "python", "python_imaging_library" ]
stackoverflow_0074678150_gcs_google_cloud_storage_path_python_python_imaging_library.txt
Q: How to send a value as parameter to the Factory Class I need to run a Factory 50 times, so inside the DatabaseSeeder: public function run() { for($i=1;$i<=50;$i++){ (new CategoryQuestionFactory($i))->create(); } } So as you can see, I tried passing a variable called $i as a parameter to the CategoryQuestionFactory class. Then at this Factory, I tried this: class CategoryQuestionFactory extends Factory { protected $counter; public function __construct($c) { $this->counter = $c; } /** * Define the model's default state. * * @return array<string, mixed> */ public function definition() { $question = Question::find($this->counter); return [ 'category_id' => $this->faker->numberBetween(1,22), 'question_id' => $question->id ]; } } But when I run php artisan db:seed at Terminal, I get this error: Call to a member function pipe() on null at C:\xampp\htdocs\forum\root\vendor\laravel\framework\src\Illuminate\Database\Eloquent\Factories\Factory.php:429 So what's going wrong here? How can I properly send a value as a parameter to the Factory Class? Also, at the IDE for the __construct method of this Factory, I get this message: UPDATE #1: Here is the capture of error at IDE: A: It seems to me that you want to seed the intermediate table. There are methods that can be used when seeding it; one of them is has(), which is the one I always use. /** * will create a one question and 3 category then create a data in the intermediate table. * expected data : * question_id | category_id * 1 1 * 1 2 * 1 3 */ Question::factory()->has( Category::factory()->count(3) )->create(); So let's say you want to create 100 questions and 5 categories /** * will create a 100 question and 5 category then create a data in the intermediate table. * expected data : * question_id | category_id * 1 1 * 1 2 * 1 3 * 1 4 * 1 5 * 2 1 * 2 2 * 2 3 * 2 4 * 2 5 * until the 100th question will have a 5 categories */ Question::factory(100)->has( Category::factory()->count(5) )->create(); A: Don't forget to call parent::__construct() in the constructor of your CategoryQuestionFactory factory. Your CategoryQuestionFactory is supposed to extend Laravel's standard Factory. Failing to call the parent constructor in a child class breaks the code. A: In Laravel it's better to associate with the model. So instead of doing this $question = Question::find($this->counter); return [ 'category_id' => $this->faker->numberBetween(1,22), 'question_id' => $question->id ]; you can do this (then you don't have to pass the $i) return [ 'category_id' => $this->faker->numberBetween(1,22), 'question_id' => Question::factory(), ]; A: I've generated the model via PhpStorm: namespace App\Models; use Illuminate\Database\Eloquent\Factories\Factory; use Illuminate\Support\Collection; class CategoryQuestionFactory extends Factory { public function __construct($count = null, ?Collection $states = null, ?Collection $has = null, ?Collection $for = null, ?Collection $afterMaking = null, ?Collection $afterCreating = null, $connection = null, ?Collection $recycle = null) { parent::__construct($count, $states, $has, $for, $afterMaking, $afterCreating, $connection, $recycle); } public function definition() { $question = Question::find($this->counter); return [ 'category_id' => $this->faker->numberBetween(1,22), 'question_id' => $question->id ]; } } It should work fine. I checked. You should call parent::__construct. Like constructors, parent destructors will not be called implicitly by the engine. 
In order to run a parent destructor, one would have to explicitly call parent::__destruct() in the destructor body. Also like constructors, a child class may inherit the parent's destructor if it does not implement one itself. A: The error you're seeing occurs when the $this->faker property is null. This happens when the $faker property is not being set in the CategoryQuestionFactory class. You can fix this error by setting the $faker property in the constructor of the CategoryQuestionFactory class, like this: class CategoryQuestionFactory extends Factory { protected $counter; public function __construct($c) { $this->counter = $c; $this->faker = Faker\Factory::create(); } /** * Define the model's default state. * * @return array<string, mixed> */ public function definition() { $question = Question::find($this->counter); return [ 'category_id' => $this->faker->numberBetween(1,22), 'question_id' => $question->id ]; } } Alternatively, you could also add a call to the setFaker method in the CategoryQuestionFactory class, like this: class CategoryQuestionFactory extends Factory { protected $counter; public function __construct($c) { $this->counter = $c; } /** * Define the model's default state. * * @return array<string, mixed> */ public function definition() { $this->setFaker(Faker\Factory::create()); $question = Question::find($this->counter); return [ 'category_id' => $this->faker->numberBetween(1,22), 'question_id' => $question->id ]; } } Either of these changes should fix the error you're seeing.
How to send a value as parameter to the Factory Class
I need to run a Factory 50 times, so inside the DatabaseSeeder: public function run() { for($i=1;$i<=50;$i++){ (new CategoryQuestionFactory($i))->create(); } } So as you can see, I tried passing a variable called $i as a parameter to the CategoryQuestionFactory class. Then at this Factory, I tried this: class CategoryQuestionFactory extends Factory { protected $counter; public function __construct($c) { $this->counter = $c; } /** * Define the model's default state. * * @return array<string, mixed> */ public function definition() { $question = Question::find($this->counter); return [ 'category_id' => $this->faker->numberBetween(1,22), 'question_id' => $question->id ]; } } But when I run php artisan db:seed at Terminal, I get this error: Call to a member function pipe() on null at C:\xampp\htdocs\forum\root\vendor\laravel\framework\src\Illuminate\Database\Eloquent\Factories\Factory.php:429 So what's going wrong here? How can I properly send a value as a parameter to the Factory Class? Also, at the IDE for the __construct method of this Factory, I get this message: UPDATE #1: Here is the capture of error at IDE:
[ "It seems to me that you want to seed the intermediate table. There are methods that can be use when seeding them one of them is has() which is the one i always use.\n/**\n* will create a one question and 3 category then create a data in the intermediate table. \n* expected data : \n* question_id | category_id\n* 1 1\n* 1 2\n* 1 3\n*/\nQuestion::factory()->has(\n Category::factory()->count(3)\n)->create();\n\nSo let's say you want to create a 100 question and 5 categories\n/**\n* will create a 100 question and 5 category then create a data in the intermediate table. \n* expected data : \n* question_id | category_id\n* 1 1\n* 1 2\n* 1 3\n* 1 4\n* 1 5\n* 2 1\n* 2 2\n* 2 3\n* 2 4\n* 2 5\n* until the 100th question will have a 5 categories\n*/\nQuestion::factory(100)->has(\n Category::factory()->count(5)\n)->create();\n\n", "Don't forget to call parent::__construct() in the constructor of your CategoryQuestionFactory factory.\nYour CategoryQuestionFactory is supposed to extends Laravel standard Factory. Missing to call the parent constructor on a child class breaks the code.\n", "In laravel its better to associate with the model, So instead of doing this\n$question = Question::find($this->counter);\n\nreturn [\n 'category_id' => $this->faker->numberBetween(1,22),\n 'question_id' => $question->id\n];\n\nyou can do this (then you dont have to passs the $i)\nreturn [\n 'category_id' => $this->faker->numberBetween(1,22),\n 'question_id' => Question::factory(),\n];\n\n", "I've generated the model via PhpStrom:\nnamespace App\\Models;\n\nuse Illuminate\\Database\\Eloquent\\Factories\\Factory;\nuse Illuminate\\Support\\Collection;\n\nclass CategoryQuestionFactory extends Factory\n{\n public function __construct($count = null, ?Collection $states = null, ?Collection $has = null, ?Collection $for = null, ?Collection $afterMaking = null, ?Collection $afterCreating = null, $connection = null, ?Collection $recycle = null)\n {\n parent::__construct($count, $states, $has, $for, $afterMaking, $afterCreating, $connection, $recycle);\n }\n\n public function definition()\n {\n $question = Question::find($this->counter);\n\n return [\n 'category_id' => $this->faker->numberBetween(1,22),\n 'question_id' => $question->id\n ];\n }\n}\n\nIt's should work fine. I checked. You should call parent::__construct.\n\nLike constructors, parent destructors will not be called implicitly by the engine. In order to run a parent destructor, one would have to explicitly call parent::__destruct() in the destructor body. Also like constructors, a child class may inherit the parent's destructor if it does not implement one itself.\n\n", "The error you're seeing occurs when the $this->faker property is null. 
This happens when the $faker property is not being set in the CategoryQuestionFactory class.\nYou can fix this error by setting the $faker property in the constructor of the CategoryQuestionFactory class, like this:\nclass CategoryQuestionFactory extends Factory\n{\n protected $counter;\n\n public function __construct($c)\n {\n $this->counter = $c;\n $this->faker = Faker\\Factory::create();\n }\n /**\n * Define the model's default state.\n *\n * @return array<string, mixed>\n */\n public function definition()\n {\n $question = Question::find($this->counter);\n\n return [\n 'category_id' => $this->faker->numberBetween(1,22),\n 'question_id' => $question->id\n ];\n }\n}\n\nAlternatively, you could also add a call to the setFaker method in the CategoryQuestionFactory class, like this:\nclass CategoryQuestionFactory extends Factory\n{\n protected $counter;\n\n public function __construct($c)\n {\n $this->counter = $c;\n }\n /**\n * Define the model's default state.\n *\n * @return array<string, mixed>\n */\n public function definition()\n {\n $this->setFaker(Faker\\Factory::create());\n\n $question = Question::find($this->counter);\n\n return [\n 'category_id' => $this->faker->numberBetween(1,22),\n 'question_id' => $question->id\n ];\n }\n}\n\nEither of these changes should fix the error you're seeing.\n" ]
[ 1, 1, 0, 0, 0 ]
[]
[]
[ "laravel", "laravel_9", "laravel_factory", "laravel_seeding", "php" ]
stackoverflow_0074411525_laravel_laravel_9_laravel_factory_laravel_seeding_php.txt
Q: Flutter - Generate .ipa file The official Flutter documentation says that the following command produces both ipa and xcarchive files. flutter build ipa From the Flutter documentation to generate ipa: Run flutter build ipa to produce an Xcode build archive (.xcarchive file) in your project’s build/ios/archive/ directory and an App Store app bundle (.ipa file) in build/ios/ipa. However, the command is generating only the .xcarchive file. How can we generate the .ipa file? It looks like we can generate it from Xcode's Export, but I am looking for a command-line way to generate the .ipa file to integrate into a CI/CD solution. A: You can generate the .ipa file for distribution using Xcode via the following steps 1. Open the iOS folder of your project in Xcode, then Product -> Archive. Once this is complete, open up the Organiser and click the latest version. 3. Now click on Distribute App. This will open a list of methods for export. Select the export method as per your requirement (in your case I think you want to distribute the app for testing), so select the Development option and click on the Next button. 4. Now it will ask about app thinning and re-signing for development distribution; select Automatically manage signing. It takes some time to generate the .ipa file, after which you can export the .ipa file to your desired location. A: When you run flutter build ipa, it generates a Runner.xcarchive and an IPA. Output of flutter build: Xcode archive done. 65.6s Built /Users/user/repos/app_name/build/ios/archive/Runner.xcarchive. Building App Store IPA... 64.5s Built IPA to /Users/user/repos/app_name/build/ios/ipa. The generated file is called app_name.ipa.
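For CI/CD, the build can stay entirely on the command line. A short sketch; the --export-method flag exists in recent Flutter releases, so verify it against flutter build ipa -h for your version:

# Produces build/ios/archive/Runner.xcarchive and build/ios/ipa/<app>.ipa
flutter build ipa

# Non-App Store distribution, e.g. for testers:
flutter build ipa --export-method ad-hoc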
Flutter - Generate .ipa file
The official Flutter documentation says that the following command produces both ipa and xcarchive files. flutter build ipa From the Flutter documentation to generate ipa: Run flutter build ipa to produce an Xcode build archive (.xcarchive file) in your project’s build/ios/archive/ directory and an App Store app bundle (.ipa file) in build/ios/ipa. However, the command is generating only the .xcarchive file. How can we generate the .ipa file? It looks like we can generate it from Xcode's Export, but I am looking for a command-line way to generate the .ipa file to integrate into a CI/CD solution.
[ "You can generate .ipa file for distribution using Xcode via following steps\n1.Open iOS folder of your project in Xcode\n\nthen Product -> Archive\nIt once this is complete open up the Organiser and click the latest version.\n\n\n3.Now click on Distribute App This will open list of method for export. Select the export method as per your requirement (In your case i think you want to distribute app for testing) so select Development option and click on Next button.\n\n4.Now it will ask for app thining and re-signed for development distribution select Automatically managing signing it take some time to generate .ipa file after that you can export .ipa file in your desire location.\n\n", "When you run flutter build ipa, it generates a Runner.xcarchive and an IPA.\nOutput of flutter build:\nXcode archive done. 65.6s\nBuilt /Users/user/repos/app_name/build/ios/archive/Runner.xcarchive.\nBuilding App Store IPA... 64.5s\nBuilt IPA to /Users/user/repos/app_name/build/ios/ipa.\n\nThe generated file is called app_name.ipa.\n" ]
[ 0, 0 ]
[]
[]
[ "flutter", "ios", "ipa" ]
stackoverflow_0072947376_flutter_ios_ipa.txt
Q: Why is the VS Code terminal window not showing the react variant? PS C:\Users\shakhawat.hossain07\Desktop\nasaspace_app_challenge> npm create vite@latest √ Project name: ... nasaspace_app_challenge √ Select a framework: » React ? Select a variant: » - Use arrow-keys. Return to submit. > JavaScript TypeScript Why is it not showing like this?: ? Select a variant: » - Use arrow-keys. Return to submit. > react react-ts Please help me resolve this problem. I have installed Node.js and run the npm create vite@latest command in the terminal window, but the variant list does not show react; only the framework list shows React. A: It might have changed with an update. Anyways, "react" is "Javascript" and "react-ts" would be "Typescript" in the variant prompt
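If the goal is just to scaffold without the interactive menus, create-vite also accepts the template by name (the project name below is a placeholder; the extra -- passes the flag through npm):

npm create vite@latest my-app -- --template react

# TypeScript variant:
npm create vite@latest my-app -- --template react-ts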
Why is the VS Code terminal window not showing the react variant?
PS C:\Users\shakhawat.hossain07\Desktop\nasaspace_app_challenge> npm create vite@latest √ Project name: ... nasaspace_app_challenge √ Select a framework: » React ? Select a variant: » - Use arrow-keys. Return to submit. > JavaScript TypeScript Why is it not showing like this?: ? Select a variant: » - Use arrow-keys. Return to submit. > react react-ts Please help me resolve this problem. I have installed Node.js and run the npm create vite@latest command in the terminal window, but the variant list does not show react; only the framework list shows React.
[ "It might have changed with an update.\nAnyways, \"react\" is \"Javascript\" and \"react-ts\" would be \"Typescript\" in the first prompt\n" ]
[ 0 ]
[]
[]
[ "reactjs", "vite", "web_site_project" ]
stackoverflow_0074668537_reactjs_vite_web_site_project.txt
Q: Why can't you access an element of a list at an index without previously using the Add method? Why doesn't this program work if the list has been made with a set size? List<int> list = new List<int>(2); list[0] = 1; Console.WriteLine(list[0]) This is the error: System.ArgumentOutOfRangeException: 'Index was out of range. Must be non-negative and less than the size of the collection. Parameter name: index A: Because the list is empty. There are no elements yet, so there is no valid index range. A: I think a bit more explanation is in order. You are thinking that the "2" parameter in the List constructor creates a list of size 2. It does not. It creates an empty list with a capacity of 2. What that means effectively is that it pre-allocates enough space for two elements in the list so that as you add more and more elements it doesn't have to allocate new space. However, the list is still empty, it just has some unused slots. The way this works is that the list has a certain capacity at the start, in your case two, so two "slots" are allocated for the elements, but the list is still empty. You add an element and it uses the first slot, you add another element and it takes the next slot, you add a third element, now it is out of slots, so it allocates more memory (4 slots for example), copies over the previous two into the first two slots of the new block, and now you have a capacity of 4 but a list of length 2. Many years ago I had a problem with a production system that took forever to start up (though this was in C++.) In the loading phase we were loading stuff from a database into a memory list. It contained about 100,000 items. However the capacity was just 10. So it filled the first ten; when they were filled, it allocated 20 slots, copied over the first ten and released that memory (the algorithm was that it doubled the capacity when it ran out), then 40, 80 and so forth. You can see that it CONSTANTLY was allocating slots and copying the same memory bytes over and over and over again in an exponential fashion. We set the capacity correctly at the beginning and it instantly worked hugely faster. So when you are making a really big list which you are adding to piecemeal, it often is a good idea to set the capacity at the beginning if you can. (FWIW, in C++ this is not only slow but causes horrible memory fragmentation, which is much less of an issue in a garbage collected language like C#.)
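A short C# demonstration of the capacity-versus-count distinction described above, including one common way to pre-fill a list so the indexer works immediately:

using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        var list = new List<int>(2);        // capacity 2, but Count is still 0
        Console.WriteLine($"Capacity={list.Capacity}, Count={list.Count}");

        list.Add(1);                        // now index 0 exists
        list[0] = 42;                       // the indexer only works for i < Count
        Console.WriteLine(list[0]);

        // To get usable indexes up front, create the elements first:
        var filled = Enumerable.Repeat(0, 2).ToList();
        filled[0] = 1;
        Console.WriteLine(filled[0]);
    }
}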
Why can't you access an element of a list at an index without previously using the Add method?
Why doesn't this program work if the list has been made with a set size? List<int> list = new List<int>(2); list[0] = 1; Console.WriteLine(list[0]) This is the error: System.ArgumentOutOfRangeException: 'Index was out of range. Must be non-negative and less than the size of the collection. Parameter name: index
[ "Because the list is empty. That's why there are no elements thus no range\n", "I think a bit more explanation is in order. You are thinking that the \"2\" parameter in the List constructor creates a list of size 2. It does not. It creates an empty list with a capacity of 2.\nWhat that means effectively is that it pre-allocates enough space for two elements in the list so that as you add more and more elements it doesn't have to create add new space. However, the list is still empty, it just has some unused slots.\nThe way this works is that the list has a certain capacity at the start, in your case two so two \"slots\" are allocated for the elements, but the list is still empty. You add an element and it uses the first slot, you add another element and it takes the next slot, you add a third element, now it is out of slots, so it allocates more memory (4 slots for example) copies over the previous two into the first two slots of the new block, and now you have a capacity of 4 but a list of length 2.\nMany years ago I had a problem with a production system that took forever to start up (though this was in C++.) In the loading phase we were loading stuff from a database into a memory list. It contained about 100,000 items. However the capacity was just 10. So it filled the first ten, when they were filled, it allocated 20 slots and copied over the first ten and released that memoer (the algorithm was that it doubled the capacity when it ran out), then 40, 80 and so forth. You can see that it CONSTANTLY was allocating slots and copying the same memory bytes over and over and over again in an exponential fashion.\nWe set the capacity correctly at the beginning and it instantly worked hugely faster.\nSo when you are making a really big list which you are adding to piecemeal, it often is a good idea to set the capacity at the beginning if you can. (FWIW, in C++ this is not only slow but causes horrible memory fragmentation, which is much less of an issue in a garbage collected language like C#.)\n" ]
[ 1, 0 ]
[]
[]
[ "c#", "collections", "indexing", "list", "oop" ]
stackoverflow_0074681213_c#_collections_indexing_list_oop.txt
Q: Overlapping Elements in PyQT6 In my application, I have an image and I'm trying to create a button under the image. Instead, the button appears on top of the image. This is my code so far: import sys from PyQt5.QtWidgets import QApplication, QMainWindow, QLabel, QGridLayout, QWidget, QPushButton from PyQt5.QtGui import QPixmap class Example(QWidget): def __init__(self): super().__init__() self.grid = QGridLayout() self.im = QPixmap("image1.png") self.label = QLabel() self.label.setPixmap(self.im) self.grid.addWidget(self.label, 1, 1) self.setLayout(self.grid) self.new = QPushButton("Restart", self) self.grid.addWidget(self.label, 1, 2) self.setLayout(self.grid) self.setWindowTitle("My Application") self.show() if __name__ == '__main__': app = QApplication(sys.argv) ex = Example() sys.exit(app.exec_()) I suspect it has something to do with pixmap not being recognized as an actual object, so the button just disregards its placement. But I'm not sure; does anyone know why it's not working? Thanks for your help! The image is what the GUI currently looks like A: It looks like you're adding the self.label widget twice to the grid layout, but you're not adding the button to the layout at all. You need to add the button to the grid layout using the addWidget() method, just like you're doing for the self.label widget. Here's an example of how you could modify your code to add the button to the layout and place it underneath the image: class Example(QWidget): def __init__(self): super().__init__() self.grid = QGridLayout() self.im = QPixmap("image1.png") self.label = QLabel() self.label.setPixmap(self.im) # Add the label widget to the grid layout self.grid.addWidget(self.label, 0, 0) # Create the button and add it to the grid layout self.new = QPushButton("Restart", self) self.grid.addWidget(self.new, 1, 0) self.setLayout(self.grid) self.setWindowTitle("My Application") self.show() In this example, I've added the button to the grid layout on the second row, first column (indexes 1, 0), so it will appear underneath the image. I hope this helps! Let me know if you have any other questions.
Overlapping Elements in PyQT6
In my application, I have an image and I'm trying to create a button under the image. Instead, the button appears on top of the image. This is my code so far: import sys from PyQt5.QtWidgets import QApplication, QMainWindow, QLabel, QGridLayout, QWidget, QPushButton from PyQt5.QtGui import QPixmap class Example(QWidget): def __init__(self): super().__init__() self.grid = QGridLayout() self.im = QPixmap("image1.png") self.label = QLabel() self.label.setPixmap(self.im) self.grid.addWidget(self.label, 1, 1) self.setLayout(self.grid) self.new = QPushButton("Restart", self) self.grid.addWidget(self.label, 1, 2) self.setLayout(self.grid) self.setWindowTitle("My Application") self.show() if __name__ == '__main__': app = QApplication(sys.argv) ex = Example() sys.exit(app.exec_()) I suspect it has something to do with pixmap not being recognized as an actual object, so the button just disregards its placement. But I'm not sure, anyone know why it's not working? Thanks for your help! The image is what the GUI currently looks like
[ "It looks like you're adding the self.label widget twice to the grid layout, but you're not adding the button to the layout at all. You need to add the button to the grid layout using the addWidget() method, just like you're doing for the self.label widget.\nHere's an example of how you could modify your code to add the button to the layout and place it underneath the image:\nclass Example(QWidget):\n\n def __init__(self):\n super().__init__()\n\n self.grid = QGridLayout()\n\n self.im = QPixmap(\"image1.png\")\n self.label = QLabel()\n self.label.setPixmap(self.im)\n\n # Add the label widget to the grid layout\n self.grid.addWidget(self.label, 0, 0)\n\n # Create the button and add it to the grid layout\n self.new = QPushButton(\"Restart\", self)\n self.grid.addWidget(self.new, 1, 0)\n\n self.setLayout(self.grid)\n \n self.setWindowTitle(\"My Application\")\n self.show()\n\nIn this example, I've added the button to the grid layout on the second row, first column (indexes 1, 0), so it will appear underneath the image.\nI hope this helps! Let me know if you have any other questions.\n" ]
[ 0 ]
[]
[]
[ "pyqt6", "user_interface" ]
stackoverflow_0074681523_pyqt6_user_interface.txt
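A hedged alternative to the grid: QVBoxLayout stacks widgets top to bottom, which matches the image-above-button layout the question is after. This is a minimal sketch assuming the same PyQt5 modules as the question and a placeholder image path:

import sys
from PyQt5.QtWidgets import QApplication, QWidget, QLabel, QPushButton, QVBoxLayout
from PyQt5.QtGui import QPixmap

class Example(QWidget):
    def __init__(self):
        super().__init__()
        layout = QVBoxLayout()                     # stacks children top to bottom
        self.label = QLabel()
        self.label.setPixmap(QPixmap("image1.png"))
        layout.addWidget(self.label)               # image first
        layout.addWidget(QPushButton("Restart"))   # button ends up below the image
        self.setLayout(layout)
        self.setWindowTitle("My Application")
        self.show()

if __name__ == '__main__':
    app = QApplication(sys.argv)
    ex = Example()
    sys.exit(app.exec_())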
Q: How to split H256 into u32, u112, u112 in Rust 0x638d0490000000004b7cdeca2fe41a1b6411000000158fb5610df6aa553bfedb of type H256 https://docs.rs/ethers/0.17.0/ethers/types/struct.H256.html# It is a storage slot on EVM. A single slot is uint256, but there, three different values were packed into one storage slot (that's how EVM works). So uint112 + uint112 + uint32 were packed into uint256 that I need to reverse engineer. Looking to get: 0x638d049 0x4b7cdeca2fe41a1b6411 0x158fb5610df6aa553bfedb Then (1, 2, 3) into (u32, u128, u128) - u128 as the closest one to uint112. Tried a few things with padding but does not seem to be optimal (looping through). A: In Rust, you can split a 256-bit integer into three 128-bit integers using the .split_into_32_and_128() method provided by the num_bigint crate. Here's an example of how you could do that: extern crate num_bigint; use num_bigint::BigUint; fn main() { // Parse the 256-bit integer from its hexadecimal representation. let h256 = BigUint::parse_bytes(b"638d0490000000004b7cdeca2fe41a1b6411000000158fb5610df6aa553bfedb", 16).unwrap(); // Split the integer into three 128-bit integers. let (first_u32, first_u128, second_u128) = h256.split_into_32_and_128(); // Print the results. println!("first u32: {:x}", first_u32); println!("first u128: {:x}", first_u128); println!("second u128: {:x}", second_u128); } This code should produce the following output: first u32: 638d049 first u128: 4b7cdeca2fe41a1b6411 second u128: 158fb5610df6aa553bfedb Note that you will need to add the num_bigint crate as a dependency in your Cargo.toml file and import it in your code as shown in the example above. I hope this helps! Let me know if you have any other questions. A: You could extract them manually using bitshifts like this: use num::BigUint; fn main() { let a = BigUint::parse_bytes(b"638d0490000000004b7cdeca2fe41a1b6411000000158fb5610df6aa553bfedb", 16).unwrap(); println!("0x{a:x}"); let x = a.clone() >> (2*112); let y = (a.clone() >> 112) & ((BigUint::from(1u8) << 112) - 1u8); let z = a.clone() & ((BigUint::from(1u8) << 112) - 1u8); println!("0x{x:x}"); println!("0x{y:x}"); println!("0x{z:x}"); }
How to split H256 into u32, u112, u112 in Rust
0x638d0490000000004b7cdeca2fe41a1b6411000000158fb5610df6aa553bfedb of type H256 https://docs.rs/ethers/0.17.0/ethers/types/struct.H256.html# It is a storage slot on EVM. A single slot is uint256, but there, three different values were packed into one storage slot (that's how EVM works). So uint112 + uint112 + uint32 were packed into uint256 that I need to reverse engineer. Looking to get: 0x638d049 0x4b7cdeca2fe41a1b6411 0x158fb5610df6aa553bfedb Then (1, 2, 3) into (u32, u128, u128) - u128 as the closest one to uint112. Tried a few things with padding but does not seem to be optimal (looping through).
[ "In Rust, you can split a 256-bit integer into three 128-bit integers using the .split_into_32_and_128() method provided by the num_bigint crate.\nHere's an example of how you could do that:\nextern crate num_bigint;\nuse num_bigint::BigUint;\n\nfn main() {\n // Parse the 256-bit integer from its hexadecimal representation.\n let h256 = BigUint::parse_bytes(b\"638d0490000000004b7cdeca2fe41a1b6411000000158fb5610df6aa553bfedb\", 16).unwrap();\n\n // Split the integer into three 128-bit integers.\n let (first_u32, first_u128, second_u128) = h256.split_into_32_and_128();\n\n // Print the results.\n println!(\"first u32: {:x}\", first_u32);\n println!(\"first u128: {:x}\", first_u128);\n println!(\"second u128: {:x}\", second_u128);\n}\n\nThis code should produce the following output:\nfirst u32: 638d049\nfirst u128: 4b7cdeca2fe41a1b6411\nsecond u128: 158fb5610df6aa553bfedb\n\nNote that you will need to add the num_bigint crate as a dependency in your Cargo.toml file and import it in your code as shown in the example above.\nI hope this helps! Let me know if you have any other questions.\n", "You could extract them manually using bitshifts like this:\nuse num::BigUint;\nfn main() {\n let a = BigUint::parse_bytes(b\"638d0490000000004b7cdeca2fe41a1b6411000000158fb5610df6aa553bfedb\", 16).unwrap();\n println!(\"0x{a:x}\");\n let x = a.clone() >> (2*112);\n let y = (a.clone() >> 112) & ((BigUint::from(1u8) << 112) - 1u8);\n let z = a.clone() & ((BigUint::from(1u8) << 112) - 1u8);\n println!(\"0x{x:x}\");\n println!(\"0x{y:x}\");\n println!(\"0x{z:x}\");\n}\n\n" ]
[ 0, 0 ]
[]
[]
[ "rust" ]
stackoverflow_0074680730_rust.txt
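To land the three shifted parts in the concrete (u32, u128, u128) types the question asks for, the ToPrimitive trait can be applied to each BigUint piece from the second answer. A sketch assuming the num and num-traits crates; since uint32 fits in u32 and uint112 fits in u128, the expect() calls should not fire for well-formed slots:

use num::BigUint;
use num_traits::ToPrimitive;

fn split_slot(a: &BigUint) -> (u32, u128, u128) {
    // Mask covering the low 112 bits.
    let mask112 = (BigUint::from(1u8) << 112) - 1u8;
    let hi = (a.clone() >> (2 * 112)).to_u32().expect("uint32 part fits in u32");
    let mid = ((a.clone() >> 112) & mask112.clone()).to_u128().expect("uint112 fits in u128");
    let lo = (a.clone() & mask112).to_u128().expect("uint112 fits in u128");
    (hi, mid, lo)
}

fn main() {
    let a = BigUint::parse_bytes(b"638d0490000000004b7cdeca2fe41a1b6411000000158fb5610df6aa553bfedb", 16).unwrap();
    let (hi, mid, lo) = split_slot(&a);
    println!("0x{hi:x} 0x{mid:x} 0x{lo:x}");
}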
Q: Tests executed in parallel fail on Jenkins but pass locally I am writing here hoping to get some ideas about what the issue might be. I am using serenity with cucumber and spring. The following packages are used by serenity 3.3.2: serenity-core serenity-screenplay serenity-screenplay-webdriver serenity-screenplay-rest serenity-ensure serenity-spring serenity-junit serenity-cucumber Additional libraries (required for spring): spring-boot-starter-test spring-beans spring-rabbit I am using other libraries too, but they are only used to help during test development. I configured the tests to be executed in parallel using the maven-failsafe plugin and the documentation from here. I use the 3.0.0-M3 failsafe plugin version (otherwise the tests are not triggered to be executed in parallel). The tests run in parallel when I execute them with Maven locally. I tried on two machines with different operating systems: Windows and Unix. The tests were executed without any problem; all of them passed. I have issues when executing the tests on Jenkins. First of all, the tests are triggered in a parallel manner on Jenkins as well (the thread information is shown in the logs: pool-1-thread-2; pool-1-thread-1; pool-1-thread-3). Some of the tests are failing on Jenkins. The tests fail because the element cannot be found in the current state. The screenshot capture is enabled. The elements are displayed as expected in the screenshots. I also checked that the assertion where the test fails is performed by the same thread that performed the other steps above it in the same test. I am using the Xvfb Jenkins plugin in order to be able to perform UI interaction with the tests. The agent is configured with 5 executors. These executors are not used because the tests are running in parallel on the same machine (not multiple machines). If I am wrong please correct me. I don't have any idea what the problem can be. Does anybody have experience with this kind of configuration? I welcome any ideas. A: I have the same problem which I can’t solve yet. Either the test fails because the element is not found but the screenshot shows the opposite or it fails because the element is still visible, but not on the screenshot. Tried many different waits, but no results. Did you find anything?
Tests executed in parallel fail on Jenkins but pass locally
I am writing here hoping to get some ideas about what the issue might be. I am using serenity with cucumber and spring. The following packages are used by serenity 3.3.2: serenity-core serenity-screenplay serenity-screenplay-webdriver serenity-screenplay-rest serenity-ensure serenity-spring serenity-junit serenity-cucumber Additional libraries (required for spring): spring-boot-starter-test spring-beans spring-rabbit I am using other libraries too, but they are only used to help during test development. I configured the tests to be executed in parallel using the maven-failsafe plugin and the documentation from here. I use the 3.0.0-M3 failsafe plugin version (otherwise the tests are not triggered to be executed in parallel). The tests run in parallel when I execute them with Maven locally. I tried on two machines with different operating systems: Windows and Unix. The tests were executed without any problem; all of them passed. I have issues when executing the tests on Jenkins. First of all, the tests are triggered in a parallel manner on Jenkins as well (the thread information is shown in the logs: pool-1-thread-2; pool-1-thread-1; pool-1-thread-3). Some of the tests are failing on Jenkins. The tests fail because the element cannot be found in the current state. The screenshot capture is enabled. The elements are displayed as expected in the screenshots. I also checked that the assertion where the test fails is performed by the same thread that performed the other steps above it in the same test. I am using the Xvfb Jenkins plugin in order to be able to perform UI interaction with the tests. The agent is configured with 5 executors. These executors are not used because the tests are running in parallel on the same machine (not multiple machines). If I am wrong please correct me. I don't have any idea what the problem can be. Does anybody have experience with this kind of configuration? I welcome any ideas.
[ "I have the same problem which I can’t solve yet. Either the test fails because the element is not found but the screenshot shows the opposite or it fails because the element is still visible, but not on the screenshot. Tried many different waits, but no results. Did you find anything?\n" ]
[ 0 ]
[]
[]
[ "automated_tests", "cucumber_serenity", "jenkins" ]
stackoverflow_0073554455_automated_tests_cucumber_serenity_jenkins.txt
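One hedged mitigation for this symptom (elements present in the screenshot but "not found" by the assertion) is to replace immediate checks with explicit waits, so that timing differences under Xvfb are absorbed. A minimal sketch using plain Selenium's WebDriverWait from Selenium 4 rather than Serenity's own wait helpers; the locator is illustrative, not taken from the project above:

import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitExample {
    // Waits up to 10 seconds for the element to become visible
    // instead of asserting on its state immediately.
    static WebElement waitForElement(WebDriver driver) {
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        return wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("submit")));
    }
}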
Q: Binary matrix multiplication I got a matrix A, with the following bytes as rows: 11111110 (0xfe) 11111000 (0xf8) 10000100 (0x84) 10010010 (0x92) My program reads a byte from stdin with the function sys.stdin.read(1). Suppose I receive the byte x 10101010 (0xaa). Is there a way using numpy to perform the multiplication: >>> A.dot(x) 0x06 (00000110) As A is a 4x8 matrix, composed of 4 bytes as rows, and x is an 8-bit array, I was expecting to receive the (nibble 0110) byte 0000 0110 as a result of the multiplication A * x, treating bits as elements of the matrix. If the elements of the matrix were treated as binary bytes, the result would be: >>> A = np.array([[1,1,1,1,1,1,1,0],[1,1,1,1,1,0,0,0],[1,0,0,0,0,1,0,0],[1,0,0,1,0,0,1,0]]) >>> x = np.array([1,0,1,0,1,0,1,0]) >>> A.dot(x)%2 array([0, 1, 1, 0]) A: 1. Not using dot You do not need to fully expand your matrix to do bitwise "multiplication" on it. You want to treat A as a 4x8 matrix of bits and x as an 8-element vector of bits. A row multiplication yields 1 for the bits that are on in both A and x and 0 if either bit is 0. This is equivalent to applying bitwise and (&): >>> [hex(n) for n in (A & x)] ['0xaa', '0xa8', '0x80', '0x82'] 10101010 10101000 10000000 10000000 Here is a post on counting the bits in a byte. bin(n).count("1") is probably the easiest one to use, so >>> [bin(n).count("1") % 2 for n in (A & x)] [0, 1, 1, 0] If you want just a number, you can do something like >>> int(''.join(str(bin(n).count("1") % 2) for n in (A & x)), 2) 6 2. Using dot To use dot, you can easily expand A and x into their numpy equivalents: >>> list(list(int(n) for n in list(bin(r)[2:])) for r in A) [[1, 1, 1, 1, 1, 1, 1, 0], [1, 1, 1, 1, 1, 0, 0, 0], [1, 0, 0, 0, 0, 1, 0, 0], [1, 0, 0, 1, 0, 0, 1, 0]] >>> list(int(n) for n in bin(x)[2:]) [1, 0, 1, 0, 1, 0, 1, 0] You can apply dot to the result: >>> np.dot(list(list(int(n) for n in list(bin(r)[2:])) for r in A), list(int(n) for n in bin(x)[2:])) % 2 array([0, 1, 1, 0]) A: It is possible to do a binary matrix multiplication using binary arithmetic; consider this answer: Binary matrix multiplication bit twiddling hack
Binary matrix multiplication
I got a matrix A, with the following bytes as rows: 11111110 (0xfe) 11111000 (0xf8) 10000100 (0x84) 10010010 (0x92) My program reads a byte from stdin with the function sys.stdin.read(1). Suppose I receive the byte x 10101010 (0xaa). Is there a way using numpy to perform the multiplication: >>> A.dot(x) 0x06 (00000110) As A is a 4x8 matrix, composed of 4 bytes as rows, and x is an 8-bit array, I was expecting to receive the (nibble 0110) byte 0000 0110 as a result of the multiplication A * x, treating bits as elements of the matrix. If the elements of the matrix were treated as binary bytes, the result would be: >>> A = np.array([[1,1,1,1,1,1,1,0],[1,1,1,1,1,0,0,0],[1,0,0,0,0,1,0,0],[1,0,0,1,0,0,1,0]]) >>> x = np.array([1,0,1,0,1,0,1,0]) >>> A.dot(x)%2 array([0, 1, 1, 0])
[ "1. Not using dot\nYou do not need to fully expand your matrix to do bitwise \"multiplication\" on it. You want to treat A as a 4x8 matrix of bits and x as an 8-element vector of bits. A row multiplication yields 1 for the bits that are on in both A and x and 0 if either bit is 0. This is equivalent to applying bitwise and (&):\n>>> [hex(n) for n in (A & x)]\n['0xaa', '0xa8', '0x80', '0x82']\n\n\n10101010\n10101000\n10000000\n10000000\n\nHere is a post on counting the bits in a byte. bin(n).count(\"1\") is probably the easiest one to use, so\n>>> [bin(n).count(\"1\") % 2 for n in (A & x)]\n[0, 1, 1, 0]\n\nIf you want just a number, you can do something like\n>>> int(''.join(str(bin(n).count(\"1\") % 2) for n in (A & x)), 2)\n6\n\n2. Using dot\nTo use dot, you can easily expand A and x into their numpy equivalents:\n>>> list(list(int(n) for n in list(bin(r)[2:])) for r in A)\n[['1', '1', '1', '1', '1', '1', '1', '0'],\n ['1', '1', '1', '1', '1', '0', '0', '0'],\n ['1', '0', '0', '0', '0', '1', '0', '0'],\n ['1', '0', '0', '1', '0', '0', '1', '0']]\n>>> list(int(n) for n in bin(x)[2:])\n[1, 0, 1, 0, 1, 0, 1, 0]\n\nYou can apply dot to the result:\n>>> np.dot(list(list(int(n) for n in list(bin(r)[2:])) for r in A),\n list(int(n) for n in bin(x)[2:])) % 2\narray([0, 1, 1, 0])\n\n", "It is possible to do a binary matrix multiplication using binary arithmetic consider this answer: Binary matrix multiplication bit twiddling hack\n" ]
[ 0, 0 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0044203732_numpy_python.txt
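The byte-level version in the first answer can also be written with numpy's bit-packing helpers, avoiding string round-trips entirely. A sketch assuming the rows and input byte from the question (np.unpackbits and np.packbits operate on uint8 arrays):

import numpy as np

rows = np.array([0xfe, 0xf8, 0x84, 0x92], dtype=np.uint8)
A = np.unpackbits(rows).reshape(4, 8)                # 4x8 matrix of bits
x = np.unpackbits(np.array([0xaa], dtype=np.uint8))  # 8-element bit vector

bits = A.dot(x) % 2                # array([0, 1, 1, 0]), the GF(2) product
value = np.packbits(bits)[0] >> 4  # packbits pads to 8 bits, so shift the nibble down
print(hex(value))                  # 0x6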
Q: Heroku build fails on git push because of Rollup I am deploying my first Heroku app (NodeJS) but when it comes to the last 'git push heroku master' step, I get an error that causes the build to fail because of problems with rollupjs. I don't even think I am using Rollup for anything in my app, and I even tried global and dependency uninstall of Rollup to see if it would change anything, but no. How do I resolve this problem? remote: Could not resolve './sections/Home.jsx' from src/main.jsx remote: error during build: remote: Error: Could not resolve './sections/Home.jsx' from src/main.jsx remote: at error (file:///tmp/build_544c0c85/node_modules/vite/node_modules/rollup/dist/es/shared/rollup.js:1858:30) remote: at ModuleLoader.handleResolveId (file:///tmp/build_544c0c85/node_modules/vite/node_modules/rollup/dist/es/shared/rollup.js:22350:24) remote: at file:///tmp/build_544c0c85/node_modules/vite/node_modules/rollup/dist/es/shared/rollup.js:22313:26 remote: remote: -----> Build failed main.jsx: import React from 'react' import ReactDOM from 'react-dom/client' import Home from './sections/Home.jsx' import { BrowserRouter, Routes, Route } from "react-router-dom" ReactDOM.createRoot(document.getElementById('root')).render( <React.StrictMode> <BrowserRouter> <Routes> <Route path="/" element={<Home />}> </Route> </Routes> </BrowserRouter> </React.StrictMode> ) Home.jsx: import React from "react"; import About from "./About"; import Projects from "./Projects"; import Landing from "./Landing"; import Navbar from './Navbar' import Skills from "./Skills"; import Footer from './Footer' import Contact from "./Contact" import { useState } from "react"; import { useEffect } from "react"; import '../App.css' function Home() { const [darkTheme, setDarkTheme] = useState(true) const [darkLand, setDarkLand] = useState(true) const [count, setCount] = useState(0) const prefersDarkScheme = window.matchMedia("(prefers-color-scheme: dark)"); const currentTheme = localStorage.getItem("theme"); if (currentTheme == "dark") { document.body.classList.toggle("light-theme"); setDarkTheme(true) setCount(count + 1) } else if (currentTheme == "light") { document.body.classList.toggle("dark-theme"); setDarkTheme(false) setCount(count + 1) } function themeChange() { if (prefersDarkScheme.matches) { document.body.classList.toggle("light-theme"); setDarkTheme(true) setCount(count + 1) } else { document.body.classList.toggle("dark-theme"); setDarkTheme(false) setCount(count + 1) } }; useEffect(() => { setDarkLand(!darkLand) }, [count]) return( <div className="Home fade" id="home" key={darkTheme}> <Navbar themeChange={themeChange} darkLand={darkLand} /> <Landing darkLand={darkLand} key={darkLand}/> <About /> <Skills /> <Projects /> <Contact /> <Footer /> </div> ) } export default Home A: Mac and Windows (one of which you're probably using to develop) have case-insensitive filesystems. Linux (which you're deploying to) has case-sensitive filesystems. ./sections/Home.jsx will work on your computer, but not on Heroku — change it to ./Sections/Home.jsx. And a personal tip, if you want to avoid problems like this you can set a rule for yourself to always use lowercase filenames and directories.
Heroku build fails on git push because of Rollup
I am deploying my first Heroku app (NodeJS) but when it comes to the last 'git push heroku master' step, I get an error that causes the build to fail because of problems with rollupjs. I don't even think I am using Rollup for anything in my app, and I even tried global and dependency uninstall of Rollup to see if it would change anything, but no. How do I resolve this problem? remote: Could not resolve './sections/Home.jsx' from src/main.jsx remote: error during build: remote: Error: Could not resolve './sections/Home.jsx' from src/main.jsx remote: at error (file:///tmp/build_544c0c85/node_modules/vite/node_modules/rollup/dist/es/shared/rollup.js:1858:30) remote: at ModuleLoader.handleResolveId (file:///tmp/build_544c0c85/node_modules/vite/node_modules/rollup/dist/es/shared/rollup.js:22350:24) remote: at file:///tmp/build_544c0c85/node_modules/vite/node_modules/rollup/dist/es/shared/rollup.js:22313:26 remote: remote: -----> Build failed main.jsx: import React from 'react' import ReactDOM from 'react-dom/client' import Home from './sections/Home.jsx' import { BrowserRouter, Routes, Route } from "react-router-dom" ReactDOM.createRoot(document.getElementById('root')).render( <React.StrictMode> <BrowserRouter> <Routes> <Route path="/" element={<Home />}> </Route> </Routes> </BrowserRouter> </React.StrictMode> ) Home.jsx: import React from "react"; import About from "./About"; import Projects from "./Projects"; import Landing from "./Landing"; import Navbar from './Navbar' import Skills from "./Skills"; import Footer from './Footer' import Contact from "./Contact" import { useState } from "react"; import { useEffect } from "react"; import '../App.css' function Home() { const [darkTheme, setDarkTheme] = useState(true) const [darkLand, setDarkLand] = useState(true) const [count, setCount] = useState(0) const prefersDarkScheme = window.matchMedia("(prefers-color-scheme: dark)"); const currentTheme = localStorage.getItem("theme"); if (currentTheme == "dark") { document.body.classList.toggle("light-theme"); setDarkTheme(true) setCount(count + 1) } else if (currentTheme == "light") { document.body.classList.toggle("dark-theme"); setDarkTheme(false) setCount(count + 1) } function themeChange() { if (prefersDarkScheme.matches) { document.body.classList.toggle("light-theme"); setDarkTheme(true) setCount(count + 1) } else { document.body.classList.toggle("dark-theme"); setDarkTheme(false) setCount(count + 1) } }; useEffect(() => { setDarkLand(!darkLand) }, [count]) return( <div className="Home fade" id="home" key={darkTheme}> <Navbar themeChange={themeChange} darkLand={darkLand} /> <Landing darkLand={darkLand} key={darkLand}/> <About /> <Skills /> <Projects /> <Contact /> <Footer /> </div> ) } export default Home
[ "Mac and Windows (one of which you're probably using to develop) have case-insensitive filesystems. Linux (which you're deploying to) has case-sensitive filesystems. ./sections/Home.jsx will work on your computer, but not on Heroku — change it to ./Sections/Home.jsx. And a personal tip, if you want to avoid problems like this you can set a rule for yourself to always use lowercase filenames and directories.\n" ]
[ 0 ]
[]
[]
[ "heroku", "node.js", "npm", "reactjs", "rollup" ]
stackoverflow_0074681097_heroku_node.js_npm_reactjs_rollup.txt
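If the folder on disk is Sections while the import says sections (or the reverse), a two-step rename forces git to record the case change even on a case-insensitive filesystem. A sketch with illustrative paths; adjust them to the real project layout:

git mv src/sections src/sections_tmp
git mv src/sections_tmp src/Sections
git commit -m "Match directory casing to imports"
git push heroku master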
Q: How to convert an arbitrary channel tensor to opencv dnn blob? I'm using OpenCV dnn for inferring an ONNX model, and I've found dnn::blobFromImage can transfer an image to a blob as input of dnn::Net, but is there any way to transfer an arbitrary-shape tensor (e.g. (1,8,256,256) instead of 3 channels) to a blob for inference? A: cv::Mat tensor = ... // tensor with shape (1, 8, 256, 256) cv::Mat blob = cv::dnn::blobFromImage(tensor, 1.0, cv::Size(256, 256), cv::Scalar(0, 0, 0), true, false, CV_32F);
How to convert an arbitrary channel tensor to opencv dnn blob?
I'm using OpenCV dnn for inferring an ONNX model, and I've found dnn::blobFromImage can transfer an image to a blob as input of dnn::Net, but is there any way to transfer an arbitrary-shape tensor (e.g. (1,8,256,256) instead of 3 channels) to a blob for inference?
[ "cv::Mat tensor = ... // tensor with shape (1, 8, 256, 256)\ncv::Mat blob = cv::dnn::blobFromImage(tensor, 1.0, cv::Size(256, 256), cv::Scalar(0, 0, 0), true, false, CV_32F);\n\n\n" ]
[ 0 ]
[]
[]
[ "blob", "dotnetnuke", "opencv", "tensor" ]
stackoverflow_0074677983_blob_dotnetnuke_opencv_tensor.txt
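Note that dnn::blobFromImage is documented for ordinary 1-, 3-, or 4-channel images, so an 8-channel tensor may not go through it cleanly. A blob is just a 4-D cv::Mat, so one hedged alternative is to build the Mat directly and hand it to the network; a sketch assuming float data already laid out in NCHW order:

#include <opencv2/dnn.hpp>

// data is assumed to point at 1*8*256*256 floats in NCHW order.
cv::Mat makeBlob(float* data)
{
    int sizes[] = {1, 8, 256, 256};
    // The Mat header wraps the existing buffer; clone() if the blob must own its data.
    return cv::Mat(4, sizes, CV_32F, data);
}

// Usage sketch (model path is illustrative):
// cv::dnn::Net net = cv::dnn::readNetFromONNX("model.onnx");
// net.setInput(makeBlob(data));  // Net::setInput accepts any 4-D blob
// cv::Mat out = net.forward();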