This is my userscript to change the color of visited links. It is assumed to work with any website.

```
// ==UserScript==
// @name         Change the color of visited links
// @description  -
// @match        *://*/*
// ==/UserScript==

function style(css) {
    let head, style;
    head = document.getElementsByTagName('head')[0];
    if (!head) return;
    style = document.createElement('style');
    style.type = 'text/css';
    style.innerHTML = css.replace(/;/g, '!important;');
    head.appendChild(style);
}

function links(anylink, link, visited, hover, active) {
    style(`
        ${anylink} { outline: solid 1px; }
        ${link} { color: blue; }
        ${visited} { color: fuchsia; }
        ${hover} { color: red; }
        ${active} { color: yellow; }
    `);
}

links('a:any-link , a:any-link *',
    'a:link , a:link *',
    'a:visited , a:visited *',
    'a:hover , a:hover *',
    'a:active , a:active *'
);
```

The problem is that on some websites some links belong to shadow DOM elements, and for this reason they are not affected by the script. How can this be fixed?
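For context, a style element appended to `document.head` never applies inside shadow roots, which is presumably why those links stay unaffected. A minimal sketch of one possible direction (injecting the same CSS into every *open* shadow root; closed roots stay unreachable):

```
function injectIntoShadowRoots(root, css) {
    // Walk all elements under `root` and inject the CSS into each open shadow root
    root.querySelectorAll('*').forEach(el => {
        if (el.shadowRoot) {
            const s = document.createElement('style');
            s.textContent = css;
            el.shadowRoot.appendChild(s);
            injectIntoShadowRoots(el.shadowRoot, css); // shadow roots can nest
        }
    });
}
```

This only covers elements that exist when it runs; components attached later would need something like a `MutationObserver`.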
I am working on a project using Node.js v21.7.1. From a MariaDB database I query first name and surname, then pass these back to the client. In my SQL I use:

```
CONCAT(`t2`.`vcSurName`, ',', `t2`.`vcFirstName`) AS vcName
```

My client receives this and adds it to a `<select>` HTML tag; I can see the name is displayed correctly, for example 'Platten, Simon' is shown. This is now part of a form which, when submitted, makes the server receive the name as:

```
'Platten%2C+Simon'
```

On the server I have used:

```
var strUserName = decodeURIComponent(objFields['biUID']);
```

I then split the name into an array:

```
var aryName = strUserName.split(', ');
```

This is where it goes wrong: decodeURIComponent only translates the %2C back to a comma, so the content of aryName now contains:

```
[Platten,+Simon]
```

What do I need to do? The + is the result of encoding the space, but it doesn't get decoded...
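For reference, a minimal sketch of the decoding behavior described above (runnable in Node.js; the literal is taken from the example):

```
const raw = 'Platten%2C+Simon';

// decodeURIComponent only translates %XX escapes; '+' is left as-is
console.log(decodeURIComponent(raw)); // 'Platten,+Simon'

// Form submissions use application/x-www-form-urlencoded, where '+' encodes a space;
// URLSearchParams applies that convention when parsing
console.log(new URLSearchParams('biUID=' + raw).get('biUID')); // 'Platten, Simon'
```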
Why is ', ' translated to '%2C+'?
|string|decode|encode|javascript|
From what I have read, a NULL pointer points to the memory location "0", and an uninitialized pointer points to a random location. Could it be that this random location is sometimes "0", so that it is a NULL pointer as well? I realize that this will not happen all the time, but could it happen? Or does C have some mechanism to stop it from happening?
Could an uninitialized pointer be a NULL pointer?
|c|pointers|null|initialization|
When I run my Spring app in IntelliJ I get the normal output in the Run tool window. The very first greyish line contains the complete command used to start the app. It is clickable, so the full thing should be expanded and displayed inline. But unfortunately my IntelliJ is somehow configured to open the thing in Notepad++ instead.

**Q:** How can I make IntelliJ open the thing inline instead?

[![enter image description here][1]][1]

[1]: https://i.stack.imgur.com/eD0ub.png
# Short Version

Now we are interested in controlling the expected outcome of `fetch_html()`, which means we have to control the input of the function. This function uses `urllib.request.urlopen()` to get a response from the HTTP server of the domain `example.com`. That URL is actually the input of the function. To control it, and to make sure we can run the test without making any "real" HTTP requests to "example.com", we have to mock the `urlopen()` that `fetch_html()` calls, so that it returns a predefined `response`. One way to do that is to patch `urlopen` where it is looked up (it is imported into `src.utils` with `from urllib.request import urlopen`, so the name to patch is `src.utils.urlopen`) so that it returns an instance of a class that mocks `Response` by exposing a `.read()` function that returns preloaded HTML.

Here is an implementation using `unittest`:

    import unittest
    from unittest.mock import patch

    from src.schedules import SomeScheduleClass
    from src.utils import fetch_html


    class TestFetchHtml(unittest.TestCase):
        # mocked html content. It can be re-used when testing SomeScheduleClass
        mock_html_content = b"<html>Mock example.com HTML content</html>"

        # class to mock the http response class returned by urlopen
        class MockResponse:
            def read(self):
                return TestFetchHtml.mock_html_content

        # Prepare mocking
        def setUp(self):
            self.mock_response = TestFetchHtml.MockResponse()

        def test_fetch_html(self):
            # patch the name as imported in src.utils ("from urllib.request import urlopen")
            with patch('src.utils.urlopen', return_value=self.mock_response) as mocked_urlopen:
                # call the function to be tested
                result = fetch_html()
                mocked_urlopen.assert_called_once_with("https://example.com")
                # assert the return value is an object of type SomeScheduleClass
                self.assertIsInstance(result, SomeScheduleClass)
                # some assertions to make sure result contains the expected value
                # they need not be extensive since we are not testing SomeScheduleClass here
                # Additional assertions could be added here as needed

And here is an implementation using `pytest`. You will first have to install:

    pip3 install pytest pytest-mock

`pytest-mock` provides the fixture `mocker`, a thin wrapper around the functionality of `unittest.mock`. Then you can use `pytest` and `mocker` in the `test_fetch_html()` function as shown below:

    import pytest

    from src.schedules import SomeScheduleClass
    from src.utils import fetch_html


    @pytest.mark.UNIT
    def test_fetch_html(mocker):
        # mocked html content. It can be re-used when testing SomeScheduleClass
        mock_html_content = b"<html>Mock example.com HTML content</html>"

        # class to mock the http response class returned by urlopen
        class MockResponse:
            def read(self):
                return mock_html_content

        mock_response = MockResponse()

        # patch the name as imported in src.utils so fetch_html() gets mock_response
        p = mocker.patch('src.utils.urlopen', return_value=mock_response)

        # call the function to be tested
        r = fetch_html()

        # assert urlopen was called once with url: https://example.com
        p.assert_called_once_with("https://example.com")

        # assert the return value is an object of type SomeScheduleClass
        assert isinstance(r, SomeScheduleClass)

        # some assertions to make sure r contains the expected value
        # they need not be extensive since we are not testing SomeScheduleClass

# Long Version

This is a great general question that deserves a general answer. Although Stack Overflow is a great place to get answers, I don't believe an answer here, no matter how good, would be a substitute for more substantial reading, courses, and hands-on experience. Nonetheless, I will try to be thorough.

Test automation is a great topic in software development; it is an essential part of SecDevOps and plays a very important role in modern software development.
I personally learned about test automation late in my 30+ years of software development, and a part of me wishes I had taken up writing tests, or test-driven development, earlier. I therefore encourage you to proceed on this path and take your test automation skills to the next level.

I will assume that the questions of **What is test automation?** and **Why is test automation important?** are clear to you, and I will proceed to the next questions, **What to test?** and **When to test?**:

# What to test?

In your question you write, and I quote:

> My point is I have no idea how to test if it actually returns SomeScheduleClass as expected. Do I mock the http.client.HTTPResponse object I get from the urlopen? Do I somehow find a way to mock the response.read()? Or do I not have to test this at all and I'm just wasting my time? If so, what should I test here?

This is an essential question. Writing tests for your already existing code is "work" and "takes" time and energy, so the why question is not just a philosophical question but also an engineering question. There are several approaches and strategies that quality assurance managers, CI/CD pipeline managers, or CIOs would implement or prefer. I will try to list some **strategies** that I hope are useful to address the general **What to test?** question, then I will address the question about the snippet of code you shared.

### Use test-driven development

Test-driven development means writing the tests before (or while) you are writing code. This method ensures high test coverage, and also forces you to design the code in a way that makes it easy to test.

### Focus on regression tests

If you are not practicing test-driven development, and you don't have high coverage of your code, then you would want to start by writing a test every time you discover a bug in the code. The goal of writing these regression tests is to ensure bugs do not resurface after they get fixed. Writing regression tests also slowly increases code coverage, until you reach a point where you can switch to test-driven development. If you are already practicing test-driven development, then adding regression tests will cover cases you did not address in previous tests, and will ensure bugs do not resurface upon code refactoring. In other words, bug fixing means writing test code that reproduces the bug case, then fixing the bug so that the bug is averted and the test passes.

### Focus on the most critical part of your code

Writing tests is time consuming. If you are going to invest this time in writing tests, then invest it in covering the most essential part of the code, or the most difficult part that is potentially buggy. That will allow you to discover/avert runtime bugs, and might also help you redesign the code so that it is more modular and simpler.

### Write tests for your code

Don't waste your time writing tests for functions of 3rd-party packages. Most packages already have tests. Write tests for your code: test all branches of a function, test all functions in a class or module, test all modules. Use a test coverage analysis tool to easily zoom in on code that is not covered by tests.

### Focus on interfaces

When writing code that interfaces with 3rd-party services or APIs, focusing on those interfaces is not a bad idea, because it will allow you to detect any changes in the APIs.

### Set a coverage target

And add a rule, such as ensuring that coverage does not decrease when code is committed.
### Security-driven test writing

With this strategy, the goal is to ensure that all edge cases, stress situations, and security issues are covered. Here you write tests that are designed to break the system, then adjust the code so that the system does not break, or so that it fails in an expected and controlled way. Every time a security issue is discovered, addressing the issue means writing a test (or tests) that reproduces it and fixing the code so that the test passes, which is similar to writing regression tests.

### Focus on use cases

Write tests that cover the use cases of your code, or that are driven by the end users of the code/tool. That way you cover the part related to the user experience. That can accelerate the speed at which features and fixes are delivered to the end user, by reducing the development/manual-testing iterations of new features or code fixes.

### What to test in the code snippet

    from urllib.request import urlopen

    def fetch_html():
        url = "https://example.com"
        response = urlopen(url)
        dom = response.read()
        return SomeScheduleClass(dom)

I will start by writing a unit test for `fetch_html()`. The test should ensure that the function returns an instance of type `SomeScheduleClass`, and that the content of that instance is what is expected from parsing some `dom` with `SomeScheduleClass`. We are not interested in testing `urllib.request.urlopen` because it is not your code. We are not interested in testing the `SomeScheduleClass` constructor in `test_fetch_html()` because it should be tested in the unit tests of the class `SomeScheduleClass`.

Now we are interested in controlling the expected outcome of `fetch_html()`, which means we have to control the input of the function. This function uses `urllib.request.urlopen()` to get a response from the HTTP server of the domain `example.com`. That URL is actually the input of the function. To control it, and to make sure we can run the test without making any "real" HTTP requests to "example.com", we have to mock the `urlopen()` that `fetch_html()` calls, so that it returns a predefined `response`. One way to do that is to patch `urlopen` where it is looked up (here it is imported with `from urllib.request import urlopen`, so the name to patch is `src.utils.urlopen`) so that it returns an instance of a class that mocks `Response` by exposing a `.read()` function that returns preloaded HTML.

Here is an implementation using `unittest`:

    import unittest
    from unittest.mock import patch

    from src.schedules import SomeScheduleClass
    from src.utils import fetch_html


    class TestFetchHtml(unittest.TestCase):
        # mocked html content. It can be re-used when testing SomeScheduleClass
        mock_html_content = b"<html>Mock example.com HTML content</html>"

        # class to mock the http response class returned by urlopen
        class MockResponse:
            def read(self):
                return TestFetchHtml.mock_html_content

        # Prepare mocking
        def setUp(self):
            self.mock_response = TestFetchHtml.MockResponse()

        def test_fetch_html(self):
            # patch the name as imported in src.utils ("from urllib.request import urlopen")
            with patch('src.utils.urlopen', return_value=self.mock_response) as mocked_urlopen:
                # call the function to be tested
                result = fetch_html()
                mocked_urlopen.assert_called_once_with("https://example.com")
                # assert the return value is an object of type SomeScheduleClass
                self.assertIsInstance(result, SomeScheduleClass)
                # some assertions to make sure result contains the expected value
                # they need not be extensive since we are not testing SomeScheduleClass here
                # Additional assertions could be added here as needed

And here is an implementation using `pytest`.
You will first have to install:

    pip3 install pytest pytest-mock

`pytest-mock` provides the fixture `mocker`, a thin wrapper around the functionality of `unittest.mock`. Then you can use `pytest` and `mocker` in the `test_fetch_html()` function as shown below:

    import pytest

    from src.schedules import SomeScheduleClass
    from src.utils import fetch_html


    @pytest.mark.UNIT
    def test_fetch_html(mocker):
        # mocked html content. It can be re-used when testing SomeScheduleClass
        mock_html_content = b"<html>Mock example.com HTML content</html>"

        # class to mock the http response class returned by urlopen
        class MockResponse:
            def read(self):
                return mock_html_content

        mock_response = MockResponse()

        # patch the name as imported in src.utils so fetch_html() gets mock_response
        p = mocker.patch('src.utils.urlopen', return_value=mock_response)

        # call the function to be tested
        r = fetch_html()

        # assert urlopen was called once with url: https://example.com
        p.assert_called_once_with("https://example.com")

        # assert the return value is an object of type SomeScheduleClass
        assert isinstance(r, SomeScheduleClass)

        # some assertions to make sure r contains the expected value
        # they need not be extensive since we are not testing SomeScheduleClass

It is evident that this function does not have any error handling, and any exception raised will be handled by the invoking function. So when testing the invoking function, edge cases should be addressed, such as the case where example.com is not reachable, or where example.com returns a 4xx or 5xx response, or something like that.

# When to test?

That is, when should the tests be run? During development, after committing to the branch, on merge requests, on staging, on deployment? In "The DevOps Handbook" (which I highly recommend), while discussing the Second Way, the principles of feedback, the advice is to "keep pushing quality closer to the source" and "to enable optimization for teams downstream". In practical terms, developers should be able to test their code before committing it (pushing quality closer to the source), and any bug in the code should be discovered as soon as possible, before it reaches production; the closer to its source a bug is discovered, the better. I would recommend enabling developers to run all tests before committing code, but that means developers should be able to create production-like environments on their own... there is so much here to discuss.

# Tools of the Trade:

## unittest

Python comes with a built-in facility for testing, unittest: https://docs.python.org/3/library/unittest.html

## pytest

I personally prefer the pytest framework (https://pytest.org/) for many reasons:

- pytest makes organising tests much simpler and more flexible than unittest.
- It also makes parameterising tests much simpler.
- It is also more economical in terms of lines of code.
- And in case you already have tests written for unittest, pytest can still run them.

## Tox

From the tox website (https://tox.wiki/en/4.13.0/): tox is a generic virtual environment management and test command line tool you can use for

- checking your package builds and installs correctly under different environments (such as different Python implementations, versions or installation dependencies),
- running your tests in each of the environments with the test tool of choice,
- acting as a frontend to continuous integration servers, greatly reducing boilerplate and merging CI and shell-based testing,
- it is extensible and already has tons of plugins for mocking and other test-related functionality.

# Resources:

I am actually working on a book/Udemy course on using pytest, but it is not ready yet; until then I would recommend the following resources:

### Websites

- https://pytest.org/
- https://tox.wiki/en/4.13.0/

### Courses:

- https://www.udemy.com/course/elegant-automation-frameworks-with-python-and-pytest
- https://www.udemy.com/course/api-testing-python/
- https://www.youtube.com/watch?v=cHYq1MRoyI0

### Podcasts:

- https://podcast.pythontest.com/

### Books:

- The Phoenix Project: https://itrevolution.com/product/the-phoenix-project/
- The Unicorn Project: https://itrevolution.com/product/the-unicorn-project/
- The DevOps Handbook: https://itrevolution.com/product/the-devops-handbook-second-edition/
- Test Driven Python Development: https://www.packtpub.com/en-us/product/test-driven-python-development-9781783987924
I have the most recent **Android Studio Hedgehog | 2023.1.1 Patch 2** as of this date. I created an empty activity project and tried both the Kotlin DSL and the Groovy DSL. I can build it with the IDE's build command, but I can't run any gradlew task. Only `./gradlew --version` works and prints the following:

> ------------------------------------------------------------
> Gradle 8.2
> ------------------------------------------------------------
>
> Build time: 2023-06-30 18:02:30 UTC
> Revision: 5f4a070a62a31a17438ac998c2b849f4f6892877
>
> Kotlin: 1.8.20
> Groovy: 3.0.17
> Ant: Apache Ant(TM) version 1.10.13 compiled on January 4 2023
> JVM: 1.8.0_382 (Amazon.com Inc. 25.382-b05)
> OS: Mac OS X 14.3.1 aarch64

In **gradle-wrapper.properties** I have `distributionUrl=https\://services.gradle.org/distributions/gradle-8.2-bin.zip`. In *Android Studio settings / Build / Build tools / Gradle* I have the *Gradle JDK* setting set to **GRADLE_LOCAL_JAVA_HOME (jbr 17.0.7)**. I read some related SO posts and have no idea what is going on. Every gradlew call ends up with this:

```none
FAILURE: Build failed with an exception.

* What went wrong:
A problem occurred configuring root project 'GradleTest2'.
> Could not resolve all files for configuration ':classpath'.
   > Could not resolve com.android.tools.build:gradle:8.2.2.
     Required by:
         project : com.android.application:com.android.application.gradle.plugin:8.2.2
      > No matching variant of com.android.tools.build:gradle:8.2.2 was found. The consumer was configured to find a library for use during runtime, compatible with Java 8, packaged as a jar, and its dependencies declared externally, as well as attribute 'org.gradle.plugin.api-version' with value '8.2' but:
          - Variant 'apiElements' capability com.android.tools.build:gradle:8.2.2 declares a library, packaged as a jar, and its dependencies declared externally:
              - Incompatible because this component declares a component for use during compile-time, compatible with Java 11 and the consumer needed a component for use during runtime, compatible with Java 8
              - Other compatible attribute:
                  - Doesn't say anything about org.gradle.plugin.api-version (required '8.2')
          - Variant 'javadocElements' capability com.android.tools.build:gradle:8.2.2 declares a component for use during runtime, and its dependencies declared externally:
              - Incompatible because this component declares documentation and the consumer needed a library
              - Other compatible attributes:
                  - Doesn't say anything about its target Java version (required compatibility with Java 8)
                  - Doesn't say anything about its elements (required them packaged as a jar)
                  - Doesn't say anything about org.gradle.plugin.api-version (required '8.2')
          - Variant 'runtimeElements' capability com.android.tools.build:gradle:8.2.2 declares a library for use during runtime, packaged as a jar, and its dependencies declared externally:
              - Incompatible because this component declares a component, compatible with Java 11 and the consumer needed a component, compatible with Java 8
              - Other compatible attribute:
                  - Doesn't say anything about org.gradle.plugin.api-version (required '8.2')
          - Variant 'sourcesElements' capability com.android.tools.build:gradle:8.2.2 declares a component for use during runtime, and its dependencies declared externally:
              - Incompatible because this component declares documentation and the consumer needed a library
              - Other compatible attributes:
                  - Doesn't say anything about its target Java version (required compatibility with Java 8)
                  - Doesn't say anything about its elements (required them packaged as a jar)
                  - Doesn't say anything about org.gradle.plugin.api-version (required '8.2')

* Try:
> Run with --stacktrace option to get the stack trace.
> Run with --info or --debug option to get more log output.
> Run with --scan to get full insights.
> Get more help at https://help.gradle.org.

BUILD FAILED in 391ms
mjollneer@IBMPC GradleTest % ./gradlew

FAILURE: Build failed with an exception.
```

**UPD 1.** The system JAVA_HOME is currently set to /Users/mjollneer/Library/Java/JavaVirtualMachines/corretto-1.8.0_382/Contents/Home, because when I set it to jbr-17 like this

    /Applications/Android\ Studio.app/Contents/jbr/Contents/Home

gradlew calls end up with this:

    ERROR: JAVA_HOME is set to an invalid directory

**UPD 2.** The project was initially created with these settings:

    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
    kotlinOptions {
        jvmTarget = '1.8'
    }

But when I change all 3 constants to 11 or 17, everything remains the same (OK from the IDE and failure on the command line).
```
Error: EPERM: operation not permitted, unlink 'C:\Users\Bright-Ewuru\Desktop\full stack real estate\server\node_modules\.prisma\client\query_engine-windows.dll.node'
```

This is the error I get when running `npx prisma generate` while working with React, MongoDB and Express. I was expecting no error after I ran `npx prisma generate`.
I am having an issue in Prisma; whenever I run prisma generate I always get an error
|database|mongodb|backend|node-modules|prisma|
I have created the following middleware for an API endpoint

```
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: cors-api
  namespace: api-staging
spec:
  headers:
    accessControlAllowMethods:
      - "GET"
      - "OPTIONS"
      - "PUT"
      - "POST"
      - "DELETE"
    accessControlAllowHeaders:
      - "*"
    accessControlAllowOriginList:
      - "https://*.example.com"
    accessControlMaxAge: 100
    addVaryHeader: true
```

& attached it to the `IngressRoute`

```
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: api-external-0
  namespace: api-staging
spec:
  entryPoints:
    - web
    - websecure
  routes:
    - kind: Rule
      match: Host(`api.staging.example.com`)
      services:
        - name: nginx
          port: 80
          strategy: RoundRobin
          sticky:
            cookie:
              httpOnly: true
      middlewares:
        - name: cors-api
          namespace: api-staging
  tls:
    secretName: staging-cert
```

But the following request is going through

```
% curl -v --request OPTIONS 'https://api.staging.example.com' -H 'Origin: http://fake.origin.com' -H 'Access-Control-Request-Method: GET'
*   Trying x.x.x.x:443...
* Connected to api.staging.example.com (x.x.x.x) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
*  CAfile: /etc/ssl/cert.pem
*  CApath: none
* (304) (IN), TLS handshake, Server hello (2):
* (304) (IN), TLS handshake, Unknown (8):
* (304) (IN), TLS handshake, Certificate (11):
* (304) (IN), TLS handshake, CERT verify (15):
* (304) (IN), TLS handshake, Finished (20):
* (304) (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / AEAD-AES128-GCM-SHA256
* ALPN: server accepted h2
* Server certificate:
*  subject: CN=*.staging.example.com
*  start date: Jan 11 02:15:45 2024 GMT
*  expire date: Apr 10 02:15:44 2024 GMT
*  subjectAltName: host "api.staging.example.com" matched cert's "*.staging.example.com"
*  issuer: C=US; O=Let's Encrypt; CN=R3
*  SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://api.staging.example.com/
* [HTTP/2] [1] [:method: OPTIONS]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: api.staging.example.com]
* [HTTP/2] [1] [:path: /]
* [HTTP/2] [1] [user-agent: curl/8.4.0]
* [HTTP/2] [1] [accept: */*]
* [HTTP/2] [1] [origin: http://fake.origin.com]
* [HTTP/2] [1] [access-control-request-method: GET]
> OPTIONS / HTTP/2
> Host: api.staging.example.com
> User-Agent: curl/8.4.0
> Accept: */*
> Origin: http://fake.origin.com
> Access-Control-Request-Method: GET
>
< HTTP/2 200
< access-control-allow-headers: *
< access-control-allow-methods: GET,OPTIONS,PUT,POST,DELETE
< access-control-max-age: 100
< content-length: 0
< date: Mon, 19 Feb 2024 17:54:27 GMT
<
* Connection #0 to host api.staging.example.com left intact
```

I also tried `accessControlAllowOriginListRegex`, still no luck

```
spec:
  headers:
    accessControlAllowMethods:
      - "GET"
      - "OPTIONS"
      - "PUT"
      - "POST"
      - "DELETE"
    accessControlAllowHeaders:
      - "*"
    accessControlAllowOriginListRegex:
      - "(.*?)\\.example\\.com"
    accessControlMaxAge: 100
    addVaryHeader: true
```

I am expecting that, once the CORS policy is in place, a curl request from an origin not matching `accessControlAllowOriginListRegex` will be refused. Any thoughts on what is happening here?
Pine Script: Loop Through Input Price levels & set Alerts for Each Price
|loops|pine-script|alert|price|
The Compose documentation, e.g. [here][1], mentions that `Modifier.drawBehind` only affects the drawing phase; other phases are not re-executed when this modifier changes. But how does Jetpack Compose know that e.g. `Modifier.drawBehind` only affects the drawing phase?

[1]: https://developer.android.com/develop/ui/compose/phases#phase3-drawing
How does Jetpack Compose know that a modifier affects only e.g. the drawing phase?
I'm trying to fetch APIs and combine the JSON objects into a single variable array that I can loop through. Using `.push`, my variable array ends up as:

```
[ [ {"a":"1"} ], [ {"b":"2"} ] ]
```

when I want this:

```
[ {"a":"1"}, {"b":"2"} ]
```

Here's my trimmed-down code:

    var combinedJson = [];

    const r1 = fetch(firstJson).then(response => response.json());
    const r2 = fetch(secondJson).then(response => response.json());

    Promise.all([r1, r2])
        .then(([d1, d2]) => {
            combinedJson.push(d1, d2);
            console.log(combinedJson);
        })
        .catch(error => {
            console.error(error);
        });
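For what it's worth, a minimal sketch of the difference described above (with `d1` and `d2` standing in for the two parsed responses):

```
const d1 = [{ a: '1' }];
const d2 = [{ b: '2' }];

// push appends each argument as-is, so pushing two arrays nests them
const nested = [];
nested.push(d1, d2); // [[{ a: '1' }], [{ b: '2' }]]

// spreading (or concat) copies the elements instead
const flat = [...d1, ...d2]; // [{ a: '1' }, { b: '2' }]
// equivalent: d1.concat(d2), or nested.flat()
console.log(nested, flat);
```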
How to combine JSON objects and unnest array of arrays
|javascript|arrays|json|fetch|push|
I am scraping messages about power plant unavailability, converting them into time series and storing them in a SQL Server database. My current structure is the following.

* `Messages`: publicationDate datetime, messageSeriesID nvarchar, version int, messageId identity. The primary key is on `(messageSeriesId, version)`.
* `Units`: messageId int, area nvarchar, fueltype nvarchar, unitname nvarchar, tsId identity. The primary key is on `tsId`. There is a foreign key relation on `messageId` between this table and `Messages`. The main reason for this table is that one message can contain information about multiple power plants.
* `Timeseries`: tsId int, delivery datetime, value decimal. I have a partition scheme based on delivery; each partition contains a month of data. The primary key is on `(tsId, delivery)` and it's partitioned along the monthly partition scheme. There is a foreign key on `tsId` to `tsId` in the `Units` table.

The `Messages` and `Units` tables contain around a million rows each. The `Timeseries` table contains about 500 million rows.

Now, every time I insert a new batch of data, one row goes into the `Messages` table, between one and a few (4) go into the `Units` table, and a lot (up to 100,000s) go into the `Timeseries` table. The problem I'm encountering is that inserts into the `Timeseries` table are too slow (100,000 rows take up to a minute). I already made some improvements on this by setting the fill factor to 80 instead of 100 when rebuilding the index there. However, it's still too slow. And I am a bit puzzled, because the way I understand it is this: every partition contains all rows with delivery in that month, but the primary key is on `tsId` first and `delivery` second. So to insert data into a partition, it should simply be placed at the end of the partition (since `tsId` is the identity column and thus increases by one every transaction).

The time series that I am trying to insert spans 3 years and therefore 36 partitions. If I instead create a time series of the same length that falls within a single partition, the insert is notably faster (around 1.5 seconds). Likewise, if I create an empty time series table (`timeseries_test`) with the same structure as the original one, then inserts are also very fast (also for inserting data that spans 3 years). However, querying is done mainly based on delivery, so I don't think partitioning by `tsId` is a good idea.

If anyone has a suggestion on the structure, or on methods to improve the inserts, it would be greatly appreciated.
Modifying BadRequest Error Message in ASP.NET Core Microservice Application
|c#|asp.net-core|microservices|middleware|
**You can achieve this with a simple for loop. The idea is to iterate over the array forward or backward based on the boolean flag: if it is forward, iterate with** `for (int i = 0; i < arr.length; i++)`**; otherwise iterate backward with** `for (int i = arr.length - 1; i >= 0; i--)`**. Refer to the code below:**

    public class Main {
        public static void main(String[] args) {
            String[] arr = {"apple", "orange", "pineapple"};
            printArray(arr, true);
            System.out.println("-------------------------");
            printArray(arr, false);
        }

        private static void printArray(String[] arr, boolean forward) {
            if (forward) {
                System.out.println("Printing array forward:");
                for (int i = 0; i < arr.length; i++) {
                    System.out.println(arr[i]);
                }
            } else {
                System.out.println("Printing array backward:");
                for (int i = arr.length - 1; i >= 0; i--) {
                    System.out.println(arr[i]);
                }
            }
        }
    }

> **Refer to the working code [here.][1]**

[1]: https://onlinegdb.com/pUnrCwudF
    if (tid < n) {
        gain = in_degree[neighbour] * out_degree[tid] + out_degree[neighbour] * in_degree[tid] / total_weight;

        // here, let's say node 0 moves to community 2
        atomicExch(&node_community[0], node_community[2]); // because node 0 is in node 2 now
        atomicAdd(&in_degree[2], in_degree[0]);            // because node 0 is in node 2 now
        atomicAdd(&out_degree[2], out_degree[0]);          // because node 0 is in node 2 now
    }

This is the process. The problem is that during the calculation of the gain, all threads should see the updated values of node 2 (the values of 2 plus the values of 0), but threads only see the previous values of 2. How can that be solved? Here is the output:

    node is: 0
    node is: 1
    node is: 2
    node is: 3
    node is: 4
    node is: 5
    // HERE IS THE PROBLEM (UPDATED VALUES ARE NOT VISIBLE TO THE REST OF THE THREADS THAT EXECUTED BEFORE THE ATOMIC WRITE)
    updated_node is: 0         // this should be 2
    updated_values are: 48,37  // this should be (48+15 (values of 2)) and (37+12 (values of 0))

    comm.            in   out
    0 shifted to ->> 2 - 15 - 12
    1 shifted to ->> 1 - 8 - 10
    2 shifted to ->> 2 - 48 - 37

I have tried using `__syncthreads()`, `__threadfence()` and shared memory for reading and writing values. Can anyone tell me what the issue could be?
- Press <kbd>Ctrl</kbd> + <kbd>Shift</kbd> + <kbd>P</kbd>
- Search *[settings.json][1]*
- Open the **User Settings (JSON)**

[![Enter image description here][2]][2]

- Add the following code

```
"editor.codeActionsOnSave": {
    "source.fixAll": true
}
```

Note: this is a simplified and updated version of [Emirhan's answer][3] with the required changes.

[1]: https://stackoverflow.com/questions/65908987/how-can-i-open-visual-studio-codes-settings-json-file
[2]: https://i.stack.imgur.com/tNgWT.png
[3]: https://stackoverflow.com/questions/73246066/how-can-i-auto-add-const-in-flutter-in-vs-code/73246067#73246067
TypeScript generic type with reference to one of its properties with different type
|typescript|
I think SCSS syntax allows you to nest the `::before` pseudo-element inside `mat-expansion-panel-header` (note the `&`, so it applies to the header itself rather than to its descendants):

```scss
mat-expansion-panel-header {
  position: relative;

  &::before {
    content: '';
    position: absolute;
    top: 0px;
    left: 0px;
    bottom: 0px;
    right: 0px;
    z-index: 1000;
    background-color: black;
  }
}
```
When using the Hugging Face "phi2" model with the sample code, I received the same error, and using "mps" instead of "cuda" worked: `torch.set_default_device("mps")`
I wrote some simple code to test how many files may be open in a Python script:

    for i in xrange(2000):
        fp = open('files/file_%d' % i, 'w')
        fp.write(str(i))
        fp.close()

    fps = []
    for x in xrange(2000):
        h = open('files/file_%d' % x, 'r')
        print h.read()
        fps.append(h)

and I get the exception:

    IOError: [Errno 24] Too many open files: 'files/file_509'
I had a similar issue and was not able to find an existing solution, so I made this Python package: https://github.com/emileindik/slosh

```bash
$ pip install slosh
$ slosh ubuntu@1.1.1.1 --save-as myserver
```

This will create an entry in your SSH config file that looks like

```
Host=myserver
    HostName=1.1.1.1
    User=ubuntu
```

The same entry can be updated by using the same alias name. For example, adding a .pem file to the connection:

```bash
$ slosh -i ~/.ssh/mykey.pem ubuntu@1.1.1.1 --save-as myserver
```

It currently supports a number of `ssh` options, but let me know if there are additional options that should be added!
So in the newer versions, the first thing you have to do is

    npm i @electron/remote

Then, in the main JS file:

1. Add this:

       require('@electron/remote/main').initialize();

2. After `app.on('ready', createWindow);`, add this:

       app.on('browser-window-created', (_, window) => {
           require('@electron/remote/main').enable(window.webContents);
       });

3. In the renderer, after requiring Electron, you should add this:

       const { BrowserWindow } = require('@electron/remote');

And that is it. It took me a while to read the docs and find the few solutions online; there is not much out there.
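Putting the main-process pieces together, a minimal sketch (assuming a plain `main.js`; the `webPreferences` shown are an assumption for a renderer that uses `require`):

```
const { app, BrowserWindow } = require('electron');

// step 1: initialize @electron/remote in the main process
require('@electron/remote/main').initialize();

function createWindow() {
    const win = new BrowserWindow({
        webPreferences: { nodeIntegration: true, contextIsolation: false },
    });
    win.loadFile('index.html');
}

app.on('ready', createWindow);

// step 2: enable remote for every renderer that gets created
app.on('browser-window-created', (_, window) => {
    require('@electron/remote/main').enable(window.webContents);
});
```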
Generic type with reference to one of its properties with different type
Try this code. This will move the Excel file to the right monitor. I have tried it and it works. I have commented the code so you should not have a problem understanding it.

**Important Note**: Since we are using VBA code, your files need to be saved as `.xlsm` and not `.xlsx`.

**In the ThisWorkbook code module:**

    Option Explicit

    Private Declare PtrSafe Function SetWindowPos Lib "user32" (ByVal hwnd As LongPtr, _
        ByVal hWndInsertAfter As LongPtr, ByVal x As Long, ByVal y As Long, _
        ByVal cx As Long, ByVal cy As Long, ByVal wFlags As Long) As Long

    Private Const SWP_NOSIZE As Long = &H1
    Private Const SWP_NOACTIVATE As Long = &H10
    Private Const SWP_NOZORDER As Long = &H4

    Private Sub Workbook_Open()
        Dim leftPos As Long
        Dim appHwnd As Long

        '~~> Get the left position of the second monitor
        leftPos = GetSecondMonitorLeft()

        If leftPos = -1 Then
            MsgBox "No second monitor detected.", vbInformation
        Else
            '~~> Get the handle of the Excel application window
            appHwnd = Application.hwnd

            '~~> This is important because you can't move a maximized window
            Application.WindowState = xlNormal

            '~~> Move the application window to the second monitor
            SetWindowPos appHwnd, 0, leftPos, 0, 0, 0, _
                SWP_NOSIZE Or SWP_NOZORDER Or SWP_NOACTIVATE

            '~~> Maximize the application window
            Application.WindowState = xlMaximized
        End If
    End Sub

**In a normal module:**

    Option Explicit

    Private Declare PtrSafe Function GetSystemMetrics32 Lib "user32" Alias _
        "GetSystemMetrics" (ByVal nIndex As Long) As Long
    Private Declare PtrSafe Function EnumDisplayMonitors Lib "user32" (ByVal hdc As LongPtr, _
        ByVal lprcClip As LongPtr, ByVal lpfnEnum As LongPtr, ByVal dwData As LongPtr) As Boolean
    Private Declare PtrSafe Function GetMonitorInfo Lib "user32.dll" Alias _
        "GetMonitorInfoA" (ByVal hMonitor As LongPtr, ByRef lpmi As Any) As Long
    Private Declare PtrSafe Sub CopyMemory Lib "kernel32" Alias _
        "RtlMoveMemory" (Destination As Any, Source As Any, ByVal Length As Long)

    Private Type RECT
        Left As Long
        Top As Long
        Right As Long
        Bottom As Long
    End Type

    Private Type monitorInfo
        cbSize As Long
        rcMonitor As RECT
        rcWork As RECT
        dwFlags As Long
    End Type

    Private Const SM_CMONITORS As Long = 80

    '~~> This function gets the .Left of the 2nd monitor
    Public Function GetSecondMonitorLeft() As Long
        Dim monitorCount As Integer
        Dim monitorInfo As monitorInfo
        Dim hdc As LongPtr
        Dim monCount As Long

        monitorInfo.cbSize = Len(monitorInfo)
        hdc = 0

        '~~> This will get the number of monitors
        monitorCount = GetSystemMetrics32(SM_CMONITORS)

        '~~> Check if there are at least 2 monitors
        If monitorCount >= 2 Then
            '~~> Get the information of the second monitor
            EnumDisplayMonitors 0, ByVal 0, AddressOf MonitorEnumProc, VarPtr(monitorInfo)
            monCount = monitorInfo.rcMonitor.Left
        Else
            '~~> If there is only 1 monitor, return -1
            monCount = -1
        End If

        GetSecondMonitorLeft = monCount
    End Function

    Private Function MonitorEnumProc(ByVal hMonitor As LongPtr, ByVal hdcMonitor As LongPtr, _
        ByVal lprcMonitor As LongPtr, ByVal dwData As LongPtr) As Long
        Dim monitorInfo As monitorInfo

        monitorInfo.cbSize = Len(monitorInfo)
        GetMonitorInfo hMonitor, monitorInfo

        '~~> Here we copy the monitor info to the provided structure
        CopyMemory ByVal dwData, monitorInfo, Len(monitorInfo)

        '~~> Next enumeration
        MonitorEnumProc = 1
    End Function

**Sample File:** You can download a sample file from [Here](https://www.dropbox.com/scl/fi/iul1mh6ah5f5dmhdksaim/RIGHT.xlsm?rlkey=dgwf21jhu8n9j464rb7m2y509&dl=0) to test it.
Is it possible to do some recursion(?) in TypeScript to call the same `Columns` type but with a different generic type `N` instead of `T`? Please consider the example below.

```
type Column<T> = {
  render: (item: T, rowIndex: number) => React.ReactNode;
};
```

**The generic type N here is only for example purposes.**

I want to type this container type to receive a prop `items`, which can be an array of N or a function that receives the current T type and returns a new type N array.

```
type ColumnsContainer<T> = {
  items: ((item: T, rowIndex: number) => N[]) | N[];
  columns: Columns<N>;
};
```

```
type ColumnDefinition<T> = ColumnsContainer<T> | Column<T>;
```

```
type Columns<T> = ColumnDefinition<T>[];
```

```
type GridProps<T> = {
  items: T[];
  columns: Columns<T>;
};
```

What I want to achieve is below; however, it does not work, as I don't know how to infer the new type from usage.

```
type Cargo = { locFrom: string; locTo: string; totalQty: number; models: Model[] };
type Model = { modelName: string; qty: number; price: number; alternatives: Alternative[] };
type Alternative = { manufacturer: string; name: string; year: number };

const items: Cargo[] = [
  {
    totalQty: 299,
    locFrom: 'London',
    locTo: 'Dublin',
    models: [
      {
        modelName: 'Peugeot',
        price: 15000,
        qty: 299,
        alternatives: [
          { manufacturer: 'VW', name: 'Volkswagen', year: 1990 },
          { manufacturer: 'Audi', name: 'Audi', year: 1998 },
        ],
      },
    ],
  },
];

const columns: GridProps<Cargo> = {
  items,
  columns: [
    { render: item => item.totalQty },
    { render: item => item.locFrom },
    { render: item => item.locTo },
    {
      items: item => item.models,
      columns: [
        {
          render: item => item.modelName,
        },
        { items: item => item.alternatives },
        { render: item => item.price },
        { render: item => item.qty },
      ],
    },
  ],
};
```
Why is Notepad++ opened from IntelliJ?
|intellij-idea|
I have an SQL query exactly as described in [this post][1]. To sum it up, it will read all Cardboards specified with an offset. This query is executed on a MariaDB database.<br>
Now, I have my Cardboards (ID, Cardboard_number, DateTime, Production_LineNumber) in my C# ASP.NET program. I have to read all production processes in which each Cardboard was used (basically production.start <= cardboard.datetime <= production.end). The table for the Productions in the Oracle database looks like this (I did not create that table myself and I am not able to change anything, because it is used in a production program as well):

- PRODUCTION_NUMBER (NUMBER)
- POSNR (NUMBER)
- DATETIME (TIMESTAMP(6))
- PROCESS_ACTIVE (VARCHAR2(1))
- PRODUCTION_LINE (NUMBER)

The PROCESS_ACTIVE column is used like a flag: when a production process starts, a row is inserted with DATETIME = sysdate and PROCESS_ACTIVE = 1, and a stopped one is indicated with sysdate and PROCESS_ACTIVE = 0. I have created a query which sums up my processes, so I get a Start and End for each PRODUCTION_NUMBER and POSNR:

```
SELECT * FROM (
    SELECT PRODUCTION_NUMBER, POSNR, PROCESS_ACTIVE, PRODUCTION_LINE, DATETIME as START,
           LEAD(DATETIME, 1, SYSDATE) OVER (ORDER BY DATETIME ASC) AS END
    FROM ssq_lauftr
    GROUP BY PRODUCTION_NUMBER, POSNR, PROCESS_ACTIVE, PRODUCTION_LINE, DATETIME
    ORDER BY DATETIME
)
WHERE PROCESS_ACTIVE = 1
```

I iterate over all Cardboards retrieved from the MariaDB in my C# code and execute this query for **every cardboard** (where *cardboard* is the injected object from my C# loop):

```
SELECT * FROM (
    SELECT PRODUCTION_NUMBER, POSNR, PROCESS_ACTIVE, PRODUCTION_LINE, DATETIME as START,
           LEAD(DATETIME, 1, SYSDATE) OVER (ORDER BY DATETIME ASC) AS END
    FROM ssq_lauftr
    WHERE PRODUCTION_LINE = cardboard.PRODUCTION_LINENUMBER and DATETIME <= cardboard.DATETIME
    GROUP BY PRODUCTION_NUMBER, POSNR, PROCESS_ACTIVE, PRODUCTION_LINE, DATETIME
    ORDER BY DATETIME
)
WHERE PROCESS_ACTIVE = 1 AND cardboard.DATETIME <= END
```

which works quite well with a low number of cardboards. The problem with this solution is that if I have a lot of Cardboards, the whole function for reading all Productions takes too long. Is there a way (e.g. with PL/SQL) to make this process more efficient? The SQL statement above is quite fast, but iterating over the list in C# and adding the results to my ProductionSet slows down the application a lot.

**Edit:**<br>
The cardboards are stored in the MariaDB, while the Productions are stored in the Oracle DB. My current C# code for the described functionality looks like this:

    Cardboards = [.. _mariaDB.Cardboards.FromSql($@"
        SET @cb_num = {request.Cardboard_Number};
        select * from (
            SELECT *, sum(Cardboard_Number LIKE CONCAT(@cb_num, '%')) over (
                partition by ProductionLine_Number
                order by timestamp, id
                rows BETWEEN {CardboardOffset} preceding AND {CardboardOffset} following
            ) matches
            from cardboards
        ) cardboards_with_matches
        where matches
    ")];

    HashSet<Production> productionSet = [];
    for (int i = 0; i < Cardboards.Count(); i++)
    {
        productionSet.UnionWith(_oracleDB.Production.FromSqlRaw($@"
            SELECT * FROM (
                SELECT PRODUCTION_NUMBER, POSNR, PROCESS_ACTIVE, PRODUCTION_LINE, DATETIME as START,
                       LEAD(DATETIME, 1, SYSDATE) OVER (ORDER BY DATETIME ASC) AS END
                FROM Productions
                WHERE PRODUCTION_LINE = {Cardboards.ElementAt(i).ProductionLine_Number}
                  and {Cardboards.ElementAt(i).DateTime} <= cardboard.DATETIME
                GROUP BY PRODUCTION_NUMBER, POSNR, PROCESS_ACTIVE, PRODUCTION_LINE, DATETIME
                ORDER BY DATETIME
            )
            WHERE PROCESS_ACTIVE = 1 AND {Cardboards.ElementAt(i).DateTime} <= END
        "));
    }

[MariaDB fiddle][2] - Fiddle for Cardboards<br>
[OracleDB fiddle][3] - Fiddle for Productions (not running in the fiddle, but running in Oracle SQL Developer; I honestly don't know the problem there)<br>

To break it down: when the user searches for the Cardboard 'WDL-005943998-1', the expected output would be the data for the whole Cardboard (searched in MariaDB) and the Production with PRODUCTION_NUMBER = 461618, as the DateTime of the Cardboard is between the Start and End of the Production AND the Production was on the same line as the Cardboard was scanned. Note that the same Production could appear multiple times, but with different timestamps (e.g. the production was paused).

[1]: https://stackoverflow.com/questions/78224006/sql-how-to-get-elements-after-and-before-the-searched-one
[2]: https://dbfiddle.uk/7sFrE5pU
[3]: https://dbfiddle.uk/KrPvEEtj
This can also mean the underlying storage driver does not exist; for example, the EBS driver is not ready.
I have some Python code:

```
class Meta(type):
    def __new__(cls, name, base, attrs):
        attrs.update({'name': '', 'email': ''})
        return super().__new__(cls, name, base, attrs)

    def __init__(cls, name, base, attrs):
        super().__init__(name, base, attrs)
        cls.__init__ = Meta.func

    def func(self, *args, **kwargs):
        setattr(self, 'LOCAL', True)


class Man(metaclass=Meta):
    login = 'user'
    psw = '12345'
```

How do I write this statement in a properly OOP way (`Meta.func`)? In the line `cls.__init__ = Meta.func`, if someone wants to change the name of the metaclass, the code will stop working, because it will then be necessary to make changes inside this metaclass: I explicitly specified its name in the code. I don't think that is right, but I do not know how to express it in another way. I tried `cls.__init__ = cls.func`, but it doesn't create local variables for the `Man` class object.
The latest torch I found that works is torch==1.13.1+rocm5.2.0, with the HSA override of course.
You don't call [AbortTransaction][1]. However, MongoDB provides a convenient `withTransaction` API that will simplify this work for you. See [here][2]. See also the related Mongoose [doc][3].

[1]: https://www.mongodb.com/docs/manual/reference/method/Session.abortTransaction/
[2]: https://mongodb.github.io/node-mongodb-native/6.5/classes/ClientSession.html#withTransaction
[3]: https://mongoosejs.com/docs/transactions.html
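For illustration, a minimal sketch of the `withTransaction` pattern with Mongoose (the `Order` and `Inventory` models here are placeholders; the callback is committed on success and aborted automatically if it throws):

```
const session = await mongoose.startSession();
try {
    await session.withTransaction(async () => {
        // every write that must succeed or fail together passes the session
        await Order.create([{ item: 'book' }], { session });
        await Inventory.updateOne({ item: 'book' }, { $inc: { qty: -1 } }, { session });
    });
} finally {
    await session.endSession();
}
```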
I am trying to get instance information with the Google Cloud API in Java. When I send the list request I get an exception. Can you tell me why I am getting this exception? I do not know much about protocol buffers, and the error message does not give much of a clue.

The code:

```
InputStream stream = GoogleComputeServiceImpl.class.getResourceAsStream("/google_service_account.json");
FixedCredentialsProvider fixedCredentialsProvider = FixedCredentialsProvider.create(
        ComputeEngineCredentials.fromStream(stream));
InstancesClient instancesClient = InstancesClient.create(
        InstancesSettings.newBuilder().setCredentialsProvider(fixedCredentialsProvider).build());
ListInstancesRequest request = ListInstancesRequest.newBuilder()
        .setProject("my-project-name")
        .setReturnPartialSuccess(true)
        .setZone("europe-west3-c")
        .build();
try {
    InstancesClient.ListPagedResponse list = instancesClient.list(request);
    list.iterateAll().forEach(instance -> {
        log.info("Instance: " + instance.getName());
        log.info("Description: " + instance.getDescription());
        log.info("Hostname: " + instance.getHostname());
        instance.getNetworkInterfacesList().forEach(networkInterface -> {
            log.info("Network interface: " + networkInterface.getName());
            networkInterface.getAccessConfigsList().forEach(accessConfig -> {
                log.info("Access config: " + accessConfig.getName());
                log.info("IP: " + accessConfig.getNatIP());
            });
        });
    });
} catch (Exception e) {
    log.error("Error: ", e);
}
```

The error I got:

    com.google.api.gax.rpc.CancelledException: Exception in message delivery
        at com.google.api.gax.rpc.ApiExceptionFactory.createException(ApiExceptionFactory.java:48) ~[gax-2.45.0.jar:2.45.0]
        at com.google.api.gax.httpjson.HttpJsonApiExceptionFactory.createApiException(HttpJsonApiExceptionFactory.java:76) ~[gax-httpjson-2.45.0.jar:2.45.0]
        at com.google.api.gax.httpjson.HttpJsonApiExceptionFactory.create(HttpJsonApiExceptionFactory.java:58) ~[gax-httpjson-2.45.0.jar:2.45.0]
        at com.google.api.gax.httpjson.HttpJsonExceptionCallable$ExceptionTransformingFuture.onFailure(HttpJsonExceptionCallable.java:97) ~[gax-httpjson-2.45.0.jar:2.45.0]
        at com.google.api.core.ApiFutures$1.onFailure(ApiFutures.java:84) ~[api-common-2.28.0.jar:2.28.0]
        at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1127) ~[guava-32.1.3-jre.jar:na]
        at com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:31) ~[guava-32.1.3-jre.jar:na]
        at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1286) ~[guava-32.1.3-jre.jar:na]
        at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:1055) ~[guava-32.1.3-jre.jar:na]
        at com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:807) ~[guava-32.1.3-jre.jar:na]
        at com.google.api.core.AbstractApiFuture$InternalSettableFuture.setException(AbstractApiFuture.java:92) ~[api-common-2.28.0.jar:2.28.0]
        at com.google.api.core.AbstractApiFuture.setException(AbstractApiFuture.java:74) ~[api-common-2.28.0.jar:2.28.0]
        at com.google.api.gax.httpjson.HttpJsonClientCalls$HttpJsonFuture.setException(HttpJsonClientCalls.java:132) ~[gax-httpjson-2.45.0.jar:2.45.0]
        at com.google.api.gax.httpjson.HttpJsonClientCalls$FutureListener.onClose(HttpJsonClientCalls.java:166) ~[gax-httpjson-2.45.0.jar:2.45.0]
        at com.google.api.gax.httpjson.HttpJsonClientCallImpl$OnCloseNotificationTask.call(HttpJsonClientCallImpl.java:552) ~[gax-httpjson-2.45.0.jar:2.45.0]
        at com.google.api.gax.httpjson.HttpJsonClientCallImpl.notifyListeners(HttpJsonClientCallImpl.java:391) ~[gax-httpjson-2.45.0.jar:2.45.0]
        at com.google.api.gax.httpjson.HttpJsonClientCallImpl.deliver(HttpJsonClientCallImpl.java:318) ~[gax-httpjson-2.45.0.jar:2.45.0]
        at com.google.api.gax.httpjson.HttpJsonClientCallImpl.setResult(HttpJsonClientCallImpl.java:164) ~[gax-httpjson-2.45.0.jar:2.45.0]
        at com.google.api.gax.httpjson.HttpRequestRunnable.run(HttpRequestRunnable.java:149) ~[gax-httpjson-2.45.0.jar:2.45.0]
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[na:na]
        at java.base/java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:264) ~[na:na]
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java) ~[na:na]
        at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304) ~[na:na]
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[na:na]
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[na:na]
        at java.base/java.lang.Thread.run(Thread.java:829) ~[na:na]
        Suppressed: com.google.api.gax.rpc.AsyncTaskException: Asynchronous task failed
            at com.google.api.gax.rpc.ApiExceptions.callAndTranslateApiException(ApiExceptions.java:57) ~[gax-2.45.0.jar:2.45.0]
            at com.google.api.gax.rpc.UnaryCallable.call(UnaryCallable.java:112) ~[gax-2.45.0.jar:2.45.0]
            at com.google.cloud.compute.v1.InstancesClient.list(InstancesClient.java:3117) ~[google-cloud-compute-1.48.0.jar:1.48.0]
            at org.service.impl.GoogleComputeServiceImpl.getProxyIp(GoogleComputeServiceImpl.java:49) ~[classes/:na]
            at org.controller.MyController.getProxyUrl(MyController.java:22) ~[classes/:na]
            ...
    Caused by: com.google.api.gax.httpjson.HttpJsonStatusRuntimeException: Exception in message delivery
        at com.google.api.gax.httpjson.HttpJsonClientCallImpl.deliver(HttpJsonClientCallImpl.java:368) ~[gax-httpjson-2.45.0.jar:2.45.0]
        ... 9 common frames omitted
    Caused by: com.google.api.gax.httpjson.RestSerializationException: Failed to parse response message
        at com.google.api.gax.httpjson.ProtoRestSerializer.fromJson(ProtoRestSerializer.java:107) ~[gax-httpjson-2.45.0.jar:2.45.0]
        at com.google.api.gax.httpjson.ProtoMessageResponseParser.parse(ProtoMessageResponseParser.java:76) ~[gax-httpjson-2.45.0.jar:2.45.0]
        at com.google.api.gax.httpjson.ProtoMessageResponseParser.parse(ProtoMessageResponseParser.java:41) ~[gax-httpjson-2.45.0.jar:2.45.0]
        at com.google.api.gax.httpjson.HttpJsonClientCallImpl.consumeMessageFromStream(HttpJsonClientCallImpl.java:431) ~[gax-httpjson-2.45.0.jar:2.45.0]
        at com.google.api.gax.httpjson.HttpJsonClientCallImpl.deliver(HttpJsonClientCallImpl.java:363) ~[gax-httpjson-2.45.0.jar:2.45.0]
        ... 9 common frames omitted
    Caused by: com.google.protobuf.InvalidProtocolBufferException: No map fields found in com.google.cloud.compute.v1.Instance
        at com.google.protobuf.util.JsonFormat$ParserImpl.merge(JsonFormat.java:1309) ~[protobuf-java-util-3.25.2.jar:na]
        at com.google.protobuf.util.JsonFormat$Parser.merge(JsonFormat.java:463) ~[protobuf-java-util-3.25.2.jar:na]
        at com.google.api.gax.httpjson.ProtoRestSerializer.fromJson(ProtoRestSerializer.java:104) ~[gax-httpjson-2.45.0.jar:2.45.0]
        ... 13 common frames omitted
    Caused by: java.lang.IllegalArgumentException: No map fields found in com.google.cloud.compute.v1.Instance
        at com.google.protobuf.GeneratedMessageV3.internalGetMapField(GeneratedMessageV3.java:2054) ~[protobuf-java-3.24.3.jar:na]
        at com.google.protobuf.GeneratedMessageV3$FieldAccessorTable$MapFieldAccessor.getMapField(GeneratedMessageV3.java:2814) ~[protobuf-java-3.24.3.jar:na]
        at com.google.protobuf.GeneratedMessageV3$FieldAccessorTable$MapFieldAccessor.<init>(GeneratedMessageV3.java:2806) ~[protobuf-java-3.24.3.jar:na]
        at com.google.protobuf.GeneratedMessageV3$FieldAccessorTable.ensureFieldAccessorsInitialized(GeneratedMessageV3.java:2123) ~[protobuf-java-3.24.3.jar:na]
        at com.google.cloud.compute.v1.Instance$Builder.internalGetFieldAccessorTable(Instance.java:4339) ~[proto-google-cloud-compute-v1-1.48.0.jar:1.48.0]
        at com.google.protobuf.GeneratedMessageV3$Builder.hasField(GeneratedMessageV3.java:747) ~[protobuf-java-3.24.3.jar:na]
        at com.google.protobuf.util.JsonFormat$ParserImpl.mergeField(JsonFormat.java:1629) ~[protobuf-java-util-3.25.2.jar:na]
        at com.google.protobuf.util.JsonFormat$ParserImpl.mergeMessage(JsonFormat.java:1477) ~[protobuf-java-util-3.25.2.jar:na]
        at com.google.protobuf.util.JsonFormat$ParserImpl.merge(JsonFormat.java:1435) ~[protobuf-java-util-3.25.2.jar:na]
        at com.google.protobuf.util.JsonFormat$ParserImpl.parseFieldValue(JsonFormat.java:1995) ~[protobuf-java-util-3.25.2.jar:na]
        at com.google.protobuf.util.JsonFormat$ParserImpl.mergeRepeatedField(JsonFormat.java:1710) ~[protobuf-java-util-3.25.2.jar:na]
        at com.google.protobuf.util.JsonFormat$ParserImpl.mergeField(JsonFormat.java:1642) ~[protobuf-java-util-3.25.2.jar:na]
        at com.google.protobuf.util.JsonFormat$ParserImpl.mergeMessage(JsonFormat.java:1477) ~[protobuf-java-util-3.25.2.jar:na]
        at com.google.protobuf.util.JsonFormat$ParserImpl.merge(JsonFormat.java:1435) ~[protobuf-java-util-3.25.2.jar:na]
        at com.google.protobuf.util.JsonFormat$ParserImpl.merge(JsonFormat.java:1299) ~[protobuf-java-util-3.25.2.jar:na]
        ... 15 common frames omitted
"No map fields found in com.google.cloud.compute.v1.Instance" error when getting instances in Google Cloud API Java
|java|google-cloud-platform|protocol-buffers|
null
null
You can change the `max-height` unit from `px` to `vh`; it will work for you.

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-css -->

    .parent {
      margin: 5px;
      max-height: 100vh;
      max-width: 400px;
      background: pink;
    }

    .details {
      width: 100%;
      text-align: center;
      font-size: 14pt;
      display: grid;
      grid-template-rows: 1fr 1fr;
    }

    .detail1 {
      grid-row: 1;
      overflow-y: hidden;
    }

    .detail2 {
      grid-row: 2;
      overflow-y: hidden;
    }

<!-- language: lang-html -->

    <div class="parent">
      <div class="details">
        <div class="detail1">
          lotsoftext lotsoftext lotsoftext lotsoftext lotsoftext lotsoftext lotsoftext lotsoftext lotsoftext lotsoftext lotsoftext lotsoftext
        </div>
        <div class="detail2">
          lotsmoretext lotsmoretext lotsmoretext lotsmoretext lotsmoretext lotsmoretext lotsmoretext lotsmoretext lotsmoretext lotsmoretext lotsmoretext lotsmoretext
        </div>
      </div>
    </div>

<!-- end snippet -->
I think there might be an indentation/alignment issue. Make sure the pubspec.yaml looks like this:

    dependencies:
      flutter:
        sdk: flutter
      carousel_slider: ^4.2.1
You can use Docker [Volumes](https://docs.docker.com/storage/volumes/)

> Volumes can be more safely shared among multiple containers.

You can define Volumes in a `dockerfile`, a `docker-compose` file, or on the `cli` (a CLI sketch follows below).

dockerfile:
```dockerfile
FROM python

VOLUME /home/python/lib
```

docker-compose file:
```docker
services:
  py_app:
    image: python
    volumes:
      - myapp:/home/python/lib
volumes:
  myapp:
```
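For the `cli` case, a minimal sketch (the volume name `myapp` and the mount path are just the same illustrative values used above):

```bash
# create a named volume
docker volume create myapp

# run a container with the volume mounted at /home/python/lib
docker run -it -v myapp:/home/python/lib python
```

Any data written to `/home/python/lib` persists in the `myapp` volume across container restarts.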
Please help with a "Bad PCD format" error. I would like to know what's wrong with my PCD file. I have given the following header to the PCD file:

    header = "# .PCD v.7 - Point Cloud Data file format
    VERSION .7
    FIELDS x y z data
    SIZE 4 4 4
    TYPE F F F
    COUNT 1 1 1
    WIDTH 0
    HEIGHT 1
    VIEWPOINT 0 0 0 1 0 0 0
    POINTS 0
    DATA binary"
> And since array name a is originally &a[0], it will be a int** type variable.

You are wrong. Arrays used in expressions are, with rare exceptions, indeed converted to pointers to their first elements. But in this case the type of the elements of the array is `int[3]`. So the array `a` used in expressions is converted to a pointer of type `int ( * )[3]`.

As for this code snippet:

    int* p;
    p = a; //possible error, from int** to int*
    printf("*p : %d\n", *p);

it has undefined behavior. There is an attempt to interpret the value of a pointer as an integer.
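A minimal compilable sketch of the correct pointer type (the declaration `int a[2][3]` is my assumption for illustration; it matches the element type `int[3]` discussed above):

    #include <stdio.h>

    int main(void) {
        int a[2][3] = {{1, 2, 3}, {4, 5, 6}};

        int (*p)[3] = a;  /* a converts to a pointer to its first row */

        /* (*p)[0] is a[0][0]; p[1][2] is a[1][2] */
        printf("%d %d\n", (*p)[0], p[1][2]);  /* prints: 1 6 */

        return 0;
    }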
I have an application using the classic [clean architecture](https://jasontaylor.dev/clean-architecture-getting-started/) with all the dependencies set up as described (i.e. flowing inward). I have an external service I wish to use in my project, so I defined the functionality for interacting with that service (objects that interact with the service, some interfaces, etc.) in the `Infrastructure` layer. Currently the `Presentation` layer is consuming the external service directly from the `Infrastructure` layer.

A question about this would be: is it acceptable for the presentation layer to communicate directly with the infrastructure layer, or must everything go via the application layer?

Ideally I would like the presentation layer to call the application layer so that I can reuse some of the functionality it has available (for things such as validation, amongst others), but then, with the infrastructure layer not knowing about the application layer, I would need to define the external service objects in the application layer. I would rather avoid doing so, as these are service specific rather than application specific. I would much rather have them defined in the infrastructure layer, closest to where they are used.

So, is there a way to reuse the interfaces defined in the infrastructure layer for this external service, or will I just have to suck it up and have some duplication (i.e. define interfaces in the application layer that my external service in the infrastructure layer implements)?
In clean architecture, is the presentation layer allowed to communicate directly with the infrastructure layer?
I am trying to create a multi-filter from an enum data type column using the item template, but the item template JavaScript function is not being called and the data is not being shown in the combo box for some reason. I am looking for help from an expert to solve this.

**Enum Records**

    public enum EmpTypes
    {
        [Display(Name = "Service")]
        Service = 0,
        [Display(Name = "Sales")]
        Sales = 1,
        [Display(Name = "Purchase")]
        Purchase = 2,
        [Display(Name = "Office")]
        Office = 3
    }

**Kendo Grid**

    columns.Bound(c => c.EmpTypes).Title("Type")
        .Filterable(filterable => filterable
            .Multi(true)
            .UI("")
            .DataSource(s => s.Read(r => r.Action("GetEmpTypes", "Report")))
            .ItemTemplate("typetemplate"));

    <script>
        function typetemplate(e) {
            alert('Test');
        }
    </script>

**Action Method in MVC controller**

    public ActionResult GetEmpTypes()
    {
        List<EmpType> emptypes = new List<EmpType>();
        emptypes.Add(EmpType.Sales);
        emptypes.Add(EmpType.Report);
        return Json(emptypes, JsonRequestBehavior.AllowGet);
    }
I am trying to get the application context for `showToast` but am not able to get it. Here is my code; kindly help.

```
package com.coding.APPNAVIGATION.MenuAndNavigationDrawer

import android.app.Application
import android.content.Context

open class MainApplication : Application() {
    override fun onCreate() {
        super.onCreate()
        MainApplication.appContext = applicationContext
    }

    companion object {
        lateinit var appContext : Context
    }
}
```

```
package com.coding.APPNAVIGATION.MenuAndNavigationDrawer.Utils

import android.widget.Toast

object Utils {
    fun showToast(message : String) {
        Toast.makeText(MainApplication.appContext, message, Toast.LENGTH_LONG).show()
    }
}
```

I want to get the application context so I can call `showToast` anywhere within the project.
|asp.net-mvc|razor|kendo-grid|
I installed DNSfilter on the family tablet to prevent my children from accessing YouTube. As it happens, I need YouTube for professional purposes (following tutorials in particular), and so as not to continually activate/deactivate the VPN, I thought of Samsung's Modes & Routines which, along with Routines +, makes it possible to connect to or disconnect from a VPN network (such as the one DNSfilter uses) automatically when a specific fingerprint is detected (mine in this case). That's exactly what I had in mind!

Unfortunately, I can't do it with the DNSfilter VPN network: Samsung sends an error message specifying that it must not be an application VPN.

My idea is simple: when my fingerprint is detected, automatically disconnect from the DNSfilter VPN network, and keep it activated when unlocking with a code, another fingerprint, or facial recognition.

Would there be any other solution to achieve this? For example, by making Samsung believe that it's a manual VPN and not an application one, or via some other service that allows you to do this, please, without rooting the phone for fear of compromising security (I also have my bank's application, which doesn't support root). I sincerely thank you in advance for your help.
Automating connection/disconnection of DNSfilter's VPN when a specific fingerprint is recognised
|asp.net-core|asp.net-core-webapi|clean-architecture|
I have been trying all day to write a simple bit of code that makes a Discord bot join a channel and then play a sound. But for some reason I can't seem to get it to work. I have tried many different things but it just doesn't play the sound. I also get no error or anything. Could somebody explain to me what I am doing wrong? The code can be found below.

    const Discord = require("discord.js");
    const discordTTS = require('discord-tts');
    const { Client, GatewayIntentBits } = require('discord.js');
    const { joinVoiceChannel } = require('@discordjs/voice');
    const { createAudioPlayer, NoSubscriberBehavior } = require('@discordjs/voice');
    const { createAudioResource } = require('@discordjs/voice');
    const { join } = require('node:path');
    const { AudioPlayerStatus } = require('@discordjs/voice');
    const path = require('path');

    const player = createAudioPlayer({
        behaviors: {
            noSubscriber: NoSubscriberBehavior.Pause,
        },
    });

    const client = new Client({
        intents: [
            GatewayIntentBits.Guilds,
            GatewayIntentBits.GuildMessages,
            GatewayIntentBits.MessageContent,
            GatewayIntentBits.GuildMembers,
        ],
    });

    client.on('ready', (c) => {
        console.log(c.user.tag + " is online!");
    });

    client.on('messageCreate', (message) => {
        if (message.content === 'hello') {
            const connection = joinVoiceChannel({
                channelId: message.member.voice.channel.id,
                guildId: message.member.voice.guild.id,
                adapterCreator: message.member.voice.guild.voiceAdapterCreator,
                selfDeaf: false,
            });
            const player = createAudioPlayer();
            connection.subscribe(player);
            resource = createAudioResource(path.join(__dirname, 'sounds', 'alert.mp3'));
            player.play(resource);
        }
    });

It joins the channel fine but then doesn't play the sound. I double-checked and the file (alert.mp3) is in the sounds folder.
|java|spring-boot|testing|mockito|webclient|
> I don't understand the `temp` part. How the indexes are adding to `temp`? example `arr = 1,2,3,4,3`: it should return `4,2` but it returns `2,4`.

`temp` captures the array list that the recursive call returns: it contains the matching indices, but only from the given index (`index+1`) onward. With `addAll` all these matching indices are appended to (after!) the content we already have. The content we already have is either an empty list, or (if the current index has a match) it has one element, which is `index`. As `addAll` adds indices *after* that first element, and those additional indices are guaranteed to be at least `index+1` (as the recursive call only looks from that index onward), we can be certain that `iList` will have the indices in *increasing* order.

Notice where your code is different. In essence you *first* make the recursive calls and collect the indices it returns, and only *then* you check whether the current index `index` has a matching value, and add it *before* those if it matches (not *after*). But the result is the same as with the instructor's code because these two changes cancel each other out:

1. Adding the recursive results first instead of last,
2. Adding the `index` *before* instead of *after* what you already have.

If we were to take your instructor's code and just change the order of adding the recursive results with the (optional) adding of the `index`...:

```
static ArrayList<Integer> LinearSearch(int[] arr, int index, int target) {
    ArrayList<Integer> iList = new ArrayList<>();
    if (index == arr.length) {
        return iList;
    }
    ArrayList<Integer> temp = LinearSearch(arr, index + 1, target);
    iList.addAll(temp);
    // Moved the addition of the current index AFTER
    // the recursive results were added:
    if (arr[index] == target) {
        iList.add(index);
    }
    return iList;
}
```

...then the result for the example input will be `4,2`. Notice that here we have `iList.add(index)` and not `iList.addFirst(index)`, and so the order is reversed to `4,2`.
Generic type with reference to one of its properties with different type
I have written an Apache Beam function in Go which queries a database, creating a PCollection, and then writes that PCollection to BigQuery. I would like to add a column to the PCollection before writing it to BigQuery. Part of the complication here definitely comes from the fact that I am trying to allow this function to handle arbitrary types (e.g. structs) to read from different tables in my DB. The relevant parts of my code look something like this:

```
func init() {
    register.DoFn1x2[beam.X, string, beam.X](&addTenantColumnFn{})
}

func copyFromSQLToBQ(s beam.Scope, t tenant, cfg *Config, query, targetTable string, rt reflect.Type) {
    queryResults := databaseio.Query(s, "pgx", dsn, query, rt)
    queryResultsWithTenantIDs := beam.ParDo(s, &addTenantColumnFn{TenantID: t.ID}, queryResults)
    bigqueryio.Write(s, cfg.GCPProjectID, bqTableName(cfg.GCPProjectID, targetTable),
        queryResultsWithTenantIDs, bigqueryio.WithCreateDisposition(bigquery.CreateIfNeeded))
}

type addTenantColumnFn struct {
    TenantID string
}

func (a *addTenantColumnFn) ProcessElement(x beam.X) (string, beam.X) {
    return a.TenantID, x
}
```

Unfortunately, when I attempt to run, I get the following error:

```
panic: Method ProcessElement in DoFn github.com/apache/beam/sdks/v2/go/pkg/beam.addFixedKeyFn does not have enough main inputs. 2 main inputs were expected, but only 1 inputs were found.
Full error:
inserting ParDo in scope root/bigquery.Write
graph.AsDoFn: for Fn named github.com/apache/beam/sdks/v2/go/pkg/beam.addFixedKeyFn
ProcessElement method has too few main inputs
```

I also tried implementing the solution I found in [this post](https://stackoverflow.com/questions/75880936/add-a-column-to-apache-beam-pcollection-in-golang), by changing my ParDo line to the following:

`queryResultsWithTenantIDs := beam.ParDo(s, &addTenantColumnFn{TenantID: t.ID}, queryResults, beam.TypeDefinition{Var: beam.XType, T: reflect.TypeOf("")})`

This leads to an error as well:

```
panic: inserting ParDo in scope root
creating new DoFn in scope root
binding fn github.com/apache/beam/sdks/v2/go/pkg/beam/register.registerDoFn1x2StructWrappersAndFuncs[...].func2.1
cannot substitute type X with string, already defined as main.user
```
Add a column to an Apache Beam Pcollection in Go
|go|google-cloud-platform|google-bigquery|google-cloud-dataflow|apache-beam|
null
@John Gordon's answer is great; I just wanted to add something. Often `keyboard.read_key()` is not used, since it can be hard to clear keyboard input after pressing a key, and there is usually a more efficient way to do it with the `keyboard` library. We can use `keyboard.add_hotkey()` to allow our program to call a function whenever a key is pressed. Here is a simple example:

    #after importing turtle, keyboard, and doing whatever you want to do

    def w_pressed():
        #do something when the key w is pressed

    def a_pressed():
        #do something when the key a is pressed

    def s_pressed():
        #do something when the key s is pressed

    def d_pressed():
        #do something when the key d is pressed

    keyboard.add_hotkey('w', w_pressed) #bind the key w to the function w_pressed
    keyboard.add_hotkey('a', a_pressed) #bind the key a to the function a_pressed
    keyboard.add_hotkey('s', s_pressed) #bind the key s to the function s_pressed
    keyboard.add_hotkey('d', d_pressed) #bind the key d to the function d_pressed

One big advantage of this code is it doesn't require a loop, so this code will always work even after the first time. It also allows you to get rid of `if` and `elif` statements, which improves the general organization of the code.

I will note that the code here is a little repetitive. There might be better functions, so make sure to look at the [documentation].

[documentation]: https://github.com/boppreh/keyboard/blob/master/README.md
I am trying to write a JUnit test but am receiving a java.util.NoSuchElementException. With limited knowledge of Mockito and JUnit, I understand DriverManager.getConnection needs to be mocked/stubbed. How can I do that? Any help is highly appreciated. The aim is to run the JUnit test without access to the actual DB source.

```lang-java
public class DatabricksQueryExecutorTests {

    @Test
    public void testReadUnityTableDataAsJson() throws SQLException {
        // Create a mock DatabricksConfig object
        DatabricksConfig databricksConfig = new DatabricksConfig();
        databricksConfig.setHost("example.com");
        databricksConfig.setHttpPath("/api/2.0");
        databricksConfig.setAccessToken("your-access-token");
        databricksConfig.setUsername("your-username");

        // Create a mock SQL query
        String sqlQuery = "SELECT * FROM table";

        // Create a mock ResultSet with sample data
        ResultSet resultSet = Mockito.mock(ResultSet.class);
        Mockito.when(resultSet.next()).thenReturn(true, false);
        Mockito.when(resultSet.getMetaData()).thenReturn(Mockito.mock(ResultSetMetaData.class));
        Mockito.when(resultSet.getMetaData().getColumnCount()).thenReturn(2);
        Mockito.when(resultSet.getMetaData().getColumnName(1)).thenReturn("column1");
        Mockito.when(resultSet.getMetaData().getColumnName(2)).thenReturn("column2");
        Mockito.when(resultSet.getString(1)).thenReturn("value1");
        Mockito.when(resultSet.getString(2)).thenReturn("value2");

        // Create a mock Connection and Statement
        Connection connection = Mockito.mock(Connection.class);
        Statement statement = Mockito.mock(Statement.class);
        Mockito.when(connection.createStatement()).thenReturn(statement);
        Mockito.when(connection.createStatement().executeQuery(sqlQuery)).thenReturn(resultSet);

        // Create an instance of DatabricksQueryExecutor
        DatabricksQueryExecutor queryExecutor = new DatabricksQueryExecutor();

        // Call the method under test
        JsonArray jsonArray = queryExecutor.readUnityTableDataAsJson(sqlQuery, databricksConfig);

        // Verify the result
        Assertions.assertEquals(1, jsonArray.size());
        JsonObject jsonObject = jsonArray.get(0).getAsJsonObject();
        Assertions.assertEquals("value1", jsonObject.get("column1").getAsString());
        Assertions.assertEquals("value2", jsonObject.get("column2").getAsString());
    }
}
```

**Error:**

```lang-none
java.util.NoSuchElementException
    at java.base/java.util.StringTokenizer.nextToken(StringTokenizer.java:347)
    at com.databricks.client.jdbc.common.BaseConnectionFactory.acceptsURL(Unknown Source)
    at com.databricks.client.jdbc.common.AbstractDriver.connect(Unknown Source)
    at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:681)
    at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:229)
    at com.se.epx.dain.databricksqueryexecutor.DatabricksQueryExecutorTest.testReadUnityTableDataAsJson(DatabricksQueryExecutorTest.java:38)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:568)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
    at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
    at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
    at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
    at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
    at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
    at org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:42)
    at org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:80)
    at org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:72)
    at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:198)
    at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:169)
    at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:93)
    at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:58)
    at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:141)
    at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:57)
    at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:103)
    at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:94)
    at org.junit.platform.launcher.core.DelegatingLauncher.execute(DelegatingLauncher.java:52)
    at org.junit.platform.launcher.core.SessionPerRequestLauncher.execute(SessionPerRequestLauncher.java:70)
    at org.eclipse.jdt.internal.junit5.runner.JUnit5TestReference.run(JUnit5TestReference.java:100)
    at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:40)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:529)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:756)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:452)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:210)
```
As user @SandeepM says, the reason is very likely that your iPad is using a dark theme. A simple way to fix that would be to set colors explicitly in your code. For example:

    i.fa-light,
    i.fa-light:before,
    i.fa-light:after {
      color: black;
    }

should change them to `black`.
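If you would rather keep theme-appropriate colors instead of forcing one, a sketch using the `prefers-color-scheme` media query (same Font Awesome class names assumed as above):

    /* light theme default */
    i.fa-light { color: black; }

    /* applies only when the device reports a dark theme */
    @media (prefers-color-scheme: dark) {
      i.fa-light { color: white; }
    }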
I wanted to specify the layout of the facets when making a plot with facet_wrap in ggplot, so I used the following code:

```
df$layout_order <- factor(rep(1:ceiling(nrow(df)/4), each = 4),
                          levels = unique(rep(1:ceiling(nrow(df)/4), each = 4))[1:nrow(df)])

ggplot(df, aes(x = Time, y = nPVI, group = Time, fill = Group)) +
  geom_bar(stat = "identity", position = "dodge", color = "black") +
  labs(title = "nPVI scores by Participant, Time and Group",
       x = "Time", y = "nPVI", group = "Time") +
  facet_wrap(~ Participant + layout_order, scales = "free", ncol = 4) +
  coord_cartesian(ylim = c(20, 60)) +
  scale_fill_manual(values = c("Oral" = "#86CB92", "Prosody" = "#0E7C7B")) +
  theme_minimal()
```

It works, but it also adds the layout order numbers at the top of each facet, under the Participant names. How can I remove them?
You can provide your own WorkManager `Configuration` with an explicit JobScheduler job ID range:

    class app : Application(), androidx.work.Configuration.Provider {
        override val workManagerConfiguration: androidx.work.Configuration
            get() = androidx.work.Configuration.Builder()
                .setJobSchedulerJobIdRange(....)
                .build()
    }

Or suppress the lint check in your module's build.gradle:

    android {
        lintOptions {
            disable 'SpecifyJobSchedulerIdRange'
        }
    }

https://googlesamples.github.io/android-custom-lint-rules/checks/SpecifyJobSchedulerIdRange.md.html
I'm trying to clean and organize a data set that has user IDs, date and time to the second, and heart rate. What I'm trying to do is change this data set from measuring heart rate by second to heart rate by minute, by removing duplicate values before the first unique value. However, columns A (user IDs) and B (time) are both reliant on each other and both contain duplicates. I need to change it so that there is one unique combination of columns A and B together.

[enter image description here](https://i.stack.imgur.com/fYnMh.png)

This is a snapshot of my data. An example of what I need would be one value for 5/9/2016 19:49 for user ID 123456789, but also one value for 5/9/2016 19:49 for user ID 987654321, and so on for each value of time.
Removing duplicate data conditionally in Excel
|excel|duplicates|data-analysis|data-cleaning|
null
1. Are you sure you did not accidentally update to 4.27.**3** or later? I got exactly your problem after I installed the 4.28.0 version - see below...

2. You need Hyper-V enabled for this: is it working correctly on your machine? If you are using Windows Home Edition there is no chance: upgrade your Windows to Professional Edition - see maybe [tag:docker-for-windows]?

From my view, at this time Docker Desktop (at least version 4.28.0) seems to have a problem with some current Windows 10 setups, updates and things... After I uninstalled 4.28.0 and replaced it with a fresh install of Docker Desktop version 4.27.2 (see [Docker Desktop release notes][2]), everything works fine for me with VS 2022 and ASP.NET 8.

... don't update DD until this is fixed! ;)

In [GitHub, docker/for-win: ERROR: request returned Internal Server Error for API route and version...][1] there is a hint that upgrading WSL2 might help too, if it is not updated with Windows (see here: https://superuser.com/a/1709473).

[1]: https://github.com/docker/for-win/issues/13909
[2]: https://docs.docker.com/desktop/release-notes/#4272
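If you want to try the WSL2 route, the usual commands (run from an elevated PowerShell or cmd; just a sketch, exact behavior varies by Windows build) are:

    wsl --update
    wsl --shutdown

then restart Docker Desktop.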
null
Well, the simple answer is that the language definition simply doesn't allow it — it's a design choice.

[Chapter and verse][1]:

<blockquote>
<b>6.5.16 Assignment operators</b><br>
...<br>
<b>Constraints</b><br><br>
2 An assignment operator shall have a modifiable lvalue as its left operand.
</blockquote>

And what's a modifiable lvalue?

<blockquote>
<b>6.3.2.1 Lvalues, arrays, and function designators</b><br><br>
1 An <em>lvalue</em> is an expression with an object type or an incomplete type other than <code>void</code>;<sup>53)</sup> if an lvalue does not designate an object when it is evaluated, the behavior is undefined. When an object is said to have a particular type, the type is specified by the lvalue used to designate the object. <b>A <em>modifiable lvalue</em> is an lvalue that does not have array type</b>, does not have an incomplete type, does not have a const-qualified type, and if it is a structure or union, does not have any member (including, recursively, any member or element of all contained aggregates or unions) with a const-qualified type.<br>
...<br>
53) The name ‘‘lvalue’’ comes originally from the assignment expression <code>E1 = E2</code>, in which the left operand <code>E1</code> is required to be a (modifiable) lvalue. It is perhaps better considered as representing an object ‘‘locator value’’. What is sometimes called ‘‘rvalue’’ is in this International Standard described as the ‘‘value of an expression’’.
</blockquote>

Emphasis added.

Array expressions in C are treated differently than most other expressions. The reason for this is explained in an [article][2] Dennis Ritchie wrote about the development of the C language:

<blockquote>
NB existed so briefly that no full description of it was written. It supplied the types <code>int</code> and <code>char</code>, arrays of them, and pointers to them, declared in a style typified by:<br>

    int i, j;
    char c, d;
    int iarray[10];
    int ipointer[];
    char carray[10];
    char cpointer[];

The semantics of arrays remained exactly as in B and BCPL: the declarations of <code>iarray</code> and <code>carray</code> create cells dynamically initialized with a value pointing to the first of a sequence of 10 integers and characters, respectively. The declarations for <code>ipointer</code> and <code>cpointer</code> omit the size, to assert that no storage should be allocated automatically. Within procedures, the language's interpretation of the pointers was identical to that of the array variables: a pointer declaration created a cell differing from an array declaration only in that the programmer was expected to assign a referent, instead of letting the compiler allocate the space and initialize the cell.
<br><br>
Values stored in the cells bound to array and pointer names were the machine addresses, measured in bytes, of the corresponding storage area. Therefore, indirection through a pointer implied no run-time overhead to scale the pointer from word to byte offset. On the other hand, the machine code for array subscripting and pointer arithmetic now depended on the type of the array or the pointer: to compute `iarray[i]` or `ipointer+i` implied scaling the addend i by the size of the object referred to.
<br><br>
These semantics represented an easy transition from B, and I experimented with them for some months. Problems became evident when I tried to extend the type notation, especially to add structured (record) types. Structures, it seemed, should map in an intuitive way onto memory in the machine, but in a structure containing an array, there was no good place to stash the pointer containing the base of the array, nor any convenient way to arrange that it be initialized. For example, the directory entries of early Unix systems might be described in C as:

    struct {
        int inumber;
        char name[14];
    };

I wanted the structure not merely to characterize an abstract object but also to describe a collection of bits that might be read from a directory. Where could the compiler hide the pointer to <code>name</code> that the semantics demanded? Even if structures were thought of more abstractly, and the space for pointers could be hidden somehow, how could I handle the technical problem of properly initializing these pointers when allocating a complicated object, perhaps one that specified structures containing arrays containing structures to arbitrary depth?
<br><br>
<b>The solution constituted the crucial jump in the evolutionary chain between typeless BCPL and typed C. It eliminated the materialization of the pointer in storage, and instead caused the creation of the pointer when the array name is mentioned in an expression. The rule, which survives in today's C, is that values of array type are converted, when they appear in expressions, into pointers to the first of the objects making up the array.</b>
<br><br>
This invention enabled most existing B code to continue to work, despite the underlying shift in the language's semantics. The few programs that assigned new values to an array name to adjust its origin—possible in B and BCPL, meaningless in C—were easily repaired. More important, the new language retained a coherent and workable (if unusual) explanation of the semantics of arrays, while opening the way to a more comprehensive type structure.
</blockquote>

It's a good article, and well worth reading if you're interested in the "whys" of C.

[1]: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1256.pdf
[2]: https://web.archive.org/web/20130226145628/http://cm.bell-labs.com/cm/cs/who/dmr/chist.html
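To see the conversion rule in action, a small self-contained example (my own illustration, not from the article):

    #include <stdio.h>

    int main(void) {
        int a[10] = {0};

        int *p = a;  /* the expression a converts to int*, with value &a[0] */

        printf("%d\n", *p == a[0]);             /* 1: *p is a[0] */
        printf("%zu\n", sizeof a / sizeof *a);  /* 10: sizeof is one of the
                                                   exceptions; a does NOT convert */

        /* a = p;  would not compile: a is not a modifiable lvalue */

        return 0;
    }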
I was facing the same error on Laravel 10 with grant_type as password. The solution for me was to call Passport::enablePasswordGrant() inside the boot method of the AuthServiceProvider. The Laravel docs for Passport missed mentioning this part.
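For reference, a minimal sketch of that provider (a stock `AuthServiceProvider`; only the `Passport::enablePasswordGrant()` call is the actual fix, the rest is boilerplate):

    <?php

    namespace App\Providers;

    use Illuminate\Foundation\Support\Providers\AuthServiceProvider as ServiceProvider;
    use Laravel\Passport\Passport;

    class AuthServiceProvider extends ServiceProvider
    {
        public function boot(): void
        {
            // Re-enable the grant_type=password flow,
            // which is disabled by default in newer Passport versions
            Passport::enablePasswordGrant();
        }
    }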
I fixed it, guys. I was using a restricted test key, and each key (product, restricted, and any other) has different API key settings for different actions; the error message will tell you which API key action you must change from "none" to "write". Where it says "reveal your key" you have three dots on the right, and there you can edit the API key permissions. I had to change three API key permissions, because each was giving an error until I changed all three to write mode.
I have rewritten your `is_prime` and `largest_prime_under` functions.

    #!/bin/bash

    is_prime() {
        local n="$1"
        if (( n==2 )) || (( n==3 )); then
            return 0
        fi
        if (( n<1 )) || (( n % 2==0 )) || (( n % 3==0 )); then
            return 1
        fi
        for (( i=5; i*i<=n; i+=6 )); do
            if (( n % i==0 )) || (( n % (i+2)==0 )); then
                return 1
            fi
        done
        return 0
    }

    largest_prime_under() {
        local x=$1
        if [[ ! "$x" =~ ^([0-9]|[1-9][0-9]+)$ ]]; then
            echo 'Input must be a natural number (0,1,2,3,...)'
            return 1
        fi
        if (( x<2 )); then
            echo 'Input must be greater than 2'
            return 1
        fi
        while (( x>=2 )); do
            if is_prime "$x"; then
                echo "$x is largest prime less than or equal to $1"
                return 0
            fi
            (( x-- ))
        done
    }

There is an explanation of the `is_prime` method [HERE][1].

Know that this is not at all an optimal solution. With larger values of integers, the [gap between primes][2] grows. With larger gaps, the function `largest_prime_under` may be calling `is_prime` and recalculating the *same* prime numbers millions of times. A more efficient solution would cache all primes up to and including a given number and return the last number.

[1]: https://stackoverflow.com/a/15285588/298607
[2]: https://en.wikipedia.org/wiki/Prime_gap#
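As a sketch of that caching idea (my own illustration using a sieve of Eratosthenes, not part of the functions above; it trades memory for avoiding repeated trial division):

    largest_prime_under_sieve() {
        local x=$1 i j
        local -a is_comp=()               # is_comp[i]=1 means i is composite
        for (( i=2; i*i<=x; i++ )); do
            if [[ -z "${is_comp[i]}" ]]; then
                for (( j=i*i; j<=x; j+=i )); do
                    is_comp[j]=1
                done
            fi
        done
        for (( i=x; i>=2; i-- )); do      # scan down for the largest prime
            if [[ -z "${is_comp[i]}" ]]; then
                echo "$i"
                return 0
            fi
        done
        return 1
    }

For example, `largest_prime_under_sieve 100` prints `97`.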
Just replace this:

    import { getMessaging } from 'firebase/messaging';

with this:

    import { getMessaging } from 'firebase/messaging/sw';
I would use a [D3](https://d3js.org/) scale for that:

```
scale = d3.scaleLinear([1, 10], [100, 200])
//                     ^^^^^^^  ^^^^^^^^^^
//                        A         B
```

`scale` is a function that maps a value in a domain **(A)** (the set of possible inputs) to another value in a range **(B)** (e.g. the set of possible outputs).

e.g.,

```
scale(5)   //=> 144.44444444444446
scale(5.8) //=> 153.33333333333334
```

Let's plot the value `5` on different *x* axes:

<!-- begin snippet: js hide: false console: true babel: false -->

<!-- language: lang-js -->

    var scale1 = d3.scaleLinear([1, 10], [1, 100]);
    var scale2 = d3.scaleLinear([1, 10], [1, 200]);
    var scale3 = d3.scaleLinear([1, 10], [1, 300]);

    document.querySelector('#x1').innerHTML = `<span style="left:${scale1(5)}px"></span>`;
    document.querySelector('#x2').innerHTML = `<span style="left:${scale2(5)}px"></span>`;
    document.querySelector('#x3').innerHTML = `<span style="left:${scale3(5)}px"></span>`;

<!-- language: lang-css -->

    div {position: relative; height: 25px; border-bottom: 1px solid black}
    span {position: absolute}
    #x1 {width: 100px}
    #x2 {width: 200px}
    #x3 {width: 300px}

<!-- language: lang-html -->

    <script src="https://d3js.org/d3.v7.min.js"></script>
    <div id="x1"></div>
    <div id="x2"></div>
    <div id="x3"></div>

<!-- end snippet -->
|android|automation|dns|samsung|
null
So I'm trying to run a CMD check for a package I want to upload to CRAN, but when running it on my Windows 11 cmd I get the following error:

```
checking whether package 'YEAB' can be installed ... ERROR
Installation failed.
```

And when I open the "00install.out" file I get this error:

```
installing *source* package 'YEAB' ...
using staged installation
R
data
inst
byte-compile and prepare package for lazy loading
Error : 'library' is not an exported object from 'namespace:foreach'
Error: unable to load R code in package 'YEAB'
Execution halted
ERROR: lazy loading failed for package 'YEAB'
removing 'C:/Users/Zick/Documents/YEAB/YEAB-master/YEAB.Rcheck/YEAB'
```

The problem is that I'm not importing any function named "library" from the "foreach" namespace. What is more, if I'm not wrong, the "library" function comes from the "base" package, so I don't understand why I'm getting this error.

My NAMESPACE file looks like this:

```
export(KL_div)
export(ab_range_normalization)
export(addalpha)
export(balci2010)
export(berm)
export(biexponential)
export(box_dot_plot)
export(bp_km)
export(bp_opt)
export(ceiling_multiple)
export(curv_index_fry)
export(curv_index_int)
export(den_histogram)
export(entropy_kde2d)
export(eq_hyp)
export(event_extractor)
export(exhaustive_lhl)
export(exhaustive_sbp)
export(exp_fit)
export(f_table)
export(fleshler_hoffman)
export(fwhm)
export(gaussian_fit)
export(gell_like)
export(get_bins)
export(hist_over)
export(hyperbolic_fit)
export(ind_trials_opt)
export(mut_info_discret)
export(mut_info_knn)
export(n_between_intervals)
export(objective_bp)
export(read_med)
export(sample_from_density)
export(trapezoid_auc)
export(unity_normalization)
export(val_in_interval)
importFrom(foreach,"%dopar%")
importFrom(foreach,foreach)
importFrom(MASS,kde2d)
importFrom(Polychrome,createPalette)
importFrom(cluster, clusGap)
importFrom(dplyr,between)
importFrom(dplyr,lag)
importFrom(infotheo,discretize)
importFrom(infotheo,mutinformation)
importFrom(magrittr,"%>%")
importFrom(minpack.lm,nls.lm)
importFrom(rmi,knn_mi)
importFrom(scales,show_col)
importFrom(sfsmisc,integrate.xy)
importFrom(zoo,rollapply)
importFrom(ggplot2, ggplot, aes, geom_point)
importFrom(grid, unit)
importFrom(gridExtra, grid.arrange)
importFrom(rethinking, HPDI)
importFrom(stats, median, optim, coef, fitted, approx, integrate, quantile)
importFrom("grDevices", "col2rgb", "grey", "rgb")
importFrom("graphics", "abline", "arrows", "axis", "box", "boxplot", "grid", "hist", "layout", "lines", "mtext", "par", "polygon", "stripchart", "text")
importFrom("stats", "approx", "bw.SJ", "coef", "cor", "density", "fitted", "integrate", "lm", "loess", "median", "na.omit", "nls", "nls.control", "optim", "pnorm", "quantile", "rbinom", "runif")
importFrom("utils", "read.table", "stack", "tail", "write.csv")
importFrom(grid, unit, gpar, grid.polygon, grid.text, grid.layout, grid.newpage, pushViewport, viewport, grid.rect, grid.points, grid.xaxis, grid.yaxis, grid.segments)
importFrom(minpack.lm, "nls.lm.control")
importFrom(utils, "stack", "write.csv")
importFrom(ggplot2, theme_gray, element_line, element_blank, element_text)
importFrom(cluster, pam)
importFrom(dplyr, group_by, summarise)
```

I've looked all over the internet for a solution and tried to update the "foreach" package to its latest version, but to no avail.

What is more strange is that if I delete these lines:

```
importFrom(foreach,"%dopar%")
importFrom(foreach,foreach)
```

from the NAMESPACE file, I still get the same 'library' is not an exported object from 'namespace:foreach' error. Has somebody else run into this problem? I'm quite new to R package development, so I'm sorry if I turn out to be making a rookie mistake here.