<issue_start><issue_comment>Title: DAOS-6854 dfs: add internal API for dfs_statx to stat + fetch xattr username_0: <issue_comment>username_0: Ah thanks, good to know. Does that add to all the PR tests being done, or does it replace them? And if they are not too time-consuming, it might be worth moving some into PR testing. <issue_comment>username_1: The `Features` tag _**adds to**_ the PR tests being executed, so it's generally okay to use on a final commit/run. We previously had some tagged with "pr", but those have been moved to "daily_regression" (run twice a day) to reduce the overall size of PR runs. <issue_comment>username_0: @username_2 since you asked for this, can you please review? <issue_comment>username_2: Superficially it looks good; the interface is what I expected and it does work. I created https://github.com/daos-stack/daos/pull/4742 to test it from the dfuse side, which at least smoke tests it. What I hadn't realised was that dfuse is going to have to keep a pointer to the parent instead of just the inode, which means I need to update all the reference counting, so it's going to be more work than I thought to get this properly working. <issue_comment>username_0: Yeah, I was thinking we could do that, but it's not possible, since to open the object we need the chunk size from the inode. We try to improve file creation rates by using a special API for arrays (open_with_attr), which requires passing the chunk size and record size there. <issue_comment>username_2: If we know the oid then it's also possible to know the chunk size, although it would mean passing it out in dfs_obj2id() and storing it in dfuse. <issue_comment>username_0: No, the oid itself does not store the chunk size. <issue_comment>username_2: Well no, but when we initially discover the oid we also get the chunk size, so we could store it alongside the oid. We're not saving the oid anywhere externally, and the chunk size won't change without also changing the oid, so we could choose to save it.
<issue_comment>username_0: Well, you would have to store it in dfuse when you retrieve the oid, and pass that to statx. <issue_comment>username_2: Yes, that's what I meant.
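The caching idea discussed above can be sketched language-agnostically. This is a hypothetical illustration in Python (the real code would live in dfuse's C); all names here are invented for the example and are not part of the dfuse/libdfs API:

```python
# Sketch: when dfuse first discovers an oid it also learns the chunk
# size, so it can cache the pair and avoid re-querying the inode later.
# This is safe because the chunk size cannot change without the oid
# also changing.

class OidCache:
    def __init__(self):
        self._chunk_sizes = {}  # oid -> chunk size discovered at lookup time

    def remember(self, oid, chunk_size):
        self._chunk_sizes[oid] = chunk_size

    def chunk_size(self, oid):
        # Returns None when the oid has not been seen yet.
        return self._chunk_sizes.get(oid)

cache = OidCache()
cache.remember("oid-123", 1048576)
```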
<issue_start><issue_comment>Title: How to install it via downloaded zip in wamp server. username_0: I have downloaded this API and extracted it in WAMP server, but it's not working. How do I get it working on a localhost WAMP server? <issue_comment>username_1: What is not working? How far have you got? <issue_comment>username_0: Fatal error: Class 'Buzz\Client\Curl' not found in D:\wamp\www\gitlab\Gitlab\Client.php on line 78 This is the error I am facing right now. <?php require_once(__DIR__ . '/Gitlab/Client.php'); $client = new \Gitlab\Client('http://git.yourdomain.com/api/v3/'); $client->authenticate('your_gitlab_token_here', \Gitlab\Client::AUTH_URL_TOKEN); // change here $project = $client->api('projects')->create('My Project', array( 'description' => 'This is a project', 'issues_enabled' => false )); This is my initial code <issue_comment>username_1: You will need Composer for this project to work. Check out [this tutorial](http://codezag.com/how-to-install-composer-wamp/) on installing Composer on WAMP. Once installed, run `composer install` in the project root and include `vendor/autoload.php` instead of `require_once(DIR . '/Gitlab/Client.php');` And you should be good to go. <issue_comment>username_0: Hi Matt, I am now trying to install it via Composer. Below is the error which I am facing right now. D:\wamp\www\gitlab>composer install You are running composer with xdebug enabled. This has a major impact on runtime performance. See https://getcomposer.org/xdebug Loading composer repositories with package information Installing dependencies (including require-dev) Your requirements could not be resolved to an installable set of packages. Problem 1 - The requested package username_1/php-gitlab-api No version set (parsed as 1.0.0) could not be found.
Potential causes: - A typo in the package name - The package is not available in a stable-enough version according to your minimum-stability setting; see <https://getcomposer.org/doc/04-schema.md#minimum-stability> for more details. Read <https://getcomposer.org/doc/articles/troubleshooting.md> for further common problems. And here is my composer.json code { "name": "username_1/php-gitlab-api", "type": "library", "description": "GitLab API client", "homepage": "https://github.com/username_1/php-gitlab-api", "keywords": ["gitlab", "api"], "license": "MIT", "authors": [ { "name": "Matt Humphrey", "homepage": "http://m4tt.io" }, { "name": "KnpLabs Team", "homepage": "http://knplabs.com" }, { "name": "Thibault Duplessis", "email": "thibault.duplessis@gmail.com", "homepage": "http://ornicar.github.com" } ], "require": { "php": ">=5.3.2", "ext-curl": "*", "kriswallsmith/buzz": ">=0.7", "username_1/php-gitlab-api": "dev-master" }, "require-dev": { "phpunit/phpunit": "~4.5" }, "autoload": { "psr-0": { "Gitlab\\": "lib/" } } } <issue_comment>username_1: All you need in your `composer.json` file is ```json { "require": { "username_1/php-gitlab-api": "dev-master" } } ``` You will probably want to set a specific version based on your [Gitlab version](https://github.com/username_1/php-gitlab-api#versioning). <issue_comment>username_0: Hi Matt, Now I managed to get it installed, but I don't know how to use it. Below is my code <?php require_once '/vendor/autoload.php'; $client = new \Gitlab\Client('https://localhost/gitlab/api/v3/'); // change here $client->authenticate('XYZ', \Gitlab\Client::AUTH_URL_TOKEN); // change here #$project = $client->api('projects'); $project = $client; echo '<pre>';print_r($project); ?> I have two queries here 1) What link should be given in $client = new \Gitlab\Client('https://localhost/gitlab/api/v3/'); // change here 2) How can I fetch the branches of a repo and its commits using this API, as this is not given in the documentation?
It would be better if you could provide some sample project code. <issue_comment>username_1: [The API Guide](http://doc.gitlab.com/ce/api/) should help you out.<issue_closed>
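For the second question, branches and commits are exposed by the GitLab v3 REST API that the PHP client wraps. As a hedged illustration of the endpoint shapes only (base URL, project id, and resource paths below are placeholders built as strings; no request is made):

```python
# Sketch of the raw GitLab v3 endpoints behind the PHP client's
# repositories API. URL building only, so the call shape is clear.

def gitlab_url(base, project_id, resource):
    # resource examples: "repository/branches", "repository/commits"
    return f"{base.rstrip('/')}/projects/{project_id}/{resource}"

base = "https://localhost/gitlab/api/v3"
branches_url = gitlab_url(base, 42, "repository/branches")
commits_url = gitlab_url(base, 42, "repository/commits")
```

These are the URLs you would GET (with your private token header) to list branches and commits for project 42.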
<issue_start><issue_comment>Title: Make ext extension work without the ! username_0: I think the problem that causes this not to work is that the plugin extension will individually normalize both sides of a module name. So if our extension makes `foo.css` become `foo.css!css-plugin` then there will be a cycle where it continuously does this over and over. I think we have to flag this somehow in the plugin extension so that the normalize hook knows what is happening. <issue_comment>username_0: This is a test page to get working: https://github.com/stealjs/steal/blob/master/test/extensions/site_no_bang.html <issue_comment>username_0: The tricky part is how the extension knows that it is being normalized by the plugin extension and that it doesn't need to do its part.<issue_closed>
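The guard being discussed amounts to making the extension's normalize step idempotent: a no-op when the name already carries a plugin, so the two normalize hooks can't ping-pong. A minimal Python illustration of that idea (the real code is JavaScript inside steal; `css-plugin` is the placeholder name from the issue):

```python
PLUGIN = "css-plugin"

def normalize(name):
    if "!" in name:
        # Already carries a plugin -- leave it alone, breaking the cycle.
        return name
    if name.endswith(".css"):
        return f"{name}!{PLUGIN}"
    return name
```

Applying `normalize` twice gives the same result as applying it once, which is exactly the property that prevents the endless re-normalization.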
<issue_start><issue_comment>Title: `iCleaner Pro` working on iOS 12.4 username_0: ``` { "packageId": "com.exile90.icleanerpro", "action": "working", "userInfo": { "arch32": false, "packageId": "com.exile90.icleanerpro", "deviceId": "iPhone9,1", "url": "http://cydia.saurik.com/package/com.exile90.icleanerpro/", "iOSVersion": "12.4", "packageVersionIndexed": true, "packageName": "iCleaner Pro", "category": "Utilities", "repository": "Ivano Bilenchi's Beta Repo", "name": "iCleaner Pro", "installed": "7.7.5", "packageIndexed": true, "packageStatusExplaination": "This package version has been marked as Working based on feedback from users in the community. The current positive rating is 100% with 8 working reports.", "id": "com.exile90.icleanerpro", "commercial": false, "packageInstalled": true, "tweakCompatVersion": "0.1.5", "shortDescription": "The first real iOS system cleaner & optimizer", "latest": "7.7.5", "author": "Ivano Bilenchi", "packageStatus": "Working" }, "base64": "eyJhcmNoMzIiOmZhbHNlLCJwYWNrYWdlSWQiOiJjb20uZXhpbGU5MC5pY2xlYW5lcnBybyIsImRldmljZUlkIjoiaVBob25lOSwxIiwidXJsIjoiaHR0cDpcL1wvY3lkaWEuc2F1cmlrLmNvbVwvcGFja2FnZVwvY29tLmV4aWxlOTAuaWNsZWFuZXJwcm9cLyIsImlPU1ZlcnNpb24iOiIxMi40IiwicGFja2FnZVZlcnNpb25JbmRleGVkIjp0cnVlLCJwYWNrYWdlTmFtZSI6ImlDbGVhbmVyIFBybyIsImNhdGVnb3J5IjoiVXRpbGl0aWVzIiwicmVwb3NpdG9yeSI6Ikl2YW5vIEJpbGVuY2hpJ3MgQmV0YSBSZXBvIiwibmFtZSI6ImlDbGVhbmVyIFBybyIsImluc3RhbGxlZCI6IjcuNy41IiwicGFja2FnZUluZGV4ZWQiOnRydWUsInBhY2thZ2VTdGF0dXNFeHBsYWluYXRpb24iOiJUaGlzIHBhY2thZ2UgdmVyc2lvbiBoYXMgYmVlbiBtYXJrZWQgYXMgV29ya2luZyBiYXNlZCBvbiBmZWVkYmFjayBmcm9tIHVzZXJzIGluIHRoZSBjb21tdW5pdHkuIFRoZSBjdXJyZW50IHBvc2l0aXZlIHJhdGluZyBpcyAxMDAlIHdpdGggOCB3b3JraW5nIHJlcG9ydHMuIiwiaWQiOiJjb20uZXhpbGU5MC5pY2xlYW5lcnBybyIsImNvbW1lcmNpYWwiOmZhbHNlLCJwYWNrYWdlSW5zdGFsbGVkIjp0cnVlLCJ0d2Vha0NvbXBhdFZlcnNpb24iOiIwLjEuNSIsInNob3J0RGVzY3JpcHRpb24iOiJUaGUgZmlyc3QgcmVhbCBpT1Mgc3lzdGVtIGNsZWFuZXIgJiBvcHRpbWl6ZXIiLCJsYXRlc3QiOiI3LjcuNSIsImF1dGhvciI6Ikl2YW5vIEJpbGVuY2hpIiwicGFja2
FnZVN0YXR1cyI6IldvcmtpbmcifQ==", "chosenStatus": "working", "notes": "" } ```<issue_closed>
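The `base64` field in reports like the one above is just the `userInfo` object re-encoded; it can be decoded back to JSON in a couple of lines. Shown here with a small stand-in payload rather than the full blob:

```python
import base64
import json

def decode_report(b64):
    # The reports store userInfo as base64-encoded JSON.
    return json.loads(base64.b64decode(b64))

# Stand-in payload for the example; the real field is much longer.
payload = base64.b64encode(json.dumps({"packageStatus": "Working"}).encode()).decode()
decoded = decode_report(payload)
```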
<issue_start><issue_comment>Title: IRCv3 Message Tags username_0: It would be useful for this library to support parsing of message tags from the IRCv3.2 working specification. Whilst the specification is a work-in-progress, message tags are pretty much set-in-stone and it would be advantageous when writing an IRC application using this library to already have support for message tags. http://ircv3.org/specification/message-tags-3.2 <issue_comment>username_0: I've been low on time as well but if I get any free time, I'll see about writing the necessary code and creating a pull request.
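For reference, the tag syntax in that spec is compact enough that a first-cut parser is short: tags are a leading `@key=value;key2` segment terminated by a space. A minimal sketch (value unescaping per the spec's `\:`/`\s`/`\\` rules is omitted for brevity):

```python
def parse_tags(line):
    """Split an IRCv3 message into its tag dict and the remainder.

    A key without "=" maps to None here; lines without a leading "@"
    have no tags and are returned unchanged.
    """
    if not line.startswith("@"):
        return {}, line
    tag_part, _, rest = line[1:].partition(" ")
    tags = {}
    for item in tag_part.split(";"):
        key, sep, value = item.partition("=")
        tags[key] = value if sep else None
    return tags, rest
```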
<issue_start><issue_comment>Title: 🛑 nitter.mha.fi is down username_0: In [`0f2edb2`](https://github.com/username_0/nitter-instances/commit/0f2edb24e1f64207c4b786f647e82e9f3cfb5cfc ), nitter.mha.fi (https://nitter.mha.fi) was **down**: - HTTP code: 502 - Response time: 382 ms<issue_closed> <issue_comment>username_0: **Resolved:** nitter.mha.fi is back up in [`3a55b8c`](https://github.com/username_0/nitter-instances/commit/3a55b8c062bd3d885b209aed4f9908dd5dfd0348 ).
<issue_start><issue_comment>Title: [BUG] Versionable datetimetype column diff issue username_0: ### Bug Report | Q | A |------------ | ------ | Version | 3.3.2 #### Summary Platform: MySQL Version: 8.0 Entity with versionable field: ``` /** * @ORM\Version() * @ORM\Column(type="datetime") */ private \DateTime $version; ``` generates the same migration over and over again, even after executing it (https://github.com/username_0/migration-issue/blob/version-issue/migrations/Version20220302083650.php) The issue comes from a change that adds `Comparator` with the provided database platform as a dependency. When comparing columns, this piece of code (https://github.com/doctrine/dbal/blob/3.3.x/src/Schema/Comparator.php#L291) returns `[]` on a versionable datetime-type column. But due to the set `$this->platform`, the later check `$this->columnsEqual($column, $toColumn)` results in `false` and marks the column as having changes, because one of the `Column` instances does not have the `version` platform option, which is necessary for getting the correct MySQL datetime column type declaration: https://github.com/doctrine/dbal/blob/3.3.x/src/Platforms/AbstractMySQLPlatform.php#L249 #### How to reproduce https://github.com/username_0/migration-issue/tree/version-issue `version-issue` branch #### Expected behaviour No migration should be generated for a versionable datetime-type column
<issue_start><issue_comment>Title: Add groups to form validation username_0: Add this feature to Angular forms: if we have a form with a lot of controls and we want to validate only some of these controls, we could add a validation group name via an ng-validation-group directive <form name="user"> <input type="text" name="name" ng-validation-group="personalData" ng-required="true"/> <input type="text" name="age" ng-validation-group="personalData" ng-required="true"/> // Other inputs //Other inputs </form> so we can check whether the controls that belong to the group "personalData" are valid or not by just using user.personalData.$valid rather than user.name.$valid && user.age.$valid Thanks <issue_comment>username_1: You can group controls with subforms: <form name="user"> <div ng-form name="personalData"> <input type="text" name="name" ng-required="true"/> <input type="text" name="age" ng-required="true"/> </div> // Other inputs //Other inputs </form> and `user.personalData.$valid`<issue_closed> <issue_comment>username_1: Closing due to lack of feedback (and available options)
<issue_start><issue_comment>Title: Error on Superslicer username_0: I got an error when trying to slice with Superslicer ![image](https://user-images.githubusercontent.com/41800494/145480712-0b330f14-c180-43de-af59-cbdaec41a9f4.png) Thanks <issue_comment>username_1: You'll need to provide the logs from superslicer itself as there's nothing I can use in your screenshot unfortunately. <issue_comment>username_0: Sorry. Do you know where the logs are located? <issue_comment>username_2: Try changing the name of the file Gcode file you are exporting. I think there is an issue with overwriting file names. This fixed it for me. I don't know if the is some way the script can be updated to allow overwriting file names. <issue_comment>username_1: It should work without any problem while overwriting existing files. To move forward, what you can do is: * Disable it from superslicer * Export your gcode on your disk * run the script from command line: C:\PATH_TO_PYTHON.EXE C:\PATH_TO_FILE.gcode and report back if there are errors <issue_comment>username_0: I got this error: ![image](https://user-images.githubusercontent.com/41800494/145861436-c4f942e7-8c68-4975-a8c6-f93428834453.png) <issue_comment>username_1: Sorry, my bad... it should be C:\PATH_TO_PYTHON.EXE C:\PATH_TO_FastGyroidInfil.py C:\PATH_TO_FILE.gcode <issue_comment>username_0: ![image](https://user-images.githubusercontent.com/41800494/145943800-1a7830b1-cdde-4996-8389-5e95923059ac.png) This is the error I got <issue_comment>username_1: You downloaded an HTML file, you need to copy the python file content, not the HTML file. <issue_comment>username_1: You downloaded the cura file, please use this one: https://raw.githubusercontent.com/username_1/klipper-FastGyroidInfill/main/superslicer/FastGyroidInfill.py <issue_comment>username_0: Done. Thank you. You can close this issue.<issue_closed>
<issue_start><issue_comment>Title: plugin/video_player How to create two timestamps on the live video from network username_0: I was thinking about to create two timestamps in video from network and get the exact part of video between these two tomestamps. How can i achive this with video_player plugin? I saw in doc it has the `controller.seekTo` . Can this work? Am I on the right path with my research? <issue_comment>username_1: Hi @username_0, this seems like a discussion for either StackOverflow or any of the communities in https://flutter.dev/community#community-grid. The flutter issue tracker is for reporting bug reports or feature requests for flutter. Seeing as this issue is neither of those, I'm going to close this issue. Thank you<issue_closed>
<issue_start><issue_comment>Title: df.toJSON() not working username_0: I try to get a DataFrame as a JSON structure via `df.toJSON()`. The command completes, but the output is ```` spark-node> var df = sqlContext.read().json("./data/people.json") spark-node> df.toJSON() nodeJava_org_apache_spark_rdd_MapPartitionsRDD { 'org$apache$spark$rdd$RDD$$evidence$1': nodeJava_scala_reflect_ClassTag__anon_1 {} } spark-node> ```` Is it possible to get a DataFrame as JSON with the current version? <issue_comment>username_1: As described [here](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.DataFrame), `DataFrame.toJSON()` returns an RDD, which probably isn't what you want. You can get the dataframe back into the shell by doing `df.collect()`. This will give you an array of strings that you can then convert to JSON using `JSON.parse()`<issue_closed> <issue_comment>username_0: Thanks, `df.toJSON().collect()` does what I want. I was basically confused about the RDD creation's output.
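The same pattern in plain terms: `collect()` returns one JSON string per row, which then needs to be parsed item by item. A small Python illustration with stand-in data:

```python
import json

# Stand-in for df.toJSON().collect(), which yields JSON strings row by row.
rows = ['{"name": "Alice", "age": 30}', '{"name": "Bob", "age": 25}']
records = [json.loads(r) for r in rows]
```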
<issue_start><issue_comment>Title: Issue when running yarn username_0: Hi, I have granfana 7 in a docker container. I cloned your repo in a local path in my computer and tried to run the "yarn install --pure-lockfile" command. I received the following output yarn install v1.22.4 [1/4] 🔍 Resolving packages... [2/4] 🚚 Fetching packages... error An unexpected error occurred: "https://codeload.github.com/ryantxu/grafana-sdk-mocks/tar.gz/e62ddb5450e102a832cd890974c6235262471bf8: Request failed \"404 Not Found\"". info If you think this is a bug, please open a bug report with the information provided in "/Users/username/Documents/_grafana/file-system-datasource/yarn-error.log". info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command. `Yarn version: 1.22.4 Node version: 12.18.1 Platform: darwin x64 Trace: Error: https://codeload.github.com/ryantxu/grafana-sdk-mocks/tar.gz/e62ddb5450e102a832cd890974c6235262471bf8: Request failed "404 Not Found" at ResponseError.ExtendableBuiltin (/usr/local/Cellar/yarn/1.22.4/libexec/lib/cli.js:696:66) at new ResponseError (/usr/local/Cellar/yarn/1.22.4/libexec/lib/cli.js:802:124) at Request.<anonymous> (/usr/local/Cellar/yarn/1.22.4/libexec/lib/cli.js:67057:16) at Request.emit (events.js:315:20) at Request.module.exports.Request.onRequestResponse (/usr/local/Cellar/yarn/1.22.4/libexec/lib/cli.js:141625:10) at ClientRequest.emit (events.js:315:20) at HTTPParser.parserOnIncomingClient [as onIncoming] (_http_client.js:596:27) at HTTPParser.parserOnHeadersComplete (_http_common.js:119:17) at TLSSocket.socketOnData (_http_client.js:469:22) at TLSSocket.emit (events.js:315:20)` Do you know if this something I need to do in my computer, or a dependency issue/bug as we have a resource not found? Thanks!
<issue_start><issue_comment>Title: use normalize.css username_0: Check if bootstrap already does that or not. If not then include it in the project. <issue_comment>username_0: http://stackoverflow.com/questions/18286189/should-i-use-normalize-css-in-my-bootstrap-project I don't need to use as Bootstrap already uses a version of it in their code.<issue_closed>
<issue_start><issue_comment>Title: TL;DR instructions produce error username_0: **Description** The command that Docker Hub tells you to run produces an error right away **Steps to reproduce the issue:** ``` $ docker run --name jupyterhub bitnami/jupyterhub:latest ``` **Describe the results you received:** ``` [I 2021-12-06 14:57:55.621 JupyterHub app:2479] Running JupyterHub version 1.5.0 [I 2021-12-06 14:57:55.621 JupyterHub app:2510] Using Authenticator: jupyterhub.auth.PAMAuthenticator-1.5.0 [I 2021-12-06 14:57:55.621 JupyterHub app:2510] Using Spawner: jupyterhub.spawner.LocalProcessSpawner-1.5.0 [I 2021-12-06 14:57:55.621 JupyterHub app:2510] Using Proxy: jupyterhub.proxy.ConfigurableHTTPProxy-1.5.0 [I 2021-12-06 14:57:55.627 JupyterHub app:1599] Writing cookie_secret to /jupyterhub_cookie_secret [E 2021-12-06 14:57:55.627 JupyterHub app:2989] Traceback (most recent call last): File "/opt/bitnami/miniconda/lib/python3.7/site-packages/jupyterhub/app.py", line 2986, in launch_instance_async await self.initialize(argv) File "/opt/bitnami/miniconda/lib/python3.7/site-packages/jupyterhub/app.py", line 2519, in initialize self.init_secrets() File
"/opt/bitnami/miniconda/lib/python3.7/site-packages/jupyterhub/app.py", line 1601, in init_secrets with open(secret_file, 'w') as f: PermissionError: [Errno 13] Permission denied: '/jupyterhub_cookie_secret' ``` **Describe the results you expected:** Uh, a working JupyterHub image **Additional information you deem important (e.g. issue happens only occasionally):** **Version** - Output of `docker version`: ``` Client: Version: 20.10.11 API version: 1.41 Go version: go1.17.3 Git commit: dea9396e18 Built: Sat Nov 20 12:57:52 2021 OS/Arch: linux/amd64 Context: default Experimental: true Server: Engine: Version: 20.10.11 API version: 1.41 (minimum version 1.12) Go version: go1.17.3 Git commit: 847da184ad Built: Sat Nov 20 12:57:08 2021 OS/Arch: linux/amd64 Experimental: false containerd: [Truncated] Debug Mode: false Username: username_0 Registry: https://index.docker.io/v1/ Labels: Experimental: false Insecure Registries: 127.0.0.0/8 Live Restore Enabled: false ``` - Output of `docker-compose version` (if applicable): ``` (paste your output here) ``` **Additional environment details (AWS, VirtualBox, Docker for MAC, physical, etc.):** Just my own laptop... <issue_comment>username_1: Hi, Note that this solution is meant to be run in a Kubernetes cluster, not using docker run. Could you try running it in a Kubernetes cluster using the bitnami/jupyterhub helm chart? <issue_comment>username_0: I do not run a Kubernetes cluster. If that's the case, maybe remove the TL;DR command from the page? It obviously tells you to use `docker run` `$ docker run --name jupyterhub bitnami/jupyterhub:latest` <issue_comment>username_2: Hi @username_0, yes, we should delete that from the README. It was added because we use a general template
<issue_start><issue_comment>Title: Merge our thread code back into std? username_0: Some context: https://github.com/username_1/ctru-rs/issues/44 * Pros 1. Keeps us up-to-date with std. 2. Not duplicating code. We could define an extension trait for 3ds-specific data like affinity. 3. Other code can still use std threads (though we should generally avoid this since the 3ds CPU works differently). * Cons 1. This allows 3rd-party code to use threads, which might break since the default core is not preemptive (and we only get one thread on the preemptive system core). 2. We can't directly link libctru to std, so we'd need to pass the extra data (e.g. affinity) through pthread or something. <issue_comment>username_1: Priority and affinity are all we have to pass through. Surely this becomes problematic when thinking about third-party crates, but I guess it's just another thing we can only warn our users not to do. Can we implement extensions on pre-existing std structs? <issue_comment>username_1: From testing I can say: exactly that. Threads with higher priority will always run before threads with lower ones, and those lower will run only once all threads with a higher priority are either locked or sleeping. I'm pretty sure a simple "yield" (which in libctru is basically a sleep for 0 time) won't yield to a lower priority, though I'm not that sure. Threads at the same priority seem to run in a very "preemptive" way. It probably just uses a random one from those available. <issue_comment>username_0: Yeah, see for example https://doc.rust-lang.org/stable/std/os/unix/fs/trait.MetadataExt.html <issue_comment>username_1: Ok, an `OS` struct to work with that... I bet the Rust maintainers won't be pleased to see a cooperative threading model used as a preemptive one, but what can we do. <issue_comment>username_0: I'm mostly through implementing this I think. Haven't tested it yet though :upside_down_face:.
<issue_comment>username_2: I noticed something about the test runner added in #41, which is that a panic in the single-threaded test runner pauses everything silently until the user presses SELECT (not ideal). This is due to a combination of the default `ctru::init` panic handler and the fact that the test both a) has stdout captured and b) is running in the main thread, which triggers [this code](https://github.com/username_1/ctru-rs/blob/master/ctru-rs/src/lib.rs#L25) to loop while waiting for input. I think running with threads would mostly resolve this, since the background thread panics would simply exit the threads as usual, but I'm open to other workarounds meanwhile if we'd rather keep the single-threaded behavior for now. <issue_comment>username_0: We can just disable the handler with `#[cfg(not(test))]` <issue_comment>username_2: This is a good idea, and it works well enough, but doesn't behave very nicely if the `test_runner` code panics, which unfortunately seems to happen for certain combinations of configuration options. I think it's fine for now but perhaps multi threaded testing would be more robust as a future enhancement. <issue_comment>username_0: We could technically work around this with a global boolean wrapped in a mutex, to signal if the tests have started or not. But that might be messy. <issue_comment>username_1: We should work toward the `std` threads instead. Our `ctru` hook is already messy enough. <issue_comment>username_0: @username_1 nice timing: #46<issue_closed>
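The "global boolean wrapped in a mutex" workaround mentioned above would look roughly like this, sketched in Python for brevity (the real code would be Rust inside the ctru hook, and the function names here are invented): the panic handler checks the flag and only enters the wait-for-input loop before the test runner has started.

```python
import threading

_tests_started = False
_lock = threading.Lock()

def mark_tests_started():
    # Called once by the test runner before executing any test.
    global _tests_started
    with _lock:
        _tests_started = True

def should_wait_for_input():
    # The panic handler consults this to decide whether to pause
    # for user input or just let the test exit normally.
    with _lock:
        return not _tests_started
```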
<issue_start><issue_comment>Title: Picture URL returns empty string on signed APK username_0: Hi, I'm using SimpleFacebook.getProfile to retrieve the user's profile information. I am able to retrieve all information in debug mode; however, after I signed the APK, I am getting an empty string for the user profile picture URL. So far I have checked the keystore in the Facebook Developer console and it is correct. (I am able to retrieve the user id, user scoped URL, and user's name.) I hope someone can assist me on this, or guide me on how to fix this issue. Thank you.
<issue_start><issue_comment>Title: fixes empty --tag-prefix username_0: This check is superfluous. The default is set in `rc.js`. If we want to be type-safe for `expandTemplate()` we could add a type check, but I think it is not necessary. fixes #141 <issue_comment>username_1: Hmm, maybe we should switch to GitHub Actions? <issue_comment>username_1: @username_0 Do you want to publish it or should I? <issue_comment>username_0: @username_1 Go for it, you are probably much faster; my last npm release was months ago 😄 <issue_comment>username_1: `6.0.1`
<issue_start><issue_comment>Title: Fixed #32641 -- Log the number of tests found in DiscoverRunner.build_suite() username_0: ticket [#32641](https://code.djangoproject.com/ticket/32641). I am not sure whether converting the generator returned from the `reordered_tests` function into a list has any unexpected effect on other parts of the framework. <issue_comment>username_1: It's fine. You need to do it to get the length, and `self.test_suite(all_tests)` afterwards accepts any iterable. You should add a test though. You can look here for an example of testing a `DiscoverRunner` log message: https://github.com/django/django/blob/59552bea5790c97be0da0a6f16ccd0189857c7a7/tests/test_runner/test_discover_runner.py#L285-L289 <issue_comment>username_0: Thanks, @username_1, I wrote a test for logging the number of tests found, but if I pass the `extra_tests` parameter in the `build_suite` method, the tests fail. <issue_comment>username_0: ``` ResourceWarning: unclosed running multiprocessing pool <multiprocessing.pool.Pool state=RUN pool_size=8> ``` This was the console message for the test ```py def test_no_of_tests_displayed(self): runner = DiscoverRunner(verbosity=1) with captured_stdout() as stdout: runner.build_suite(test_labels=['foo', 'bar'], extra_tests=['extra']) self.assertIn('Found 3 tests\n', stdout.getvalue()) ``` <issue_comment>username_1: Can you show the rest of the console? It's just showing a warning and not whether the test passed or failed. <issue_comment>username_1: Maybe it's because `extra_tests` needs to be `TestCase` instances rather than strings. I think you can just not pass `extra_tests` and look into why that's happening later. <issue_comment>username_0: Ya, this is the actual reason :+1: <issue_comment>username_1: @username_2 Note my comments here (not currently visible by default): https://github.com/django/django/pull/14261/files#r613250715 <issue_comment>username_2: @username_0 Thanks :+1: I added `verbosity=0` to disable output in various tests.
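The essence of the fix, stripped of Django internals: the reordered tests come back as a generator, which has no length, so it is materialized into a list before the count is logged; `test_suite()` accepts any iterable, so the list can be passed straight on. A reduced sketch (function and variable names here are illustrative, not the actual DiscoverRunner code):

```python
def build_suite(reordered_tests):
    all_tests = list(reordered_tests)          # generator -> list, so len() works
    message = f"Found {len(all_tests)} tests"  # logged at verbosity >= 1
    return all_tests, message

tests, msg = build_suite(t for t in ["test_a", "test_b", "test_c"])
```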
<issue_start><issue_comment>Title: Disable `source_type` scope when `through` already scoped username_0: Add a condition to disable the `source_type` automatic class scoping when the `through_reflection` already has a scope. Fixes #13920 <issue_comment>username_1: Is this still relevant in the latest stable version of Rails? If so, please update, otherwise should it be closed?
<issue_start><issue_comment>Title: az vmss update fails JSON serialization for updating secrets on windows username_0: I am trying to add the secrets (on windows) in the vmss using cli with Json parameters but it's failing to deserialize. I came across an existing closed [bug](https://github.com/Azure/azure-cli/issues/12623) and followed the instructions suggested [here](https://github.com/Azure/azure-cli/issues/12623#issuecomment-634472185). However, it still doesn't work. Please note: once add starts working, I have a requirement to work on the removal part as well. **Command Name** `az vmss update --name $vmScaleSetName --resource-group $sfResourceGroupName --add virtualMachineProfile.osProfile.secrets `@myCert.json ` Please note: I had to use backtick before the file path since my terminal wasn't able to recognize it. Content of the json file (for more clarity, also replaced the sensitive information with dummy values) `[ { "sourceVault": { "id": "/subscriptions/123123234234234234/resourceGroups/myresourcegroup/providers/Microsoft.KeyVault/vaults/mykeyvault" }, "vaultCertificates": [ { "certificateStore": "MY", "certificateUrl": "https://mykeyvault.vault.azure.net/secrets/mycert/1231kj23h1k2j3h12312323" } ] } ]` **Errors:** `The command failed with an unexpected error. 
Here is the traceback:
Unable to build a model: Unable to deserialize to object: type, AttributeError: 'str' object has no attribute 'get', DeserializationError: Unable to deserialize to object: type, AttributeError: 'str' object has no attribute 'get'
Traceback (most recent call last):
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 1291, in _deserialize
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 1081, in rest_key_case_insensitive_extractor
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 1104, in attribute_key_case_insensitive_extractor
AttributeError: 'str' object has no attribute 'get'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 571, in body
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 1261, in _deserialize
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 1261, in _deserialize
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 1261, in _deserialize
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 1270, in _deserialize
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 1450, in deserialize_data
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 1481, in deserialize_iter
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 1481, in <listcomp>
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 1463, in deserialize_data
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 1301, in _deserialize
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\exceptions.py", line 51, in raise_with_traceback
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 1291, in _deserialize
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 1081, in rest_key_case_insensitive_extractor
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 1104, in attribute_key_case_insensitive_extractor
msrest.exceptions.DeserializationError: Unable to deserialize to object: type, AttributeError: 'str' object has no attribute 'get'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\knack\knack\cli.py", line 215, in invoke
  File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure\cli\core\commands\__init__.py", line 654, in execute
  File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure\cli\core\commands\__init__.py", line 718, in _run_jobs_serially
  File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure\cli\core\commands\__init__.py", line 711, in _run_job
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\six\six.py", line 703, in reraise
  File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure\cli\core\commands\__init__.py", line 688, in _run_job
  File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure\cli\core\commands\__init__.py", line 325, in __call__
  File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure\cli\core\commands\arm.py", line 567, in handler
  File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure\cli\core\commands\__init__.py", line 444, in cached_put
  File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure\cli\core\commands\__init__.py", line 438, in _put_operation
  File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure\cli\command_modules\vm\custom.py", line 2758, in update_vmss
  File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure\cli\core\util.py", line 515, in sdk_no_wait
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\azure-mgmt-compute\azure\mgmt\compute\v2019_12_01\operations\_virtual_machine_scale_sets_operations.py", line 127, in create_or_update
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\azure-mgmt-compute\azure\mgmt\compute\v2019_12_01\operations\_virtual_machine_scale_sets_operations.py", line 72, in _create_or_update_initial
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 574, in body
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\exceptions.py", line 51, in raise_with_traceback
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 571, in body
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 1261, in _deserialize
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 1261, in _deserialize
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 1261, in _deserialize
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 1270, in _deserialize
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 1450, in deserialize_data
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 1481, in deserialize_iter
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 1481, in <listcomp>
  File "C:\Users\VSSADM~1\AppData\Local\Temp\pip-install-yf22vmg2\msrest\msrest\serialization.py", line 1463, in deserialize_data
[Truncated]`

The JSON file was generated with:

```
az vm secret format -s $secrets --keyvault $keyvaultName -g mykvresourcegroupname --certificate-store "MY" | Out-File "myCert.json"
```

Please note: the source for the above script is this [example](https://docs.microsoft.com/en-us/cli/azure/keyvault/certificate?view=azure-cli-latest#az-keyvault-certificate-create-examples).

## Expected Behavior

Should be able to update the vmss with the secret/certificate.

## Environment Summary

```
Windows-10-10.0.17763-SP0
Python 3.6.6
Installer: MSI

azure-cli 2.6.0 *
```

## Additional Context

<!--Please don't remove this:-->
<!--auto-generated-->
<issue_comment>username_1: cc @houk-ms for awareness since NoneType error
<issue_comment>username_2: Faced the same issue. Need a solution for this.
<issue_comment>username_3: similar issue: https://github.com/Azure/azure-cli/issues/15264
<issue_comment>username_3: I tried and got a similar error.

```
File "C:\yfy\env\lib\site-packages\msrest\serialization.py", line 1206, in attribute_key_case_insensitive_extractor
    if lower_attr == key.lower():
msrest.exceptions.DeserializationError: Unable to deserialize to object: type, AttributeError: 'dict' object has no attribute 'lower'
```

Deserialization failed. I am trying to find the cause.
<issue_start><issue_comment>Title: remove resource setting for simulated metric-collector.
username_0: Not sure if this makes sense or not, but if I want to start up more simulated metric collectors, I can just remove the resource setting from the simulated metrics collectors.
<issue_comment>username_1: I am OK with this change. That is designed to run multiple metrics collectors on a single node.
/lgtm
<issue_start><issue_comment>Title: Change the state row colour to green if all the active cases have been recovered.
username_0: **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
<issue_comment>username_1: You mean when there are zero active cases? We can display a green dot in the row in such a case.
<issue_comment>username_0: Not a green dot as such, but changing the whole row colour from grey to green
<issue_comment>username_2: Will consider later.<issue_closed>
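A minimal sketch of the rule being discussed (the `active` field and the `rowColour` helper are illustrative, not the site's actual code): a row turns green only when no active cases remain.

```javascript
// Hypothetical helper: pick the row colour for a state entry.
// A row is green only when there are zero active cases left;
// otherwise it keeps the default grey.
function rowColour(stateRow) {
  return stateRow.active === 0 ? 'green' : 'grey';
}

console.log(rowColour({ state: 'Goa', active: 0 }));    // "green"
console.log(rowColour({ state: 'Delhi', active: 42 })); // "grey"
```

The same predicate could equally drive a CSS class toggle on the row element instead of a colour value.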
<issue_start><issue_comment>Title: Bug 2058256: use library-go for leader elect options
username_0: 
<issue_comment>username_0: /uncc jmencak kpouget
/cc @wabouhamad @username_1
<issue_comment>username_0: I will coordinate with Simon before moving forward with this PR
/hold
<issue_comment>username_0: @sreber84 if we want a fix into 4.8, we will need to edit the bz to point to 4.11 and then make our way down to 4.8
<issue_comment>username_0: Hey @sreber84 any updates testing this?
<issue_comment>username_0: /hold cancel
<issue_comment>username_1: Tested 15 March 2022
Verified on the 4.11.0-0.nightly-2022-03-15-060211 build on an AWS cluster (3 worker and 3 master nodes).
NFD version (from a bundle): quay.io/username_1/nfd-operator-bundle:4.11.202203151049
Kube version: v1.23.3+f017760

Tests performed:
1. Build and deploy NFD bundle. Check the worker node labels.
2. Add a worker node and check new node labels.
3. Remove the newly added node and check the labels on the remaining nodes.
4. Log creation and stability.
5. Check for pod restarts in the openshift-etcd namespace (there were no restarts).

All tests were executed successfully.
<issue_start><issue_comment>Title: GL.create() assumes GLX on Linux
username_0: ### Version

3.3.0 (nightly)

### Platform

Linux x64

### JDK

Eclipse OpenJ9 VM 17.0.2.0

### Module

OpenGL

### Bug description

`GL.create()` assumes GLX on Linux, using `libGL.so` and `glXGetProcAddress()`. The problem with this is that `libGL.so` isn't always available on Wayland platforms. I ran into this bug on a fresh Debian install where the only thing installed was the sway Wayland compositor: `libEGL` was present but not `libGL`, since Wayland uses EGL for context management.

My game engine works around this by setting `Configuration.OPENGL_EXPLICIT_INIT.set(true)` and then using `GL.create(GLFW::glfwGetProcAddress)`, since GLFW uses the API chosen for context creation to implement that function. However, LWJGL's automatic extension loading should avoid making assumptions about which windowing system is being used, as it can break in some cases.

To be honest I don't know what a good fix for this is besides just leaving it to the engine/game, but I figured I'd bring it up in case anyone had a solution that could be put into LWJGL.

### Stacktrace or crash log output

_No response_
<issue_start><issue_comment>Title: New version: WorldOceanAtlasTools v0.5.1
username_0: - Registering package: WorldOceanAtlasTools
- Repository: https://github.com/briochemc/WorldOceanAtlasTools.jl
- Created by: @briochemc
- Version: v0.5.1
- Commit: ee5dfb00cbefdfe6d453e2ba000f3eaa042dc65b
- Reviewed by: @briochemc
- Reference: https://github.com/briochemc/WorldOceanAtlasTools.jl/commit/ee5dfb00cbefdfe6d453e2ba000f3eaa042dc65b#commitcomment-55110553
<issue_start><issue_comment>Title: Agent plugin execution errors not being logged correctly
username_0: 
<issue_comment>username_0: Appears to have been fixed by commit 8b8803e1e3b7ee883d5694a99980838aeac43615. Some of the exceptions were not returning an error message, so the "if error" check was failing and the error was not being logged. This commit prefixes all error messages, which stops this from occurring.<issue_closed>
<issue_start><issue_comment>Title: implement important loss layer types
username_0: The following loss layers have a high priority:

* [ ] Euclidean
* [ ] SoftMax

Other loss layers that are interesting for implementation:

* [ ] Hinge
* [ ] Sigmoid Cross Entropy
<issue_comment>username_1: With #49, `SoftMax`, `LogSoftmax` (both not directly loss layers) and a `NegativeLogLikelihood` layer have been added.
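For reference, the two high-priority losses in their standard textbook form (generic definitions, not taken from this repository), for a batch of $N$ samples:

```latex
% Euclidean (L2) loss over predictions \hat{y}_i and targets y_i:
L_{\mathrm{Euclidean}} = \frac{1}{2N} \sum_{i=1}^{N} \lVert \hat{y}_i - y_i \rVert_2^2

% Softmax over logits z_{i,c}, followed by the negative log-likelihood
% of the correct class y_i (the usual "SoftMax loss" pairing):
p_{i,c} = \frac{e^{z_{i,c}}}{\sum_{c'} e^{z_{i,c'}}},
\qquad
L_{\mathrm{NLL}} = -\frac{1}{N} \sum_{i=1}^{N} \log p_{i,\,y_i}
```

This pairing is why the follow-up comment distinguishes `SoftMax`/`LogSoftmax` (plain transforms) from the `NegativeLogLikelihood` layer that turns them into a loss.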
<issue_start><issue_comment>Title: Add unit tests
username_0: Some tests aren't working, so I've marked them as skipped until the necessary methods are completed. :D
<issue_comment>username_1: @username_0 Do we need to add tests in this (relatively unrelated) feature branch? Shouldn't they just live upstream?
<issue_comment>username_0: Yes, they are already upstream. I should close this PR now as it's obsolete.
<issue_start><issue_comment>Title: Mapillary object detection dataset
username_0: Thank you for this great work. I would like to know which of the Mapillary datasets you used to train your model. I have checked the website and I am not sure which one contains the object detection bounding boxes. Could you please provide a link to download the dataset?
<issue_start><issue_comment>Title: godep restore broken by influxdb client
username_0: 
```
$ godep restore
can't load package: package github.com/influxdb/influxdb/client: code in directory /home/decarr/code/kubernetes/src/github.com/influxdb/influxdb/client expects import "github.com/influxdata/influxdb/client"
godep: restore: exit status 1
can't load package: package github.com/influxdb/influxdb/influxql: code in directory /home/decarr/code/kubernetes/src/github.com/influxdb/influxdb/influxql expects import "github.com/influxdata/influxdb/influxql"
godep: restore: exit status 1
package github.com/influxdb/influxdb/meta: cannot find package "github.com/influxdb/influxdb/meta" in any of:
	/usr/lib/golang/src/github.com/influxdb/influxdb/meta (from $GOROOT)
	/home/decarr/code/kubernetes/src/github.com/influxdb/influxdb/meta (from $GOPATH)
godep: restore: exit status 1
package github.com/influxdb/influxdb/snapshot: cannot find package "github.com/influxdb/influxdb/snapshot" in any of:
	/usr/lib/golang/src/github.com/influxdb/influxdb/snapshot (from $GOROOT)
	/home/decarr/code/kubernetes/src/github.com/influxdb/influxdb/snapshot (from $GOPATH)
godep: restore: exit status 1
can't load package: package github.com/influxdb/influxdb/toml: code in directory /home/decarr/code/kubernetes/src/github.com/influxdb/influxdb/toml expects import "github.com/influxdata/influxdb/toml"
godep: restore: exit status 1
can't load package: package github.com/influxdb/influxdb/tsdb: code in directory /home/decarr/code/kubernetes/src/github.com/influxdb/influxdb/tsdb expects import "github.com/influxdata/influxdb/tsdb"
godep: restore: exit status 1
```
<issue_comment>username_0: /cc @username_1 you have experience in fixing these issues.
<issue_comment>username_0: This is blocking ability to bump cadvisor to fix kubelet startup blocking when not on GCE/AWS
<issue_comment>username_1: What do you get if you

```
rm -rf $GOPATH/src/github.com/influx*
go get github.com/influxdb/influxdb
go get github.com/influxdata/influxdb
cd $GOPATH/src/github.com/influxdb/influxdb
git checkout b237c68bab4756507baf6840023be103853e77db
cd $GOPATH/src/k8s.io/kubernetes
godep restore
```

My guess is that influxdb is trying to
<issue_comment>username_0: That unblocked me.
<issue_comment>username_1: @username_3 I would debug a little deeper myself, but I'm on my phone. My wild guess is that just `go get -u github.com/influxdb/influxdb` and then `godep restore` from kube master will reproduce. Seems like godep shouldn't whine about things until after it has checked out the specified revision...
<issue_comment>username_1: @username_3 once I get useful internet, if I can reproduce, I'll open an issue against godep...
<issue_comment>username_1: No, stick with v53 if you need to `save`
<issue_comment>username_1: now I'm having trouble blaming godep.... I'll play more with it tomorrow but simply

```
go get -u github.com/influxdb/influxdb
go get -u github.com/influxdata/influxdb
go get -u github.com/influxdb/influxdb
```

Seemed to fall over... weird....
<issue_comment>username_2: @thockin loves godep.
<issue_comment>username_3: This *may* be related to https://github.com/tools/godep/issues/400. I'll see if I can duplicate, but I may not get to it today.

PS: Why are you sticking to v53?
<issue_comment>username_4: Wanted to throw it out there that I too am hitting not just the influxdb brokenness in godep, but also with `errorutil` and `heapster`:

```
package github.com/camlistore/go4/errorutil: code in directory /home/mike/k8s-devel/src/github.com/camlistore/go4/errorutil expects import "go4.org/errorutil"
godep: restore: exit status 1
package github.com/influxdb/influxdb/client: code in directory /home/mike/k8s-devel/src/github.com/influxdb/influxdb/client expects import "github.com/influxdata/influxdb/client"
godep: restore: exit status 1
can't load package: package github.com/influxdb/influxdb/influxql: code in directory /home/mike/k8s-devel/src/github.com/influxdb/influxdb/influxql expects import "github.com/influxdata/influxdb/influxql"
godep: restore: exit status 1
package github.com/influxdb/influxdb/meta: cannot find package "github.com/influxdb/influxdb/meta" in any of:
	/home/mike/.gvm/gos/go1.5/src/github.com/influxdb/influxdb/meta (from $GOROOT)
	/home/mike/k8s-devel/src/github.com/influxdb/influxdb/meta (from $GOPATH)
godep: restore: exit status 1
package github.com/influxdb/influxdb/snapshot: cannot find package "github.com/influxdb/influxdb/snapshot" in any of:
	/home/mike/.gvm/gos/go1.5/src/github.com/influxdb/influxdb/snapshot (from $GOROOT)
	/home/mike/k8s-devel/src/github.com/influxdb/influxdb/snapshot (from $GOPATH)
godep: restore: exit status 1
can't load package: package github.com/influxdb/influxdb/toml: code in directory /home/mike/k8s-devel/src/github.com/influxdb/influxdb/toml expects import "github.com/influxdata/influxdb/toml"
godep: restore: exit status 1
can't load package: package github.com/influxdb/influxdb/tsdb: code in directory /home/mike/k8s-devel/src/github.com/influxdb/influxdb/tsdb expects import "github.com/influxdata/influxdb/tsdb"
godep: restore: exit status 1
package k8s.io/heapster/api/v1/types: cannot find package "k8s.io/heapster/api/v1/types" in any of:
/home/mike/.gvm/gos/go1.5/src/k8s.io/heapster/api/v1/types (from $GOROOT) /home/mike/k8s-devel/src/k8s.io/heapster/api/v1/types (from $GOPATH) godep: restore: exit status 1 ``` I'm on master https://github.com/kubernetes/kubernetes/commit/83135047285555ca92df30075f6fb01e22c1160c <issue_comment>username_3: What are the steps I need follow in order to reproduce these? With a clean $GOPATH `godep restore -v` works as advertised with k8s master. Also output with `godep restore -v` would be helpful. ```console $ cd github.com/kubernetes/kubernetes $ godep restore -v godep: WARNING: Go version (go1.6) & $GO15VENDOREXPERIMENT=1 wants to enable the vendor experiment, but disabling because a Godep workspace (Godeps/_workspace) exists godep: Downloading dependency (if needed): bitbucket.org/bertimus9/systemstat godep: Downloading dependency (if needed): bitbucket.org/ww/goautoneg godep: Downloading dependency (if needed): github.com/ClusterHQ/flocker-go godep: Downloading dependency (if needed): github.com/Sirupsen/logrus godep: Downloading dependency (if needed): github.com/abbot/go-http-auth godep: Downloading dependency (if needed): github.com/appc/cni/libcni godep: Downloading dependency (if needed): github.com/appc/cni/pkg/invoke godep: Skipping already downloaded repo https://github.com/appc/cni godep: Downloading dependency (if needed): github.com/appc/cni/pkg/types godep: Skipping already downloaded repo https://github.com/appc/cni godep: Downloading dependency (if needed): github.com/appc/spec/schema godep: Downloading dependency (if needed): github.com/armon/go-metrics godep: Downloading dependency (if needed): github.com/aws/aws-sdk-go/aws godep: Downloading dependency (if needed): github.com/aws/aws-sdk-go/private/endpoints godep: Skipping already downloaded repo https://github.com/aws/aws-sdk-go godep: Downloading dependency (if needed): github.com/aws/aws-sdk-go/private/protocol/ec2query godep: Skipping already downloaded repo https://github.com/aws/aws-sdk-go 
godep: Downloading dependency (if needed): github.com/aws/aws-sdk-go/private/protocol/json/jsonutil godep: Skipping already downloaded repo https://github.com/aws/aws-sdk-go godep: Downloading dependency (if needed): github.com/aws/aws-sdk-go/private/protocol/jsonrpc godep: Skipping already downloaded repo https://github.com/aws/aws-sdk-go godep: Downloading dependency (if needed): github.com/aws/aws-sdk-go/private/protocol/query godep: Skipping already downloaded repo https://github.com/aws/aws-sdk-go godep: Downloading dependency (if needed): github.com/aws/aws-sdk-go/private/protocol/rest godep: Skipping already downloaded repo https://github.com/aws/aws-sdk-go godep: Downloading dependency (if needed): github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil godep: Skipping already downloaded repo https://github.com/aws/aws-sdk-go godep: Downloading dependency (if needed): github.com/aws/aws-sdk-go/private/signer/v4 godep: Skipping already downloaded repo https://github.com/aws/aws-sdk-go godep: Downloading dependency (if needed): github.com/aws/aws-sdk-go/private/waiter godep: Skipping already downloaded repo https://github.com/aws/aws-sdk-go godep: Downloading dependency (if needed): github.com/aws/aws-sdk-go/service/autoscaling godep: Skipping already downloaded repo https://github.com/aws/aws-sdk-go godep: Downloading dependency (if needed): github.com/aws/aws-sdk-go/service/ec2 godep: Skipping already downloaded repo https://github.com/aws/aws-sdk-go godep: Downloading dependency (if needed): github.com/aws/aws-sdk-go/service/ecr godep: Skipping already downloaded repo https://github.com/aws/aws-sdk-go godep: Downloading dependency (if needed): github.com/aws/aws-sdk-go/service/elb godep: Skipping already downloaded repo https://github.com/aws/aws-sdk-go godep: Downloading dependency (if needed): github.com/beorn7/perks/quantile godep: Downloading dependency (if needed): github.com/blang/semver godep: Downloading dependency (if needed): 
github.com/boltdb/bolt godep: Downloading dependency (if needed): github.com/camlistore/go4/errorutil godep: Downloading dependency (if needed): github.com/codegangsta/negroni godep: Downloading dependency (if needed): github.com/coreos/etcd/client godep: Downloading dependency (if needed): github.com/coreos/etcd/discovery godep: Skipping already downloaded repo https://github.com/coreos/etcd godep: Downloading dependency (if needed): github.com/coreos/etcd/error godep: Skipping already downloaded repo https://github.com/coreos/etcd godep: Downloading dependency (if needed): github.com/coreos/etcd/etcdserver godep: Skipping already downloaded repo https://github.com/coreos/etcd godep: Downloading dependency (if needed): github.com/coreos/etcd/pkg/crc godep: Skipping already downloaded repo https://github.com/coreos/etcd godep: Downloading dependency (if needed): github.com/coreos/etcd/pkg/fileutil godep: Skipping already downloaded repo https://github.com/coreos/etcd godep: Downloading dependency (if needed): github.com/coreos/etcd/pkg/httputil godep: Skipping already downloaded repo https://github.com/coreos/etcd godep: Downloading dependency (if needed): github.com/coreos/etcd/pkg/idutil godep: Skipping already downloaded repo https://github.com/coreos/etcd godep: Downloading dependency (if needed): github.com/coreos/etcd/pkg/ioutil godep: Skipping already downloaded repo https://github.com/coreos/etcd godep: Downloading dependency (if needed): github.com/coreos/etcd/pkg/netutil godep: Skipping already downloaded repo https://github.com/coreos/etcd godep: Downloading dependency (if needed): github.com/coreos/etcd/pkg/pathutil godep: Skipping already downloaded repo https://github.com/coreos/etcd godep: Downloading dependency (if needed): github.com/coreos/etcd/pkg/pbutil godep: Skipping already downloaded repo https://github.com/coreos/etcd godep: Downloading dependency (if needed): github.com/coreos/etcd/pkg/runtime godep: Skipping already downloaded repo 
https://github.com/coreos/etcd godep: Downloading dependency (if needed): github.com/coreos/etcd/pkg/timeutil [Truncated] godep: Checking dependency: golang.org/x/net/internal/timeseries godep: Checking dependency: golang.org/x/net/trace godep: Checking dependency: golang.org/x/net/websocket godep: Checking dependency: golang.org/x/oauth2 godep: Checking dependency: golang.org/x/sys/unix godep: Checking dependency: golang.org/x/tools/go/ast/astutil godep: Checking dependency: golang.org/x/tools/imports godep: Checking dependency: google.golang.org/api/cloudmonitoring/v2beta2 godep: Checking dependency: google.golang.org/api/compute/v1 godep: Checking dependency: google.golang.org/api/container/v1 godep: Checking dependency: google.golang.org/api/gensupport godep: Checking dependency: google.golang.org/api/googleapi godep: Checking dependency: google.golang.org/cloud/compute/metadata godep: Checking dependency: google.golang.org/cloud/internal godep: Checking dependency: google.golang.org/grpc godep: Checking dependency: gopkg.in/natefinch/lumberjack.v2 godep: Checking dependency: gopkg.in/yaml.v2 godep: Checking dependency: k8s.io/heapster/api/v1/types godep: Checking dependency: speter.net/go/exp/math/dec/inf ``` <issue_comment>username_4: My issues with `godep restore` went away when: 1. I switched from `go1.5` to `go1.5.3` 2. Appropriately reconfigured `GOROOT` & `GOPATH` for `go1.5.3` 3. Updated `godep` as I was using a version over 1 year ago <issue_comment>username_0: Related PRs: https://github.com/kubernetes/kubernetes/pull/21786 https://github.com/kubernetes/kubernetes/pull/21786 <issue_comment>username_5: Any update on this issue? I seem to get alot of godep issues, specifically broken `heapster`. Am playing w/ upgrading and so on... <issue_comment>username_5: I'm assuming the solution might be just nuking godep, as @username_3 suggested? 
<issue_comment>username_5: s/nuking/godep/gopath
<issue_comment>username_1: I was going to close this as unreproducible. What's your problem @username_5?
<issue_comment>username_5: @username_1 ok, let's close this since everything is golang 1.5 related, and we build in 1.6<issue_closed>
<issue_start><issue_comment>Title: Use foxy branch for geometric_shapes
username_0: ### Description

Fix: https://github.com/ros-planning/moveit2/issues/588#issuecomment-891789521

### Checklist

- [ ] **Required by CI**: Code is auto formatted using [clang-format](http://moveit.ros.org/documentation/contributing/code)
- [ ] Extend the tutorials / documentation [reference](http://moveit.ros.org/documentation/contributing/)
- [ ] Document API changes relevant to the user in the [MIGRATION.md](https://github.com/ros-planning/moveit/blob/master/MIGRATION.md) notes
- [ ] Create tests, which fail without this PR [reference](https://ros-planning.github.io/moveit_tutorials/doc/tests/tests_tutorial.html)
- [ ] Include a screenshot if changing a GUI
- [ ] While waiting for someone to review your request, please help review [another open pull request](https://github.com/ros-planning/moveit/pulls) to support the maintainers

[//]: # "You can expect a response from a maintainer within 7 days. If you haven't heard anything by then, feel free to ping the thread. Thank you!"
<issue_start><issue_comment>Title: Docs: improve documentation for various rules
username_0: * safeContextKeyword
  Modify the type and value sections to be consistent with other rules
  Modify the valid and invalid case titles to be consistent with other rules
* validateAlignedFunctionParameters
  Modify the type and value sections to be consistent with other rules
  Modify the valid and invalid case titles to be consistent with other rules
* validateLineBreaks
  Modify the type and value sections to be consistent with other rules
  Modify the valid and invalid case titles to be consistent with other rules
* validateNewlineAfterArrayElements
  Modify the description, type and value sections to be consistent with other rules
  Modify the valid and invalid case titles to be consistent with other rules

Fixes #1677
<issue_comment>username_1: Awesomely done!
<issue_start><issue_comment>Title: [Enhancement] Option to not use date filter
username_0: Our operations support group would like to, by default, see all open tickets without selecting a 'since' date. Can we add an option to not use the date filter? I tried clearing out the date but that didn't work.
<issue_comment>username_0: In addition, the date filter doesn't seem to cache like the rest of the filters, and we have to re-set it each time we reload the page. Can we get it to cache?
<issue_comment>username_1: Having reviewed this enhancement, there are a few considerations here:

- PagerDuty's [`/incidents`](https://developer.pagerduty.com/api-reference/b3A6Mjc0ODEzOA-list-incidents) API endpoint has a default `since` date of 1 month, going back as far as 6 months. No incidents can be returned which are older than 6 months.
- Querying a large number of incidents (1000+) can impact user experience; this is a larger issue to be addressed by #52.
- Typically a large number of incidents is returned when querying for the `resolved` state; users should consider PagerDuty's core [Search](https://support.pagerduty.com/docs/search) for deeper search capabilities on a larger dataset.

Removing the since date filter in the UI will default back to a 1 month lookback, so a good middle ground here is to make the since date filter a configurable item, e.g. users can set the default lookback to "6M", which means the since date will be set to 6 months before the current date.<issue_closed>
<issue_comment>username_1: Given the above feedback and the proposed solution implemented by #56, this issue will now be closed.
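The proposed middle ground can be sketched as follows (the function and constant names are hypothetical, not the plugin's actual code): a configurable lookback that is clamped to PagerDuty's six-month maximum before being turned into the `since` query parameter.

```javascript
// Sketch of a configurable lookback. PagerDuty's /incidents endpoint
// cannot return incidents older than 6 months, so larger values are clamped.
const MAX_LOOKBACK_MONTHS = 6;

function sinceDate(lookbackMonths, now = new Date()) {
  const months = Math.min(lookbackMonths, MAX_LOOKBACK_MONTHS);
  const d = new Date(now.getTime());
  d.setUTCMonth(d.getUTCMonth() - months); // NB: month-end dates can roll over
  return d.toISOString().slice(0, 10);     // "YYYY-MM-DD" for the ?since= param
}

console.log(sinceDate(6, new Date('2021-07-15T12:00:00Z'))); // "2021-01-15"
```

A UI `since` field left empty could then fall back to `sinceDate(MAX_LOOKBACK_MONTHS)`, the widest window the API supports.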
<issue_start><issue_comment>Title: Gantt Groups - Error: [$rootScope:inprog] $digest already in progress
username_0: I'm consistently seeing `Error: [$rootScope:inprog] $digest already in progress` in the groups plugin, both within my own code as well as on the [demo site](https://www.angular-gantt.com/demo/).

The `$digest` calls seem to be originating from https://github.com/angular-gantt/angular-gantt/blob/master/src/plugins/groups/group.controller.js#L29 and https://github.com/angular-gantt/angular-gantt/blob/master/src/plugins/groups/group.controller.js#L36, where `$scope.$$phase` is not accurately reporting that a digest is in progress.

I've seen others use `if (!$scope.$$phase && !$scope.$root.$$phase)` to accomplish the same thing (see http://stackoverflow.com/a/16301044), and after changing these two lines to include the root scope check, I didn't see any more `$digest` errors.

I'd be happy to create a pull request if you feel that this is an acceptable fix.
<issue_comment>username_1: I'll do, thanks for reporting<issue_closed>
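The proposed guard can be modelled in isolation with mock scope objects (a sketch, not Angular's or angular-gantt's actual code): checking `$root.$$phase` in addition to `$scope.$$phase` also catches digests started higher up the scope tree.

```javascript
// Mock model of the guard: $digest is only triggered when neither the
// local scope nor the root scope reports a digest in progress via $$phase.
function safeDigest(scope) {
  if (!scope.$$phase && !scope.$root.$$phase) {
    scope.$digest();
  }
}

let digests = 0;
const $root = { $$phase: null };
const $scope = { $$phase: null, $root, $digest: () => { digests += 1; } };

safeDigest($scope);        // runs: nothing in progress
$root.$$phase = '$digest'; // a digest starts from the root scope
safeDigest($scope);        // skipped: avoids the $rootScope:inprog error
console.log(digests);      // 1
```

Checking only `$scope.$$phase`, as the original code did, would have called `$digest()` a second time here and thrown the `inprog` error.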
<issue_start><issue_comment>Title: Sass Converter & (S)CSS Comments
username_0: ###### What version of Jekyll are you using (`jekyll -v`)?

3.0.2

###### What operating system are you using?

Mac OSX 10.11

###### What did you do?

```Sass
​// *************************************
//
// First Level
// -> Description
//
// *************************************

$pickled_bluewood: #334D5C;
$keppel: #45B29D;
$ronchi: #EFC94C;
$red_damask: #E27A3F;
$valencia: #DF4949;
```

###### What did you expect to see?

A converted Sass to CSS file.

###### What did you see instead?

```bash
Jekyll::Converters::Scss encountered an error while converting 'css/main.scss':
Invalid CSS after "...uewood: #334D5C": expected "{", was ";" on line 7
```
<issue_start><issue_comment>Title: PlayReady IE11 issue
username_0: Hello, I am having trouble getting PlayReady to work on IE11. This is the DRM listener that I am using:

```
new shaka.player.DrmSchemeInfo(
    'com.microsoft.playready' /* keySystem */,
    licenseServerUrl,
    false /* withCredentials */,
    null /* initData */,
    null /* licensePostProcessor */);
```

On IE11, I get a console error saying "The key system specified is not supported." Is this not the correct way to set up the player? If not, is there a working example of PlayReady? Thank you so much!
<issue_comment>username_1: Hi @username_0. What version of Shaka Player are you using? PlayReady & IE11 support were added in v1.6.0, and only through polyfills. Are you using shaka.polyfill.installAll()? Do you see log messages in an uncompiled build like 'Using EME v20140218' or 'EME not available'? What version of Windows are you using? If I remember correctly, IE11 only supports MediaSource in Windows 8+, and only supports EME in Windows 8.1+.
<issue_comment>username_0: Hello @username_1. I just realized that I am using v1.5.1 instead of v1.6.0. Thank you for the answer!<issue_closed>
<issue_start><issue_comment>Title: Having problems while decrypting
username_0: While decrypting I get the error "The operation could not be performed. (RNCryptorError 1)". I don't understand what I am doing wrong. Here is my block of code:

![screen shot 2015-11-02 at 4 20 11 pm](https://cloud.githubusercontent.com/assets/7240389/10879692/a73459da-817d-11e5-9d1f-532f36e9812c.png)
<issue_comment>username_1: Please just post code into the question rather than a screenshot. I can't compile a screenshot, and they're very hard to read.

Error 1 is an HMAC error. Either your data is corrupted or your password is incorrect.

Note that `NSException` never makes sense in Swift. Swift can't catch them. They only make sense in ObjC if you're going to crash the program shortly after. They're not memory-safe in ObjC. You meant to use Swift's `throw` and `ErrorType`, which are unrelated to `raise` or `NSException`.<issue_closed>
<issue_comment>username_0: This is my code and I get error 1:

```swift
class func decrypt(message: String) -> String {
    print("the message to decrypt is \(message)")
    let data1: NSData = NSData(base64EncodedString: message, options: NSDataBase64DecodingOptions(rawValue: 0))!
    do {
        let decData: NSData = try RNCryptor.decryptData(data1, password: password)
        print("The decrypted data is now now \(decData)")
        let tokenString: String = NSString(data: decData, encoding: NSUTF8StringEncoding)! as String
        return tokenString
    } catch let error as NSError {
        print("The error description is \(error.localizedDescription)")
        print("The main error is \(error.description)")
        debugPrint("The main main main eror :\(error)")
        return ""
    }
}
```
<issue_start><issue_comment>Title: Lower threshold value for running UX tests username_0: This PR decreases the threshold value for banner disappearance from 480 to 300 on desktop. The number is passed as an optional parameter. This will all be removed at the end of the week, after the UX test finishes. <issue_comment>username_1: :+1:
<issue_start><issue_comment>Title: Rack-wide capacity monitoring. username_0: On boot and every minute after, check the latest ECS Service Events for any sign of trouble. Notify <issue_comment>username_0: <a href="https://trello.com/c/yhlDrx73/269-web-hook-on-rack-out-of-capacity"><img src="https://github.trello.services/images/trello-icon.png" width="12" height="12"> Web hook on rack out of capacity</a> <issue_comment>username_1: :+1: <issue_comment>username_1: :+1:
<issue_start><issue_comment>Title: Build failing @ grunt task "sass:all" username_0: ``` I checked the directory and there is no `C:/Users/user/Projects/opc_theme_2015/lib/wet-boew/src/plugins/share/sprites/sprites_share.scss` file. Any idea how to get this working? <issue_comment>username_1: Did you `bower install`? <issue_comment>username_0: Warning: Task "jscs:all" failed. Use --force to continue. ``` <issue_comment>username_0: I disabled that task (`jscs`) in the `wet-boew` dependency and I can get it building. I guess this is a problem for the [wet-boew](https://github.com/wet-boew/wet-boew) project?<issue_closed> <issue_comment>username_0: Closing this issue due to my previous comments. This doesn't seem to be an issue with the GCWeb project but rather its dependencies. <issue_comment>username_1: Actually no....these might be codestyle errors with the theme....or it's a wrong version of JSCS <issue_comment>username_0: Are you saying this could be an issue with GCWeb? The error stated above only happens within the `/lib/wet-boew/` directory. I accidentally omitted a line which I've now added to my code snippet above which provides context to where the error is happening: `>> C:\Users\user\Projects\opc_theme_2015\lib\wet-boew\Gruntfile.coffee:` So the error is taking place when trying to run the `jscs:all` task within the wet-boew theme. <issue_comment>username_1: Huh, that's weird but yes it's not an issue with this project
<issue_start><issue_comment>Title: NaN % in Virtual memory pie chart username_0: I'm using Webmin 1.780 with Authentic theme 17.71 on an embedded system with no virtual memory. The very first time the interface is loaded after a successful login, the 'virtual memory' pie chart is shown with NaN % (as if it divides by 0). Any refresh of the status page is correct, with no virtual memory pie chart shown. The line Virtual memory correctly indicates '0 bytes total/0 bytes used' in both cases. ![immagine 2](https://cloud.githubusercontent.com/assets/9549090/13138461/bd016db2-d627-11e5-8d81-d30385b594f5.png) <issue_comment>username_1: I'll take a look. <issue_comment>username_2: Hello, I temporarily created a swap file via fstab and it works. Without the swapfile it shows NaN%. But if you could fix this, I could remove the swap dummy. Or you could add a check that proves the existence of swap etc., and if not, disable the chart. <issue_comment>username_1: I can't even remotely imagine why, when you get `NaN` on output, it's not hidden when it should be. What happens if you go to line `( ( $disk_percent || $disk_percent eq "0" ) ? $disk_percent : 'NaN' )` and, let's say, replace it with `( ( $disk_percent || $disk_percent eq "0" ) ? -sin(9**9**9) : 'NaN' )` Will you have the `Local Disk Space` chart hidden? What browser are you using? <issue_comment>username_1: I could reproduce your bug. I will fix it in 17.72<issue_closed> <issue_comment>username_0: Thanks a lot Ilia! Your work is really beautiful and your commitment deserves applause....and a little donation too!!! 9M072990289196125 <issue_comment>username_1: Thank you! <issue_comment>username_1: Please upgrade to 17.72.
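The arithmetic behind the bug above is a plain divide-by-zero guard: a percentage computed from a zero total yields NaN unless the zero case is handled before the division and the chart is hidden. A minimal sketch in Python (hypothetical helper names; the theme itself is written in Perl):

```python
def used_percent(used, total):
    """Return used/total as a percentage, or None when there is
    nothing to measure (e.g. a system with no swap configured)."""
    if total == 0:
        return None  # signal "hide the chart" instead of emitting NaN
    return round(100.0 * used / total, 1)

def render_chart(used, total):
    """A renderer can then skip the pie entirely for the zero case."""
    percent = used_percent(used, total)
    return "hidden" if percent is None else f"{percent} %"
```

Returning a sentinel (`None`) rather than a formatted string keeps the hide-the-chart decision in one place, which is one way to keep NaN out of the markup.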
<issue_start><issue_comment>Title: Problems reading a Row table in the Grid username_0: Hello community, I have tried to read information from a table that contains 2 compound keys, but I can't get the information. Here is my schedule. If someone knows how to do it, I would appreciate it: <issue_comment>username_1: Please use Q&A on Discussion tab for this.<issue_closed>
<issue_start><issue_comment>Title: Update intro_metadata.rst username_0: This PR adds time formats as part of the metadata documentation. Formats have been discussed previously and are (partially) used in the metadata branch already. The documentation here serves as reference for related attribute definitions throughout the schema. <issue_comment>username_1: How much of this documentation applies to the current API in master, and how much is only relevant to proposed future enhancements? For example, are there any durations or intervals in the current schema? (I'm not remembering any offhand, but I haven't looked exhaustively.) <issue_comment>username_0: Not sure if there is any time related implementation in the current version => check? However, we know that we will need both time points and intervals/durations, at least in metadata, but probably also for object housekeeping. For metadata, the need for the different types given through the examples is pretty clear & has been discussed based on our developing data test implementations. I've made a note about ```FIXME: Add examples for time attributes as when such are implemented in the schema.``` <issue_comment>username_2: Given the slow progress on doc, I am for collecting as much stuff as possible in this branch and then cherry picking a merge into master. <issue_comment>username_1: Thanks Michael for confirming that this is mostly forward-looking. Mark, your call on process -- it just means the merge into master will be trickier. <issue_comment>username_2: Doc that seems solid and short term future looking I will take in and deal with the merge. Longer term, more speculative stuff, such as GUID, I have asked to be delayed. Will probably take the suggestion of breaking into multiple pull requests.
<issue_start><issue_comment>Title: Add typings username_0: - Do some dark metaclass magic to support python 3.6 (We can lose this if youʼre ok bumping the version dependency of this package to >=3.7, since from 3.7 `typing.Mapping` is a runtime synonym of `collections.abc.Mapping`). - Bump the version of poetry. I got weird dependency issues when using poetry 1.0.0 to run the tests. Upgrading fixed these, so I didnʼt look into them too hard. - Add tox.ini to support running the tests locally (gotta make sure 3.6 still works :wink:) - Add mypy and tox to the poetry dev-dependencies <issue_comment>username_1: Thank you for your contribution 👍! It looks good to me. Can you just squash all commits into the first one and sign it? I will try to integrate this into a release ASAP to fix #41. <issue_comment>username_0: Done 🙂 thanks <issue_comment>username_1: Merged :) Thanks again for your PR!
<issue_start><issue_comment>Title: Open namespace/module and fully-qualified name? username_0: Hey guys, I noticed something lately. Let's say ``User`` is defined in ``Namespace.ModuleB``, and my cursor is located at **$** ```fsharp module Namespace.ModuleA let f (u: $User) = () ``` Then I am suggested to add ``open ModuleB`` instead of ``open Namespace.ModuleB``. Shouldn't we prefer to use the fully qualified name by default, so as to avoid potential ambiguities? <issue_comment>username_1: Yes, fully qualified names seem better. I don't recall why we use short names; maybe @username_2 remembers the reason :-). <issue_comment>username_2: I do not remember either :) It seems the right thing though: ```fsharp namespace My.Long.Name.Space.Here module A = let foo x = x module B = open A let x = foo 1 ``` Do you think that the following is better? ```fsharp namespace My.Long.Name.Space.Here module A = let foo x = x module B = open My.Long.Name.Space.Here.A let x = foo 1 ``` <issue_comment>username_0: Can we easily distinguish between 'top level modules' (1 module for the entire file) vs explicit modules like in your example? <issue_comment>username_1: I think using `open A` might be wrong in some cases if there are multiple `A` modules from different namespaces. But we don't yet see any bug related to this; so keeping short names is good because it is aesthetically pleasing.
<issue_start><issue_comment>Title: Problem with page refresh in IE11 username_0: Hi. The plugin looks great, thank you very much. But testing it locally there is a problem in IE11. When I refresh the page while the lightbox is open, and then close the lightbox, the browser address bar changes (for example, C:\xampp\htdocs\mysite\index.html changes to C:\C:\xampp\htdocs\mysite\index.htm). As a result, on the next page refresh I see a "page not found" message. In Chrome and Firefox everything works perfectly. <issue_comment>username_1: Have you tried opening the page on a webserver, rather than directly as a file? <issue_comment>username_0: Unfortunately at the moment I cannot try, but I hope that everything will work well. One more question: can PhotoSwipe be used successfully with a plugin like Masonry? Any advice? Ty! <issue_comment>username_0: Sorry. The Isotope plugin works very well. When I finish a site for my sister (she was an artist), I will send you a link to it!<issue_closed>
<issue_start><issue_comment>Title: Windows API Functions, Structures, and Constants: Twisted username_0: This issue covers a list of Windows API functions which the Twisted project is currently using. This issue is really meant to be a TODO list of sorts to keep track of remaining work to be done. **Functions** - [ ] `ClearCommError()` - [ ] `CloseHandle()` - [ ] `CreatePipe()` - [ ] `CreateProcess()` - [ ] `CreateEvent()` - [ ] `DuplicateHandle()` - [ ] `GetCurrentProcess()` - [ ] `GetExitCodeProcess()` - [ ] `GetHandleInformation()` - [ ] `GetStdHandle()` - [ ] `GetOverlappedResult()` - [ ] `MsgWaitForMultipleObjects()` - [ ] `PeekNamedPipe()` - [ ] `ResetEvent()` - [ ] `ReadFile()` - [ ] `SetNamedPipeHandleState()` - [ ] `TerminateProcess()` - [ ] `WriteFile()` - [ ] `WaitForSingleObject()` - [ ] `WSAEventSelect()` - [ ] `WSAEnumNetworkEvents()` **Structures** - [ ] `SECURITY_ATTRIBUTES` - [ ] `STARTUPINFO` - [ ] `STARTF_USESTDHANDLES` - [ ] `OVERLAPPED` **Constants** - [ ] `DUPLICATE_SAME_ACCESS` - [ ] `STD_OUTPUT_HANDLE` - [ ] `FD_ACCEPT` - [ ] `FD_CLOSE` - [ ] `FD_CONNECT` - [ ] `FOREGROUND_RED` - [ ] `FOREGROUND_GREEN` - [ ] `FOREGROUND_BLUE` - [ ] `PIPE_NOWAIT` - [ ] `QS_ALLINPUT` - [ ] `WAIT_OBJECT_0` - [ ] `WAIT_TIMEOUT` **Excluded** *The following are excluded from conversion because the surrounding code in Twisted was deprecated or other methods of performing the same task are available.* * FormatMessage - cffi handles this * RegOpenKeyEx - Surrounding code was deprecated * RegQueryValueEx - Surrounding code was deprecated **NOTE**: The actual list of functions may be longer than the above. The calls are just those that are used directly. More may be required in order to properly test pywincffi's implementation. <issue_comment>username_1: Oliver, I'm doing progress in the implementation of GetOverlappedResult... 
Check out https://github.com/username_1/pywincffi/tree/get-ov-result Most code is in place, just wrapping my head around the best way to test it. There is something there, still not working. I'm off to the airport now to fly back to Lisbon. I expect to be able to get back to it later this week. Cheers. <issue_comment>username_0: Sounds good, thanks for the heads up and the help as always. In terms of testing, looks like what you have may be a pretty good start. Usually when I find it difficult to come up with a test I look to the MSDN docs for examples and go from there. <issue_comment>username_2: I'm confused as to the state of this ticket wrt being a TODO list. Are there items still remaining for this (and the project getting beta status) that are not mentioned in the ticket? <issue_comment>username_0: Nope, there's not anything remaining....I just never got around to closing the issue. Basically I was using this issue to track the major functions necessary in order to use pywincffi to replace pywin32 in twisted. With respect to beta status, the project is probably at that stage now considering the APIs and practices in pywincffi are unlikely to be changing.<issue_closed>
<issue_start><issue_comment>Title: Document TSS support for theme style overrides username_0: Hi @mnajdova & @oliviertassinari, I think you'll be pleased to know that TSS now has [out of the box support for theme styleOverrides](https://docs.tss-react.dev/mui-theme-styleoverrides). 🥳 It's a big deal for libraries based on MUI like [mui-datatables](https://github.com/gregnb/mui-datatables) that now have an option to update to MUIv5 without undergoing heavy refactoring. In this PR I update the migration guide. Best regards,
<issue_start><issue_comment>Title: This syntax requires an imported helper but module 'tslib' cannot be found. username_0: I am seeing the following error: ``` $ tscc TSCC: tsickle converts TypeScript modules to Closure modules via CommonJS internally."module" flag is overridden to "commonjs". TSCC: --outDir option is ignored. Use prefix option in the spec file. TSCC: tsickle uses a custom tslib optimized for closure compiler. importHelpers flag is set. TS: scripts/utils.ts(96,11): error TS2354: This syntax requires an imported helper but module 'tslib' cannot be found. TSCC: The compilation has terminated with an error. ``` tsconfig.json: ``` { "compilerOptions": { "outDir": "./built", "allowJs": true, "target": "es5", "jsx": "react", "lib": [ "es2019", "dom" ], "allowSyntheticDefaultImports": true }, "include": ["./scripts/**/*"] } ``` tscc.spec.json: ``` { "modules": { "out": "scripts/artblocksEntry.ts" }, "prefix": "dist/" } ``` <issue_comment>username_1: Hi, this is weird, because `tslib` is a dependency of `@tscc/tscc`, so you should have it in your `node_modules` and the TypeScript compiler should be able to find it. Can it be related to your package structure? As a workaround, you can let TypeScript emit `ES2015` or higher, which doesn't require helper functions to transpile spread operators, and let Closure Compiler transpile them to `ES5`. ```json { "compilerOptions": { "target": "ES2019" } } ``` ```jsonc { "modules": { /**/ }, "compilerFlags": { "language_out": "ECMASCRIPT5" } } ```<issue_closed>
<issue_start><issue_comment>Title: Worse performance than Google Translation API or translate.google.com username_0: **Others:** - OS: MacOS - googletrans version: 4.0.0rc1 **Links:** [https://translate.google.com/](https://translate.google.com/) [https://github.com/googleapis/python-translate](https://github.com/googleapis/python-translate) <issue_comment>username_1: Thanks for sharing! I am going to scrape the google translate webpage with lxml :p
<issue_start><issue_comment>Title: Rust 1.0 beta username_0: @cs4414/owners <issue_comment>username_0: @username_1 could you add me to owners? I'd like to configure travis for this repo, but apparently I only have the necessary permissions for ps4, not ps{1..3} :( <issue_comment>username_0: Oh, that's because ps4 is owned by @cs4414/ta and the others are all owned by @cs4414/owners ... <issue_comment>username_1: Thanks, @username_0. I've added you to the owners group.
<issue_start><issue_comment>Title: TST, BUG: fix rbf matrix value username_0: Fixes #14158 ; this is basically the patch suggested by @treverhines * fix an ill-conditioned matrix causing regular test failures on MacOS (locally) and on Windows (sporadically) for `test_interpolation_misfit_1d()` with the `gaussian` and `inverse_multiquadric` kernels, respectively * I've tested the fix locally on MacOS to produce 20 failures before (on `master`) and no failures after with a `pytest-repeat` run of the problem test with: `python runtests.py -v -t "scipy.interpolate.tests.test_rbfinterp::TestRBFInterpolatorNeighbors20::test_interpolation_misfit_1d" -- --count=20` * so on my Mac master produces `20 failed, 140 passed in 1.54s` and this feature branch produces `160 passed in 1.09s` * I'm still fiddling with using `pytest-repeat` to probe the issue on Windows, although if the fix is that decisive on MacOS for one kernel we may have enough confidence to move forward with such a small test-only patch anyway * also, added a related file to `.gitignore`; I'm assuming this is the normal way pythran names/generates CPP files <issue_comment>username_0: Ok, the patch is simply that suggested by the expert and the fix works on MacOS at least. The two CI failures are pre-release dependency situations that are out of scope for this fix (observed elsewhere). So, self-merging to move forward with release process.
<issue_start><issue_comment>Title: Add support for alternative completions directories username_0: ## Standards checklist: <!-- Fill with an x the ones that apply. Example: [x] --> - [x] The PR title is descriptive. - [x] The PR doesn't replicate another PR which is already open. - [x] I have read the contribution guide and followed all the instructions. - [x] The code follows the code style guide detailed in the wiki. - [x] The code is mine or it's from somewhere with an MIT-compatible license. - [x] The code is efficient, to the best of my ability, and does not waste computer resources. - [x] The code is stable and I have tested it myself, to the best of my abilities. ## Description This will check if a `completions-dir.txt` is present in the root of the custom plugin before adding the plugin's root to `fpath`. If the file is there, the `fpath` will be filled with `$plugin_dir/<contents of completions-dir.txt>` instead of `$plugin_dir`. This should not cause any breaking change, as no custom plugins would already have such a file. It will be taken into account for custom plugins. This will decrease the number of times we check for the file, and I think it's very reasonable that the plugins in this repo (not custom ones) need to follow the rule of having completions in their root directories. Also, regarding my concern about duplications in `fpath` (https://github.com/ohmyzsh/ohmyzsh/issues/10412#issuecomment-967214321), there isn't much we can do about it. Regardless, it has been like that since always, so this isn't something we need to address this time. Fixes https://github.com/ohmyzsh/ohmyzsh/issues/10412 The PR which implements this at zsh-completions is https://github.com/zsh-users/zsh-completions/pull/835. I'm testing this at https://github.com/username_0/dotfiles/pull/59.
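The lookup rule described above is small enough to sketch. Here it is in Python purely for illustration (the PR itself implements this in zsh, and `completion_fpath_entry` is a made-up name):

```python
from pathlib import Path

def completion_fpath_entry(plugin_dir: Path) -> Path:
    """Return the directory to add to fpath for one custom plugin.

    If the plugin ships a completions-dir.txt in its root, fpath gets
    plugin_dir/<contents of that file>; otherwise the plugin root itself.
    """
    marker = plugin_dir / "completions-dir.txt"
    if marker.is_file():
        return plugin_dir / marker.read_text().strip()
    return plugin_dir
```

Because the marker file is read once when `fpath` is built, plugins without it pay only a single file-existence check, which matches the "decrease the number of times we check" point above.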
<issue_comment>username_0: Marked PR as ready to review as it is working for me: ![image](https://user-images.githubusercontent.com/29582865/148303663-6de16cb8-8499-4ee7-b422-6767ac548a03.png) <issue_comment>username_1: I will close this as not a proper fix, with reasoning as seen in https://github.com/zsh-users/zsh-completions/pull/835#issuecomment-1036620102.
<issue_start><issue_comment>Title: Component property is not immediately updated after it has been modified by an event triggered by another component username_0: A property inside a regular angular2 (ts) component is automatically refreshed if it is bound like {{msg}}. But it is **not** immediately refreshed if the value is changed via an event emitted by another component. This plunkr demonstrates this problem: http://plnkr.co/edit/lgTmNdD7jjVkCIx2WIWO?p=info <issue_comment>username_1: Looks like you're requesting #7421 <issue_comment>username_2: events from the `EventEmitter` are delivered synchronously now so this issue should be solved.<issue_closed>
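The resolution above hinges on delivery timing: if the emitter calls its subscribers synchronously, the receiving component's property is already updated before the next change-detection pass reads it. A language-agnostic sketch of that behavior (Python, hypothetical class names — not Angular's actual EventEmitter):

```python
class SyncEmitter:
    """Delivers events to subscribers immediately, like the
    now-synchronous EventEmitter referenced above."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, fn):
        self._subscribers.append(fn)

    def emit(self, value):
        for fn in self._subscribers:
            fn(value)  # runs before emit() returns

class Display:
    def __init__(self):
        self.msg = ""

display = Display()
emitter = SyncEmitter()
emitter.subscribe(lambda v: setattr(display, "msg", v))
emitter.emit("hello")
# display.msg is already "hello" here -- no extra refresh needed
```

With deferred (asynchronous) delivery, the subscriber would run only after the current turn of the event loop, which is exactly the stale-binding symptom the issue describes.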
<issue_start><issue_comment>Title: Thoughts on making these work with Binder username_0: http://mybinder.org/ I am giving it a try. <issue_comment>username_1: This seems really cool <issue_comment>username_1: @username_0 Sorry for the delay with this and other issues. I integrated this with binder. I hope to get back to other pending issues ASAP.<issue_closed> <issue_comment>username_0: @username_1 Cool, I'll give it a try. I have been busy myself.
<issue_start><issue_comment>Title: afterLaunch does not work with multiCapabilities or suites or both username_0: hi, i'm implementing html reporter to be generated after all of the tests already ran. i'm using multiCapabilities , suite and shardTestFiles options. when running more than one test file or more than one capability afterLaunch function does not start. i'm using Protractor 1.4.0 tried to run with grunt or directly with protractor conf-file.js here is my conf-file: exports.config={ "reportDirectory": "./test/tulip/reports/logs", "printCustomizationLabelsToFile": true, "stringInputLanguage": "english", "failureScreenshot": { "dir": "./test/tulip/reports/screenshots" }, "cssScreenshots": { "enabled": false, "dir": "./test/tulip/reports/screenshots/css" }, "performance": true, "loginInfo": { "username": "sa", "password": "", "domain": "random", "project": "2_4" }, "database": { "dbType": "MSSQL" }, "multiCapabilities": [ { "browserName": "chrome", "version": "40.0", "platform": "WINDOWS", "ignoreZoomSetting": "true", "shardTestFiles": "true", "maxInstances": 20, "chromeOptions": { "args": [ "--lang=en" ] } } , { "browserName": "firefox", "version": "36.0", "platform": "WINDOWS", "ignoreZoomSetting": "true", "shardTestFiles": "true", "maxInstances": 20 } ], "specs": "", "suites": { "testing": "./munir/elad/*.js" }, "baseUrl": "http://localhost:8080/qcbin/", "seleniumServerJar": "./selenium-server-standalone-2.45.0.jar", "seleniumArgs": [ "-Dwebdriver.ie.driver=./selenium-server-ie32-driver-2.45.0.exe" ], "chromeDriver": "./selenium-chrome-driver-2.14.0.exe", "jasmineNodeOpts": { "isVerbose": true, "showColors": true, "includeStackTrace": true, "defaultTimeoutInterval": 1000000, "realtimeFailure": true, "showTiming": false, "stackFilter": stackFilterFunction }, "log4js": { "appenders": [ { "type": "logLevelFilter", "level": "DEBUG", "appender": { "type": "console", [Truncated] ] }, "onPrepare": "./initialize-runner-script.js", "debug": false, 
"allScriptsTimeout": 600000, "getPageTimeout": 600000, "afterLaunch": function(exitCode) { console.log('+++++++++++++++++++++AfterLaunch '+exitCode+'+++++++++++++++'); }, "seleniumAddress": "http://localhost:4444/wd/hub", "suite": "testing", "appUser": { "appUser": "sa9-4-13-19-23-628", "appPass": "" } } Thanks, Munir <issue_comment>username_1: have you tried grunt-protractor-runner? <issue_comment>username_0: no, how could it help? Did you have the same issue? I wonder why it doesn't work when running directly with bin/protractor <issue_comment>username_2: I've added github markdown to make your comment readable. Please do so yourself in the future. <issue_comment>username_2: What is the test output? Is the process exiting with an error? I cannot reproduce this. I am using both multiCapabilities and sharding. <issue_comment>username_2: Oh! Your version of Protractor is really old. Please upgrade to a newer version.<issue_closed> <issue_comment>username_0: Thank you all. I updated the Protractor version; it's working. <issue_comment>username_3: Hi All, var reporter = require('C:\\Users\\sujata.birajdar\\Desktop\\Common Demand\\PH_SmokeSuite\\PH_SmokeSuite\\node_modules\\protractor-multicapabilities-htmlreporter_v2'); specs : [ 'Smoke_CreateDemand.specs.js', 'Smoke_BulkCancel.js' ], afterLaunch: function () { reporter.generateHtmlReport('./report.json', 'Automation Results', '../../report.html'); }, The above code is generating HTML only for a single spec. JSON is generated for both specs. Can you suggest a fix so that both HTML files get generated?
<issue_start><issue_comment>Title: Getting an OutOfMemoryError exception username_0: Is it possible that we might be getting this error because of the `maxPoolSize` field in `CoroutineScheduler`, which might depend on the JVM version? We get the trace from Crashlytics and cannot reproduce it ourselves. Android Version: 6.0 and 7.0 ``` Fatal Exception: java.lang.OutOfMemoryError: Failed to allocate a 8388616 byte allocation with 6577392 free bytes and 6MB until OOM at java.util.concurrent.atomic.AtomicReferenceArray.<init>(AtomicReferenceArray.java:64) at kotlinx.coroutines.scheduling.CoroutineScheduler.<init>(CoroutineScheduler.kt:264) at kotlinx.coroutines.scheduling.SchedulerCoroutineDispatcher.createScheduler(Dispatcher.kt:95) at kotlinx.coroutines.scheduling.SchedulerCoroutineDispatcher.<init>(Dispatcher.kt:92) at kotlinx.coroutines.scheduling.DefaultScheduler.<init>(Dispatcher.kt:13) at kotlinx.coroutines.scheduling.DefaultScheduler.<clinit>(Dispatcher.kt) at kotlinx.coroutines.Dispatchers.<clinit>(Dispatchers.kt:32) at kotlinx.coroutines.Dispatchers.getDefault(Dispatchers.java:32) at com.some.coroutines.CoroutinesModule.providesDefaultDispatcher(CoroutinesModule.java:19) at com.some.coroutines.CoroutinesModule_ProvidesDefaultDispatcherFactory.providesDefaultDispatcher(CoroutinesModule_ProvidesDefaultDispatcherFactory.java:25) at com.some.coroutines.CoroutinesModule_ProvidesDefaultDispatcherFactory.get(CoroutinesModule_ProvidesDefaultDispatcherFactory.java:17) at com.some.coroutines.CoroutinesModule_ProvidesDefaultDispatcherFactory.get(CoroutinesModule_ProvidesDefaultDispatcherFactory.java:9) at com.some.domain.SomeAUseCase_Factory.get(SomeAUseCase_Factory.java:24) at com.some.domain.SomeAUseCase_Factory.get(SomeAUseCase_Factory.java:9) at com.some.domain.SomeBUseCase_Factory.get$bridge(SomeBUseCase_Factory.java:46) at com.some.domain.SomeCUseCase_Factory.get(SomeCUseCase_Factory.java:32) at com.some.domain.SomeCUseCase_Factory.get(SomeCUseCase_Factory.java:9) at
com.some.abtestdecider.data.repository.ABRepositoryImpl_Factory.get$bridge(ABRepositoryImpl_Factory.java:37) at com.some.ui.SomeDViewModel_Factory.get(SomeDViewModel_Factory.java:187) at com.some.ui.SomeDViewModel_Factory.get(SomeDViewModel_Factory.java:41) at com.some.app.ViewModelFactory.create(ViewModelFactory.java:42) at androidx.lifecycle.ViewModelProvider.get(ViewModelProvider.java:187) at androidx.lifecycle.ViewModelProvider.get(ViewModelProvider.java:150) at com.some.ui.SomeEFragmentModule.provideSomeEViewModel(SomeEFragmentModule.java:13) at com.some.ui.SomeEFragmentModule_ProvideSomeEViewModelFactory.provideSomeEViewModel(SomeEFragmentModule_ProvideSomeEViewModelFactory.java:37) at com.some.app.DaggerAppComponent$SomeEFragmentSubcomponentImpl.viewModelInjectionSomeEViewModel(DaggerAppComponent.java:39519) at com.some.app.DaggerAppComponent$SomeEFragmentSubcomponentImpl.injectSomeEFragment(DaggerAppComponent.java:39536) at com.some.app.DaggerAppComponent$SomeEFragmentSubcomponentImpl.inject(DaggerAppComponent.java:39524) at com.some.app.DaggerAppComponent$SomeEFragmentSubcomponentImpl.inject(DaggerAppComponent.java:39501) at dagger.android.DispatchingAndroidInjector.maybeInject(DispatchingAndroidInjector.java:113) at dagger.android.DispatchingAndroidInjector.inject(DispatchingAndroidInjector.java:134) at dagger.android.support.AndroidSupportInjection.inject(AndroidSupportInjection.java:75) at dagger.android.support.AndroidSupportInjection.inject(AndroidSupportInjection.java:67) at com.some.base.BaseFragment.onAttach(BaseFragment.kt:134) at androidx.fragment.app.Fragment.performAttach(Fragment.java:2922) at androidx.fragment.app.FragmentStateManager.attach(FragmentStateManager.java:464) at androidx.fragment.app.FragmentManager.moveToState(FragmentManager.java:1348) at androidx.fragment.app.FragmentManager.moveToState(FragmentManager.java:1522) at androidx.fragment.app.FragmentTransition.addToFirstInLastOut(FragmentTransition.java:1246) at 
androidx.fragment.app.FragmentTransition.calculateFragments(FragmentTransition.java:1128) at androidx.fragment.app.FragmentTransition.startTransitions(FragmentTransition.java:135) at androidx.fragment.app.FragmentManager.executeOpsTogether(FragmentManager.java:2157) at androidx.fragment.app.FragmentManager.removeRedundantOperationsAndExecute(FragmentManager.java:2100) at androidx.fragment.app.FragmentManager.execPendingActions(FragmentManager.java:2002) at androidx.fragment.app.FragmentManager$5.run(FragmentManager.java:524) at android.os.Handler.handleCallback(Handler.java:739) at android.os.Handler.dispatchMessage(Handler.java:95) at android.os.Looper.loop(Looper.java:158) at android.app.ActivityThread.main(ActivityThread.java:7225) at java.lang.reflect.Method.invoke(Method.java) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:1230) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:1120) ``` <issue_comment>username_1: There's really not a lot of memory left. <issue_comment>username_2: It feels like your application is almost out of memory and this particular place is just the trigger. It indeed tries to allocate 8-16MB depending on various JVM flag (aka depending on actual reference size), but it's fixed with #3137 in upcoming 1.6.1 <issue_comment>username_0: We'll follow the issue. Thank you for quick response 👍
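For context on the allocation size in the trace: before the #3137 fix, the scheduler eagerly allocated its workers array up to its maximum pool bound. Assuming roughly 2^21 slots (an assumption inferred from the trace, not quoted from the source) at 4 or 8 bytes per reference, the numbers line up with the reported 8388616-byte request:

```python
# Back-of-the-envelope size of an eagerly allocated AtomicReferenceArray.
slots = 1 << 21              # assumed upper pool bound (~2M entries)
compressed = slots * 4       # 4-byte references (compressed oops)
uncompressed = slots * 8     # 8-byte references
print(compressed, uncompressed)
# compressed is 8388608 bytes; plus a small array header that is
# approximately the 8388616 bytes in the trace, matching the
# "8-16MB depending on reference size" remark above.
```

This also explains why the failure surfaces only on devices already near their heap limit: the very first touch of `Dispatchers.Default` triggers the one-shot multi-megabyte allocation.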
<issue_start><issue_comment>Title: pytest.deprecated_call breaks with TypeError on pytest 2.8.4 username_0: https://jenkins.cryptography.io/job/cryptography-pr-builder/4650/TOXENV=py27,label=10.10/console shows a traceback of this occurring. <issue_comment>username_1: Thanks for the report @username_0! Oh my, 91909 tests executed, 58284 skipped! :sweat_smile:<issue_closed> <issue_comment>username_0: Thanks! Can I ask if there's a plan for when the next release will be? <issue_comment>username_2: this week, if possible i'll do this afternoon european time myself, else i'll delegate <issue_comment>username_0: Awesome, thanks! <issue_comment>username_3: Thanks for fixing that! Waiting for the release. <issue_comment>username_1: `2.8.5` is out now. :smile:
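As background, `pytest.deprecated_call` is essentially a convenience around capturing `DeprecationWarning`/`PendingDeprecationWarning` from a call. A stdlib-only sketch of the check it performs (not pytest's actual implementation):

```python
import warnings

def old_api():
    # A function that emits the kind of warning deprecated_call asserts on.
    warnings.warn("old_api is deprecated", DeprecationWarning, stacklevel=2)
    return 42

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")  # ensure the warning is not suppressed
    result = old_api()

assert result == 42
assert any(issubclass(w.category, DeprecationWarning) for w in caught)
```

The `simplefilter("always")` line matters: without it, a once-per-location default filter can swallow the warning and make the check flaky.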
<issue_start><issue_comment>Title: Unable to do Appium Touchaction in Serenity username_0: Hi, Im using Appium+Serernity+JBehave for an Android automation. When trying for TouchAction actions = new TouchAction((MobileDriver) getDriver()); in an PageObject class the following error is being shown. My Environment is Serenity-core: 1.1.29-rc.1 Serenity-jbehave: 1.8.0 Win: 8.1 pro Eclipse: Version: Mars.2 Release (4.5.2) Appium version: 1.4.16.1 (Ophiuchus) (java.lang.ClassCastException: net.thucydides.core.webdriver.WebDriverFacade cannot be cast to io.appium.java_client.MobileDriver) java.lang.ClassCastException: net.thucydides.core.webdriver.WebDriverFacade cannot be cast to io.appium.java_client.MobileDriver at com.test.appium.RealBucksAutomation.pageobject.RealbuckPageObject.clickTextBox(MyClassPageObject.java:47) at com.test.appium.RealBucksAutomation.model.RealbuckTestStepModules.clickOntextbox(MyClassStepModules.java:17) at com.test.appium.RealBucksAutomation.model.RealbuckTestStepModules$$EnhancerByCGLIB$$a9d424f2.CGLIB$clickOntextbox$1(<generated>) at com.test.appium.RealBucksAutomation.model.RealbuckTestStepModules$$EnhancerByCGLIB$$a9d424f2$$FastClassByCGLIB$$54e5228d.invoke(<generated>) at net.sf.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:228) at net.thucydides.core.steps.StepInterceptor.invokeMethod(StepInterceptor.java:348) at net.thucydides.core.steps.StepInterceptor.executeTestStepMethod(StepInterceptor.java:333) at net.thucydides.core.steps.StepInterceptor.runTestStep(StepInterceptor.java:308) at net.thucydides.core.steps.StepInterceptor.testStepResult(StepInterceptor.java:130) at net.thucydides.core.steps.StepInterceptor.intercept(StepInterceptor.java:57) at com.test.appium.RealBucksAutomation.model.RealbuckTestStepModules$$EnhancerByCGLIB$$a9d424f2.clickOntextbox(<generated>) at com.test.appium.RealBucksAutomation.steps.RealbuckTestSteps.givenIClickOnTheTextBox(MyClassTestSteps.java:14) at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) at java.lang.reflect.Method.invoke(Unknown Source) at org.jbehave.core.steps.StepCreator$ParametrisedStep.perform(StepCreator.java:599) at org.jbehave.core.embedder.PerformableTree$FineSoFar.run(PerformableTree.java:340) <issue_comment>username_1: @username_0 thanks for this issue! WebDriver facade should be updated to implement MobileDriver as well. Please try to run something like (I think you will lose some statistics until we fix this bug): WebDriver facade = getDriver(); WebDriver driver = ((WebDriverFacade)facade).getDriver(); TouchAction actions = new TouchAction((MobileDriver)driver); <issue_comment>username_0: Hi @username_1 Sorry, when trying your code I get this error: "The method getDriver() is undefined for the type WebDriverFacade" <issue_comment>username_1: @username_0 I was talking about this method: https://github.com/serenity-bdd/serenity-core/blob/f6d14d9b92d5f63bc64431cb24494dbfd651cf38/serenity-core/src/main/java/net/thucydides/core/webdriver/WebDriverFacade.java#L101 <issue_comment>username_2: Once more, I tested with the default browser in Android. It had the same issue. <issue_comment>username_3: @username_0 were you able to get this working? <issue_comment>username_0: Hi @username_3 Yes, now I am able to get this working. Thanks<issue_closed> <issue_comment>username_4: Can you please let me know how you got this working? I am running into the same problem; any sample code is appreciated. <issue_comment>username_0: @username_4 This is how I did it.
```java
WebDriver facade = getDriver();
WebDriver driver = ((WebDriverFacade) facade).getProxiedDriver();
TouchAction actions = new TouchAction((MobileDriver) driver);
```

<issue_comment>username_4: Thank you Selva, but I get a different error if I use it as below:

```java
WebDriver driver = getDriver();
WebDriver driver1 = ((WebDriverFacade) driver).getProxiedDriver();
TouchAction tAction1 = new TouchAction((MobileDriver) driver1);
```

The error occurs in this line: `tAction1.moveTo(element).release().perform();`

```
java.lang.ClassCastException: com.sun.proxy.$Proxy19 cannot be cast to org.openqa.selenium.remote.RemoteWebElement
```

While performing actions I get the error. Thanks & regards, Divya <issue_comment>username_5: @username_0 I tried your way, but was getting the below error:

```
Error:org.openqa.selenium.remote.RemoteWebDriver$$EnhancerByCGLIB$$6191bd78 cannot be cast to io.appium.java_client.MobileDriver
```

Please note: I am running the test on Sauce Labs. Could you please provide a solution? Many thanks in advance. Thanks, Lovesh Jain <issue_comment>username_6: Yes, I am also getting the same error while running this on Sauce Labs. It works fine when running on the simulator on my local machine, but gives "cannot be cast to io.appium.java_client.MobileDriver" on Sauce Labs. Please help. Many thanks in advance, Charan <issue_comment>username_5: @username_6 Hi Charan, I have raised a separate issue with @username_3 and am waiting for a proper solution, but meanwhile I was able to resolve the issue with an alternative. The issue is that when you run the test on Sauce Labs and set the Serenity property `saucelabs.url=...`, the driver is considered a RemoteWebDriver, and a remote driver cannot be cast to another driver type. So to resolve the issue, don't provide the `saucelabs.url` property and run the test.
The only issue you would get is that the Serenity report won't contain your Sauce Labs job video link, because `SaucelabsLinkGenerator` uses `saucelabs.url` to build the video link in the report. You can therefore modify the function as below to check the Sauce Labs access key instead:

```java
private boolean saucelabsIsNotConfigured() {
    return (ThucydidesSystemProperty.SAUCELABS_ACCESS_KEY.from(environmentVariables) == null);
}
```

Having said that, you will encounter one more issue: when your test case fails, Serenity will try to take a screenshot if you have set `serenity.take.screenshots=FOR_FAILURES` or `serenity.take.screenshots=AFTER_EACH_STEP`. Since we are not providing the `saucelabs.url` property, the driver is not considered a RemoteDriver and the RemoteDriver takeScreenshot path is never used, so your test case will hang until the Sauce Labs job timeout (30 minutes by default) is reached. I would recommend setting `serenity.take.screenshots=DISABLED`. Let me know if you face any issue. Thanks, Lovesh Jain +642108258442 <issue_comment>username_6: Hi Lovesh, thanks for the reply. Can you please explain how and where to use the function you mentioned (`saucelabsIsNotConfigured`)? How exactly can we configure Sauce Labs after disabling the `saucelabs.url` Serenity property? Really appreciate your help. <issue_comment>username_5: Hi Charan, you can still run your Appium mobile test cases on Sauce Labs by putting the Sauce Labs localhost path in the `appium.hub` property as below. I have commented out `saucelabs.url` but provided that URL in `appium.hub`, so the driver is considered an Android driver and not a RemoteDriver.
```
saucelabs.test.name = Test-Saucelabs-Android
webdriver.driver = appium
appium.hub = http://SaucelabsUsername:SaucelabsAccessKey@localhost:4445/wd/hub
appium.platformName = Android
appium.platformVersion = 5.1
appium.deviceName = Android Emulator
appium.browserName = Browser
appium.deviceType = tablet
appium.deviceOrientation = portrait
appium.hasTouchScreen= true
#saucelabs.url=http://SaucelabsUsername:SaucelabsAccessKey@localhost:4445/wd/hub
saucelabs.access.key=SaucelabsAccessKey
saucelabs.user.id=SaucelabsUsername
```

With respect to the video link in the report: the function is in `net.thucydides.core.reports.saucelabs.SaucelabsLinkGenerator`:

```java
private boolean saucelabsIsNotConfigured() {
    return (ThucydidesSystemProperty.SAUCELABS_URL.from(environmentVariables) == null);
}
```

Just replace this with:

```java
private boolean saucelabsIsNotConfigured() {
    return (ThucydidesSystemProperty.SAUCELABS_ACCESS_KEY.from(environmentVariables) == null);
}
```

You need to update the jar source. Thanks, Lovesh Jain +642108258442 <issue_comment>username_7: Can this be re-opened? The problem still persists today
<issue_start><issue_comment>Title: Add generic flash latency computation for STM32 and LPC11. username_0: This adds the required look-up tables and algorithm for computing and setting the flash latency for any frequency at any voltage for any STM32F target. Example [generated table for STM32F407](https://gist.github.com/username_0/2a3f740ab527f7c58928?ts=4). Tested in hardware on:
- STM32F072,
- STM32F100,
- STM32F103,
- STM32F103,
- STM32F303,
- STM32F407,
- STM32F429,
- LPC11C24.

<issue_comment>username_0: cc @username_1: Should now be as easy as [`ClockControl::setFlashLatency(MHz64);`](https://github.com/roboterclubaachen/xpcc/blob/develop/examples/nucleo_f103rb/nucleo_f103rb.hpp#L104) <issue_comment>username_1: Thanks, it looks good! I'll check it when I have the time, but probably not in the next few days, as a deadline is really close and I have to get to a presentable state with the project :)
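The table-driven selection described in this PR can be illustrated with a few lines. This is only a sketch of the general idea (pick the smallest wait-state count whose frequency ceiling covers the target clock), not the xpcc implementation; the thresholds below are invented for the example and real values depend on the target and supply voltage.

```python
# Illustrative sketch of table-driven flash-latency selection.
# The wait-state ceilings below are made-up example numbers.
FLASH_LATENCY_TABLE = {
    # supply voltage (mV) -> max HCLK (Hz) per wait-state count (index = wait states)
    3300: [30_000_000, 60_000_000, 90_000_000, 120_000_000, 150_000_000, 180_000_000],
}

def flash_latency(hclk_hz, voltage_mv=3300):
    """Return the smallest wait-state count whose ceiling covers hclk_hz."""
    for wait_states, max_hz in enumerate(FLASH_LATENCY_TABLE[voltage_mv]):
        if hclk_hz <= max_hz:
            return wait_states
    raise ValueError("frequency too high for this voltage")

print(flash_latency(64_000_000))  # -> 2 with the example table
```

With a real per-device table this is exactly the kind of lookup a `setFlashLatency(MHz64)` call can resolve at compile time.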
<issue_start><issue_comment>Title: Skip empty lines in the input file username_0: <issue_comment>username_1: @username_0, that won't work. Some systems put a carriage return and some don't:

```
if not line.strip():
    # do something
```

or

```
if not line in ['\r', '\r\n']:
    # do something
```

<issue_comment>username_2: +1

```
if not line.strip():
```

<issue_comment>username_0: done
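For reference, the two checks suggested above differ slightly: `line.strip()` treats whitespace-only lines as empty too, while the membership test only catches bare line endings. A small self-contained sketch of the `strip()` variant:

```python
def non_empty_lines(lines):
    """Skip lines that are empty or whitespace-only (covers \n, \r\n, spaces/tabs)."""
    return [line for line in lines if line.strip()]

lines = ["alpha\n", "\n", "\r\n", "   \n", "beta\r\n"]
print(non_empty_lines(lines))  # -> ['alpha\n', 'beta\r\n']
```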
<issue_start><issue_comment>Title: Bug on CC0029 (Call GC.SuppressFinalize on dispose) with expression based methods username_0: This code:

````csharp
public virtual void Dispose() => appInstance.Dispose();
````

Will be incorrectly fixed to:

````csharp
public virtual void Dispose()
{
    GC.SuppressFinalize(this);
} => appInstance.Dispose();
````

It should instead be:

````csharp
public virtual void Dispose()
{
    GC.SuppressFinalize(this);
    appInstance.Dispose();
}
````

<issue_comment>username_0: Closed on #673.<issue_closed>
<issue_start><issue_comment>Title: Followed module owner isn't highlighted consistently username_0: In this search I'm logged in as my GitHub user and searching for "cli list". A lot of modules by "sindresorhus" come up first, but only the first 4 show him highlighted: ![screen shot 2016-01-10 at 18 27 29](https://cloud.githubusercontent.com/assets/10602/12222859/ee92faae-b7c7-11e5-84d5-a5fa48f13baf.png)
<issue_start><issue_comment>Title: Add Attributes API [Mostly Complete] username_0: Solves #372. I wish I had a more detailed description, but I can't really think of anything to write. [This](http://minecraft.gamepedia.com/Attribute) page describes how attributes work. The only remaining issue right now is that i'm not sure of the best way to remove attributes from entities. Please discuss in the comments. <issue_comment>username_1: Missing a `package-info.java` for the `org.spongepowered.api.attributes` package. <issue_comment>username_2: Please correctly wrap javadoc comments, they should not exceed 80 characters per line <issue_comment>username_3: :+1: Pinging @SpongePowered/developers for final review <issue_comment>username_4: I'm not sure if it was mentioned somewhere in the outdated comments, but having AttributeModifier.getName() and .getUUID() would be useful <issue_comment>username_0: @username_4 I am unsure of the usefulness of that, because no name is stored and the UUIDs are inconsistent at best: "...Note that some UUIDs are generated on-the-fly, while others are represented as fixed strings in the code..." <issue_comment>username_4: Name is a description and somewhat useful for that (although only really for developers). UUID is 'identity' and is important as a new modifier added with the same UUID will replace any existing applied modifier with the same one -- most of them are fixed (e.g., CB3F55D3-645C-4F38-A497-9C13A33DB5CF is always added damage from active item, and has different names depending on the item), unless it's a completely dynamic modifier that should apply along with all others. <issue_comment>username_0: @username_4 while that's true, I don't see a need to expose it to plugin developers, and in a way it seems like an implementation detail. <issue_comment>username_5: I agree with getName being added. Imagine the case of printing "Added ATTRIBUTE_NAME to player" to the console. 
<issue_comment>username_0: @username_5 I see the reasoning behind that. I'll add it soon. <issue_comment>username_6: I don't know much about attributes in the base game, but is there not a way for plugins to create their own attributes? <issue_comment>username_0: @username_6 It seems, inspecting the attribute code that this would mean modification to the MC entity classes, otherwise it would only be useful for new entities, which plugins can't make anyway. <issue_comment>username_7: Not yet, it is planned for the next big MC release. (1.9) Although i don't know anything about its capabilities, because its totally defined via NBT. <issue_comment>username_0: @username_7 Yes, they are persistent. And since we'll have some other way to attach tags to entites, we don't really need custom attributes. <issue_comment>username_0: Wow, I accidentally hit the wrong button. <issue_comment>username_6: This exactly. I was hoping to be able to take advantage of the ```ItemStack``` attribute modifiers with attributes created such as "INT" "STR" "DEX" and what not. It would make things a lot simpler for rpg plugins. <issue_comment>username_0: @username_6 That would be better served using a metadata system for ItemStack and Entity than making "fake" minecraft Attributes, I think. <issue_comment>username_0: @username_6 Especially since attribute display logic is client-side, custom attributes wouldn't display as "+5 Intelligence" just by being there. <issue_comment>username_6: @username_0 I see that's unfortunate <issue_comment>username_7: @username_0 But that could be covered by a Sponge client plugin easily i guess. It is planned anyway, so we should probably add it to the feature request list. https://github.com/SpongePowered/SpongeAPI/issues/429 <issue_comment>username_0: @username_7 While that's true, the Client API is supposed to be optional, so custom attributes shouldn't be part of this PR, Though they could be part of it. 
<issue_comment>username_7: @username_0 That's true, in the entire API there is no possibility to add custom stuff yet. This is most likely the case until the Inventory API is finished. But you are right, adding custom stuff is not / should not be part of this PR. <issue_comment>username_3: Attributes can have multiple operations, see http://minecraft.gamepedia.com/Attribute#Vanilla_Modifiers. <issue_comment>username_7: @username_3 I can't see anything in the API that prevents that. The multiplicity is covered by using multiple `AttributeModifier`s. <issue_comment>username_3: @username_7 Our goal is to model the game, and the game simply doesn't work like that. See "Leader zombie bonus". <issue_comment>username_0: @username_3 I'll refactor it to allow multiple thingies. <issue_comment>username_6: Wait, the link @username_3 gave shows a modifier labeled as "Unknown" at the bottom, stating that it's a modifier sent by the server that is unknown. So custom attributes are possible server-side even if the client doesn't see them on the item stack? <issue_comment>username_3: @username_6 Yes. <issue_comment>username_7: @username_3 Look at Minecraft's implementation: they have their own `AttributeModifier` for each attribute they modify. So the current implementation is totally correct. <issue_comment>username_3: @username_7 Hmm, fair point. Then we probably want to revert 2390a61749 <issue_comment>username_0: @username_3 Alright, reverting. <issue_comment>username_0: @username_3 I'm afraid I've never used git to revert before, how would I? <issue_comment>username_3:

```
git revert --no-commit 2b2cc7
git revert --no-commit 2390a61
git commit -m "Revert adding AttributeModifierEntry"
```

<issue_comment>username_7: Here is an example implementation of the `AttributeModifications` class that reflects the current Minecraft implementation (using the old system).
````java
public final class AttributeModifications {

    public static final List<AttributeModifier> ZOMBIE_LEADER_BONUS;
    public static final List<AttributeModifier> ZOMBIE_RANDOM_SPAWN_BONUS;

    static {
        ZOMBIE_LEADER_BONUS = new ArrayList<>();
        ZOMBIE_LEADER_BONUS.add(new AttributeModifier(Attributes.MAXHEALTH, Operations.ADD, 42));
        ZOMBIE_LEADER_BONUS.add(new AttributeModifier(Attributes.ZOMBIE_REINFORCEMENTS, Operations.MULTIPLY, 42));
        ZOMBIE_RANDOM_SPAWN_BONUS = new ArrayList<>();
    }
}
````

<issue_comment>username_0: Alright, it's rolled back. I just used `git reset --hard HEAD~2`, sorry about the missing commit message. <issue_comment>username_3: @username_7 While that's nice, I don't think I'd bother. The only usage for that is adding multiple attribute modifiers; trying to detect if `ZOMBIE_LEADER_BONUS` is actually applied is a different story. <issue_comment>username_7: @username_3 The reason why an `AttributeModifications` class would be cool is not the removal of said attribute from any entity, it's the addition:

````java
entity.addAttributeModifiers(AttributeModifications.ZOMBIE_LEADER_BONUS);
````

But you are right, it's a nice-to-have, but optional. <issue_comment>username_7: @username_0 What about extracting `AttributeSource` or `AttributeModifierSource`? `PotionEffectType` also uses `AttributeModifier`s so it could extend that interface too. Notice: `PotionEffectType` is called `Potion` in MCP. <issue_comment>username_0: @username_7 Could you explain a bit more? <issue_comment>username_7: @username_0 Sure

````java
[..]
public interface AttributeSource {

    /**
     * Gets all {@link AttributeModifier}s on this source.
     *
     * @return All AttributeModifiers on this source
     */
    Collection<AttributeModifier> getAttributeModifiers();

    /**
     * Adds a {@link AttributeModifier} to this source.
     *
     * @param modifier The AttributeModifier to add to this source
     */
    void addAttributeModifier(AttributeModifier modifier);

    /**
     * Removes a {@link AttributeModifier} from this source.
     *
     * @param modifier The AttributeModifier to remove from this source
     */
    void removeAttributeModifier(AttributeModifier modifier);
}
````

[ItemStack](https://github.com/SpongePowered/SpongeAPI/blob/master/src/main/java/org/spongepowered/api/item/inventory/ItemStack.java)

````java
[..]
public interface ItemStack extends Serializable, DataSerializable, AttributeSource {
[..]
````

[PotionEffectType](https://github.com/SpongePowered/SpongeAPI/blob/master/src/main/java/org/spongepowered/api/potion/PotionEffectType.java)

````java
[..]
public interface PotionEffectType extends AttributeSource {
[..]
````

Which will grant access to `net.minecraft.potion.Potion.attributeModifierMap` (line 86 from MCP 1.8.0). But I'm not sure whether we should grant more than read access to `PotionEffectType` `AttributeModifier`s. I haven't found any up-to-date class for that, so I reference a file in Bukkit: [Potion](https://github.com/Bukkit/mc-dev/blob/c1627dc9cc7505581993eb0fa15597cb36e94244/net/minecraft/server/MobEffectList.java#L14) The last method call in that line is the addition of the AttributeModifier. <issue_comment>username_0: @username_7 Got it. It'll be in the next commit. <issue_comment>username_7: @username_0 Yeah, that's exactly what I wanted. Thanks a lot for your work. <issue_comment>username_0: @username_7 No problem. I hope you think the mutable distinction was a good idea. <issue_comment>username_7: @username_0 Yes, it is a good idea. I thought of it myself, but I wasn't sure how to name the read-only variant. Your choice is very good here. In some other PR there was a discussion about MutableX extends ImmutableX, but I'm glad you avoided that issue. There are basically two or three points left from my point of view:
- Add helper methods to `SpongeEventFactory`
- Maybe add `Optional<AttributeSource> getSource()` to `AttributeModifiedEvent` to get the PotionEffectType or ItemStack that causes the attribute change
- Separate `AddAttributeModifiedEvent` from `RemoveAttributeModifiedEvent` -- you have already commented on this and stated your opinion, so we don't have to discuss it again

I'm not a team member, so you don't have to change anything if you don't want to. <issue_comment>username_5: @SpongePowered/developers - does everyone feel this is ready for merging? <issue_comment>username_1: I was hoping that movement speed would be represented by Attributes. Can we consider adding those? <issue_comment>username_0: @username_1 Is it not? Attributes.GENERIC_MOVEMENT_SPEED? <issue_comment>username_1: Does that apply for both fly speed and walking speed? <issue_comment>username_0: @username_1 I'm not sure, but it's the only movement attribute. <issue_comment>username_1: @username_0 Well, considering there is a value stored for both flight and walk speed, I'd want an attribute for both. <issue_comment>username_0: @username_1 I'm not creating these attributes; these are the ones present in vanilla MC. As stated above, creation of new Attributes is out of the scope of this PR. <issue_comment>username_1: I'll allow a few additional Attributes to be added to the API for this case. There isn't a reason we can't emulate the attributes for player movement and flight speed. So, again, please add these two attributes. <issue_comment>username_1: @username_0 Can you add the following for Humans:
- Food Level
- Saturation
- Exhaustion

The reasoning is that they too are attributes of the Human entity, much like movement speeds. <issue_comment>username_0: @username_1 That's going to cause problems with AttributeModifiers, unless Sponge actually adds these attributes, which would cause client issues.
<issue_comment>username_0: @username_1 If you say so. <issue_comment>username_8: Another thing, it would be good to support plugin-added custom attributes. There is nothing stopping plugin-added attributes, as we can persist them with the rest of Sponge's extraneous data, and they'll even be available on the client for client mods to access. <issue_comment>username_0: @username_8 Custom attributes were discussed earlier in this discussion, and we decided they were out of scope for this PR, and maybe Client API exclusive. <issue_comment>username_8: Not true, attributes added to existing entities can still be used by plugins in place of needing to store their own custom data. <issue_comment>username_4: @username_1 -- all your suggestions here are massively out of scope of what an 'attribute' in the context of Minecraft means. <issue_comment>username_4: @username_1 For more clarity: The API's job is to expose what Minecraft does in a fairly easy to use manner, preferably in a way that makes it not too bad for alternate servers to use. Its job is not to randomly expand concepts just because you think it looks cleaner. For things like this PR, that correspond directly to things vanilla does that are well exposed to users of vanilla (in this case, it is heavily documented for mapmaker use), we should keep that correspondence as close as possible. <issue_comment>username_1: Sorry, but I do not see where we are breaking the contract of the defined usage of Attributes; we're merely expanding on the possible usages of what is stored in Attributes. Examples of this expansion include the Inventory API. It's an entirely different setup compared to what vanilla Minecraft/Bukkit has/had and yet facilitates easier usage of the API. <issue_comment>username_4: For the record I don't really like the inventory API concept.
(I could probably go on and name half the stuff in the API that I don't really like) <issue_comment>username_9: I think we agree that the API is a balance of several goals:
* Something that greatly expands the ability to modify MC in as many ways as possible
* Something that is consistent and portable between MC versions
* Something that "fits together" in the grand scheme of the entire API
* Something that is within a scope that we, the team, are able to maintain between MC versions (time and effort-wise)

In my opinion, I think we should provide the ability to register new attributes, but I disagree with making some currently non-attributes "virtual" attributes, because attributes are a "public-facing" feature that people can use when making maps. <issue_comment>username_5: See ^, this guy makes sense. <issue_comment>username_10: Agreed, virtual attributes are not the way to do this. A "property" API might offer a similarly clean representation that isn't muddled with Minecraft's definition of an attribute. <issue_comment>username_0: @username_10 I was thinking there'd be something in the Persistence API, akin to Bukkit's metadata, that would be better used for tagging entities. <issue_comment>username_10: Yeah, that's the same thing I was thinking. <issue_comment>username_1: @username_0 Please do the following:
- Remove virtual attributes (they can be discussed and decided upon in the future)
- Add customized attribute registration for plugins
- Fix merge conflicts

<issue_comment>username_0: @username_1 Are you sure the second is necessary? As I said before, it seems like plugins should utilize the Persistence API to save data to entities and items, rather than attributes. <issue_comment>username_10: @username_1 I'm with @username_0 on this, it doesn't make much sense to put custom support anywhere near Attributes. <issue_comment>username_11: I'm not seeing an issue with custom attributes. Virtual ones I see the point about.
<issue_comment>username_10: @username_1 What's the point of custom attributes? <issue_comment>username_1: As attributes are implemented in Minecraft, they are various `double` values assigned to entities that can be modified by something as simple as a held or equipped item that has an attribute modifier for that specific attribute. Given that this behavior is entirely supported within Minecraft by default, it would make sense to do the following:
- Create a loosely based API around the implementation of Attributes as they are in Minecraft
- Allow the same freedoms Minecraft grants by having custom attributes created as needed

Given that the first is already done in this PR, the second should most definitely be allowed for the same reason that map makers are able to create their own [AttributeModifiers](http://minecraft.gamepedia.com/Attributes#Vanilla_Modifiers). I have already tested custom attributes being registered with the game, and the game doesn't crash. Since custom attributes would allow storage of various bits of information, modifiers are immensely beneficial for applying a change to an attribute without even having to listen for a change of equipment or anything else for that matter. I'm failing to see why it's so hard to understand why custom attributes and modifiers are something that should be allowed in our API to begin with. I mean, isn't the whole point of our API to allow integration and customization with modding? <issue_comment>username_1: @username_0 You have some decent PRs going on, but in every single one of them, you have formatting issues. I'd appreciate some courtesy in fixing your auto-formatter, since we do provide the formatting styles for both [IntelliJ](https://github.com/SpongePowered/SpongeAPI/blob/master/extra/intellij/sponge_intellij_style.xml) and [Eclipse](https://github.com/SpongePowered/SpongeAPI/tree/master/extra/eclipse).
I'd want to handle merging some PRs but I simply can't get over all the formatting issues. ### FIX YOUR FORMATTING <issue_comment>username_0: @username_1 I have been working on the formatting. If you could point out what I've missed, that would help. I don't know how that happens though considering I'm using the sponge style, cleanup, and import order. <issue_comment>username_1: Oddly, when I made the comment, the files were showing up incorrectly formatted still. Excuse my rudeness. I'll start reviewing this PR for merging. <issue_comment>username_1: @username_0 If you can, add an `AttributeBuilder` for plugin created attributes. After which, we should be all ready to go.
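As background for the modifier-operation debate earlier in this thread: the vanilla operations on the linked Attribute wiki page combine both additively and multiplicatively. The sketch below is my reading of that wiki description (operation 0 = additive, 1 = multiply relative to the post-addition value, 2 = multiply the running total), not SpongeAPI or Mojang code; verify against the wiki before relying on it.

```python
def apply_modifiers(base, modifiers):
    """Combine attribute modifiers the way the Attribute wiki describes.

    modifiers: list of (operation, amount) pairs.
    """
    # Operation 0: flat additions to the base value.
    x = base + sum(a for op, a in modifiers if op == 0)
    # Operation 1: each adds (post-addition value * amount).
    y = x + sum(x * a for op, a in modifiers if op == 1)
    # Operation 2: each multiplies the running total by (1 + amount).
    for op, a in modifiers:
        if op == 2:
            y *= 1 + a
    return y

# e.g. base 20 health, +10 flat, +50% of the post-addition value, then doubled:
print(apply_modifiers(20, [(0, 10), (1, 0.5), (2, 1.0)]))  # -> 90.0
```

This is why the thread's distinction between "one modifier per operation" and "one modifier with multiple operations" matters: the three stages are not interchangeable.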
<issue_start><issue_comment>Title: Location of element in Android Storage username_0: Hello, I want to ask: where is the data of the Element application stored on Android? Usually installed application data is in the Internal Storage/Android/data/packageName folder, but for the Element application it is not found there. I think the folder must be encrypted, but where can I find it? Thanks.
<issue_start><issue_comment>Title: Feature request: A configurable property similar to `maven.executable.path` for Maven Daemon username_0:

## Use Cases

`maven.executable.path` points to the mvn executable and helps run mvn commands -- this is an existing feature. Along similar lines, if we can have a property supporting the [Maven Daemon or `mvnd`](https://github.com/mvndaemon/mvnd#configuration), that would help leverage `mvnd`. This might not be an override property; it might coexist alongside the existing property, as `mvn` and `mvnd` are clearly different.

## Examples

Having a configuration property like below:

`mvndaemon.executable.path`: `/home/username_0/.sdkman/candidates/mvnd/0.6.0/bin/mvnd`

enables this extension to run commands using `mvnd`, like `mvnd clean` and `mvnd clean install`.

<issue_comment>username_1: As far as I understand, both `mvn` and `mvnd` are used to run maven commands. Can we simply use the same setting `maven.executable.path` for `mvnd`?
<issue_comment>username_0: So, I think keeping the properties separate would be neat. This is the current hierarchy of the mvn / Maven Wrapper properties:

- `"maven.executable.path": "/home/username_0/.sdkman/candidates/mvnd/0.6.0/mvn/bin/mvn"`
- `"maven.executable.preferMavenWrapper": "true"` // uses the wrapper if mvnw is found, otherwise falls back to the mvn path

And after introducing `mvnd` it might have 2 properties:

- `"mavend.executable.path": "/home/username_0/.sdkman/candidates/mvnd/0.6.0/bin/mvnd"`
- `"mavend.executable.preferMavendWrapper": "true"` // uses the mvnd wrapper if found, otherwise falls back to the mvnd path

**Note:** an mvnd wrapper is not yet available as a library as far as I know, so that part is not relevant for now.

If both the mvn and mvnd properties are set, we may need a flag to tell the extension which one to use (`mvn` or `mvnd`), assuming a property like `"enable.mvnd": true` // defaults to false

<issue_comment>username_2: Would love to utilize `mvnd` under VS Code for its superior build performance. <issue_comment>username_3: Agreed
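Putting the proposal together, the requested configuration might look like this in VS Code's `settings.json`. Note that all the `mavend.*` and `enable.mvnd` property names here are only the ones proposed in this thread and do not exist in the extension today; `maven.executable.path` is the existing setting.

```json
{
  "maven.executable.path": "/home/username_0/.sdkman/candidates/mvnd/0.6.0/mvn/bin/mvnd",
  "mavend.executable.path": "/home/username_0/.sdkman/candidates/mvnd/0.6.0/bin/mvnd",
  "enable.mvnd": true
}
```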
<issue_start><issue_comment>Title: https://github.com/jlord/sheetsee.js/issues/26#issue-18393950 username_0: Visit - [ ] ### **_**@gitHub.mdf**_** [OPPO_CPH1823_CPH1823_2021-11-29_08-04-08.log](https://github.com/sheetsee.js/files/7651940/OPPO_CPH1823_CPH1823_2021-11-29_08-04-08.log)<issue_closed>
<issue_start><issue_comment>Title: 🛑 Oy 2 Us is down username_0: In [`c83c3aa`](https://github.com/username_0/upptime/commit/c83c3aa7cded636836f01b807b57f22a37d96d2a ), Oy 2 Us (https://oy2.us/admin/) was **down**: - HTTP code: 500 - Response time: 2075 ms <issue_comment>username_0: **Resolved:** Oy 2 Us is back up in [`d161c1c`](https://github.com/username_0/upptime/commit/d161c1c3e416f38a89d6d670fbd2cdac6c53a35f ).<issue_closed>
<issue_start><issue_comment>Title: minecraft-launcher cannot be opened with DLSYM username_0: I updated the official Minecraft launcher today and I'm currently unable to open it with DLSYM, although I can open it without DLSYM (which probably reveals where the problem is). Note: the last lines of this log were repeated over and over until I force quit the program.

```
❯ mangohud --dlsym /usr/bin/minecraft-launcher
[Info: 2021-06-17 04:19:19.094429763: NetQueue.cpp(157)] NetQueue: Setting up.
[Info: 2021-06-17 04:19:19.094672676: mainLinux.cpp(250)] Running launcher bootstrap (version 921)
[Info: 2021-06-17 04:19:19.094580033: NetQueue.cpp(575)] NetQueue: worker thread started.
[Debug: 2021-06-17 04:19:19.094899085: PlatformLinux.cpp(44)] HOME dir (getpwuid): /home/zlendy
[Debug: 2021-06-17 04:19:19.094910046: PlatformLinux.cpp(49)] HOME dir (env): /home/zlendy
[Error: 2021-06-17 04:19:19.096881400: localization.cpp(45)] No locale matching [C.json] found.
[Error: 2021-06-17 04:19:19.096892827: localization.cpp(110)] No translation found for "Unable to download <game>. Please try again later."
[Error: 2021-06-17 04:19:19.096900491: localization.cpp(110)] No translation found for "Unable to download the <game> Native Launcher."
[Error: 2021-06-17 04:19:19.096905815: localization.cpp(110)] No translation found for "Unable to update the <game> Game Launcher."
[Error: 2021-06-17 04:19:19.096911547: localization.cpp(110)] No translation found for "Unable to download the <game> Runtime Environment."
[Error: 2021-06-17 04:19:19.096916707: localization.cpp(110)] No translation found for "Unable to update the <game> Game Launcher."
[Error: 2021-06-17 04:19:19.096921791: localization.cpp(110)] No translation found for "Unable to update the <game> Runtime Environment."
[Error: 2021-06-17 04:19:19.096926831: localization.cpp(110)] No translation found for "Unable to update the <game> Native Launcher."
[Error: 2021-06-17 04:19:19.096933384: localization.cpp(110)] No translation found for "Unable to relaunch the Minecraft bootstrap."
[Error: 2021-06-17 04:19:19.096939270: localization.cpp(110)] No translation found for "Unable to start the <game> Runtime Environment. This is most likely caused by a corruption. Please try to reinstall <game>."
[Error: 2021-06-17 04:19:19.096945099: localization.cpp(110)] No translation found for "Unable to start <game>, if you are running from a dmg, please drag to Applications and try again."
[Error: 2021-06-17 04:19:19.096950122: localization.cpp(110)] No translation found for "Unable to initialize the network library."
[Error: 2021-06-17 04:19:19.096955575: localization.cpp(110)] No translation found for "<game> Launcher"
[Info: 2021-06-17 04:19:19.096960496: Common.cpp(32)] Native Launcher Version: 921
[Info: 2021-06-17 04:19:19.096973331: Common.cpp(33)] Operating System: Linux
[Info: 2021-06-17 04:19:19.096984174: Common.cpp(34)] Application Data directory: //.../.minecraft
[Info: 2021-06-17 04:19:19.096996710: Common.cpp(35)] Executable Path: /.../minecraft-launcher
[Info: 2021-06-17 04:19:19.097006713: Common.cpp(36)] App Directory dir: //.../bin
[Info: 2021-06-17 04:19:19.097016013: Common.cpp(37)] Game data directory: //.../.minecraft
[Info: 2021-06-17 04:19:19.097025258: Common.cpp(38)] Launcher dir: //.../launcher
[Info: 2021-06-17 04:19:19.097035389: Common.cpp(39)] Java dir: //.../jre-x64
[Info: 2021-06-17 04:19:19.097044824: Common.cpp(40)] TmpDir dir: //.../MinecraftLauncher.1000
[Info: 2021-06-17 04:19:19.097053964: Common.cpp(41)] x64: true
[Info: 2021-06-17 04:19:19.097376207: mainLinux.cpp(333)] resolvedExe: /home/zlendy/.minecraft/launcher/minecraft-launcher selfExe: /usr/bin/minecraft-launcher
[Info: 2021-06-17 04:19:19.097392555: mainLinux.cpp(337)] Top level executable, forwarding signals.
[Debug: 2021-06-17 04:19:19.097466124: SentryAPI.cpp(314)] The string '/usr' was added to the logging blacklist
[Debug: 2021-06-17 04:19:19.097482012: SentryAPI.cpp(314)] The string '///usr' was added to the logging blacklist
[Debug: 2021-06-17 04:19:19.097511861: SentryAPI.cpp(314)] The string '/home/zlendy' was added to the logging blacklist
[Debug: 2021-06-17 04:19:19.097524997: SentryAPI.cpp(314)] The string '///home/zlendy' was added to the logging blacklist
[Debug: 2021-06-17 04:19:19.097552385: SentryAPI.cpp(314)] The string '/home/zlendy/.minecraft' was added to the logging blacklist
[Debug: 2021-06-17 04:19:19.097577384: SentryAPI.cpp(314)] The string '///home/zlendy/.minecraft' was added to the logging blacklist
[Debug: 2021-06-17 04:19:19.097608410: SentryAPI.cpp(314)] The string '/home/zlendy/.minecraft/runtime' was added to the logging blacklist
[Debug: 2021-06-17 04:19:19.097635103: SentryAPI.cpp(314)] The string '///home/zlendy/.minecraft/runtime' was added to the logging blacklist
[Debug: 2021-06-17 04:19:19.097648937: SentryAPI.cpp(314)] The string '/tmp' was added to the logging blacklist
[Debug: 2021-06-17 04:19:19.097661083: SentryAPI.cpp(314)] The string '///tmp' was added to the logging blacklist
[Info: 2021-06-17 04:19:19.097671938: Common.cpp(32)] Native Launcher Version: 921
[Info: 2021-06-17 04:19:19.097706225: Common.cpp(33)] Operating System: Linux
[Info: 2021-06-17 04:19:19.097739926: Common.cpp(34)] Application Data directory: .../.minecraft
[Info: 2021-06-17 04:19:19.097762003: Common.cpp(35)] Executable Path: /.../minecraft-launcher
[Info: 2021-06-17 04:19:19.097796908: Common.cpp(36)] App Directory dir: .../bin
[Info: 2021-06-17 04:19:19.097849790: Common.cpp(37)] Game data directory: .../.minecraft
[Info: 2021-06-17 04:19:19.097871840: Common.cpp(38)] Launcher dir: .../.minecraft/launcher
[Info: 2021-06-17 04:19:19.097893182: Common.cpp(39)] Java dir: .../.minecraft/runtime/jre-x64
[Info: 2021-06-17 04:19:19.097914721: Common.cpp(40)] TmpDir dir: .../MinecraftLauncher.1000
[Info: 2021-06-17 04:19:19.097962067: Common.cpp(41)] x64: true
[Info: 2021-06-17 04:19:19.103028416: mainLinux.cpp(441)] Current executable is not in game directory, update and run game directory version
[Info: 2021-06-17 04:19:19.103091982: mainLinux.cpp(455)] Attempting to start the launcher
[Info: 2021-06-17 04:19:19.103118551: mainLinux.cpp(118)] Window closing, exiting process
https://launchermeta.mojang.com/v1/products/launcher/6f083b80d5e6fabbc4236f81d0d8f8a350c665a9/linux.json
[0617/061919.496212:INFO:main_context.cpp(136)] CEF initialized successfully.
[0617/061919.496279:INFO:main_context.cpp(138)] CEF version: 86.0.23+ga2c2edf+chromium-86.0.4240.193
MANGOHUD: Failed to get function 'ANGLEGetDisplayPlatform'
[0617/061919.502209:ERROR:gl_surface_egl.cc(1285)] ANGLE Platform initialization failed.
skipping config: /home/zlendy/.minecraft/launcher/MangoHud.conf [ not found ]
skipping config: /home/zlendy/.config/MangoHud/minecraft-launcher.conf [ not found ]
parsing config: /home/zlendy/.config/MangoHud/MangoHud.conf [ ok ]
sh: line 1: orca: command not found
Created browser window for reuse: 0x8000001
MANGOHUD: Uploading is disabled (permit_upload = 0)
Version: 4.5
[0617/061920.054615:ERROR:sandbox_linux.cc(374)] InitializeSandbox() called with multiple threads in process gpu-process.
[0617/061920.154696:ERROR:skia_output_surface_impl_on_gpu.cc(1204)] Failed to make current.
[0617/061920.171828:ERROR:raster_decoder.cc(1027)] RasterDecoderImpl: Context lost during MakeCurrent.
[0617/061920.174754:ERROR:skia_output_surface_impl_on_gpu.cc(1204)] Failed to make current.
[0617/061920.188731:ERROR:skia_output_surface_impl_on_gpu.cc(1204)] Failed to make current.
[0617/061920.204802:ERROR:raster_decoder.cc(1027)] RasterDecoderImpl: Context lost during MakeCurrent.
[0617/061920.222165:ERROR:skia_output_surface_impl_on_gpu.cc(1204)] Failed to make current.
```
[0617/061920.252875:ERROR:raster_decoder.cc(1027)] RasterDecoderImpl: Context lost during MakeCurrent. [0617/061920.257103:ERROR:skia_output_surface_impl_on_gpu.cc(1204)] Failed to make current. [0617/061920.354496:ERROR:skia_output_surface_impl_on_gpu.cc(1204)] Failed to make current. [0617/061920.436627:ERROR:raster_decoder.cc(1027)] RasterDecoderImpl: Context lost during MakeCurrent. [Truncated] [0617/061920.982247:ERROR:shared_image_manager.cc(214)] SharedImageManager::ProduceSkia: Trying to Produce a Skia representation from a non-existent mailbox. [0617/061920.982538:WARNING:ipc_message_attachment_set.cc(49)] MessageAttachmentSet destroyed with unconsumed attachments: 0/1 [0617/061920.982516:ERROR:shared_image_manager.cc(214)] SharedImageManager::ProduceSkia: Trying to Produce a Skia representation from a non-existent mailbox. [0617/061920.982889:ERROR:shared_image_manager.cc(214)] SharedImageManager::ProduceSkia: Trying to Produce a Skia representation from a non-existent mailbox. [0617/061920.983184:ERROR:shared_image_manager.cc(214)] SharedImageManager::ProduceSkia: Trying to Produce a Skia representation from a non-existent mailbox. [0617/061920.983451:ERROR:shared_image_manager.cc(214)] SharedImageManager::ProduceSkia: Trying to Produce a Skia representation from a non-existent mailbox. [0617/061920.983532:ERROR:shared_image_manager.cc(214)] SharedImageManager::ProduceSkia: Trying to Produce a Skia representation from a non-existent mailbox. [0617/061920.983695:ERROR:shared_image_manager.cc(214)] SharedImageManager::ProduceSkia: Trying to Produce a Skia representation from a non-existent mailbox. [0617/061920.984117:ERROR:shared_image_manager.cc(214)] SharedImageManager::ProduceSkia: Trying to Produce a Skia representation from a non-existent mailbox. [0617/061920.984288:ERROR:shared_image_manager.cc(214)] SharedImageManager::ProduceSkia: Trying to Produce a Skia representation from a non-existent mailbox. 
[0617/061920.991897:ERROR:skia_output_surface_impl_on_gpu.cc(1204)] Failed to make current.
[0617/061920.992159:ERROR:skia_output_surface_impl_on_gpu.cc(1204)] Failed to make current.
[0617/061921.002385:ERROR:raster_decoder.cc(1027)] RasterDecoderImpl: Context lost during MakeCurrent.
[0617/061921.035921:WARNING:ipc_message_attachment_set.cc(49)] MessageAttachmentSet destroyed with unconsumed attachments: 0/1
[0617/061921.036402:WARNING:ipc_message_attachment_set.cc(49)] MessageAttachmentSet destroyed with unconsumed attachments: 0/1
[0617/061921.383998:ERROR:gl_surface_egl.cc(767)] EGL Driver message (Critical) eglMakeCurrent: Failed to make the GLX context current
[0617/061921.392454:ERROR:gl_surface_egl.cc(767)] EGL Driver message (Critical) eglMakeCurrent: Failed to make the GLX context current
[0617/061921.392855:ERROR:skia_output_surface_impl_on_gpu.cc(1204)] Failed to make current.
[0617/061921.978187:WARNING:gpu_process_host.cc(1262)] The GPU process has crashed 1 time(s)
[Warning: 2021-06-17 04:19:22.046657527: mainLinux.cpp(475)] Launcher exited with code: 1`
<issue_comment>username_1: Unable to reproduce. Works fine on my end with and without dlsym.
<issue_comment>username_0: Hi, I was once again reading this log and I noticed that there were many errors referencing the GPU. It turns out that disabling hardware acceleration "solves" the issue. These are the specs of my system in case anyone else is also experiencing this issue. If there is a common pattern between them it may be easier to solve.
```
OS: Arch Linux
CPU: Intel i5-6600 (4) @ 3.900GHz
GPU: NVIDIA GeForce GTX 1070
GPU Driver: NVIDIA 465.31
```
<issue_comment>username_2: I was able to reproduce with the following system:

|System Property|Version|
|-|-|
|OS|Arch Linux|
|Kernel|Linux 5.12.12-arch1-1|
|CPU|AMD Ryzen 5 3600 (12) @ 3.6GHz|
|GPU|NVIDIA GeForce RTX 2070|
|GPU Driver|NVIDIA 465.31|
|Graphics Platform|X11|

Steps to reproduce:
1. Install mangohud + the minecraft launcher
2.
run `mangohud minecraft-launcher`
3. press "Play" to launch the game
4. Crash

I've made a diff of the working (without mangohud) and non-working (with mangohud) launcher logs, so they can be compared:
```diff
--- minecraft-launcher.log	2021-06-24 17:14:14.697468156 -0400
+++ minecraft-launcher.mangohud.log	2021-06-24 17:13:47.223804149 -0400
@@ -39,20 +39,31 @@
 [Info: mainLinux.cpp(441)] Current executable is not in game directory, update and run game directory version
 [Info: mainLinux.cpp(455)] Attempting to start the launcher
 [Info: mainLinux.cpp(118)] Window closing, exiting process
 https://launchermeta.mojang.com/v1/products/launcher/6f083b80d5e6fabbc4236f81d0d8f8a350c665a9/linux.json
 [INFO: main_context.cpp(136)] CEF initialized successfully.
 [INFO: main_context.cpp(138)] CEF version: 86.0.23+ga2c2edf+chromium-86.0.4240.193
 sh: line 1: orca: command not found
 Created browser window for reuse: 0xca00001
+[ERROR: gl_surface_egl.cc(767)] EGL Driver message (Error) eglSwapBuffers: Failed to retrieve the size of the parent window.
+[ERROR: shared_image_stub.cc(452)] SharedImageStub: context already lost
+[WARNING: xproto_util.cc(51)] X error received: serial 447, error_code 9 (BadDrawable (invalid Pixmap or Window parameter)), request_code 14, minor_code 0 (X_GetGeometry)
 [ERROR: gl_surface_egl.cc(767)] EGL Driver message (Critical) eglMakeCurrent: Failed to make the GLX context current
 [ERROR: gl_surface_egl.cc(767)] EGL Driver message (Critical) eglMakeCurrent: Failed to make the GLX context current
 [ERROR: skia_output_surface_impl_on_gpu.cc(1204)] Failed to make current.
 [WARNING: xproto_util.cc(51)] X error received: serial 465, error_code 170 (GLXBadWindow), request_code 151, minor_code 5 (X_GLXMakeCurrent)
 [WARNING: xproto_util.cc(51)] X error received: serial 466, error_code 162 (GLXBadContextTag), request_code 151, minor_code 5 (X_GLXMakeCurrent)
-[ERROR: gles2_cmd_decoder_passthrough.cc(1487)] GLES2DecoderPassthroughImpl: Context lost during MakeCurrent.
 [ERROR: skia_output_surface_impl_on_gpu.cc(1204)] Failed to make current.
-[ERROR: gl_surface_egl.cc(767)] EGL Driver message (Error) eglDestroyContext: Bad context.
-[ERROR: gl_context_egl.cc(260)] eglDestroyContext failed with error EGL_BAD_CONTEXT
-[WARNING: xproto_util.cc(51)] X error received: serial 543, error_code 3 (BadWindow (invalid Window parameter)), request_code 4, minor_code 0 (X_DestroyWindow)
+[ERROR: skia_output_surface_impl_on_gpu.cc(1204)] Failed to make current.
+ Created browser window for reuse: 0xca00007
+[WARNING: xproto_util.cc(51)] X error received: serial 539, error_code 3 (BadWindow (invalid Window parameter)), request_code 4, minor_code 0 (X_DestroyWindow)
+[ERROR: gl_surface_egl.cc(767)] EGL Driver message (Critical) eglMakeCurrent: Failed to make the GLX context current
+[ERROR: gl_surface_egl.cc(767)] EGL Driver message (Critical) eglMakeCurrent: Failed to make the GLX context current
+[ERROR: skia_output_surface_impl_on_gpu.cc(1204)] Failed to make current.
+[ERROR: skia_output_surface_impl_on_gpu.cc(1092)] Failed to make current during initialization.
+[WARNING: xproto_util.cc(51)] X error received: serial 525, error_code 162 (GLXBadContextTag), request_code 151, minor_code 5 (X_GLXMakeCurrent)
+[WARNING: xproto_util.cc(51)] X error received: serial 526, error_code 162 (GLXBadContextTag), request_code 151, minor_code 5 (X_GLXMakeCurrent)
+[ERROR: gl_surface_egl.cc(767)] EGL Driver message (Critical) eglMakeCurrent: Failed to make the GLX context current
 [WARNING: gpu_process_host.cc(1262)] The GPU process has crashed 1 time(s)
+[WARNING: gpu_process_host.cc(990)] Reinitialized the GPU process after a crash. The reported initialization time was 139 ms
 [Warning: mainLinux.cpp(475)] Launcher exited with code: 1
```

I've attached the console logs as well as the minecraft crash log.
[minecraft-launcher.log](https://github.com/username_1/MangoHud/files/6712202/minecraft-launcher.log)
[minecraft-launcher.mangohud.log](https://github.com/username_1/MangoHud/files/6712203/minecraft-launcher.mangohud.log)
[minecraft crash log](https://github.com/username_1/MangoHud/files/6712206/hs_err_pid3749106.log)
<issue_comment>username_2: Note: this only occurs with the _minecraft launcher_. I can run minecraft perfectly fine with multimc using `mangohud --dlsym` as a wrapper command.
<issue_comment>username_0: Same here, but with Technic Launcher. Have you tried disabling hardware acceleration on the launcher to see if it works again?
<issue_comment>username_2: I haven't tried it and I was too lazy to. Since I don't use the mc launcher ever (I didn't even have it installed), I only used it to help reproduce this.
<issue_comment>username_3: Workaround: disable hardware acceleration in launcher settings
<issue_comment>username_4: thanks, it works
<issue_comment>username_5: Didn't work for me but resetting my Java args did: https://bugs.mojang.com/browse/MC-228054?attachmentViewMode=list
<issue_start><issue_comment>Title: layout in IE10 seems broken
username_0: I am not sure if this is because I may be doing something wrong, but on [this](http://commerceservertraining.com/#!/) website I've built, the layout looks messed up in IE10 (it does not occur in other browsers).
<issue_comment>username_1: https://github.com/angular/material/issues/1276
<issue_comment>username_2: Please provide a CodePen or Plunkr that demonstrates this issue. Here are some starter demo templates that you can use/fork:
* CodePen template: http://codepen.io/team/AngularMaterial/pen/BNYppz
* Plunkr template: http://plnkr.co/edit/yP0gB9iTacIQb48thud8?p=preview
<issue_comment>username_0: @username_2 I am unsure if I can replicate what is happening here. But you can easily debug the live site without needing to create a Plunkr demo. There is nothing else happening that cannot be seen in the source of the website.<issue_closed>
<issue_start><issue_comment>Title: Feature/Functionality Requests: Legacy Judge QOL - Mercurial Item Entry and CS PDF Exporter
username_0: Feature/Function Request 1: Allow Mercurial Effects to be entered as items from a compendium. Benefit: This will benefit Legacy Judges in preparing games for players coming from other systems of play, and cut down prep time for the Wizard and Elf classes.

Feature/Function Request 2: Allow the Judge or Player to export a character onto the form-fillable PDF offered through Legacy rulings so that it can be distributed immediately after players have updated the sheet for the next Legacy game.<issue_closed>
<issue_comment>username_1: I like this idea and I'm not too happy with the current solution of merging 'As Coin's items into one of those items.
<issue_start><issue_comment>Title: Configure `git show` to work as `git diff` does
username_0: `git diff HEAD~1..HEAD` works as expected, but `git show` shows a standard line-by-line diff. Can `daff git tsv` configure `git show` to use daff?
<issue_comment>username_1: Hmm, looks like you can do `git show --ext-diff`, does that work for you? Not sure how to set that to be the default.<issue_closed>
<issue_comment>username_0: Yeah, that works for me. Thanks.

**.gitconfig**
```
[alias]
show-tsv = show --ext-diff
```
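For completeness, the same alias can be written from the command line instead of editing `.gitconfig` by hand. A sketch (shown against an explicit file so it is self-contained; inside a repo you would drop `--file`, or use `--global` for your user config):

```shell
# write the alias into a standalone config file
git config --file ./demo.gitconfig alias.show-tsv "show --ext-diff"

# read it back to confirm
git config --file ./demo.gitconfig alias.show-tsv
```

After that, `git show-tsv HEAD` behaves like `git show --ext-diff HEAD`.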
<issue_start><issue_comment>Title: HPCC-13006 DOCS:Standardize right-click
username_0: FIX HPCC-13006 DOCS: Standardize right-click

Throughout the documentation, right-click should appear the same way in all docs.

@username_2 please review
<issue_comment>username_1: You may not care (and I do not care), but you signed this commit with your gmail address rather than the ln address you usually use.
<issue_comment>username_2: Looks good to merge
<issue_start><issue_comment>Title: Blazor - Virtualize - virtualized table scrolling incorrect behaviour when cells have a border
username_0: To reproduce, create a table and add a border to all cells. If you try to scroll down, the scroll box moves much slower than the mouse cursor. The workaround is to remove 'border-style: solid' and add 'border-bottom-style: solid; border-left-style: solid; border-right-style: solid;'.

Test application: [VirtualizeBug-CellBorder.zip](https://github.com/dotnet/aspnetcore/files/5942041/VirtualizeBug-CellBorder.zip)

Incorrect scrolling behaviour:
![StartScrollDown](https://user-images.githubusercontent.com/10242002/107191123-00b51d80-69ec-11eb-95f3-06b02ac8a370.png)
![ScrollDownError](https://user-images.githubusercontent.com/10242002/107191130-03177780-69ec-11eb-8ec1-c47fbedd1f35.png)
<issue_comment>username_1: @username_2 FYI
<issue_comment>username_2: The scrollbar-out-of-sync-with-mouse behaviour is a Chromium bug (doesn't repro in Firefox). This has been logged as a bug in Edge, so I'll close this as external.<issue_closed>
<issue_comment>username_0: And where exactly is this bug reported? I've checked https://bugs.chromium.org/ and searching 'scrollbar out of sync' finds nothing relevant. I've also checked Edge Canary 90.0.792.0 and the problem is not fixed.
<issue_comment>username_2: It's reported on the internal Edge bug tracker. It's up to the maintainers there whether to fix it directly in Chromium or whether to convert it into an upstream bug on chromium.org.
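Concretely, the workaround username_0 describes amounts to styling cells like this (a sketch; the `td, th` selector is illustrative and not taken from the sample project, and note that the workaround deliberately omits `border-top-style`):

```css
/* instead of this, which triggers the scrolling bug in Chromium-based browsers: */
/* td, th { border-style: solid; } */

td, th {
  border-bottom-style: solid;
  border-left-style: solid;
  border-right-style: solid;
}
```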
<issue_start><issue_comment>Title: Does this library support expandable StaggeredGridLayout?
username_0: <issue_comment>username_1: @username_0 It will work, however it will probably not display in any special way. Children will be shown right after the parent. So, for example, it will not make children show up on a new row away from parents, etc. StaggeredGridLayoutManager/GridLayoutManager is not yet in our sights for future enhancements<issue_closed>
<issue_start><issue_comment>Title: Do you know how to watch a model without the $watch function?
username_0: Actually, I want to know whether io.select has any callbacks for tracking model changes.
<issue_comment>username_1: You can use `newItemFn` and `removeItemFn` callbacks. It's useful if your items are stored in a DB. Otherwise, use Angular methods: `watch`, `ng-change`, etc.<issue_closed>
<issue_start><issue_comment>Title: [TRAFODION-28] Part 3 of changes to update license text
username_0: This check-in updates the license text in the remaining subdirectories of the core directory. Still to do are a few files in the core directory itself (I forgot these), and the other non-core directories in incubator-trafodion.
<issue_comment>username_1: Thanks! The change has been merged
<issue_start><issue_comment>Title: Timestamps instead of relative time username_0: Just an idea, but it's more useful to me to see the actual date and time when something occurred. The home feed simplifying it to "2 hours ago" might as well say "two episodes of House ago"-- I'm an adult, I want the real time! Am I alone on this one? <issue_comment>username_1: I don't care about the exact time, I like the '2 hours ago', because it gives me a rough estimation. The actual date and time would require me to think a lot more. <issue_comment>username_0: I think if you want super minimal, that makes sense. For me, my day is pretty connected to what happens on GitHub, so having a reference to when things occur is important. Removing it would be like not seeing when you sent or received emails. I like to connect what I was doing in real life with when something happened on GitHub life, and as I use the clock for real-life, being able to link them in the same language (time) is crucial. <issue_comment>username_2: I don't think I've ever looked at the time I've received an email. That's the beauty of asynchronous communication, the time really doesn't matter.<issue_closed> <issue_comment>username_0: You're too hippy for me, man. I still live in the world where clocks matter. No arguments required, just different philosophies :) I'll shut this down! <issue_comment>username_2: ⏰💥🦄✨😬 <issue_comment>username_3: Just my 2 ¢: I really like what GitHub is doing with timestamps at the moment. They show relative timestamps for recent events and absolute timestamps for older events. On hover you see the absolute timestamp (from the title attribute) and they show you it's in your timezone. They do this with JavaScript as far as I know. They also update recent relative timestamps dynamically on the page (from time to time). 
Using relative timestamps, they show you how long ago something was at a glance, without making you think about whether this is in your timezone (like you need to do in many other systems), and at the same time provide you with the absolute timestamp. From my point of view - kudos to GitHub.
<issue_comment>username_4: Give me the timestamp . I find hiding that data is as brain damaged as anything MS has ever done .
<issue_comment>username_3: Showing absolute timestamps from `<time>` elements is something which could be implemented as a separate extension, probably even website-agnostic, or one supporting multiple websites that do what GitHub does. We're just saying we understand why GitHub does it like they do and we wouldn't change it. Anyone is free to create a project for doing that :-) Cheers! 🍻
<issue_comment>username_5: question: WHERE exactly is the timestamp shown on hover? I've spent 20 minutes trying to track down what time of day "3 days ago" something was merged and am coming up empty. I agree with the simplicity of relative time, but if you can't dig in and get the actual time, you're not able to trace events.
<issue_comment>username_4: Just making the display a simple sortable directory table would be more useful . I think what exists is too cute at the expense of function . Something I would expect from MS .
<issue_comment>username_6: Everywhere, for me. This is the last-modified date of each file on the repo root: <img width="205" src="https://user-images.githubusercontent.com/1402241/42400398-adb12f80-8171-11e8-8899-35806f821ed6.png"> It should work the same way on all relative dates.
<issue_comment>username_5: thanks, @username_6 I see it now! I think I just wasn't being patient enough earlier, as it takes a bit before it shows up, or my network is a bit slow. As long as the timestamp is findable, it's all good.
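username_3's separate-extension idea is mostly a matter of reading the machine-readable `datetime` attribute that GitHub puts on its date elements and formatting it as an absolute time. A rough sketch (the element selector and the output format are my own assumptions, not from any existing extension):

```javascript
// Format an ISO-8601 datetime attribute as an absolute UTC timestamp.
function toAbsolute(datetimeAttr) {
  const d = new Date(datetimeAttr);
  return d.toISOString().replace("T", " ").slice(0, 16) + " UTC";
}

// In a browser userscript one would then do something like:
//   for (const el of document.querySelectorAll("relative-time, time[datetime]")) {
//     el.textContent = toAbsolute(el.getAttribute("datetime"));
//   }

console.log(toAbsolute("2018-11-08T09:30:00Z")); // → 2018-11-08 09:30 UTC
```

Formatting in UTC sidesteps the timezone ambiguity mentioned above; a real extension would likely use the viewer's local timezone instead.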
<issue_comment>username_7: Recently I started seeing GitHub Issues list with absolute timestamps (`updated 08 Nov 2018`) instead of relative (`2 hours ago`) which I definitely prefer. I checked on another browser without Refined GitHub and they're still showing relative as always... Is Refined GitHub doing this? If so, how can I turn it off? Thanks! <issue_comment>username_8: Don't believe it should be, I see relative with or without RG enabled. <issue_comment>username_7: I am seeing other strange behaviours on Github, things like dropdowns that won't open, or AJAX parts of the page that won't finish loading. I have turned off Adblock and Flashblock for the GitHub site, so that shouldn't be interfering. I am using an outdated Firefox (v56), I can't update for reasons not worth explaining here. Maybe some Github update made it less compatible with my browser, I don't know. Anyway thanks for your reply. <issue_comment>username_9: Had an issue where we found failures around a certain time. It was so hard to find the PR that might have caused it amongst all the "a day ago", and "6 hours ago" etc. It would be nice to simply tell the time. <issue_comment>username_4: I'm finding GitHub so _far_ more complicated than it needs to be that I'm avoiding using it . <issue_comment>username_10: There is an UserScript for this available from the wonderful Mottie: [GitHub Static Time](https://github.com/Mottie/GitHub-userscripts/wiki/GitHub-static-time). <issue_comment>username_11: This relative dating is a nuisance. If you're looking at an old project and trying to figure out which bits were last worked on, it's a really hassle, because EVERYTHING says "5 years ago". Argh. <issue_comment>username_4: I've quit using it : GitHub . I've just got to simple open directories on my http://CoSy.com , archiving stale `.zip`s . _**CoSy**_ , the language is far , far more powerful to simply manipulate the raw directories and files . 
The entire differencing function I see as the heart of Git is just a few lines in the evolved APL level of _**CoSy**_ . And the bright non-programmer market I consider my core target never heard of GitHub and would have zero patience for it . <issue_comment>username_12: I agree. Relative dating is a nuisance. I am trying to find my last successful build after the commit. It is not quick. <issue_comment>username_13: For those who find it a nuisance, there's already a solution. Not everything needs to be part of Refined GitHub.
<issue_start><issue_comment>Title: Notion-Enhancer compatibility
username_0: Opening this PR again to check whether the build is working and see if this is mergeable, thanks @username_1
<issue_comment>username_0: Unfortunately this is still not working on my end - adding `--asar` prevents the `build/dist` directory from being created. I'm still not sure what is wrong since the `electron-packager` command does not produce any error output and exits with a `0` status code.
<issue_comment>username_0: @username_2 hey! would you be alright with me trying to directly integrate notion-enhancer into this build script? it seems counterproductive to generate an `asar` when your scripts only unpack it anyway, and baking in support would allow me to distribute `deb` packages which are pre-patched :)
<issue_comment>username_1: I was going to say, with `notion-enhancer` now supporting unpacked builds my MR probably isn't necessary anymore! Would be quite cool to pass a flag that would build a `notion-enhancer`-ified deb.
<issue_comment>username_0: ah, that answers [my question here](https://github.com/username_0/notion-deb-builder/issues/17#issuecomment-739262092)!
<issue_comment>username_2: hey @username_0 that's a cool idea. with how the enhancer works at the moment it might be a little fiddly, because to make it easy to add new mods it works based off a fancy loader that looks up and links downloaded enhancer files by an absolute path somewhere rather than directly inserting the changes into the app folder. i'm just about to start a rework on the module system though that will bring some pretty major changes to things like file location & format, revamp the theming variables, and significantly decrease the amount of files needed for the enhancer to work. that would make pre-enhanced debs a lot more possible. i'd say hold off for now but definitely give this a try when v0.11.0 comes out.
in theory it would then just need: copying the app folder to make an `app.bak`, appending a single `require()` statement to every js file in the `app` folder, and adding in an `enhancer` folder at the same level as the `app` folder. so i don't forget about this i've made an issue in the enhancer repo for this too: https://github.com/notion-enhancer/notion-enhancer/issues/321 <issue_comment>username_0: thanks! i've been looking into it a bit already before your message but sounds like things will be easier after that release :)
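The three steps username_2 outlines (an `app.bak` copy, an appended `require()`, and a sibling `enhancer` folder) could be sketched in shell like this. This is only an illustration: the `loader.js` path is a placeholder rather than the enhancer's real file layout, and the `app` folder here is a stand-in created for the demo:

```shell
# stand-in "app" folder so the sketch is self-contained
mkdir -p app
printf 'console.log("notion");\n' > app/renderer.js

# 1. keep a pristine copy of the app folder
cp -r app app.bak

# 2. append a single require() statement to every js file in the app folder
find app -name '*.js' | while read -r f; do
  printf '\nrequire("../enhancer/loader.js");\n' >> "$f"
done

# 3. add an enhancer folder at the same level as the app folder
mkdir -p enhancer
```

Undoing the patch would then just be removing `app` and renaming `app.bak` back.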
<issue_start><issue_comment>Title: get pdf as base64-encoded string?
username_0: I've been happily using prince/princely in a Rails 4 app to render PDFs which can be downloaded in the browser, using a `format.pdf` block in a controller—all as described in the README. Now I'm looking to construct a PDF in the same way (using a Rails view template), but I want to do the work in a model, not a controller, and I don't want to write the pdf as a file to disk. Instead, I want to store the resulting PDF as a base64-encoded string. Can you provide any pointers about how I can do this? Thanks so much!
<issue_comment>username_1: You can render a PDF from a string with `Princely::Pdf.new.pdf_from_string(string)`. Now you'd just have to find a way to get the template emitted. One way would be to build up a controller and use that directly, eg: https://www.amberbit.com/blog/2011/12/27/render-views-and-partials-outside-controllers-in-rails-3/ (nb: that's for Rails 3, so no guarantees on how accurate it still is). Another way is to use ERB directly (or Haml::Engine if you're using haml). Example: `ERB.new(File.read(Rails.root.join('app', 'views', 'whatever', 'show.html.erb'))).result(binding)`<issue_closed>
<issue_comment>username_0: Thanks, @username_1. I managed to build up a controller with most of what I needed as follows. Then, as you said, the pdf part was easy:
```
controller = ApplicationController.new
controller.request = ActionDispatch::Request.new(foo: "/foo/bar")
html_string = controller.render_to_string(
  template: "/path/to/view/template.html.haml",
  layout: "pdf",
  locals: { :@local1 => local1, :@local2 => local2 }
)
prince = Princely.new
prince.pdf_from_string(html_string)
```
The two gotchas that are accounted for above:

1) The Request.new argument needs to be a hash (I am literally using foo: "/foo/bar"). Otherwise, at least on Rails 4.1, actionpack-4.1.1/lib/action_dispatch/http/parameters.rb will break in the #path_parameters method if your view invokes a link helper.
2) The syntax for supplying local variables is easy to get wrong. Credit: http://makandracards.com/makandra/17751-render-a-view-from-a-model-in-rails for this fix.

All of this said, I then hit another snag: some of my views have CanCan-based permissions code, so it seems I need to inject a current_user into the request as well. At this point I'm realizing that I'm basically recreating the entire controller—newing up instance variables for the view, setting a current_user, etc.—and so I think I'm going to abandon this approach, and do something like this instead:
```
session = ActionDispatch::Integration::Session.new(Rails.application)
session.get(path_to_pdf_resource)
encoded_pdf_string = Base64.encode64(session.response.body)
```
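The two building blocks in this thread (rendering a template to a string, then base64-encoding the result) can be exercised outside Rails. A minimal sketch: the template and variable names are illustrative, and `result_with_hash` is a plain-Ruby (2.5+) convenience in place of the `binding` approach shown above.

```ruby
require "erb"
require "base64"

# Render a template string with a local variable.
html = ERB.new("<h1>Report for <%= name %></h1>").result_with_hash(name: "ACME")

# strict_encode64 avoids the newlines that plain encode64 inserts every 60
# characters, which is usually what you want when storing the string in a
# database column.
encoded = Base64.strict_encode64(html)

puts html                              # → <h1>Report for ACME</h1>
puts Base64.decode64(encoded) == html  # → true
```

The same `strict_encode64`/`decode64` round trip applies unchanged to the binary PDF bytes returned by `pdf_from_string`.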
<issue_start><issue_comment>Title: mach-glfw doesn't build with latest Zig master username_0: Latest Zig master has several fixes for macOS framework linking which means some very awkward workaround behavior we had in place to avoid some linker issues is now fixed, and `master` no longer builds: ``` warning(link): unable to resolve dependency /System/Library/Frameworks/ApplicationServices.framework/Versions/A/ApplicationServices warning(link): unable to resolve dependency /System/Library/Frameworks/Foundation.framework/Versions/C/Foundation error(link): undefined reference to symbol '_CGSetDisplayTransferByTable' error(link): first referenced in '/Users/username_0/Desktop/hexops/mach/glfw/zig-cache/o/92047481a1a97318e7889499d8a2924f/libglfw.a(/Users/username_0/Desktop/hexops/mach/glfw/zig-cache/o/c41a8e5da0f2f5ed783557aa4cfa1679/cocoa_monitor.o)' error(link): undefined reference to symbol '_CGReleaseDisplayFadeReservation' error(link): first referenced in '/Users/username_0/Desktop/hexops/mach/glfw/zig-cache/o/92047481a1a97318e7889499d8a2924f/libglfw.a(/Users/username_0/Desktop/hexops/mach/glfw/zig-cache/o/c41a8e5da0f2f5ed783557aa4cfa1679/cocoa_monitor.o)' error(link): undefined reference to symbol '_CGOpenGLDisplayMaskToDisplayID' error(link): first referenced in '/Users/username_0/Desktop/hexops/mach/glfw/zig-cache/o/92047481a1a97318e7889499d8a2924f/libglfw.a(/Users/username_0/Desktop/hexops/mach/glfw/zig-cache/o/c41a8e5da0f2f5ed783557aa4cfa1679/cocoa_monitor.o)' error(link): undefined reference to symbol '_CGGetOnlineDisplayList' error(link): first referenced in '/Users/username_0/Desktop/hexops/mach/glfw/zig-cache/o/92047481a1a97318e7889499d8a2924f/libglfw.a(/Users/username_0/Desktop/hexops/mach/glfw/zig-cache/o/c41a8e5da0f2f5ed783557aa4cfa1679/cocoa_monitor.o)' error(link): undefined reference to symbol '_CGGetDisplayTransferByTable' error(link): first referenced in 
'/Users/username_0/Desktop/hexops/mach/glfw/zig-cache/o/92047481a1a97318e7889499d8a2924f/libglfw.a(/Users/username_0/Desktop/hexops/mach/glfw/zig-cache/o/c41a8e5da0f2f5ed783557aa4cfa1679/cocoa_monitor.o)'
error(link): undefined reference to symbol '_CGDisplayVendorNumber'
error(link): first referenced in '/Users/username_0/Desktop/hexops/mach/glfw/zig-cache/o/92047481a1a97318e7889499d8a2924f/libglfw.a(/Users/username_0/Desktop/hexops/mach/glfw/zig-cache/o/c41a8e5da0f2f5ed783557aa4cfa1679/cocoa_monitor.o)'
error(link): undefined reference to symbol '_CGDisplayUnitNumber'
error(link): first referenced in '/Users/username_0/Desktop/hexops/mach/glfw/zig-cache/o/92047481a1a97318e7889499d8a2924f/libglfw.a(/Users/username_0/Desktop/hexops/mach/glfw/zig-cache/o/c41a8e5da0f2f5ed783557aa4cfa1679/cocoa_monitor.o)'
error(link): undefined reference to symbol '_CGDisplaySetDisplayMode'
error(link): first referenced in '/Users/username_0/Desktop/hexops/mach/glfw/zig-cache/o/92047481a1a97318e7889499d8a2924f/libglfw.a(/Users/username_0/Desktop/hexops/mach/glfw/zig-cache/o/c41a8e5da0f2f5ed783557aa4cfa1679/cocoa_monitor.o)'
error(link): undefined reference to symbol '_CGDisplayScreenSize'
error(link): first referenced in '/Users/username_0/Desktop/hexops/mach/glfw/zig-cache/o/92047481a1a97318e7889499d8a2924f/libglfw.a(/Users/username_0/Desktop/hexops/mach/glfw/zig-cache/o/c41a8e5da0f2f5ed783557aa4cfa1679/cocoa_monitor.o)'
```

We just need to properly link the frameworks now, but doing so will require anyone using mach-glfw to be up-to-date with latest Zig `master`:

<img width="796" alt="image" src="https://user-images.githubusercontent.com/3173176/143627145-181dc778-8e42-4dd4-9983-f5bded8a7018.png"><issue_closed>
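The missing symbols are all `_CG*` display calls, so "properly linking the frameworks" means adding them in `build.zig` for macOS targets. A sketch under stated assumptions: `step` stands for the libglfw build step, the framework set is inferred from the linker warnings and errors above, and the exact API shape depends on the Zig version in use.

```zig
// sketch for build.zig, macOS targets only
step.linkFramework("Foundation");
step.linkFramework("ApplicationServices");
step.linkFramework("CoreGraphics"); // provides the _CGDisplay*/_CGGet* symbols
step.linkFramework("Cocoa");
step.linkFramework("IOKit");
```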
<issue_start><issue_comment>Title: Large repo push http : consumption of ram username_0: I tried to push a large repository via HTTP. Everything goes well, but once the transfer finishes, RAM consumption becomes significant (in Gogs). It's a shame, since Gogs is so light in other cases. This may be a technical limitation caused by the protocol, but I think it is possible to optimize. <issue_comment>username_1: https://github.com/gogits/gogs/blob/master/routers/repo/http.go#L362 `input, err = ioutil.ReadAll(reqBody)` is **the** line that makes Gogs use a lot of RAM when pushing/pulling over HTTP. Not sure why that block is required either, to be honest @username_2 ? <issue_comment>username_2: @username_1 good point! Maybe you could try to use a streaming reader and see if it works? <issue_comment>username_1: I can try :) <issue_comment>username_1: This will be a much larger task than I anticipated... Involving goroutines and channels :cry: <issue_comment>username_0: Excited to see movement here :) <issue_comment>username_1: @username_2 is there a reason for not using the `post-receive` hook instead of the callback that requires that line? It seems like it would be way easier to implement it correctly that way :smiley: <issue_comment>username_2: @username_1 no worries, I'll take a deep look. Not sure why it matters with the `post-receive` hook, though? <issue_comment>username_2: Clean up the code for `http.go` a bit: https://github.com/gogits/gogs/commit/c041273dd334fc7a45ca6cb67ea2c2b492c4a5ed Still working on the `input, err = ioutil.ReadAll(reqBody)` thing. <issue_comment>username_1: The issue is that the callback reads refname, oldsum and newsum from the HTTP request. This is why it needs to store it (unless you do magic), unless you use the `post-receive` hook, which would give us this automagically :) Also, I'm still working on this :laughing: <issue_comment>username_2: @username_1 oops, hope it does not break your code too much....
:sweat: <issue_comment>username_1: @username_2 some breakage, easy to fix though :smiley: and I need to refactor anyhow :laughing: <issue_comment>username_3: I took a look and am wondering why ReadAll is needed at all. The code ultimately creates a Reader out of the buffer, so why not use `reqBody` directly as the Reader? It is originally an `io.ReadCloser`...<issue_closed> <issue_comment>username_2: This is fixed by #3748. <issue_comment>username_2: Just implemented Git hook delegation, so this problem can ultimately be solved by the `post-receive` hook. Reopening as a reminder for myself. <issue_comment>username_2: This problem should be completely solved by https://github.com/gogits/gogs/commit/d521e716dd59617dbbb637a3e8028bf4a5c6f849.<issue_closed>
<issue_start><issue_comment>Title: Yosemite Beta 4 bug with formatting username_0: I am experiencing an issue with the JSFormat package which adds a line break after each word/symbol. This happened right after the Yosemite Beta 4 upgrade. Upgrading the package/atom, reinstalling, etc had no effect. The author of the JSFormat as well as myself think it has something to do with Yosemite itself which may affect Atom as a whole so I am posting it here. Please refer to this issue for the original post: https://github.com/jdc0589/jsformat-atom/issues/31#issuecomment-57574982<issue_closed> <issue_comment>username_1: Based on the discussion in https://github.com/jdc0589/jsformat-atom/issues/31, this doesn't sound like an issue in Atom. Also, I tried installing JSFormat on Yosemite and couldn't reproduce the issue.
<issue_start><issue_comment>Title: Property with Null value and having a Nullable type (Integer) with custom Converter causes NullReferenceError username_0: **Describe the bug** After searching the documentation and not finding much discussion on Nullable objects, I assumed the following would work. However, I seem to be getting a NullPointerException. Say I have the following (simplified) Entity and Enum:

```java
@Entity
public class DBStation {
    @Id
    public long id;

    @NameInDb("inf_laststatus")
    @Convert(converter = RecoveryStatusEnum_Converter.class, dbType = Integer.class)
    public RecoveryStatusEnum LastStatus;

    public static class RecoveryStatusEnum_Converter implements PropertyConverter<RecoveryStatusEnum, Integer> {
        @Override
        public Integer convertToDatabaseValue(RecoveryStatusEnum entityProperty) {
            if (entityProperty == null || entityProperty == RecoveryStatusEnum.Unknown)
                return null;
            else
                return entityProperty.id;
        }
    }
}
```

Note that in this case, it is not important what is in the Enumeration's `.id` property, as the first `if` in `convertToDatabaseValue` is returning true. So my code is returning a null value. When I go to put a record in with `LastStatus` null, I get the error: `java.lang.NullPointerException: Attempt to invoke virtual method 'int java.lang.Integer.intValue()' on a null object reference` in the following auto-generated code:

```java
/**
 * Puts an object into its box.
 *
 * @return The ID of the object within its box.
 */
@Override
public final long put(DBStation entity) {
    [...]
    long __assignedId = collect004000(cursor, entity.id, PUT_FLAG_FIRST | PUT_FLAG_COMPLETE,
            __id1, __id1 != 0 ? LastStatusConverter.convertToDatabaseValue(LastStatus) : 0,
            0, 0, 0, 0, 0, 0);
```

The error points to the line with the converter. However, it is NOT POINTING to my code - I am returning a null. So is this expected? If not, fantastic. (I guess..) If it IS, I didn't see mention in the docs about this causing an error. (In fact, it's mentioned that the converter **should return** a null. :) Using:

```
objectboxVersion = '2.7.1'
```

Haven't yet tried the v3 alpha. <issue_comment>username_1: Thanks for reporting this! Nullable types are supported. https://docs.objectbox.io/advanced/custom-types Try returning `0` instead of `null` in `convertToDatabaseValue`. Then the database should give you a `null` back in `convertToEntityProperty`. I'm not sure yet whether this is just a documentation issue (e.g. change the docs to mention `convertToDatabaseValue` should never return null for number-based properties) or a bug (`convertToDatabaseValue` returning null should be handled). My hunch is that this is an oversight, so a bug. <issue_comment>username_1: Actually this should work. I tested this and for me the generated Cursor code has an additional check:

```java
int __id1 = oneEnum != null ? __ID_oneEnum : 0;
```

This guards the null value. This may actually be related to the `NameInDb` annotation, I'll have a look. <issue_comment>username_1: Nope. Did you try to clean and rebuild? It's possible the generated code is outdated. <issue_comment>username_0: I didn't do a clean and rebuild, but I *DID* do a Build->Rebuild, which I believe just calls a 'clean' and 'rebuild'. However, give me an hour or so, and I will do that just to make sure. <issue_comment>username_0: Okay, I believe I found the issue. And it looks like it has to do with returning a '0'. For some reason (a different bug in my code I need to track down), one of the values I'm inserting has that Enum value set to an Enum whose integer id is "0" (`SomeEnumType.Unknown`). In my converter, I am returning a null IF the entityProperty is null, OR this `Unknown` value. The reason I did this was because, as far as the DB is concerned, I don't care that it's unknown. However, I do differentiate between having an unset Enum and the enum set to `Unknown`. Now I have a couple other Enums where I actually *DO* return a 0.
I will need to test those, and may need to change my 0's to -1's. <issue_comment>username_1: OK, so this is not an ObjectBox issue. Feel free to share an example if you do find an issue. <issue_comment>username_0: I will spend next week seeing if I can make a small project that reproduces this. I believe it's more of an issue with expectations, and with changes during production and existing data. Here's what I narrowed down: At some point, my Converter was NOT sending `null` for that `Unknown` enumeration type. It was sending `0`. (Zero.) Later, when I started having this issue, I had changed the converter to NOT return `0` and to return a `null`. The code was failing in the above-mentioned section on records that were inserted INTO the database with the old converter, and then getting re-saved (some fields updated, but NOT the enumeration one) with the new converter. Once I uninstalled the beta app from my phone (which cleared out the database), and then pushed the beta with the correct, changed converter? No issues. So it DOES raise a question - during active development, while I COULD uninstall and start from scratch, that adds a step that I would like to avoid, but I understand if it can't be avoided. <issue_comment>username_2:

```kotlin
interface IIntEnum { val id: Int }

abstract class IntEnumConverter<T : IIntEnum>(private val enumValues: Array<T>) : PropertyConverter<T, Int> {
    override fun convertToDatabaseValue(entityProperty: T?) = when (entityProperty) {
        null -> -1
        else -> entityProperty.id
    }

    override fun convertToEntityProperty(databaseValue: Int) = when (databaseValue) {
        -1 -> null
        else -> enumValues.first { it.id == databaseValue }
    }
}

class MessageStatusConverter : IntEnumConverter<MessageStatus>(MessageStatus.values())
class MessageTypeConverter : IntEnumConverter<MessageType>(MessageType.values())

@Entity
data class DbChatMessage(
    @Id(assignable = true) var id: Long = 0,
    @Index val conversationId: Long = 0,
    @Convert(converter = MessageStatusConverter::class, dbType = Int::class)
    val status: MessageStatus = MessageStatus.SENT,
    @Convert(converter = MessageTypeConverter::class, dbType = Int::class)
    val messageType: MessageType = MessageType.TEXT
)
```

I had this problem after adding a second property (messageType) which used PropertyConverter<T, Int>. Reinstalling the app / clearing the cache solved it.<issue_closed> <issue_comment>username_1: Closing this issue due to inactivity. :zzz: Feel free to comment with more details or submit a new issue.
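username_1's suggested workaround (return a stored sentinel instead of `null`) and username_0's observation about `0` colliding with a real enum id can be combined into a converter like the one below. This is a sketch: the `PropertyConverter` interface is inlined as a minimal stand-in so it compiles without ObjectBox on the classpath (the real one is `io.objectbox.converter.PropertyConverter`), and `-1` is assumed to lie outside the enum's real id range.

```java
public class ConverterSketch {
    // Minimal stand-in for io.objectbox.converter.PropertyConverter.
    interface PropertyConverter<E, D> {
        D convertToDatabaseValue(E entityProperty);
        E convertToEntityProperty(D databaseValue);
    }

    enum RecoveryStatus {
        UNKNOWN(0), OK(1), FAILED(2);
        final int id;
        RecoveryStatus(int id) { this.id = id; }
    }

    // Map "no value" to a sentinel the database can store instead of
    // returning null; -1 is assumed to be outside the real id range,
    // so it cannot collide with UNKNOWN's id of 0.
    static class RecoveryStatusConverter implements PropertyConverter<RecoveryStatus, Integer> {
        static final int NO_VALUE = -1;

        @Override
        public Integer convertToDatabaseValue(RecoveryStatus e) {
            return (e == null || e == RecoveryStatus.UNKNOWN) ? NO_VALUE : e.id;
        }

        @Override
        public RecoveryStatus convertToEntityProperty(Integer db) {
            if (db == null || db == NO_VALUE) return null;
            for (RecoveryStatus s : RecoveryStatus.values())
                if (s.id == db) return s;
            return null;
        }
    }

    public static void main(String[] args) {
        RecoveryStatusConverter c = new RecoveryStatusConverter();
        System.out.println(c.convertToDatabaseValue(null));              // -1
        System.out.println(c.convertToDatabaseValue(RecoveryStatus.OK)); // 1
        System.out.println(c.convertToEntityProperty(-1));               // null
        System.out.println(c.convertToEntityProperty(2));                // FAILED
    }
}
```

As the thread notes, changing the sentinel on an app that already has rows stored with the old convention requires a data migration (or a reinstall during development).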
<issue_start><issue_comment>Title: The message "Your login page is now here: [...] Bookmark this page!" appears on every page username_0: Hi, after installing the plugin and updating the URL, the "ribbon" saying: Your login page is now here: [...]. Bookmark this page! appears on every page. <issue_comment>username_1: Hm. Cannot reproduce. It should only appear on the page 'options-permalink.php'... <issue_comment>username_2: I've got the exact same problem: every WordPress back-end page has this annoying 'Your login page is now here: Bookmark this page!' message. No way to close it, no way to get rid of it. Really stupid bug, kinda irritated with it. <issue_comment>username_3: I also cannot reproduce this; I only get the one notice to bookmark the new login URL. I was just installing WordPress on a new domain, saw this in my e-mail, and decided to test it right away, so a couple of minutes ago I installed rename-wp-login to check. In the hundreds of times I have done this in the past, I only ever remember getting one notification to bookmark, and in this test I likewise got only one. After I activated the plugin it went to the Permalinks structure page, and at the top it had a small notice which read: Your login page is now here http://example.com/login/ Bookmark this page! I selected a name for the login and clicked save. Back on the dashboard, the reminder to bookmark was not there. To make sure, I logged out, even cleared the browser cache, and logged back in at the new URL, and there was no notice at all. So I too cannot reproduce it. I saved screenshots of each step. <issue_comment>username_2: I've given up and installed a different login page renaming plugin. But I'm on A2 hosting using WordPress 4.4.2. <issue_comment>username_1: Not sure how this is possible as the following needs to be true: `$pagenow === 'options-permalink.php' && isset( $_GET['settings-updated'] )`.
What are those on other pages then? <issue_comment>username_0: username_1, yes this is really weird - I thought I was somewhat cached, but if you have a conditional that makes sure we're on the permalink page, I don't see why the message would appear on other pages. Moreover the fact that any other messages appear once excludes any caching. It's the only ribbon that stays on all pages. <issue_comment>username_4: The ribbon is on every page in my WP Admin Console. Was this issue ever resolved? What was the fix aside from installing a different plugin? <issue_comment>username_4: I removed Rename plugin, cleared all cache W3 Total Cache, reinstalled the plugin with same issue. I even tried deactivating the cache plugin, but same result. I did notice the plugin has not been tested with my version of WP 4.7.4. Not sure if that matters. I'm on A2 Hosting using their optimized WP pack. Anyway, not a big deal. I just thought I might find a fix for it. Thanks!! <issue_comment>username_0: Same here. This could have to do with their 'turbocache' Can you try to disable integration with A2 and see if that works any better? <issue_comment>username_4: Deactivating A2 Optimized WP plugin removes the banner with W3 Total Cache still Active. I assume it's one of the A2 plugin cache settings. <issue_comment>username_4: I tried disabling one by one the cache options of the A2 Optimized plugin (Page, DB, Object, Browser), but banner still displays. It does disappear when the plugin is deactivated so at least I know its that. <issue_comment>username_4: Done. I'll post their response here <issue_comment>username_4: I have not tried it yet, but here is the response from A2 Hosting: This issue is generally caused by a failure in W3 Total Cache. First, using FTP or File Manager, disable W3 Total Cache by deleting the wp-content > plugins > w3-total-cache directory, as well as the associated files in wp-content (advanced-cache.php, object-cache.php, db.php, and w3-config). 
You should then be able to log into the WordPress admin area and reinstall W3TC using the A2 Optimized for WordPress plugin. This issue should now be resolved. <issue_comment>username_5: Is this problem happening for anyone who is NOT using A2 hosting? It seems like their caching process is not working properly, as every person here who has the problem seems to be using that hosting provider. <issue_comment>username_5: 2 months later, no more information provided. Closing the issue until more information is given.<issue_closed>
<issue_start><issue_comment>Title: Disable select/ focus (annoying blue lines) username_0: Can you somehow get rid of these blue lines? I tried on the div

```css
-webkit-user-select: none;
-moz-user-select: -moz-none;
-ms-user-select: none;
user-select: none;
```

But that seems not to work. Thanks! <img width="670" alt="screenshot 2015-10-05 07 21 57" src="https://cloud.githubusercontent.com/assets/8103258/10273532/3f0a5d20-6b32-11e5-87fa-2b6ae9f7dc96.png"> <issue_comment>username_1: try adding

```css
outline: 0;
```

on the focus state <issue_comment>username_0: Wow, such an easy fix. I can't believe I didn't figure this out. Thanks! <issue_comment>username_2: Answered support question... Closing<issue_closed> <issue_comment>username_3:

```scss
div.slick-slide {
  *:focus,
  &:focus {
    outline: 0;
  }
}
```
<issue_start><issue_comment>Title: Failure on testing process username_0: Hi, first of all, thanks for this great project .. second :-( I compiled everything with SharpDevelop against .NET 4.5 successfully, but once I ran the tests I had too many failure notices (sqlce, access, sqlite) <issue_comment>username_1: How can I reproduce the problem? <issue_comment>username_2: @username_1, I think this issue could be resolved with the creation of a `contributing.md` file that details getting up and running with basic testing. New contributors currently have to look around quite a bit for the different config settings that need to be changed to get the tests running in their environment. <issue_comment>username_3: @username_4, is it still an issue? <issue_comment>username_4: To be honest I have not used this project since I made that comment. If I remember correctly my issue stemmed from not having an easy way to just start running tests. I would suggest correcting that if it's still an issue, or making a guide on how to get tests running. <issue_comment>username_5: Yes, a guide is needed <issue_comment>username_5: About the culture issues: most of them are fixed for now. There are still problems with Informix; I am keeping them in mind <issue_comment>username_6: Informix culture issues will be fixed with changes from #686. Only issues caused by #730 will remain.<issue_closed>
<issue_start><issue_comment>Title: cellStyle username_0: http://jsfiddle.net/wenyi/e3nk137y/9/ works! But when I put it in my project, it does not call the cell style function. I put an alert in the function, which is not being called. I am using the formatter function, which is being called correctly, but cell-style does not work. I tried it in both 1.0.6 and 1.6.0 and in all browsers. Any clues?<issue_closed> <issue_comment>username_0: Update: it now works with 1.6.0. I am not sure of the exact reason why it was not working yesterday; maybe a restart of the PC solved it. Thanks. <issue_comment>username_1: Have fixed this issue: https://github.com/username_1/bootstrap-table/issues/221
<issue_start><issue_comment>Title: New style suggestion username_0: Hello! Are you still improving the library? Could this be improved in your prebuilt styles? Or is improving it in your plans? ![Capture](https://user-images.githubusercontent.com/48206386/103692198-7f133f80-4fa8-11eb-9f68-6a5fea766b0e.JPG)
<issue_start><issue_comment>Title: Spike: can we use S3 versions rather than manual versions? username_0: ### Enabler <!-- A description of the enabler that covers what needs to be done why it needs to be done. It should be understandable by all members of the team --> So that we can use standard AWS functionality and make it simpler and possibly cheaper, we want to investigate if we can use S3 versions to maintain dataset versions #### Acceptance Criteria <!-- Requirements to accept this enabler as completed --> - [ ] define use cases / user stories that we want solve - [ ] do some design/research to meet use cases using S3 versions - [ ] indicate which use cases can be solved and which can't #### Additional context <!-- Add any other context here --> #### Tasks <!-- Tasks needed to complete this enabler --> - [ ] ... - [ ] ... #### Definition of Ready - [ ] This story is __ready__ to work on, according to the [team's definition](https://confluence.linz.govt.nz/pages/viewpage.action?pageId=87930423) #### Definition of Done - [ ] This story is __done__, according to the [team's definition](https://confluence.linz.govt.nz/pages/viewpage.action?pageId=87930423) <!-- Please add one or more of these labels: 'spike', 'refactor', 'architecture', 'infrastructure', 'compliance' --><issue_closed>
<issue_start><issue_comment>Title: Use a deque instead of list in Python cursors username_0: We currently use `pop(0)` to pop the first element of a list from a Python cursor. This operation is O(n), and I have a test case where it becomes the bottleneck. (see e.g. http://stackoverflow.com/questions/1296511/efficiency-of-using-a-python-list-as-a-queue ) Changing the list to a deque and using `popleft()` appears to solve the issue. <issue_comment>username_0: @sontek, @coffeemug FYI: after addressing the deserialization slowness, this came up as the next bottleneck in the Python driver while looking into #4569. I'm working on a patch. <issue_comment>username_0: A fix is in code review 3198.<issue_closed>
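The pattern behind the fix is small; here is a minimal sketch (the variable names are illustrative, not the driver's actual code):

```python
from collections import deque

# list.pop(0) shifts every remaining element, so draining n buffered
# responses costs O(n^2) overall; deque.popleft() is O(1) per pop.
responses = deque()
responses.extend(range(5))  # the driver appends batches as they arrive

drained = []
while responses:
    drained.append(responses.popleft())

print(drained)  # -> [0, 1, 2, 3, 4]
```

`deque` supports `append()` and `extend()` with the same signatures as `list`, so the swap is close to a drop-in replacement for queue-like access; only indexed access into the middle gets slower.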
<issue_start><issue_comment>Title: groupby in searchAndReadObject() username_0: OpenERP 7 supports groupby. It would be great to have the groupby option in searchAndReadObject(). Thanks, Adam <issue_comment>username_1: @username_0 I don't think it's possible. First, the OpenERP search method has no parameter for group by, even in v8; see https://www.odoo.com/documentation/8.0/reference/orm.html#openerp.models.Model.search Group by is really a client option. You could, however, sort your search and then, with Java, apply some filters or loop over your data<issue_closed> <issue_comment>username_1: Hi, I confirmed that it's a client function, so it could not be implemented in the API. You'll have to recode the behaviour on the result set you get
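username_1's client-side suggestion can be sketched with Java streams. The row shape below is illustrative, standing in for the records returned by searchAndReadObject(), not the API's actual types:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupBySketch {
    // Client-side "group by": fetch the rows first, then bucket them by a
    // field, since the server-side search has no group-by parameter.
    static Map<Object, List<Map<String, Object>>> groupBy(
            List<Map<String, Object>> rows, String field) {
        return rows.stream().collect(Collectors.groupingBy(r -> r.get(field)));
    }

    public static void main(String[] args) {
        // Stand-ins for searchAndReadObject() results; field names are made up.
        List<Map<String, Object>> rows = List.of(
                Map.of("id", 1, "state", "draft"),
                Map.of("id", 2, "state", "done"),
                Map.of("id", 3, "state", "draft"));

        Map<Object, List<Map<String, Object>>> byState = groupBy(rows, "state");
        System.out.println(byState.get("draft").size()); // 2
        System.out.println(byState.get("done").size());  // 1
    }
}
```

This does mean every matching row is transferred before grouping, so for aggregate-only queries it is heavier than a true server-side group by would be.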
<issue_start><issue_comment>Title: Stop war in Ukraine username_0: While Ukraine is under missile attacks GitHub could be used by Russians to develop apps and platforms aiming to destabilize Ukrainian web resources. Please, prevent these actions and don't stay on the same side with invaders! All information about war can be found at: https://war.ukraine.ua/ We urge you to close GitHub for Russia and its developers! We value your support and we are in need for your actions!
<issue_start><issue_comment>Title: At larger screen size, the map button is always open username_0: The map in the corner is always open and there's no button, but on smaller screens its a button. I'd expect to have a button. ![image](https://user-images.githubusercontent.com/6268375/78215658-036aec00-7486-11ea-81ec-ca4dab380bb5.png) <issue_comment>username_1: I think this is works-as-intended. PS HIIIIII SUSAN<issue_closed>
<issue_start><issue_comment>Title: ES6 migration branch username_0: Referring to: https://github.com/fs/backbone-base/issues/79 Also includes: * JSHint support * e2e task improvements <issue_comment>username_1: :zap: :+1: :zap: <issue_comment>username_2: it seems like we have ES5 js and ES6 js sharing the same extension `.js` - don't we have to name the ES6 ones `.js.es6`? <issue_comment>username_0: ES6 still allows us to use familiar ES5 syntax, therefore there is no need to separate them. By that reasoning, if we used some new ES7 features we would also need to create files with a `.js.es7` extension, which sounds bad. The second reason: once browsers fully support ES6, we can simply turn off Babel and, without any extra steps, start using the original js files. <issue_comment>username_2: :+1: <issue_comment>username_0: Answered above
<issue_start><issue_comment>Title: Suggestion for all Pololu arduino libraries username_0: Hi, I have a suggestion to improve your Arduino libraries. The new Arduino IDE includes auto-updating of libraries, but for that it needs a file called "library.properties" inside the library folder. I think it would be good to add that file to all your libraries so that users can always get the newest versions automatically. I will attach a file that I made for the QTRSensors library, based on an Adafruit library; I'm not sure if it works….

```ini
name=QTRSensors
version=2.1.2
author=Pololu
maintainer=Pololu <www.pololu.com>
sentence=Driver for Pololu QTR-XX line Sensors
paragraph=Driver for Pololu QTR-XX line Sensors
category=Sensors
url=https://github.com/pololu/qtr-sensors-arduino
architectures=*
```<issue_closed> <issue_comment>username_1: @username_0 Thanks for your suggestion. We are currently working on getting all of our libraries into the Library Manager, and this one is now in the Library Manager!
<issue_start><issue_comment>Title: added Python3 ujson username_0: Regular Python 3 is a lot faster on this test than regular Python 2, and it is not fair to only use Python 2 with ujson. <issue_comment>username_0: Note that NumPy is not some weird library. It is the main one used for matrix multiplication in Python, and nobody does it by hand the C way. So it makes sense to use a library here if we are using libraries elsewhere (e.g. JSON). Even on http://julialang.org/ they use NumPy for Python matrix benchmarks.
<issue_start><issue_comment>Title: Fix OSX build username_0: I've skipped `math.sfu` since `math.random` cannot be deterministic, at least not across different OSes <issue_comment>username_1: thanks @username_0 This "random" functionality is still far from finished (it's just quickly implemented for a domcode raffler). I think we will change this to a more generic randomizer (an LCG) later on.
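The appeal of the LCG username_1 mentions is that it is plain integer arithmetic, so the same seed yields the same sequence on every OS, which is exactly the determinism `math.random` can't guarantee. A sketch in Python (the constants are the common Numerical Recipes choice, not necessarily what this project will use):

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: same seed gives the same sequence
    on every platform, unlike math.random."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m  # uniform float in [0, 1)

# Two generators seeded identically produce identical streams.
g1, g2 = lcg(42), lcg(42)
same = [next(g1) for _ in range(5)] == [next(g2) for _ in range(5)]
print(same)  # -> True
```

That cross-platform reproducibility is what makes tests like the skipped one feasible again: the expected values can be baked into the test once the generator is deterministic.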
<issue_start><issue_comment>Title: Invalid `Podfile` file: [!] Unsupported options `{:exclusive=>true}` for target `mySuperApp_Tests` username_0: # pod 'mySuperApp', :path => '../' # ------------------------------------------- Ace! you're ready to go! We will start you off by opening your project in Xcode open 'mySuperApp/Example/mySuperApp.xcworkspace' The file /Users/jan/SwiftProjects/cocoapods/mySuperApp/Example/mySuperApp.xcworkspace does not exist. To learn more about the template see `https://github.com/CocoaPods/pod-template.git`. To learn more about creating a new pod, see `http://guides.cocoapods.org/making/making-a-cocoapod`. MacBook-Pro:cocoapods jan$ ```<issue_closed> <issue_comment>username_1: Since you're using the CocoaPods 1.0 beta, you'll have to update the Podfile -- see https://github.com/CocoaPods/CocoaPods/issues/4706. <issue_comment>username_0: So I should change the content of``mySuperApp/Example/Podfile`` from this: ``` source 'https://github.com/CocoaPods/Specs.git' use_frameworks! target 'mySuperApp_Tests', :exclusive => true do pod 'mySuperApp', :path => '../' end ``` to this? ``` source 'https://github.com/CocoaPods/Specs.git' platform :ios, '9.0' inhibit_all_warnings! xcodeproj 'MyProject' use_frameworks! target 'mySuperApp_Tests' do pod 'mySuperApp', :path => '../' end ``` ### Why is the ``Podfile`` in the ``/Example`` folder? Is this normal? <issue_comment>username_1: Correct
<issue_start><issue_comment>Title: Pull request suggestion status testing username_0: <issue_comment>username_1: Docs Build status updates of commit _[724d72c](https://github.com/OPS-E2E-PPE/E2E_DocFxV3/commits/724d72c2a8cff689d58512ff11f47883a2356bf9)_: ### :white_check_mark: Validation status: passed File | Status | Preview URL | Details ---- | ------ | ----------- | ------- [E2E_DocsBranch_Dynamic/index.md](https://github.com/OPS-E2E-PPE/E2E_DocFxV3/blob/suggestionPrTest/E2E_DocsBranch_Dynamic/index.md) | :bulb:Suggestion | [View](https://ppe.docs.microsoft.com/en-us/E2E_DocFxV3/index?branch=pr-en-us-19202) | [Details](#user-content-b697ab4cea91c812c3a376adcfa781af-824b7208) <a id="b697ab4cea91c812c3a376adcfa781af-824b7208"></a> ### [E2E_DocsBranch_Dynamic/index.md](https://github.com/OPS-E2E-PPE/E2E_DocFxV3/blob/suggestionPrTest/E2E_DocsBranch_Dynamic/index.md) - **Line 0, Column 0**: **[Suggestion-[h1-missing](https://aka.ms/h1-missing)]** `````H1 is required. Use a single hash (#) followed by a space to create your top-level heading. NOTE: This Suggestion will become a Warning around July 2020.````` For more details, please refer to the [build report](https://opbuilduserstoragepubdev.blob.core.windows.net/report/2021%5C6%5C1%5C8b165afe-a866-8c48-e5c3-ead94440f27e%5CPullRequest%5C202106010348562407-19202%5Cworkflow_report.html?sv=2016-05-31&sr=b&sig=wzvHZqIQj1o44g%2Fp9RUmjHrBsYx%2B1IgankEkxFfUEGY%3D&st=2021-06-01T03%3A44%3A46Z&se=2021-07-02T03%3A49%3A46Z&sp=r). **Note:** Broken links written as relative paths are included in the above build report. For broken links written as absolute paths or external URLs, see the [broken link report](https://docs-portal-pubdev-wus.azurewebsites.net/#/repos/8b165afe-a866-8c48-e5c3-ead94440f27e?tabName=brokenlinks). 
For any questions, please:
- Try searching the docs.microsoft.com [contributor guides](https://review.docs.microsoft.com/en-us/help/?branch=master)
- Post your question in the [Docs support channel](https://teams.microsoft.com/l/channel/19%3a7ecffca1166a4a3986fed528cf0870ee%40thread.skype/General?groupId=de9ddba4-2574-4830-87ed-41668c07a1ca&tenantId=72f98bf-86f1-41af-91ab-2d7cd011db47)