<issue_start><issue_comment>Title: iot sdk c for ubuntu build fails username_0: Hi everyone; I did all the steps in "Set up Linux development environment", but when I run the "./build.sh" script I get this error, and therefore the samples cannot be compiled: Scanning dependencies of target iothub_test [ 39%] Building C object common/testtools/iothub_test/CMakeFiles/iothub_test.dir/src/iothubtest.c.o /home/bd/azure-iot-sdks/c/common/testtools/iothub_test/src/iothubtest.c:23:28: fatal error: proton/message.h: No such file or directory #include <proton/message.h> ^ compilation terminated. make[2]: *** [common/testtools/iothub_test/CMakeFiles/iothub_test.dir/src/iothubtest.c.o] Error 1 make[1]: *** [common/testtools/iothub_test/CMakeFiles/iothub_test.dir/all] Error 2 make: *** [all] Error 2<issue_closed> <issue_comment>username_0: I've solved the issue. Before running the build.sh script, you must install Proton on Ubuntu. I used the build_proton script in the build_all folder of the azure-iot-sdks main folder.
<issue_start><issue_comment>Title: Define a canary protocol username_0: [Context](http://en.wikipedia.org/wiki/Warrant_canary), plus rumors that the SEC are looking deeply into the Bitcoin 2.0 space. How about we use this git issue to define a 'canary protocol' for the Mastercoin Foundation? See also http://truecrypt.ch/ <issue_comment>username_1: Ok after many delays we're finally getting some proper seppuku foundations pulled together; shortly there'll be a re-map of seppukupledge.org, & we've got our threads in-forum collated in one spot: http://seppuku.cryptostorm.org Cheers, ~ pj
<issue_start><issue_comment>Title: What is the solution to avoid 404 errors with Angular 2, an SPA application & Core 1? username_0: I am trying to avoid 404 errors when I type a URL directly and let Angular 2 manage the routing. Do you have any solution? Thanks <issue_comment>username_1: See this project: _https://github.com/aspnet/JavaScriptServices_. It is made for SPAs and Angular 2<issue_closed> <issue_comment>username_2: This issue is being closed because it has not been updated in 3 months. We apologize if this causes any inconvenience. We ask that if you are still encountering this issue, please log a new issue with updated information and we will investigate.
<issue_start><issue_comment>Title: Request app exit instead of SIGTERM (darwin) username_0: WIP <issue_comment>username_0: cc @keybase/updater-hackers <issue_comment>username_0: OK this is ready to review <issue_comment>username_1: seems fine to me, can we get some go 👀 maybe @username_2 ? <issue_comment>username_2: LGTM except for one question <issue_comment>username_0: We do KILL because we're changing the Keybase.app to do `ctl stop` on SIGTERM, which stops the world (and on updates we want just the app to quit for restart). See https://keybase.atlassian.net/browse/DESKTOP-1552 for more info. There was a long discussion about this topic BTW and this was the consensus.
<issue_start><issue_comment>Title: Can't delete Dev Environment username_0: **Describe the bug** When I want to delete a dev environment, it shows me "Error: TypeError: error.toJSON is not a function" **To Reproduce** Steps to reproduce the behavior: 1. Go to 'Dev Environments' 2. Click on 'Create' 3. Click on 'Get Started' 4. Click on 'Locale Directory' 5. Click on 'Select' 6. Select an empty directory 7. Click on Continue 8. Window closed 9. You can't delete dev environments you just created **Expected behavior** Enter next setup step **Screenshots** ![image](https://user-images.githubusercontent.com/10193482/147634351-2c0279bd-38f0-4b7a-81c5-efbec15fab7f.png) **Desktop (please complete the following information):** - OS: macOS - Version 11.6.2 **Version of Docker Desktop:** ``` 4.3.2 (72729) ``` **Additional context** Add any other context about the problem here. <issue_comment>username_1: Hey @username_0, thanks for reporting the issue. While we fix it, please try removing the existing dev env from the `~/.docker/devenvironments/data.json` and delete the associated directories too.
<issue_start><issue_comment>Title: Issue when inserting space as the last character username_0: When inserting a final space, the space is dropped, and when manually entering text at the end, the entered character is inserted after the cursor. The following code reproduces the issue: ```javascript class MySpaceTestEditor extends React.Component { constructor(props) { super(props); this.state = { editorState: EditorState.createEmpty(decorator), }; } onChange = (editorState) => { this.setState({editorState}); }; onAdd = (value) => { let editorState = this.state.editorState; let contentState = editorState.getCurrentContent(); contentState = Modifier.replaceText( contentState, contentState.getSelectionAfter(), ', ', editorState.getCurrentInlineStyle(), null, ); editorState = EditorState.push(editorState, contentState, 'add-coma'); this.setState({editorState: editorState}); }; render() { const {editorState} = this.state; return ( <div> <Editor editorState={editorState} onChange={this.onChange} /> <input onClick={this.onAdd} type="button" value="Test" /> </div> ); } } ``` <issue_comment>username_0: The code works in Firefox, the issue is only visible in Chrome.<issue_closed> <issue_comment>username_0: My bad.. I was not including Draft.css.
<issue_start><issue_comment>Title: Multiple variants of LiveChat widget (for different sites) username_0: RocketChat should be able to work with multiple variants of LiveChat widget, each with its own set of parameters. If an organization runs multiple websites, a separate widget might be needed for every site, with different visual style, texts (such as LiveChat title) and behavior.
<issue_start><issue_comment>Title: Fully fledged docker image username_0: I've just pushed an example docker image resulting from these changes, and you can now do: ```bash docker run -it -p 8080:80 \ -v $(pwd)/pgdata:/var/lib/postgresql/9.4/main \ -v $(pwd)/logs:/var/log/supervisor \ -e SECRET_KEY_BASE=changeme \ openproject/community ``` Then: ```bash curl localhost:8080 ``` And get a fully working openproject install. Database files and logs are stored in mounted volumes, so that you don't lose your state if you restart the container. It is also very easy to link with an external postgres db/container if wanted (just pass `-e DATABASE_URL=...`). Additional configuration must be done with env variables, preferably stored in an `env-file`. This is still a work in progress, it lacks support for svn/git repos, but if you can try it out that'd be great! /cc @username_2 @oliverguenther <issue_comment>username_1: Hi Cyril, do we have documentation on this yet? Regards Niels <issue_comment>username_0: Hi Niels, I'm working on it, not sure where I should put it though. <issue_comment>username_1: Can*t you put into this repository? <issue_comment>username_0: I think I need to put it in the core repo, since the doc for the website is taken from there. The question was more where to put it in the current doc. <issue_comment>username_2: Sweet! I'll check it out. <issue_comment>username_0: I think I'll open a PR on core instead. There is nothing specific to community here. <issue_comment>username_1: -- Dipl. Kfm. Niels Lindenthal MBA Chief Executive Officer OpenProject GmbH Rosenthaler Str. 32 10178 Berlin E: n.username_1@openproject.com T: +49 30 288 777 04 M: +49 151 2266 2777 I: www.openproject.com Sitz OpenProject GmbH: Berlin, Amtsgericht Berlin-Charlottenburg, HRB 117935 Geschäftsführer: Niels Lindenthal UStID DE211309779 <issue_comment>username_0: Well, except for the `FROM` in the `Dockerfile.public`, so I'll leave that PR here for now. <issue_comment>username_0: BTW I'd love to know if this works on Windows. Does anyone has a Windows machine with docker installed at hand? <issue_comment>username_0: Nevermind, I thought docker was natively available for windows, but it still goes through an intermediary Linux VM, so yes that would work. <issue_comment>username_1: i can do that. Sent from my mobile. Please excuse my brevity. <issue_comment>username_0: I'll go ahead and merge this, so that we can get feedback from the community.
<issue_start><issue_comment>Title: Make "required" validation message configurable username_0: I find the current required=bool lacking because it doesn't allow me to customize its error message. It would be better if required were implemented as a validator itself. This is especially useful for providing semantic per-field required help or for translating the message. I've been looking at #76 too and this could be implemented in the same go. Also, it would stream line some code as currently you are handling required separately from everything else even though in reality it could be just another validator. Thoughts? <issue_comment>username_1: I find it clearer if there is a different required_error parameter instead of setting required to a string. What do you think? I think I can do the PR anyway. <issue_comment>username_2: @username_1 Both alternatives come with different tradeoffs, each adding complexity in different ways. I slightly prefer to just have just the `required` parameter (that can be a `bool` or a `str`) because it reduces the number of `Field` parameters and avoids having "dependent" parameters (e.g. what happens if a user passes `required_error` but sets `required=False`?). If you would like to send a PR adding this functionality, I would gladly review and merge it. <issue_comment>username_1: Setting required_error without setting required to True would just do that: setting the error message. It's exactly the same effect when you set a custom error message (for any other validator) and the validation never fails (for whatever reason). As the current code may expect required to be a bool, something like "if self.required" would fail for empty strings (though setting an empty error message does not have any sense). I think I can send the PR later today. <issue_comment>username_2: This can be changed to check `if self.required is True` if need be. <issue_comment>username_1: PR sent: https://github.com/marshmallow-code/marshmallow/pull/122
<issue_start><issue_comment>Title: Define icons for actions. username_0: This PR adds an <code>icon</code> attribute to <code>NotificationAction</code>, as proposed in #59. <issue_comment>username_1: Overall I think this change is good, a few comments. Two more points: - How does this work with #28? I'd be OK with updating `Notification.icon` and `NotificationAction.icon` together (since we'll need to accept both USVString and an array anyway), but I do think this raises the priority of #28, especially since you add a note further about platform-specific behaviour. - Please add yourself to the acknowledgements. <issue_comment>username_0: I think #28 can be addressed after getting action icons in. I agree that this PR helps motivate addressing it. <issue_comment>username_0: All done. <issue_comment>username_1: This looks good to me, also based on the additional data Michael provided in #59. Let's see if we can get some functional feedback from Mozilla. WDYT @username_2? <issue_comment>username_2: This seems reasonable to me. Still trying to find some additional folks to review new features, but this is not that big. <issue_comment>username_2: I'll let @username_3 have a look and then probably merge this tomorrow if I don't forget. Feel free to ping me. If you could rebase/squash that'd be appreciated. I'll try to add some Merge advice around the same time. <issue_comment>username_0: Oh the purpose is clear. But to me, "e.g." just expands to ", for example," with the commas included, as visually it looks similar enough with the periods. Anyway, I'm not the editor of this spec, so if consensus is to insist on the commas just let me know whether to add one before, after, or both. <issue_comment>username_2: Committed as https://github.com/whatwg/notifications/commit/362780b1e173d547955ac0b90c427a13bf16d530 with gratuitous use of commas. Thank you!
<issue_start><issue_comment>Title: Implement `offset` and `limit` based paging in search username_0: Closes #6062 Still up for comments on the backend. I decided to go with offset and limit based search and not with page cursors, as it doesn't seem to be popular here. ## Hey, I just made a Pull Request! <!-- Please describe what you added, and add a screenshot if possible. That makes it easier to understand the change so we can :shipit: faster. --> #### :heavy_check_mark: Checklist <!--- Please include the following in your Pull Request when applicable: --> - [x] A changeset describing the change and affected packages. ([more info](https://github.com/backstage/backstage/blob/master/CONTRIBUTING.md#creating-changesets)) - [ ] Added or updated documentation - [x] Tests for new functionality and regression tests for bug fixes - [ ] Screenshots attached (for UI changes) - [x] All your commits have a `Signed-off-by` line in the message. ([more info](https://github.com/backstage/backstage/blob/master/CONTRIBUTING.md#developer-certificate-of-origin)) <issue_comment>username_0: This one is ready now 🙂 <issue_comment>username_0: I think we all agree? 🤔 I would be open to suggest an implementation based on a `pageCursor` instead of offset and limit. It would have been my favourite choice, but the previous discussion leant towards having pages. Would not be mad if I had to make some bigger changes again 😁 I would make the `nextPageCursor` in the result optional so every implementation can choose whether it want to implement pagination or not. Then I would change the UI to either have a "load more" button, or to do that automatically (the latter could also be a future improvement). The `pageCursor` would be a string and the implementation would not make any assumptions about the content. On the implementation side, all current engines support offset and limit. I would just encode them together into a single string for the page cursor (e.g. Json + base64). Don't merge yet, I will rewrite it tomorrow 😉 <issue_comment>username_1: Ready for new review? <issue_comment>username_0: Not yet, but later today <issue_comment>username_0: Here we go: I updated to PR to use `pageCursors`. A search engine can optionally return a `nextPageCursor` and `previousPageCursor`. Translators can decide the page size. The UI has a previous and next button. I tried it first with a "load more" button, but that turned out to be a usability nightmare 😆 If I navigate from the results to an entity and then back to the results, it's hard to restore the previous result set with the right pages. Give me feedback if that goes in the right direction. If not, we can iterate or switch back to offset and limit 😆 <issue_comment>username_2: seems like this'll get in the release then!
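Since the thread above describes the cursor encoding only in prose (offset and limit packed into an opaque string via JSON + base64), here is a rough, language-neutral sketch of that idea, written in Python for brevity; the actual Backstage search backend is TypeScript and its real cursor format may differ:

```python
import base64
import json

def encode_page_cursor(offset: int, limit: int) -> str:
    # Pack the paging state into an opaque string so callers make no
    # assumptions about its contents.
    payload = json.dumps({"offset": offset, "limit": limit})
    return base64.urlsafe_b64encode(payload.encode()).decode()

def decode_page_cursor(cursor: str) -> dict:
    return json.loads(base64.urlsafe_b64decode(cursor.encode()).decode())

next_cursor = encode_page_cursor(offset=25, limit=25)
print(next_cursor)                       # opaque string handed back to the UI
print(decode_page_cursor(next_cursor))   # {'offset': 25, 'limit': 25}
```

Because the cursor is opaque, an engine that later switches to real database cursors can change the payload without touching the UI's previous/next buttons.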
<issue_start><issue_comment>Title: feat: KNeighborsRegression implemented username_0: First draft/version of KNeighborsRegression. There are still some rough edges: * `tfjs` cleanup code is missing in most places, i.e. memory may be leaky * `minkowskiDistance` and `weights` are not yet based on `tfjs` * missing documentation `kdTree` and `ballTree` should be fairly easy to add at a later point as a different implementation of a `Neighborhood` data structure. <issue_comment>username_1: Hey @username_0, that was an awesome phone call and your new PR is 🔥 . Here's the list we talked about on the call, and then we'll merge this in. Great work. Seriously good shit. - [ ] Remove generic Metric function or distance function - [ ] Move validation from constructor to fit function - [ ] Do batch calculation instead of tf.unstack - [ ] Use assert instead of throw new Error - [ ] Move weights class attribute up to the base class KNeighborsBase
<issue_start><issue_comment>Title: work on SentencePreprocessing username_0: Add word numbers as integer literals in order to get queries like ... ?dobj <conll:wordnumber> ?wordnumber_dobj . FILTER(?wordnumber_dobj<?wordnumber). ... working <issue_comment>username_0: This requires more work than expected, as the preprocessing also has to be changed, etc., because the RDF will have integer nodes. Therefore leave this aside for next year. More important is to increase the number of found sentences. As the official DBpedia endpoint only retrieves 9,999 triples, either use multiple queries with LIMIT and OFFSET to extract the necessary data (done) or, even better, extract the required data from the original .nt files without using the SPARQL endpoint (also done). Also it makes sense to have one set of entities for all languages, as even in the German Wikipedia names in other languages sometimes appear. Therefore I used the Spanish, German, Japanese and English labels to create more input data. E.g. for the property spouse the number of entity pairs increased from 9,999 to over 75,000 combinations. The next step is to generate the same data for the property namespace (for the infobox properties) and also, for both (ontology/property), add the labels from the Wikipedia anchor texts as additional experiments. Finally, adapt the SentencePreprocessing to the new input and generate new input sentences. <issue_comment>username_0: Interesting facts: Some resources do not have any label or additional information, e.g. http://dbpedia.org/resource/Warner_W._Henry In this case remove the http://dbpedia.org/resource/ part and replace _ with a space <issue_comment>username_0: Also very interesting is the fact that, for example, the relation http://dbpedia.org/ontology/spouse contains wrong resources, e.g.: http://dbpediaorg/resource/Balthasar_Erdmann (the dot in front of the org is missing) or http://dbpedia.org/resource/Balthasar_Erdmann (the resource just doesn't exist) -> Still try to extract sentences for those, see previous comment<issue_closed>
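To make the LIMIT/OFFSET workaround above concrete, here is a rough sketch of paging entity pairs out of the public DBpedia endpoint, which caps each response at 9,999 results; it assumes the SPARQLWrapper Python library and a simplified query, so the real preprocessing code may look quite different:

```python
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://dbpedia.org/sparql")
page_size = 9999
offset = 0
pairs = []

while True:
    endpoint.setQuery(f"""
        SELECT ?s ?o
        WHERE {{ ?s <http://dbpedia.org/ontology/spouse> ?o }}
        LIMIT {page_size} OFFSET {offset}
    """)
    endpoint.setReturnFormat(JSON)
    rows = endpoint.query().convert()["results"]["bindings"]
    if not rows:
        break  # no more pages
    pairs.extend((r["s"]["value"], r["o"]["value"]) for r in rows)
    offset += page_size

print(len(pairs), "entity pairs for dbo:spouse")
```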
<issue_start><issue_comment>Title: Does the --superheat option work? username_0: I'm working on revising lambda++ now, and in lesson 2 we should have 4 different solutions for free-variable-capture.lambda, when kompiled with option --superheat strict. In old unparsing, they used to be: ``` Solution 1: V:K --> a ~> (HOLE ((lambda x . (lambda y . x) y) z)) Solution 2: V:K --> y ~> closure ( .Map , x , lambda y . x ) HOLE ~> HOLE z ~> (a HOLE) Solution 3: V:K --> y ~> lambda x . (lambda y . x) HOLE ~> HOLE z ~> (a HOLE) Solution 4: V:K --> z ~> (lambda x . (lambda y . x) y) HOLE ~> (a HOLE) ``` However, I can now only see one solution, no matter what I try: ``` Solution 1: V:K --> z ~> #freezer1 ( ( lambda x . lambda y . x ) y ) ~> #freezer1 ( a ) ``` So, does the --superheat option work? It seems to work in imp++, where it finds the 5 solutions of div.imp, as it should. <issue_comment>username_1: It is a bit curious it works for div. @username_4, do you have any idea? The strategy-based heating/cooling, which can do superheat if given the right strategy, is only activated if importing the STRATEGY module, which is not currently the default. <issue_comment>username_0: Actually, I did not really check div.imp, I only looked at the config.xml files and the div.imp.out file, and assumed that @username_2 just ran ktest when she revised imp++ and was done with it: https://github.com/kframework/k/blob/3.6Base/k-distribution/tutorial/1_k/4_imp%2B%2B/lesson_3/tests/div.imp.out https://github.com/kframework/k/blob/3.6Base/k-distribution/tutorial/1_k/4_imp%2B%2B/lesson_3/tests/config.xml https://github.com/kframework/k/blob/3.6Base/k-distribution/tutorial/1_k/4_imp%2B%2B/tests/config.xml https://github.com/kframework/k/blob/3.6Base/k-distribution/tutorial/1_k/tests/config.xml I'll check if it actually works or not later when I get a chance, but probably you are right and we have some other kind of bug in how ktest checks the regexps in .out files. <issue_comment>username_2: div has been excluded from lesson 1, as later lessons use test config from previous lessons, so you don't see that directly in current lesson. When I worked on it, I couldn't get multiple results either, so left it for later. <issue_comment>username_3: At that time I worked on KORE-translatoin of lambda++, `--superheat` wan't supported, so `free-variable-capture` was excluded from lesson_2: https://github.com/kframework/k/blob/e6a13a83d395aee3bd330609d061b27821674fb4/k-distribution/tutorial/1_k/3_lambda%2B%2B/lesson_2/tests-kore/config.xml#L5-L8 As such, lesson_4 had several excludes as well: https://github.com/kframework/k/blob/e6a13a83d395aee3bd330609d061b27821674fb4/k-distribution/tutorial/1_k/3_lambda%2B%2B/lesson_4/tests-kore/config.xml#L16-L18 Regarding imp++, as @username_2 said, she excluded `div` from lesson_1 of imp++: https://github.com/kframework/k/blob/e6a13a83d395aee3bd330609d061b27821674fb4/k-distribution/tutorial/1_k/4_imp%2B%2B/lesson_1/tests/config.xml#L8 Also, lesson_6 has more excludes: https://github.com/kframework/k/blob/e6a13a83d395aee3bd330609d061b27821674fb4/k-distribution/tutorial/1_k/4_imp%2B%2B/lesson_6/tests/config.xml#L7<issue_closed> <issue_comment>username_4: Fixed in #2038 .
<issue_start><issue_comment>Title: Feature Request username_0: Hi, would it be possible to incorporate a search bar into the Navbar? Typically for the NavButtonBarController? ... And secondly, could you include a scroll factor so that when the user scrolls up, it shrinks in size rather than collapsing entirely? ... Great work and love it 👍 <issue_comment>username_1: @username_0 any update on this?
<issue_start><issue_comment>Title: [new release] mdx (1.11.0) username_0: Executable code blocks inside markdown files - Project page: <a href="https://github.com/realworldocaml/mdx">https://github.com/realworldocaml/mdx</a> ##### CHANGES: #### Added #### Changed - Use odoc-parser.0.9.0 (realworldocaml/mdx#333, @julow) #### Deprecated - Add a deprecation warning for toplevel blocks that are not terminated with `;;` (realworldocaml/mdx#342, @username_0) #### Fixed - Fix accidental redirect of stderr to stdout (realworldocaml/mdx#343, @username_0) - Remove trailing whitespaces that were added to indent empty lines (realworldocaml/mdx#341, @gpetiot) #### Removed #### Security <issue_comment>username_1: Thanks
<issue_start><issue_comment>Title: DON'T MERGE: Changing PMT_PaymentCreator to create a payment when the opportunity … username_0: …is updated from a null amount, regardless of whether the opportunity is open or closed/won. # Warning # Info # Issues Fixes #1714 <issue_comment>username_0: @mpusto here's the pull request for the payment amount update for closed/won opps. <issue_comment>username_0: **lurch: review apex visualforce <issue_comment>username_0: This is by design as we don't update anything for closed/won opportunities. Users should enter opportunities with products in an open stage, add line items, then close the opportunity.
<issue_start><issue_comment>Title: FreeBSD 13.0-RELEASE compatibility username_0: working on FreeBSD compatibility. Currently testing using 13.0-RELEASE-p6. ```sh uname -r 13.0-RELEASE-p6 ``` First issue was git2go. 13.0-RELEASE-p6 uses libgit2 version 1.3.0. ```sh pkg-config --modversion libgit2 1.3.0 ``` This corresponds to git2go V33 [per the documentation](https://github.com/libgit2/git2go#which-go-version-to-use). I updated the imports to `"github.com/libgit2/git2go/v33"`, updated the Makefile to use `GIT2GO_VERSION = v33.0.7`, and added `SHELL = /usr/local/bin/bash` to the top of the `Makefile`. Then, I encountered an issue with `scm/git.go` on line 301. git2go v27 -> v33 changed `type TreeWalkCallback` : `func(string, *TreeEntry) int` -> `func(string, *TreeEntry) error` Since the anonymous func just returns 0 anyways, I just changed the return type to `error` and the return to `nil` and called it a day. `go vet` ran successfully :+1: ```go 300 err := childTree.Walk( ~ 301 func(s string, entry *git.TreeEntry) error { 302 switch entry.Filemode { 303 case git.FilemodeTree: 304 // directory where file entry is located 305 path = filepath.ToSlash(entry.Name) 306 default: 307 files = append(files, filepath.Join(path, entry.Name)) 308 fileCnt++ 309 } ~ 310 return nil 311 }) ``` ```go @@ -298,7 +298,7 @@ func DiffParentCommit(childCommit *git.Commit) (CommitStats, error) { files := []string{} err := childTree.Walk( - func(s string, entry *git.TreeEntry) int { + func(s string, entry *git.TreeEntry) error { switch entry.Filemode { case git.FilemodeTree: // directory where file entry is located @@ -307,7 +307,7 @@ func DiffParentCommit(childCommit *git.Commit) (CommitStats, error) { files = append(files, filepath.Join(path, entry.Name)) fileCnt++ } - return 0 + return nil }) ``` Now it builds, but not with `--tags 'static'`. ```sh gmake build Mon Jan 31 04:35:36 2022 go build --tags 'static' -ldflags "-X main.Version=0.0.0-dev-019de99" -o bin/gtm # pkg-config --cflags --static -- /usr/home/aescaler/code/dev-tools/gtm/vendor/github.com/libgit2/git2go/v33/static-build/install/lib/pkgconfig/libgit2.pc Package vendor/github.com/libgit2/git2go/v33/static-build/install/lib/pkgconfig/libgit2.pc was not found in the pkg-config search path. Perhaps you should add the directory containing `/usr/home/aescaler/code/dev-tools/gtm/vendor/github.com/libgit2/git2go/v33/static-build/install/lib/pkgconfig/libgit2.pc.pc' to the PKG_CONFIG_PATH environment variable Package '/usr/home/aescaler/code/dev-tools/gtm/vendor/github.com/libgit2/git2go/v33/static-build/install/lib/pkgconfig/libgit2.pc', required by 'virtual:world', not found pkg-config: exit status 1 gmake: *** [Makefile:13: build] Error 2 ``` Looks like an issue with some libgit2 stuff so I checked out the [git2go readme](https://github.com/libgit2/git2go#main-branch-or-vendored-static-linking) and it has info on static builds. I changed the Makefile, which now only needs to use `make install-static`. ```sh ``` <issue_comment>username_0: I have the relevant changes included in #113. I have more to do to not break the existing supported platforms, but that's effectively the meat of what needs to be changed. <issue_comment>username_0: @mschenk42 can you provide a high level overview of how the `vendor` directory is generated? I notice you aren't using modules. I use them fairly religiously, so I have no clue how this is supposed to work.
<issue_start><issue_comment>Title: Fix commands for s390x custom jobs username_0: **Which issue(s) this PR fixes**:<br> This PR is to fix some of the commands of the custom jobs for s390x. <!-- Use `Fixes #<issue number>`, or `Fixes (paste link of issue)` to automatically close linked issue when the PR is merged. Uncomment and fill below if the PR does not close any issues. --> <!-- **What this PR does, why we need it**:<br> --> <!-- If there is any golang code in this PR please uncomment the `/lint` statement below to have Prow automatically lint it. --> <!-- /lint --> <issue_comment>username_1: /ok-to-test
<issue_start><issue_comment>Title: Escape filters missing username_0: At the line https://github.com/symfony2admingenerator/GeneratorBundle/blob/master/Resources/templates/CommonAdmin/EditTemplate/fieldsets.php.twig#L34 we print the formType in lowercase but we don't escape undesired characters, and we should. Take a look at the other templates for this particular situation too<issue_closed>
<issue_start><issue_comment>Title: fix ffw rule deletion username_0: Hi, I believe there's a bug in `execDelete` which prevents it from removing the filter-forward chain for the container id. With v1.0.9 after removing a pod with podman, the chains are still in my ruleset: ``` table ip _filter { chain _forward { type filter hook forward priority 0; policy accept; jump cni-ffw-8726fd92cb82f541940f26a jump cni-ffw-469fc695dc03346d8d7865c } chain cni-ffw-469fc695dc03346d8d7865c { oifname "cni-podman0" ip daddr 20.88.0.20 ct state established,related counter packets 0 bytes 0 accept iifname "cni-podman0" ip saddr 20.88.0.20 counter packets 0 bytes 0 accept iifname "cni-podman0" oifname "cni-podman0" counter packets 0 bytes 0 accept } chain cni-ffw-8726fd92cb82f541940f26a { oifname "cni-podman0" ip daddr 20.88.0.21 ct state established,related counter packets 0 bytes 0 accept iifname "cni-podman0" ip saddr 20.88.0.21 counter packets 0 bytes 0 accept iifname "cni-podman0" oifname "cni-podman0" counter packets 0 bytes 0 accept } } ``` but the rules under the nat table, postrouting chain are correctly removed. As far as I can tell, [this check](https://github.com/username_1/cni-plugins/blob/main/pkg/firewall/plugin.go#L351) is wrong, the `ffwChain` is in the filter table, not the nat table. This PR fixes this. I've verified on my server that it works correctly now, but let me know if I'm misunderstanding something <issue_comment>username_1: @username_0 , thank you for the contribution 👍 <issue_comment>username_0: :+1: thank you for the software ! <issue_comment>username_1: @username_0 , released v1.0.10
<issue_start><issue_comment>Title: v4.0.10 upgrade to v6.0.0 fail with : region split size 4194304 must >= region bucket size 100663296 username_0: ## Bug Report <!-- Thanks for your bug report! Don't worry if you can't fill out all the sections. --> ### What version of TiKV are you using? v6.0.0 ### What operating system and CPU are you using? <!-- If you're using Linux, you can run `cat /proc/cpuinfo` --> ### Steps to reproduce install v4.0.10 with : tikv: coprocessor.region-max-size: 16MB coprocessor.region-split-size: 4MB raftstore.region-split-check-diff: 1MB storage.block-cache.capacity: 1GB upgrade to v6.0.0 upgrade fail with : Error: init config failed: tikv1-peer:30160: executor.ssh.execute_failed: Failed to execute command over SSH for 'tidb@tikv1-peer:22' {ssh_stderr: invalid configuration: [components/raftstore/src/coprocessor/config.rs:111]: region split size 4194304 must >= region bucket size 100663296 , ssh_stdout: , ssh_command: export LANG=C; PATH=$PATH:/bin:/sbin:/usr/bin:/usr/sbin /tiup/deploy/tikv-30160/bin/tikv-server --config-check --config=/tiup/deploy/tikv-30160/conf/tikv.toml --pd "" --data-dir "/tiup/data/tikv-30160"}, cause: Process exited with status 1: check config failed ### What did you expect? upgrade success ### What did happened? upgrade fail <issue_comment>username_1: Fixed by https://github.com/tikv/tikv/pull/12237<issue_closed>
<issue_start><issue_comment>Title: Set the default domain correctly if it is set on the options of the FormBuilder username_0: | Q | A | ------------- | --- | Bug fix? | yes | New feature? | no | BC breaks? | no | Deprecations? | no | Tests pass? | yes | Fixed tickets | #235 | License | Apache2 ## Description Fixed setting the default domain correctly if it is set on the options of the formbuilder. <issue_comment>username_0: This broke the tests. I will try to push a fix.
<issue_start><issue_comment>Title: Question: Any plans for .NET 6? username_0: .NET 6 has been out since the end of last year. This is a long-term support (LTS) release that will be supported for three years. I do not know if there are any breaking changes concerning the IO, etc. As the unifying .NET 6 is supported only by Visual Studio 2022, a multi-targeted platform would be a versatile solution. Your thoughts on this? <issue_comment>username_1: Hi @username_0, these libraries can be used with .NET 6 projects. A PR for updating libraries to target **net6.0** TFM is welcome. <issue_comment>username_2: @username_1 you mean it's just a matter of updating the NuGet recipe? I'll do it then :)
<issue_start><issue_comment>Title: Add set_dht_storage to session API username_0: The idea is that you can set the custom storage constructor function only if the dht is not running. <issue_comment>username_0: Hi @username_1, getting an unrelated error in one test on Travis `***** test_ssl.cpp:276 "TEST_EQUAL_ERROR: peer_errors > 0: 1 expected: 0" *****` <issue_comment>username_1: thanks!
<issue_start><issue_comment>Title: Add info about performance in the README username_0: We can also make some benchmarks like this https://github.com/daj/android-orm-benchmark (not sure about their quality tho) <issue_comment>username_1: Very interesting lib! +1 for info about performance <issue_comment>username_0: Related to #489. <issue_comment>username_2: @username_0 in case you didn't know, @touchlab released an updated [android-orm-benchmark-updated repository](https://github.com/touchlab/android-orm-benchmark-updated) some time ago. I thought about adding StorIO to it as well, but this benchmark is purely for ORM libraries, so imho it shouldn't be there. But it didn't prevent me from making a fork of it: [android-db-libraries-benchmark](https://github.com/username_2/android-db-libraries-benchmark/tree/add_storio) My concern is that reading data seems to be fast, but writing is really slow. Any ideas? <issue_comment>username_0: Are you seeing slow writes with StorIO, or is it just hypothetical? Because we give you full control over SQL and transactions, so the only slowdown from us is a few allocations for abstractions like Query and Transaction notifications. <issue_comment>username_2: @username_0 It's slow in the benchmark: putting 2000 Users (3-field objects) and 20000 Messages (9-field objects), and I do it like that: ``` try { storIOSQLite.lowLevel().beginTransaction(); storIOSQLite.put() .objects(users) .prepare() .executeAsBlocking(); storIOSQLite.put() .objects(messages) .prepare() .executeAsBlocking(); storIOSQLite.lowLevel().setTransactionSuccessful(); } catch (Exception e){ e.printStackTrace(); } finally { storIOSQLite.lowLevel().endTransaction(); } ``` Is there anything in particular I'm doing wrong? <issue_comment>username_0: Can you please give a link to the code? I wasn't able to find StorIO in your benchmark yesterday (I'm on my phone, so that might be a reason), thanks! <issue_comment>username_2: @username_0 [Here](https://github.com/username_2/android-db-libraries-benchmark/tree/add_storio/ORM-Benchmark/src/main/java/com/littleinc/orm_benchmark/storio) you can find the whole StorIO package: User and Message represent entities, DataBaseHelper extends SQLiteOpenHelper, and StorIOExecutor executes the different actions (create table, insert, read and delete table) <issue_comment>username_0: @username_2 ah, it's on a separate branch! So, I see a few problems in your benchmark: `writeWholeData()`: 1. You use `LinkedList`, which gives significant degradation in reading compared to `ArrayList`, since you have to traverse through the heap each time you need to get a reference to a new element, especially at the scale you use it (`NUM_USER_INSERTS = 2000; NUM_MESSAGE_INSERTS = 20000;`). 2. You can pass all items (of different types) as a single list to `storIO.put().objects()`; this will decrease the number of operations StorIO does under the hood. 3. You use transactions, but you do it in a non-ideal way. By default `storIO.put().objects()` already uses transactions, so in your example you will have nested transactions, which are not required. In fact you can remove the manual transaction if you pass all items (of different types) to a single `storIO.put().objects()` call; it'll handle the transaction for you. `readWholeData()`: * I would exclude logging from the measured time. Otherwise looks interesting!
Pls share results after applying changes I mentioned :) <issue_comment>username_2: @username_0 I agree that logging shouldn't be part of measure action, but rest of executors do it in this way and I wanted to just add StorIO to existing benchmarks and be consistent across all of them without modifying old ones <issue_comment>username_2: @username_0 due to `writeWholeData` I've checked how does it work with changes which you recommended and it's 1.5% faster now, but it's still slow. I'll try to analyse what's going on under the hood and maybe it'll help me to understand what potential bottleneck is. <issue_comment>username_0: @username_2 how many times do you run the benchmark, is SQLite used as in-memory db or it stores info on disk, on which device/emulator do you run it? <issue_comment>username_2: @username_0 I found main cause of low `writeWholeData` performance - put operation handles "insert/update if item already exist" and it's a main advantage of StorIO: Put Get Delete instead of CRUD. I completely forgot about it. Just for the sake of benchmark, I've replaced generated `DefaultPutResolver` with custom one, which `performPut` method doesn't check that item already exists. Now results look fine: insert all of items with raw sql queries took 2428ms and storIO made it in 2836ms ![screen shot 2016-10-09 at 12 47 25](https://cloud.githubusercontent.com/assets/1184480/19220167/dcfb0a60-8e1e-11e6-95b6-81adae0e40ee.jpg) I didn't stop there. As you can see raw optimized sql can make it two time faster, because it uses precompiled SQLStatement. To apply this optimalization to StorIO I needed to apply new `LowLevel` implementation to `DefaultStorIOSQLite`. It allows to decrease `writeWholeData` operation to 1850ms. ![screen shot 2016-10-09 at 13 01 33](https://cloud.githubusercontent.com/assets/1184480/19220233/8afa4dd2-8e20-11e6-95ee-d018f5387b7d.jpg) I can imagine that further optimizations could be applied - e.g. the way I've applied prepared statement to StorIO can't be called 'nice' ;) Moreover I think it doesn't make sense to use brute force to fit StorIO into existing benchmark, which tests how fast are simple insert and get operations. In my opinion to check StorIO performance, benchmark should check 'insert or update' operation instead. Moreover would it possible to fit `SQLStatement` into StorIO somehow? <issue_comment>username_0: Yes, we can add this to `LowLevel` API! I think your work here is basically the asnwer to the "Performance" question of StorIO — by default it's not fastest DB wrapper, but we give users all high and low-level APIs to improve hot parts of their DB interactions, we're also open for PRs that improve performance without breaking StorIO for other users (though I'm not sure we've left anything except `insertOrReplace()`). @username_2 thank you for detailed investigation and feedback! <issue_comment>username_2: @username_0 just for a record: `SQLiteStatement` is ["not thread-safe"](https://developer.android.com/reference/android/database/sqlite/SQLiteStatement.html). So LowLevel should handle it somehow. Later I'll add here some concern which I have about api, which I faced when I tried to replace LowLevel implementation. <issue_comment>username_0: @username_2 ok, thanks! Until we have requests to add `SQLiteStatement` from users we won't do it to not bloat API with unused functions <issue_comment>username_1: @username_2 What do you think ```SQLiteStatement``` will affect the performance? Do you have any experiences? 
<issue_comment>username_3: @username_0 there is request for such api from @karlicoss <issue_comment>username_0: @username_2 if you have benchmark results with and without `SQLiteStatement` (related or not related to StorIO), I'd be happy to see them (with code of course!). Last time @StanisVS and I played with it I got impression that it doesn't actually add much perf, I guess, SQLite may cache queries on its own. <issue_comment>username_2: @username_0 to compare benchmark of raw sql query with and without `SQLiteStatement` you can take a look on an [origin orm benchmark](https://github.com/touchlab/android-orm-benchmark-updated), which I used to compare StorIO. Unoptimized raw sql queries can be found [here](https://github.com/touchlab/android-orm-benchmark-updated/tree/master/ORM-Benchmark/src/main/java/com/littleinc/orm_benchmark/sqlite) and optimized [here](https://github.com/touchlab/android-orm-benchmark-updated/tree/master/ORM-Benchmark/src/main/java/com/littleinc/orm_benchmark/sqliteoptimized). It's also available in [Google Play Store](https://play.google.com/store/apps/details?id=com.littleinc.orm_benchmark). Just run it and compare `RAW` results with `RAW_OPTIMIZED`. @username_1 However as I mentioned earlier this benchmark covers only simple scenario when 22000 inserts (no updates) are applied in single transaction and that's the reason why optimized raw sql benchmark is two time faster than unoptimized one - statement is precompiled and it's reused across all of inserts. I would say most of users won't need it because we won't get to much by using `SQLiteStatement` when there are just few thousands items inserted into db. Moreover, `SQLiteStatement` has few extra requirements - e.g. it can't be compiled until table is created etc. In my opinion, as I've noticed trying to apply `SQLiteStatement`, `DefaultStorIOSQLite` could be a little more flexible with replacing LowLevel implementation - I'll create a issue for it at some point just to discuss does it make sense to change this API in the first place. Btw I found whole Master thesis about [An Experimental Study of the Performance, Energy, and Programming Effort Trade-offs of Android Persistence Frameworks](https://vtechworks.lib.vt.edu/bitstream/handle/10919/72268/Pu_J_T_2016.pdf?sequence=1), which can be useful to decide to we need to use this `SQLiteStatement` in the first place. <issue_comment>username_2: So actually it make sense to have it for PutResolver - it could be lazily initialized, but of course we'll need to have a proper benchmark for it. Another option which I see would be to have `SQLiteStatement` next to raw sql - we could decide does it make sense to compile it based on number of put items. As far as I know default implementation of sorting algorithms apply this approach, because of there different performance due to size of data which need to be sorted. <issue_comment>username_2: @username_1 @username_0 just to let you know I reviewed `SQLite.insertWithOnConflict` method in Android source ([here](https://gist.github.com/username_2/54e6a829f57f1aeb1d0036388a1ee51f) you can find gist with it) and you can see there that by default `SQLiteStatement` is compiled for every insert operation anyway, so we'll definitely get some improvement in performance if we'll it do it only once in advance. 
Moreover, for for the record - `acquireReference` and `releaseReference` are used to synchronize `SQLiteDatabase` object and their implementation doesn't seem too complicated <issue_comment>username_0: @username_2 great info, thanks! @username_3 let's investigate how we can use `SQLiteStatement` in our Resolvers under the hood!
<issue_start><issue_comment>Title: Fix hello_world.py test username_0: <issue_comment>username_0: Fixed syntax errors but blocked by bug. Opened ticket: #158 <issue_comment>username_0: Running into issues with getting the follow-on to access files in the GlobalFileStore. Will need to talk with Benedict about intended functionality.<issue_closed>
<issue_start><issue_comment>Title: Neo4j and graph problems username_0: 1. I can see the node information in http://localhost:7474/browser/, but in Neo4j Desktop I can't see any node information; the database is completely empty. (screenshot failed to upload) 2. At http://localhost:7474/browser/, after I display the nodes, the graph becomes very messy and I can't use it normally; I want to know the reason. 3. (screenshot failed to upload) 4. When creating a project, the number I used cannot be stored in the database normally. However, I can find the number created by the system in localhost:7474/browser/ and use this number to search for the project <issue_comment>username_0: @username_1 I hope you can help me answer this. <issue_comment>username_1: Hi, 1. If I understand correctly, the graph is built correctly and the connection to the database works but when you access Neo4j desktop, you cannot see any nodes. The screenshot you sent shows that the database you are accessing is not pointing to the right port (:7474) and it points instead to :11005, which is a different database. 2. It seems that the project upload worked and you have your project with all nodes and edges created. The visualization of the entire project graph in the Neo4j environment may be as you say messy, but generally you will want to use this interface to query specific nodes, relationships or paths and not the full network, or apply specific graph algorithms to extract information from the graph. 3. I am not sure what the issue is. The project identifier is unique and if you try to create a project with the same identifier you will get an error. <issue_comment>username_0: Hello! Thanks for getting back to me! I've got a bigger problem! When I run Docker, everything looks OK (I can live with that). Creating a project works normally, but uploading the design files and protein data (proteomics) is a problem: I have uploaded the experimental design and clinical design, but when I upload the protein data and click on "generate report", I don't get any feedback and the page goes into endless waiting. I am not sure whether the design documents were uploaded successfully (the page has indeed loaded), but I cannot get the report result. Could you please help me answer this question? The Docker log is shown in Figure 1 below, and the webpage is shown in Figure 2 below ![593c3ad8c4c9783bf39ef652324eccb](https://user-images.githubusercontent.com/37625275/160414900-c4faaa78-4463-473d-9305-82b24eb72e23.png) ![990d5d45f5fa790b4346a9cab0f1f94](https://user-images.githubusercontent.com/37625275/160414939-92b8a588-3b8d-4d6d-acf1-7a5ac0e67eec.png)
<issue_start><issue_comment>Title: How to configure it properly! username_0: It always says I didn't configure it properly, please help me <issue_comment>username_1: Getting the same issue after replacing the variables. Please configure environment variables properly! <issue_comment>username_2: Be sure you set the environment variables correctly: Set IG_USERNAME to the username of the account you want to monitor. Example: ladygaga Set WEBHOOK_URL to the Discord webhook URL. To learn how, just Google: "how to create webhook discord". Set TIME_INTERVAL to the time in seconds between each check for a new post. Example: 1.5, 600 (default=600 = 10 minutes) How to set up environment variables: https://www.serverlab.ca/tutorials/linux/administration-linux/how-to-set-environment-variables-in-linux/<issue_closed>
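For readers hitting the same error, a minimal sketch of how a Python-based monitor might read these variables; only the variable names, the default interval, and the error text come from this thread — the project's actual startup code may differ:

```python
import os

IG_USERNAME = os.environ.get("IG_USERNAME")                    # e.g. "ladygaga"
WEBHOOK_URL = os.environ.get("WEBHOOK_URL")                    # Discord webhook URL
TIME_INTERVAL = float(os.environ.get("TIME_INTERVAL", "600"))  # seconds between checks

if not IG_USERNAME or not WEBHOOK_URL:
    raise SystemExit("Please configure environment variables properly!")

print(f"Monitoring @{IG_USERNAME} every {TIME_INTERVAL:.0f} seconds")
```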
<issue_start><issue_comment>Title: Expose pref for Chromium's "potentially annoying security features" switch username_0: Chromium exposes a CLI flag called `--enable-potentially-annoying-security-features`. In [`chrome_content_browser_client.cc`](https://source.chromium.org/chromium/chromium/src/+/main:chrome/browser/chrome_content_browser_client.cc;l=3391;drc=9d6354a00275098a32ac99f77b4a8d6027c5cddd;bpv=0;bpt=1), it looks like it's currently a shortcut for three other prefs: - Disable reading from canvas - Strict mixed content checking - Strict powerful feature restrictions (see #55). This could be a good candidate for an off-by-default pref to expose, or at least document somewhere. <issue_comment>username_0: I can see how disabling reading from the canvas might break a few sites (a rare occurrence), but the other two prefs might be desirable.
<issue_start><issue_comment>Title: Programmatically generating a PNG from a plot generates an empty image username_0: I'm using code along the lines of: plotly.Snapshot.toImage(dom_elt.id, {format: 'png'}).once('success', function(url) { callback(url); }); and I get back a URL with a base64-encoded PNG, but it appears to be empty space. It does have a size (e.g. 700 x 450). At the time I call toImage the plot is visible, so it's not being called too soon. Sample: data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAArwAAAHCCAYAAAANehpvAAAZD0lEQVR4Xu3WwQ0AMAwCsXb/oanUMU7OBpg8uNt2HAECBAgQIECAAIGowDV4o82KRYAAAQIECBAg…[base64 data truncated; it decodes to a blank 700 x 450 PNG] <issue_comment>username_1: Which graph are you trying to export? Would you mind sharing a full reproducible example? <issue_comment>username_0: https://jsfiddle.net/qcy8ta61/ <issue_comment>username_1: Here's a working example: https://jsfiddle.net/mew7uwa1/2/ <issue_comment>username_1: At the moment, [`Snapshot.toImage`](https://github.com/plotly/plotly.js/blob/e891bac60dfca4a72b0308cbf0fe26f1061daf9b/src/snapshot/toimage.js#L21) expects the graph DOM element and won't try to grab it from a string id like the top-level `Plotly` methods. We are [planning](https://github.com/plotly/plotly.js/issues/83) on bringing the `toImage` method to the top-level `Plotly` object where call signatures like yours :arrow_double_up: will be accepted.<issue_closed> <issue_comment>username_0: Thanks! <issue_comment>username_1: done in https://github.com/plotly/plotly.js/pull/446
<issue_start><issue_comment>username_0: /merge <issue_comment>username_1: :tada: This PR is included in version 3.7.5 :tada: The release is available on: - `npm package (@latest dist-tag)` - [GitHub release](https://github.com/sealsystems/node-eslint-config-es/releases/tag/3.7.5) Your **[semantic-release](https://github.com/semantic-release/semantic-release)** bot :package::rocket:
<issue_start><issue_comment>Title: Requesting scripts.js file username_0: Can I get an unbundled `script.js` file of [`scripts.min.js`](https://github.com/thiagorossener/jekflix-template/blob/master/assets/js/scripts.min.js)? <issue_comment>username_0: Got it [https://github.com/thiagorossener/jekflix-template/tree/master/src/js/main](https://github.com/thiagorossener/jekflix-template/tree/master/src/js/main)<issue_closed>
<issue_start><issue_comment>Title: cudaErrorMemoryAllocation out of memory username_0: **Describe the bug**
MemoryError: std::bad_alloc: out_of_memory: CUDA error at: /usr/local/miniconda3/envs/rapids-22.02/include/rmm/mr/device/cuda_memory_resource.hpp
Even reading an empty file reports the same error!

**Steps/Code to reproduce bug**
Follow this guide http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports to craft a minimal bug report. This helps us reproduce the issue you're having and resolve the issue more quickly.

**Expected behavior**
```python
df = cudf.Series([1,2,3,None,4])
```
should succeed without an out-of-memory error.

**Environment overview**
- Environment location: Cloud
- Method of cuDF install: conda

**Environment details:**
- I use the following command to create a conda virtual env:
```shell
conda create -n rapids-22.02 -c rapidsai -c nvidia -c conda-forge \
    rapids=22.02 python=3.8 cudatoolkit=11.4 dask-sql
```
Then I use the virtual env to create a Jupyter kernel. I hope someone can solve it.
<issue_start><issue_comment>Title: Why not latest ubuntu version? username_0: Why not using latest ubuntu version? It's 15.10 already <issue_comment>username_1: It's configurable via ENV after #11 merged. <issue_comment>username_2: @username_0 Phusion base image is based on Ubuntu LTS, which is usually considered a sensible choice. We want to control for the one moving target, i.e. swift, and it would suck to have to maintain it if Ubuntu broke for whatever reason. @username_1 I assume he's talking about the FROM statement. I have my reservations about the phusion base image, but tbh we can change it before we hit 1.0 and call it official. <issue_comment>username_3: The Ubuntu Version is configurable. Right now going with latest LTS release supported by the official Swift ubuntu builds.<issue_closed>
<issue_start><issue_comment>Title: Add batch insert support username_0: Will need extra logic to test for record presence prior to collection insert. This can probably be done with a sort(source) and then a front()/back() comparison to determine if the iterators cover a range of the target data set.
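A rough Python illustration of the range-check idea described above (`source`, `target`, and the key function are placeholders; the actual code presumably works with iterators and `front()`/`back()` rather than lists):

```python
def batch_overlaps_target(source, target, key=lambda r: r):
    """After sorting the incoming batch, compare its first and last keys
    against the target's key range to decide whether a per-record presence
    check is needed before inserting."""
    if not source or not target:
        return False
    batch = sorted(source, key=key)
    lo, hi = key(batch[0]), key(batch[-1])               # front()/back() equivalent
    t_lo, t_hi = key(min(target, key=key)), key(max(target, key=key))
    return not (hi < t_lo or lo > t_hi)                  # key ranges intersect

# Usage sketch: only run the expensive duplicate test when the ranges overlap.
# if batch_overlaps_target(new_records, collection, key=lambda r: r["id"]):
#     new_records = [r for r in new_records if r["id"] not in existing_ids]
```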
<issue_start><issue_comment>Title: Wrapped C# request and response in a new object to avoid being disposed #624 [Delivers #110445462] username_0: #624 <issue_comment>username_1: @username_0 As dicussed, we should always dispose of the response message if we are not returning it (so generated code should dispose of it). We should consider having OperationResponse and AzureOperationResponse simply contain the request and response, and just disposing the response from the OperationResponse in extension methods. We should probably dispose the response when throwing an exception after the request is made. <issue_comment>username_1: LGTM, modulo the change we discussed.
<issue_start><issue_comment>Title: Trying to alter settings with only pull access will return 404 username_0: We should probably return 403 as a user with pull access knows about the repo anyway. <issue_comment>username_1: Sounds fair. <issue_comment>username_1: @username_0 maybe we should move this to the team-teal repo? <issue_comment>username_1: Closing this as this should be opened on the team-teal repo<issue_closed>
<issue_start><issue_comment>Title: Improved initial empty state username_0: when I loaded the schema I expected to see the full graph (which in retrospect I'm assuming isn't the case since on a huge graphql things would blow up) but it was confusing to not see anything. my first thought was that there was an error. might be a nice touch to either: - have some "empty state" like "search for a node to get started" - default to always showing the "query" node so I can organically dig into the graph without knowing what the nodes are
<issue_start><issue_comment>Title: cache.action broken on GAE username_0: _From [richar..._at_gmail.com](https://code.google.com/u/112985171729633859941/) on August 28, 2014 09:21:35_

I used this to cache a controller method:

`@cache.action(time_expire=300, cache_model=cache.ram, session=False, vars=False, public=True)`

Works fine on standard web2py but under GAE:

Traceback (most recent call last):
  File "/home/hoju/web2py/gluon/restricted.py", line 220, in restricted
    exec ccode in environment
  File "/home/hoju/web2py/applications/places/controllers/default.py", line 103, in \<module>
  File "/home/hoju/web2py/gluon/globals.py", line 385, in \<lambda>
    self._caller = lambda f: f()
  File "/home/hoju/web2py/gluon/cache.py", line 510, in wrapped_f
    rtn = cache_model(cache_key, lambda : func(), time_expire=time_expire)
  File "/home/hoju/web2py/gluon/contrib/gae_memcache.py", line 40, in __call__
    self.client.set(key, (time.time(), value), time=time_expire)
  File "/home/hoju/google_appengine/google/appengine/api/memcache/__init__.py", line 763, in set
    namespace=namespace)
  File "/home/hoju/google_appengine/google/appengine/api/memcache/__init__.py", line 868, in _set_with_policy
    time, '', namespace)
  File "/home/hoju/google_appengine/google/appengine/api/memcache/__init__.py", line 947, in _set_multi_async_with_policy
    stored_value, flags = _validate_encode_value(value, self._do_pickle)
  File "/home/hoju/google_appengine/google/appengine/api/memcache/__init__.py", line 227, in _validate_encode_value
    stored_value = do_pickle(value)
  File "/home/hoju/google_appengine/google/appengine/api/memcache/__init__.py", line 392, in _do_pickle
    pickler.dump(value)
TypeError: _decorated() takes exactly 1 argument (0 given)

_Original issue: http://code.google.com/p/web2py/issues/detail?id=1969_
<issue_start><issue_comment>Title: Create a router attribute to represent the calculated base URI username_0: The base URI must be an exposed attribute that is used to calculate the URI of the router (if it match the basePath). The router will calculate the changes in it URI based on value of baseURI. It must start with empty string (empty value will skip any internal call) It enables when it have any string. By default the browser listeners will ignore the domain since it is useless in regular site environment. It follows the same events rules from rules, executing `enter` when it enter in a valid path, `match` when it matches to a valid path, `change` when it changes to a new value and `leave` when it leave the valid basePath. When executing `leave` it must ignore the rule attribute on all it internal rules and dispatch leave on every rule available that is in match state. The browser listeners must change this property value.<issue_closed>
<issue_start><issue_comment>Title: Arunesh username_0: ## Issue Type: [ ] Bug Report [ ] Feature Request [ ] Documentaion ## **Describe the bug** - A clear and concise description of what the bug is. ## **Possible solution** - Describe the solution you thought of. ## **Screenshots** - If applicable, add screenshots to help explain your problem.<issue_closed>
<issue_start><issue_comment>Title: Docs: PullIfNotPresent -> IfNotPresent; PullAlways -> Always username_0: PullIfNotPresent is the internal name in go code; IfNotPresent is the user-facing value that is specified in json/yaml. For docs, we should use the user-facing string. Similarly PullAlways -> Always <issue_comment>username_0: The current user facing docs (http://kubernetes.io/v1.0/docs/user-guide/images.html#updating-images) are incorrect and should be fixed. <issue_comment>username_1: Thanks much. LGTM <issue_comment>username_0: @brendandburns does the merge bot know about the release branch (or does it care)?
<issue_start><issue_comment>Title: Modularizing Material and FortAwesome under Core username_0: ## What: <!-- What changes are being made, why are these changes necessary? --> Modularizing Material <!-- To automatically close the corresponding issue use: Closes #<issue-number, e.g. Closes #47 --> Issue number: N/A <issue_comment>username_0: Modularizing Material and FortAwesome <issue_comment>username_1: Why would we want to import more material from start ? <issue_comment>username_0: When modularizing you can import Material in just one place. It is exported to the root. I have imported all material components into one module to make it easier to develop. You can delete the compontents not in use. I did the same thing for FortAwesome but for only Icons and features used. Check it out. It makes coding easier. <issue_comment>username_1: @username_0 yeah but that has a bad impact on tree-shaking / startup time right ? <issue_comment>username_0: You are right! Congrats for the great work!
<issue_start><issue_comment>Title: Expose public API + make available safely username_0: __Is your feature request related to a problem? Please describe__ Token simulation is a _huge_ extension. Because of that it has a certain set of issues: * Large "service footprint": It provides a massive number of services, non of which are namespaced. As a result it is easy to collide with existing (bpmn-js) services declared * No public API: It does not provide any reasonable public API (no docs on usage either). For proper consumption we shall add API that allows external components to interface with it. https://github.com/bpmn-io/bpmn-js-token-simulation/pull/112 is a start, however it only does a poor mans job. __Describe the solution you'd like__ * [ ] Investigate if auxiliary token-simulation services can be hidden from sight (using didi `__exports__` or proper namespacing * [ ] Investigate clear boundary / public API __Describe alternatives you've considered__ Keep what exists, even if it is a little messy. __Additional context__ <!-- Add any other context or screenshots about the feature request here. -->
<issue_start><issue_comment>Title: Change @typescript-eslint/return-await to in-try-catch username_0: I would recommend changing [@typescript-eslint/return-await](https://github.com/typescript-eslint/typescript-eslint/blob/main/packages/eslint-plugin/docs/rules/return-await.md) to error in [in-try-catch](https://github.com/typescript-eslint/typescript-eslint/blob/main/packages/eslint-plugin/docs/rules/return-await.md#in-try-catch) instead of always to prevent async code from taking on the performance hit of having a microtask just to await the promise in the current stack. See [no-return-await](https://eslint.org/docs/rules/no-return-await) for more details. <issue_comment>username_1: Vast majority of codebases will never benefit from that that microtask optimization, but they will benefit daily from better stack trace. In those codebases where this matters, just disable that rule.<issue_closed>
<issue_start><issue_comment>Title: Replace remaining tags by annotations username_0: Pousse-Café Doc still relies on javadoc custom tags defined in interface `Tags`. They should all be replaced with annotations and deprecated. This should be done before closing #164 so that support for javadoc tags is dropped when it is closed.
<issue_start><issue_comment>Title: Add info to README about changing root key for the JSON adapter. username_0: It turns out you actually can change the root key when using the JSON adapter. If you override the `json_key` method, it lets you define a root key that doesn't have to be based on the class name. Workaround for #986 <issue_comment>username_1: Yeap, indeed you could do this, but don't want to encourage ppl to do this. Just saw #986 and understand the need here, but I'm more inclined to merge #987 and bring the functionality back. <issue_comment>username_0: Fair enough, that’s definitely a better solution in the end. -- Ryan LeFevre (@username_0) Hodinkee, Inc. Sr. Software Engineer
<issue_start><issue_comment>Title: Add support for Mockito 1.10.9+ username_0: <issue_comment>username_1: I'd like to propose a secondary solution here and wanted some feedback. What if we were to shade mockito 1.10.8 which would allow this library to be used with other versions of mockito. As it stands, I'm using this library and two version so of mockito (1.10.8 and 1.10.19). This only works because I'm doing so in separate modules within a multi module build project. That said, I think shading might be a better approach. I don't really see how mockito will bring much more to this library regardless of future changes. Otherwise, if shading isn't a better solution, then why not go ahead and make a pull request on this one and hopefully owners will accept. After much review, I think shading is better, with this being secondary. The fact that this already is functional makes it immediately better as shading will take a little bit more time to pull off. <issue_comment>username_2: If the license is compatible (assuming it is), a more lightweight solution would be to copy the cglib manipulation classes out of mockito into catch-exception. There's a lot of code in mockito not used, only a small part that is leveraged for catch-exception. <issue_comment>username_3: Is there any plan to make it work? <issue_comment>username_4: +1 to what @username_3 has written. We would like to upgrade to mockito 2.0 as soon as it is released this month, but that would break all our tests using catch-exception. <issue_comment>username_0: Yes, there is a plan. Can you test #19 with mockito 2.0 ? <issue_comment>username_4: Thanks for info @username_0. Mockito 2.0 is still not released but I tried #19 together with 2.0.5-beta and it worked smoothly. Hope to see this fix in catch-exception 1.3.5. <issue_comment>username_5: According to Szczepan's [blog post](http://blog.mockito.org/2015/01/continuously-delivering-beta-versions.html) we don't need to wait for Mockito 2.0 as beta versions have different meaning here. When can we expect 1.3.5 release? <issue_comment>username_0: will be relased in this week, maybe tommorow <issue_comment>username_0: Please try [catch-exception-1.4.1-beta-2.jar](https://oss.sonatype.org/service/local/repositories/releases/content/eu/codearte/catch-exception/catch-exception/1.4.1-beta-2/catch-exception-1.4.1-beta-2.jar) <issue_comment>username_6: @username_0 1.4.1-beta-2 works without any problems in my case - but it is still marked beta. Is there any chance to get that stable release out, either 1.3.5 or 1.4.1? I really do not care that much which one, I am only a bit hesitant to use beta-rated libraries in (testing for) production. Thanks! <issue_comment>username_0: Thank for veryfing, 1.4.1 is relased.<issue_closed> <issue_comment>username_5: Works like a charm, thank you! <issue_comment>username_7: Hi! As much as I appreciate and use catch-exception a lot, since the shaded Mockito was included, it introduced some evasive problems in the tests when the incorrect class was imported. If by mistake a mix of original and shaded Mockito classes is used in the test, it will (obviously) fail and it is usually not the first idea to check your imports. Any chance of rethinking this solution? <issue_comment>username_2: Just restrict imports in your IDE. You should never be using the shaded classes in your own code <issue_comment>username_7: Wonderful, thanks for the tip, didn't know about that option!
<issue_start><issue_comment>Title: Use JavaScript for...of loops when possible username_0: I noticed that CoffeeScript "for...in" loops turn into C-style for loops over indexes, which is a bit ugly in my opinion. For example, this: ```coffee for s in ['Hello', 'World'] console.log(s) ``` becomes this: ```js let iterable = ['Hello', 'World']; for (let i = 0; i < iterable.length; i++) { let s = iterable[i]; console.log(s); } ``` ([repl](http://decaffeinate-project.org/repl/#?evaluate=true&code=for%20s%20in%20%5B'Hello'%2C%20'World'%5D%0A%20%20console.log(s))) The way I'd write it in modern JavaScript is this: ```js for (let s of ['Hello', 'World']) { console.log(s) } ``` Of course, if there's a significant difference in behavior, it may be impossible to use JavaScript "for...of" safely, but it would certainly be nice.<issue_closed>
<issue_start><issue_comment>Title: Guzzle 7, extra_vars in job creation username_0: - Allow use of Guzzle 7 - Allow Job creation with custom variables when flag ask_extra_vars - Updated README <issue_comment>username_0: I think you accidentally skipped this PR, I see you did approve the one on oauth but not on the cooler library <issue_comment>username_1: Thanks for the commit.
<issue_start><issue_comment>Title: List of caches - multiline title username_0: Hi! There are some caches, that have really long names and are part of the trail (eg.: A really, really long cache name that is part of this beautifull trail No. 11). In a list of all saved caches c:geo only shows first XY characters (I have a small phone screen). Since the cache number is usually on the end, I can not find a specific cache easily. If I sort by name I have to count where the desired one will be, that is quite hard. The proximity sort is also of no use in some cases, when the trail is circular and the returning path is nearing the curent one, so the closest caches are not always the next ones. I wish there was an option to have multi-line title names, to switch them on and off. Is that possible to implement? Regards; Luka <issue_comment>username_1: Closing due to lack of feedback and possible workaround.<issue_closed> <issue_comment>username_0: I am sorry, but I do not understand. What kind of lack of feedback? What possible workaround?
<issue_start><issue_comment>Title: Add Ubuntu install examples for using local mounted iso username_0: - Simply by downloading the ISO and mounting it to on-http static repo as follows: - mount -o loop /var/mirrors/ubuntu-14.04.4-server-amd64.iso on-http/static/http/ubuntu <issue_comment>username_0: @RackHD/corecommitters @zyoung51 @jlongever @tannoa2 @uppalk1 <issue_comment>username_1: 👍 <issue_comment>username_0: @username_1 updated!! Thanks!! <issue_comment>username_2: :+1: @username_0 I didn't pay attention for Ubuntu ISO installation, we want to do some following steps to make Ubuntu installation better and I've created a story to let iso installation parameters to be consistent with net installation. the story link https://www.pivotaltracker.com/n/projects/1492892/stories/126851627 and also create a story to analysis and validate repos , the story link https://www.pivotaltracker.com/story/show/126851439 Any problems, let me know. <issue_comment>username_0: @username_2 sounds good!! <issue_comment>username_0: @username_3 code updated!! Thanks! <issue_comment>username_3: :+1:
<issue_start><issue_comment>Title: Handling click event on image username_0: Hi again, i'm triying to execute a function on a click event of a image, here is my code: <code> actualizarMapa = function(a,b,c,d){ // add some new plots ... var updatedOptions = {'images' : {}}; updatedOptions.images['image1'] = { 'src' : 'img/img02.png', 'latitude' : '-34.27083595', 'longitude' : '-66.54968262', 'width':80, 'height':80 }; updatedOptions.images['image2'] = { 'src' : 'img/img01.png', 'latitude' : '-32.15701249', 'longitude' : '-65.67077637', 'width':80, 'height':80 }; var newPlots = []; // and delete some others ... var deletedPlots = [b]; $(".maparea1").trigger('update', [updatedOptions,newPlots, deletedPlots, {animDuration : 1000, afterUpdate:function(container, paper, areas, plots, options){ var mapConf = $.fn.mapael.maps['sanluis_departments'], coords = {}; for (var id in options.images) { coords = mapConf.getCoords(options.images[id].latitude, options.images[id].longitude); paper.image(options.images[id].src, coords.x - options.images[id].width / 2, coords.y - options.images[id].height / 2, options.images[id].width, options.images[id].height); } } }]); } </code> I could put a pointer cursor via CSS, but i cant handle the click event in the image, do you have any ideas? Can i add a new attr to the image object?<issue_closed> <issue_comment>username_1: Hey! It has been more than one year. I'm closing this thread. Feel free to submit a new issue if you have any problems.
<issue_start><issue_comment>Title: Not handling urls properly when grav is in a subdirectory username_0: Apologies for so many issues in a row! This is another URL problem. When on the editing page, the plugin sends requests to the wrong URL, presumably because my installation is in a sub-directory. Sends request to: http://localhost/translator/edit/task:translator.keep.alive Should be one of these (I'm not sure which): http://localhost/path/to/grav/translator/edit/task:translator.keep.alive http://localhost/path/to/grav/en/translator/edit/task:translator.keep.alive There seems to be a similar problem with the emails send for approving / rejecting changes.
<issue_start><issue_comment>Title: 🛑 Penyanyi VF V2 is down username_0: In [`90a0611`](https://github.com/username_0/uptimevf/commit/90a0611f2e3990b151a8ef0c07345f812dc6a3ff ), Penyanyi VF V2 (https://evobot-1.nexter32.repl.co) was **down**: - HTTP code: 0 - Response time: 0 ms<issue_closed> <issue_comment>username_0: **Resolved:** Penyanyi VF V2 is back up in [`669cf69`](https://github.com/username_0/uptimevf/commit/669cf690a24feaa985ea5d0a5733273cb00aa1dd ).
<issue_start><issue_comment>Title: Timer is Broken After blocks-continue username_0: Symptoms: ``` Log records from the iteration 3563: gradient_norm_threshold: 116.254608154 sequence_log_likelihood: 19.8159103394 time_read_data_this_batch: 0.0 time_read_data_total: 626.026049137 time_train_this_batch: 0.0 time_train_total: 54414.1115253 total_gradient_norm: 69.3658370972 total_step_norm: 0.11755874753 ``` I have not investigated it yet and do not know if it's a problem of the extension or of the timer itself. <issue_comment>username_0: It took me a while to realize what's wrong: `profile.current` has to be reset when training is resumed, otherwise one can end up with ```ipython ipdb> self.profile.current ['after_training', 'Checkpoint', 'after_training', 'Checkpoint', 'after_training', 'Checkpoint', 'training', 'epoch', 'train'] ```<issue_closed> <issue_comment>username_0: Reopened because a regression test is missing. <issue_comment>username_0: Symptoms: ``` Log records from the iteration 3563: gradient_norm_threshold: 116.254608154 sequence_log_likelihood: 19.8159103394 time_read_data_this_batch: 0.0 time_read_data_total: 626.026049137 time_train_this_batch: 0.0 time_train_total: 54414.1115253 total_gradient_norm: 69.3658370972 total_step_norm: 0.11755874753 ``` I have not investigated it yet and do not know if it's a problem of the extension or of the timer itself.
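A minimal sketch of the fix described above: clear the profiler's `current` stack when training is resumed so timings are attributed to the right keys again. The `on_resume` hook name is an assumption for illustration, not the actual Blocks API:

```python
class TimingResumeFix(object):
    """Sketch only: reset stale profiler state left over from the previous run."""

    def __init__(self, profile):
        self.profile = profile

    def on_resume(self):  # hypothetical resume hook
        # Without this, profile.current still holds entries such as
        # ['after_training', 'Checkpoint', ...] from the interrupted run,
        # and new timings get attributed to the wrong keys.
        self.profile.current = []
```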
<issue_start><issue_comment>Title: Update the Insights app to poll the Authenticate endpoint until status changes from 'incomplete' username_0: **Specification** On this initial load of the insights app there is background work going on to setup the users dashboard/reports within Power Bi which only takes around 20-30 seconds. Currently when logging on to a new account for the first time the i-frame never loads as the endpoint returns an 'incomplete' status. **Changes required** - Update the home tab to poll the endpoint every X seconds until the status changes from 'incomplete'. - If the status is 'failed' show a failure message similar to before with metabase - If the status is 'complete' use the first report in the collection to load the i-frame (as we do now) **Additional info** Status enum: incomplete, complete, failed Contact @username_0 when ready and I'll clear down RES for end to end testing Example payload: ``` { "status": "string", "information": "string", "token": "string", "reports": [ { "name": "string", "reportId": "3fa85f64-5717-4562-b3fc-2c963f66afa6", "embeddedUrl": "string" } ] } ```<issue_closed>
<issue_start><issue_comment>Title: Added PR Checklist to README username_0: This PR adds a discussion to the contribution section on a checklist for new feature PRs to Houston. <issue_comment>username_0: The relevant content in formatted Markdown:

### Pull Request Checklist

To submit a pull request (PR) to Houston, we require the following standards to be enforced. How to configure and pass each of these required checks is detailed in the sections below.

#### Ensure that the PR is properly formatted

We require code to be formatted with Brunette (a better version of Black) and linted with Flake8, using pre-commit (which runs automatic checks with brunette and flake8, plus other general line-ending, single-quote strings, permissions, and security checks).

```
# Command Line Example
pre-commit run --all-files
```

#### Ensure that the PR is properly rebased

We require new feature code to be rebased onto the latest version of the develop branch.

```
git checkout develop
git pull --rebase
# Use the following to make this the default:
git config pull.rebase true

git checkout <feature-branch>
git pull
git rebase develop
# Resolve all conflicts
git merge develop  # Sanity check
git push --force origin <feature-branch>
```

#### Ensure that the PR uses a consolidated database migration

We require that any new database migrations (optional) with Alembic are consolidated and condensed into a single file and version. Exceptions to this rule (allowing possibly up to 3) will be allowed after extensive discussion and justification.

All database migrations should be created using a downgrade revision that matches the existing revision used on the develop branch. Further, a PR should never be merged into develop that contains multiple revision heads.

```
invoke app.db.downgrade <develop branch revision ID>
rm -rf migrations/versions/<new migrations>.py
invoke app.db.migrate
invoke app.db.upgrade
invoke app.db.history  # Check that the history matches
invoke app.db.heads    # Ensure that there is only one head
```

#### Ensure that the PR is properly tested

We require new feature code to be tested via Python tests and simulated REST API tests. We use PyTest (and eventually Coverage) to ensure that your code is working cohesively and that any new functionality is exercised. We require new feature code to also be fully compatible with a containerized runtime environment like Docker.

```
pytest
```

#### Ensure that the PR is properly sanitized

We require the PR to not include large files (images, database files, etc.) without using GitHub Large File Store (LFS).

```
git lfs install
git lfs track "*.png"
git add .gitattributes
git add image.png
git commit -m "Add new image file"
git push
```

We also require any sensitive code, configurations, or pre-specified values to be omitted, truncated, or redacted. For example, the file _db/secrets/py is not committed into the repository and is ignored by .gitignore.

#### Ensure that the PR is properly reviewed

After the preceding checks are satisfied, the code is ready for review. All PRs are required to be reviewed and approved by at least one registered contributor or administrator on the Houston project.
<issue_start><issue_comment>Title: Dev: Azure Automomation ARM cmdlet updates username_0: Azure Automomation ARM cmdlet updates Has few bug fixes and a new cmdlet Remove-AzureRMAutomationConnectionType <issue_comment>username_1: Hi __@username_0__, I'm your friendly neighborhood Azure Pull Request Bot (You can call me AZPRBOT). Thanks for your contribution! <p> It looks like you're working at Microsoft (safeerm). If you're full-time, we DON'T require a contribution license agreement. </p> <p> If you are a vendor, or work for Microsoft Open Technologies, DO please sign the electronic contribution license agreement. It will take 2 minutes and there's no faxing! https://cla.azure.com. </p> TTYL, AZPRBOT; <issue_comment>username_2: @username_0 tests? <issue_comment>username_0: Will submit PR from other repo. Our team repo has some issues
<issue_start><issue_comment>Title: Update quakeml detail search to return contributed quakeml username_0: Find the phase-data product if available, use origin otherwise. Find the quakeml with the requested event id. Return non-preferred quakeml if requested using non-preferred id.<issue_closed> <issue_comment>username_1: Find the phase-data product if available, use origin otherwise. Find the quakeml with the requested event id. Return non-preferred quakeml if requested using non-preferred id. @username_2 can you provide more information on this? <issue_comment>username_2: This is almost implemented, but the fall through is not in sync with the summary quakeml format.<issue_closed>
<issue_start><issue_comment>Title: casbin username_0: go get: module github.com/zeromicro/zero-contrib@upgrade found (v1.1.0), but does not contain package github.com/zeromicro/zero-contrib/auth/casbin<issue_closed> <issue_comment>username_0: fda <issue_comment>username_0: go get: module github.com/zeromicro/zero-contrib@upgrade found (v1.1.0), but does not contain package github.com/zeromicro/zero-contrib/auth/casbin<issue_closed>
<issue_start><issue_comment>Title: feat(ci): Add bitbucket pipelines support username_0: Create the yml file to configure bitbucket pipelines with a custom docker image for karma <issue_comment>username_1: This is great @username_0! I don't want to add the file by default since not everybody will want to have it, but this is an amazing reference. Would you mind submitting a PR that adds that to the documentation? (FAQ) Thanks so much! <issue_comment>username_2: Hi @username_0 , the Docker [image](https://hub.docker.com/r/gwhansscheuren/bitbucket-pipelines-node-chrome-firefox/) suggested doesn't have a public `Dockerfile`, so I'm wondering if it is safe to trust it for CI with Bitbucket? PS: I'm not an expert of Docker, so sorry if my question may seem silly <issue_comment>username_0: I just added the link in the description in docker hub https://bitbucket.org/hans_scheuren/docker-node-chrome-firefox/src/9cb6ac1d7bf2dc36270a88115868a05d9eb724a6/Dockerfile?at=master&fileviewer=file-view-default
<issue_start><issue_comment>Title: install on fresh ubuntu 14.04 modoloboa password returned by port check username_0: hi, fresh ubuntu 14.04 64bit updated. if i run the setup it stops with: Traceback (most recent call last): File "/srv/modoboa/env/bin/modoboa-admin.py", line 7, in <module> handle_command_line() File "/srv/modoboa/env/local/lib/python2.7/site-packages/modoboa/core/commands/__init__.py", line 95, in handle_command_line commands[args.command](commands, verbose=args.verbose).run(remaining) File "/srv/modoboa/env/local/lib/python2.7/site-packages/modoboa/core/commands/__init__.py", line 36, in run self.handle(args) File "/srv/modoboa/env/local/lib/python2.7/site-packages/modoboa/core/commands/deploy.py", line 188, in handle info = dj_database_url.config(default=url) File "/srv/modoboa/env/local/lib/python2.7/site-packages/dj_database_url.py", line 51, in config config = parse(s, engine, conn_max_age) File "/srv/modoboa/env/local/lib/python2.7/site-packages/dj_database_url.py", line 98, in parse 'PORT': url.port or '', File "/usr/lib/python2.7/urlparse.py", line 113, in port port = int(port, 10) ValueError: invalid literal for int() with base 10: 'modoboaDBPass' Traceback (most recent call last): File "./run.py", line 64, in <module> main() File "./run.py", line 54, in main scripts.install("modoboa", config) File "/root/modoboa-installer/modoboa_installer/scripts/__init__.py", line 21, in install getattr(script, appname.capitalize())(config).run() File "/root/modoboa-installer/modoboa_installer/scripts/base.py", line 138, in run self.post_run() File "/root/modoboa-installer/modoboa_installer/scripts/modoboa.py", line 131, in post_run self.apply_settings() File "/root/modoboa-installer/modoboa_installer/scripts/modoboa.py", line 109, in apply_settings utils.mkdir(d, stat.S_IRWXU | stat.S_IRWXG, pw[2], pw[3]) File "/root/modoboa-installer/modoboa_installer/utils.py", line 97, in mkdir os.mkdir(path, mode) OSError: [Errno 2] No such file or directory: '/srv/modoboa/instance/media/webmail' config is default with changed passwords [modoboa] user = modoboa home_dir = /srv/modoboa venv_path = %(home_dir)s/env instance_path = %(home_dir)s/instance timezone = Europe/Berlin dbname = modoboa dbuser = modoboa dbpassword = "S1kl7y!jHjbbRbz?" # Extensions to install (to install all of them, use: all) extensions = modoboa-amavis modoboa-pdfcredentials modoboa-postfix-autoreply modoboa-sievefilters modoboa-stats <issue_comment>username_0: after reclone and changing my passwords it worked.<issue_closed>
<issue_start><issue_comment>Title: Connect login form to backend username_0: As a Roam user
I want to be able to access my account each time I visit the site
So that I can have a customized experience

Given that the user accesses the site
When they complete the login form
Then their information is validated in the database, and they are presented with a customized experience on the site<issue_closed>
<issue_start><issue_comment>Title: MathML span position username_0: - [x] Check [common issues](https://katex.org/docs/issues.html). - [x] Check the bug is reproducible in [the demo](https://katex.org). If not, check KaTeX is up-to-date and installed correctly. - [x] Search for [existing issues](https://github.com/KaTeX/KaTeX/issues). **Describe the bug:** The MathML span's position seems weird in our [Markdown editor component](https://github.com/bytedance/bytemd). Demo here: https://bytemd.netlify.app/ **(La)TeX code:** The code of (La)TeX you tried to render: ```latex \displaystyle \left( \sum_{k=1}^n a_k b_k \right)^2 \leq \left( \sum_{k=1}^n a_k^2 \right) \left( \sum_{k=1}^n b_k^2 \right) ``` **Screenshots:** ![image](https://user-images.githubusercontent.com/9524411/108308593-f284af80-71ea-11eb-820a-a36791f48007.png) **Environment (please complete the following information):** - KaTeX Version: v0.12.0 - Device: Desktop - OS: macOS big sur - Browser: Chrome - Version: 88.0.4324.182 <issue_comment>username_1: Not involved with this project but I had a quick look at this example out of interest and found that if I remove the newlines before/after the `$$`'s, it displays left aligned: ``` $$\displaystyle \left( \sum_{k=1}^n a_k b_k \right)^2 \leq \left( \sum_{k=1}^n a_k^2 \right) \left( \sum_{k=1}^n b_k^2 \right)$$ ``` <issue_comment>username_2: In KaTeX default configuration, the math is supplied twice: (1) In HTML, to be visually rendered by the browser. (2) In MathML, for accessibility (screen reader). The MathML is not intended to appear visually. <issue_comment>username_0: Yep, noticed that the MathML node size is 1x1. But in this case, its position seems weird (see the screenshot)
<issue_start><issue_comment>Title: null clientType is converted to 'null' string username_0: ## The feature or bug you are proposing The library is converting the `null` value into the `null` string headers, e.g. for the `CLIENTTYPE_HEADER_KEY`. This results in forwarding that given header with the `null` string as value even when the header is actually not provided/forwarder. ## The description of the bug or the rationale of your proposal The library should not forward heraders when those are set to `null`, or at least forward those with an empty string instead. Note that while this bug has been discovered for the `CLIENTTYPE_HEADER_KEY`, it is impacting also other headers. ## A snippet of code for replicating the issue or showing the proposal usage if applicable A possible solution can be replacing: ``` function getClientType(clientType) { return clientType || null } ``` with: ``` function getClientType(clientType) { return clientType || '' } ``` ## The expected result for your bug The library should not forward a header if the header is set to null. ### Your environment node: 12.19.0-alpine custom-plugin-lib: 4.2.0 os: --
<issue_start><issue_comment>Title: left outer joins no longer contain duplicated rows username_0: There was a bug in the previous code that when the left-side was rewinded, we would output several times the same row. This has been fixed and log messages have been added using the logger interface so that we can leave them there without too much penalty. <issue_comment>username_1: +1
<issue_start><issue_comment>Title: Long Manual code is not printing in the device configuration while launching the all-clusters-app username_0: #### Problem 12 digits manual pairing code is not printed when we launch the all-clusters-app. We observe that, logs has, General QR Code and the manual 11 digit short code. ![image](https://user-images.githubusercontent.com/90545664/153130617-d1464524-206f-4cfd-9413-9a16bd08b735.png)<issue_closed> <issue_comment>username_0: Long manual pairing code is printing.
<issue_start><issue_comment>Title: Fix DNSLink publishing (dnslink.io and dnslink.dev) username_0: It is broken right now (invalid DNSSIMPLE token?): https://github.com/ipfs/dnslink-website/runs/2632605780#step:12:7 (same issue for `dnslink.io` and `dnslink.dev`) @username_1 do you have a sense of what's wrong? Is the fix re-adding token or do we need to have separate one for each domain? <issue_comment>username_0: @username_1 ping – I've re-run job but the same token error is printed: https://github.com/ipfs/dnslink-website/runs/2779095871?check_suite_focus=true#step:11:8: ``` npx dnslink-dnsimple --domain dnslink.io --link /ipfs/bafybeicurvyneoolixzsbsnscukduxrsocgp4a4z7uyfiflafz7au6menq shell: /usr/bin/bash -e {0} env: DNSIMPLE_TOKEN: *** Failed to find zone record for dnslink.io with the token provided. Did you use the wrong token? ``` <issue_comment>username_0: - [ ] set DNSLink at `dnslink.io` to `dnslink=/ipfs/bafkreidj5lipga46mwq4wdkrrmarjmppobvtsqssge6o5nhkyvsp6pom3u` – a static redirect page pointing at `dnslink.dev` as the canonical variant <issue_comment>username_1: I'm ready to cut this over, but doing so will expose a TLS error until the Fleek cert is updated. Let me know when you're at the keyboard and we can make the change live. <issue_comment>username_2: @username_1 awesome! Thank you. In https://github.com/dnslink-std/test/pull/3 the dnsimple token/account would also be needed, would it be okay to set the tokens for the organization? <issue_comment>username_1: I've added `DNSIMPLE_ACCOUNT`/`DNSIMPLE_TOKEN` as secrets to dnslink-std/test <issue_comment>username_0: Done! https://dnslink.dev loads from Fleek and https://dnslink.io redirects to it. Thanks for help @username_1 :raised_hands: :heart:<issue_closed>
<issue_start><issue_comment>Title: Port may be incorrectly omitted from the Host header username_0: When a non-standard port is being used (an HTTP request on a port other than 80 or an HTTPS request on a port other than 443), the request's `Host` header should include the port. While `OperationRequestFactory` will add a `Host` header if one is not already present, it currently doesn't include the port. The most obvious effect of this is that the `Host` header in the HTTP request snippet may be incorrect.<issue_closed>
<issue_start><issue_comment>Title: We can't query installed use flags for a non installed pkg username_0: Same as in the develop branch, you can't query installed information on a non-installed package. Breaks install of new packages. <issue_comment>username_1: Go Go Jenkins! <issue_comment>username_0: Uh, one "if" too much... how do I add the corrected commit? <issue_comment>username_2: @username_0 You just push the commit up to your local branch that this pull request was made from, and GitHub will pick it up and apply it here. It looks to me like you already figured that out, though. :) Is this ready to go? <issue_comment>username_0: Yes, figured it out. I am using this variant already, should be fine.
<issue_start><issue_comment>Title: High ascii bytes in folder names can break folder handling username_0: Fixed in encoding-bug branch but I'm not sure this is the correct way to solve the problem. Revisit. ---------------------------------------- - Bitbucket: https://bitbucket.org/username_00/imapclient/issue/21 - Originally reported by: Menno Smits - Originally created at: 2009-10-18T21:50:21<issue_closed> <issue_comment>username_0: Ensure that this is no longer an issue: {{{ In [8]: i.delete_folder('abcd\xff') --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) /home/msmits/<ipython console> /usr/local/lib64/python2.4/site-packages/IMAPClient-0.5-py2.4.egg/imapclient/imapclient.pyc in delete_folder(self, folder) 283 @return: Server response. 284 ''' --> 285 typ, data = self._imap.delete(self._encode_folder_name(folder)) 286 self._checkok('delete', typ, data) 287 return data[0] /usr/local/lib64/python2.4/site-packages/IMAPClient-0.5-py2.4.egg/imapclient/imapclient.pyc in _encode_folder_name(self, name) 570 def _encode_folder_name(self, name): 571 if self.folder_encode: --> 572 return imap_utf7.encode(name) 573 return name 574 /usr/local/lib64/python2.4/site-packages/IMAPClient-0.5-py2.4.egg/imapclient/imap_utf7.pyc in encode(s) 41 _in.append(c) 42 if _in: ---> 43 r.extend(['&', modified_base64(''.join(_in)), '-']) 44 return ''.join(r) 45 /usr/local/lib64/python2.4/site-packages/IMAPClient-0.5-py2.4.egg/imapclient/imap_utf7.pyc in modified_base64(s) 67 68 def modified_base64(s): ---> 69 s_utf7 = s.encode('utf-7') 70 return s_utf7[1:-1].replace('/', ',') 71 UnicodeDecodeError: 'ascii' codec can't decode byte 0xff in position 0: ordinal not in range(128) }}} ---------------------------------------- Original comment by: Menno Smits <issue_comment>username_0: What I don't like about the current fix is that you can create a folder passing a name like 'Left\xffRight' and then get back a folder name from list_folders() that isn't the same. IMAPClient is performing a little too much magic for my liking. Instead of the transparent unicode conversion magic using latin-1, just raise a friendly ValueError to indicate that a unicode object should be passed if out of range characters are passed (see section 5.1.3 of the IMAP RFC). Since unicode objects can sometimes come back now, also change folder encoding to always return unicode objects. This is an API change and should be noted as such. This is also good practice for easing Python 3 conversion. Ensure there are unit and live tests that exercise internationalised folder names. ---------------------------------------- Original comment by: Menno Smits <issue_comment>username_0: Fixed in r147 and r148. ---------------------------------------- Original comment by: Menno Smits
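A rough sketch of the validation proposed in the last comment (Python 2 era, so `str` is a byte string; `imap_utf7` is the module that appears in the tracebacks above, everything else is illustrative and not the actual IMAPClient code):

```python
from imapclient import imap_utf7  # module shown in the tracebacks above

def encode_folder_name(name, folder_encode=True):
    """Refuse ambiguous byte strings instead of guessing an encoding."""
    if isinstance(name, str) and any(ord(c) > 127 for c in name):
        # Out-of-range bytes are ambiguous (RFC 3501, section 5.1.3):
        # ask the caller to pass a unicode object instead.
        raise ValueError('byte-string folder names must be 7-bit clean; '
                         'pass a unicode object for international names')
    return imap_utf7.encode(name) if folder_encode else name
```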
<issue_start><issue_comment>Title: Fixed a condition that was unintentionally always false. username_0: The function sylvan_set_limits required the table_ratio parameter to be between -10 and 10. However, the condition of this check was always false. <issue_comment>username_1: Should fix: https://github.com/username_2/sylvan/issues/11 Too. Will try after this is merged. <issue_comment>username_2: Ah thanks. Didn't notice the pull request but it's being fixed. I'm going to include a few guidelines in the source, probably going to add some more documentation soon-ish.
<issue_start><issue_comment>Title: Is it possible to refresh the dialog window without closing and reopening it? username_0: I'm trying to place a bootstrap image carousel in a vex dialog window, but if I try to navigate between images, the picture won't change until I close and reopen the dialog. Is there any way to force the dialog message to reload? ``` javascript var d = $(this).children('.my-dialog'); // getting content from hidden div vex.dialog.open({ message: d.html(), buttons: [], afterOpen: function() { $('.carousel-control').click(function() { // i want to update the dialog window here }); } ``` Thank you! <issue_comment>username_1: This is something that should just work if the carousel contents are contained within the dialog. It’s hard to say what could be wrong without seeing the full picture. Would you mind sharing a jsFiddle or live page somewhere where we can see the broken functionality? <issue_comment>username_0: Thanks for getting back to me-- here's a jsfiddle that shows what I mean. For context, I want to have multiple icons (.work-obj) that open vex dialogs when clicked, each with different html in their hidden divs (.work-dialog). http://jsfiddle.net/qx7e550a/10/<issue_closed> <issue_comment>username_1: Just initialize the carousel in the vex: http://jsfiddle.net/username_1/qx7e550a/11/ Since you were setting the html text as the inside of the vex, the copy of the carousel inside vex was not the one you initialized before opening the dialog. <issue_comment>username_0: it works perfectly, thanks again!!
<issue_start><issue_comment>Title: Need update method which can update multiple documents in Mongo username_0: The `eve.io.mongo.mongo.Mongo.remove()` method is capable of removing a set of documents from a Mongo collection. Similarly, we need an `update()` that can update multiple documents matching a query. At present, the update method in `eve.io.mongo.mongo.Mongo.update()` only allows updates by primary key (`_id`). Although this is true as per REST API guidelines, there will be some scenarios where a multi-document update is necessary. PyMongo supports updating multiple documents, but Eve misses it. <issue_comment>username_1: In theory we could open up the collection endpoint for some sort of "bulk" `PUT` and even `PATCH` support, but that would require quite some work and also, as you mention, I am not sure it would adhere to sound REST principles (probably so, but I'd want to investigate it a little further). You can achieve your goal by updating every document individually of course, which I understand is not very practical in your use case. I'm keeping this ticket open for the time being, in case someone wants to give a shot at it and submit a PR.
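The per-document workaround mentioned in the last comment might look roughly like this from the client side; the endpoint, filter, and field names are made up for illustration, and by default Eve also expects an `If-Match` header carrying each document's `_etag`:

```python
import requests

API = "http://localhost:5000/people"   # hypothetical Eve collection endpoint

# Fetch the documents matching a filter, then PATCH them one at a time.
docs = requests.get(API, params={"where": '{"status": "pending"}'}).json()["_items"]
for doc in docs:
    requests.patch(
        "%s/%s" % (API, doc["_id"]),
        json={"status": "done"},
        headers={"If-Match": doc["_etag"]},
    )
```

This is obviously chatty compared to a real bulk `PATCH`, which is exactly the limitation this ticket is about.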
<issue_start><issue_comment>Title: Add volume to convex hull computations username_0: I just noticed that no routine was available to compute the volume of a convex hull using qhull whereas these functions are available in qhull. Unfortunately, I could not contribute directly to a pull request given that on my system Cython 0.22 is not yet available but the following seems to do more or less what I want : diff --git a/scipy/spatial/qhull.pyx b/scipy/spatial/qhull.pyx index c369ae9..e52e44a 100644 --- a/scipy/spatial/qhull.pyx +++ b/scipy/spatial/qhull.pyx @@ -121,6 +121,8 @@ cdef extern from "qhull/src/libqhull.h": realT max_outside realT MINoutside realT DISTround + realT totvol + realT totarea jmp_buf errexit setT *other_points unsigned int visit_id @@ -176,6 +178,9 @@ cdef extern from "qhull/src/io.h": cdef extern from "qhull/src/geom.h": pointT *qh_facetcenter(setT *vertices) nogil +cdef extern from "qhull/src/geom.h": + double qh_getarea(facetT *facetlist) nogil + cdef extern from "qhull/src/poly.h": void qh_check_maxout() nogil @@ -330,6 +335,18 @@ cdef class _Qhull: _qhull_lock.release() @cython.final + def volume(self): + global _active_qhull + qh_getarea(self._saved_qh.facet_list) + return self._saved_qh.totvol + + @cython.final + def area(self): + global _active_qhull + qh_getarea(self._saved_qh.facet_list) + return self._saved_qh.totarea + + @cython.final def close(self): if _qhull_lock.acquire(False): try: But I think there is a better way to do it, as this patch segfaults the test when trying to compute an area... If someone could guide me on the right track, I would be glad. Thanks for your work ! Hervé Audren <issue_comment>username_1: That would be an useful addition. . How to do it: (i) you need to lock and activate the qhull instance; see for example get_paraboloid_shift_scale() how to do this. (ii) The area and volume should be extracted in ConvexHull._update, as the Qhull data structures are discared after computation except in the incremental mode.<issue_closed> <issue_comment>username_2: Fixed by merging of #5014.
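For reference, the feature as it exists in released SciPy exposes the results as attributes on the hull object (in 2-D, `volume` is the enclosed area and `area` is the perimeter). A minimal usage sketch, assuming a reasonably recent SciPy:

```python
import numpy as np
from scipy.spatial import ConvexHull

points = np.random.rand(30, 3)   # 30 random points in 3-D
hull = ConvexHull(points)

print(hull.volume)  # enclosed volume (polygon area in 2-D)
print(hull.area)    # surface area (perimeter in 2-D)
```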
<issue_start><issue_comment>Title: Latest caffe master and v0.15 username_0: I successfully merged the latest caffe master with the Windows branch but I noticed that v.015 caffe branch is different - are there plans merging them in the future? There are also layers (such as rnn) that are not included in v.015 and vice versa. thanks _laszlo<issue_closed> <issue_comment>username_1: I don't know what fork you're referring to (perhaps NVcaffe?), but this issue tracker is for BVLC/caffe:master. The windows branch is now regularly maintained by @willyd and tracks master.
<issue_start><issue_comment>Title: fix: Preselect the Seek button to match Producer's initial timeline mode username_0: The current timeline mode is tracked by a variable, so the checked state of the timeline mode buttons should be synced to that variable. On initial load, that variable is set to Seek mode. This fix ensures that the Seek button is highlighted on initial load.
<issue_start><issue_comment>Title: Configuring a gulp task runner takes more than 3s sometimes username_0: Related to #4019 - VSCode Version: 18ebb4a4f5491af7b16e91e42fc36410bbb2e4d7 - OS Version: El Capitan Steps to Reproduce: 1. open vscode repo, delete `tasks.json` 2. Configure Tasks Runner > gulp 3. `tasks.json` is generated only after 3 seconds, I would expect it is instantanious<issue_closed> <issue_comment>username_1: There is little I can do about it since I spawn gulp to determine the task. If I call gulp --tasks-simple on the command line it takes quite some time as well. For example for VSCode between 2 and 3 seconds on my machine. <issue_comment>username_0: Makes sense
<issue_start><issue_comment>Title: Fix bug injected during skeleton port username_0: This library method originally came from the skeleton project. While running pep8 after porting it I injected a logic bug. <!-- Reviewable:start --> --- This change is [<img src="https://reviewable.io/review_button.svg" height="35" align="absmiddle" alt="Reviewable"/>](https://reviewable.io/reviews/openbmc/pyphosphor/2) <!-- Reviewable:end --> <issue_comment>username_1: </details> --- Reviewed 1 of 1 files at r1. Review status: all files reviewed at latest revision, all discussions resolved. --- *Comments from [Reviewable](https://reviewable.io:443/reviews/openbmc/pyphosphor/2#-:-KIUaJ37tvai89JEp5iC:1417088721)* <!-- Sent from Reviewable.io -->
<issue_start><issue_comment>Title: Configuration option "host" is inconsistent with elasticsearch input "hosts" option. username_0: It looks like the elasticsearch output plugin uses [`host`](https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-host) while the elasticsearch input plugin uses [`hosts`](https://www.elastic.co/guide/en/logstash/current/plugins-inputs-elasticsearch.html#plugins-inputs-elasticsearch-hosts). Both options essentially do the same thing in the respective plugins. It would be nice if the name was consistent in both plugins.<issue_closed> <issue_comment>username_1: This has been fixed in master, and will be released with LS2.0 :) Thanks for filing the issue @username_0 !
<issue_start><issue_comment>Title: [rebased] Always emit success and failure metrics username_0: Based on and supersedes #84, rebased on top of master and with updated VCR cassettes. Travis should pass Thanks @devonbleak! <issue_comment>username_0: CI is green. Merging! <issue_comment>username_0: @devonbleak: Ugh, I just used a "squash merge" from Github's UI to merge this PR and for some reason it removed your authorship from the commit on master... Sorry about that, I'll make sure to mention you in the changelog. Thanks again for your contribution!
<issue_start><issue_comment>Title: use scrollbar with file format list username_0: The "Export Molecule" functionality allows saving to a vast number of different file formats, 136 according to `obabel -L formats write`. Clicking on the file format list in the Export Molecule dialog, if you want to select a particular format then you have to tap down "manually" one by one until you reach the format you want. It's quite laborious. It would be much easier if there was scrollbar that came with the list. With other applications there's sometimes a scroll bar to roll through these lists, or so I thought. Actually LibreOffice has the same problem. with no scrollbar to select alternative document file formats. Even gnumeric can't do it, with the list of spreadsheet file types. Evidently it's a common problem, possibly system-dependent. Gimp does offers a scrollbar with "Export Image" file formats, though its format list seems to be organized in a different way. I suspect Qt might have a way of activating a scrollbar with these listboxes. If that proves to be the case then it would be helpful to do it. `QFileDialog` has `nameFilters()`, but not clear to me how the rendering of the filters in the dialog box is handled, and might not be the same thing. I can see avogadro has related code in FileFormatDialog::selectFileFormat(), qtgui/fileformatdialog.cpp, though not clear to me where scrollbar settings would go. I'm using Gnome on Linux. Possibly it's a Gnome problem, might not be solvable by Qt or Avogadro. <issue_comment>username_1: Under the hood, we use `QFileDialog::getSaveFileName(…);` and just supply the list of name filters. On Mac and Windows, these use the native save dialogs. While there's no scrollbar on my Mac, it's a popup menu / combobox and I can scroll like usual or type to skip to those items. I tried some searching online, but I don't see anyone addressing this question - perhaps because most programs don't have as long a list. I suspect this is a function of the Qt style - but if Gnumeric doesn't provide it, it may be a GNOME / GTK style issue. I can say that the popup doesn't matter. If you type a particular extension, that should set the format that will be used. <issue_comment>username_1: While I understand this is frustrating, I think it should pick up format by extension by default. The menu is only a hint. So if you export and type "file.pdb" it should export a PDB file. Otherwise, I'm open to suggestions, but it doesn't seem like an Avogadro-specific issue, more of a GTK / GNOME theme issue.<issue_closed>
<issue_start><issue_comment>Title: [1.3.0 Development] Stream stops in Flash username_0: I upgraded my development area to contrib-hls 1.3.0 and Video.JS 5.3 (tried with 5.2 too) and noticed the Flash fallback does not seem to work very good anymore. Tried this in IE 11 on Win7, which does not have MSE support and Firefox 42 on Win10 with MSE disabled. The player will fall back to Flash, but the stream stops after a few seconds (looks like end of first .m3u8 list) with a 'Network error' in console. Works fine with MSE enabled and in Flash < contrib-hls 1.3.0, so looks like a new problem. <issue_comment>username_0: @username_1 I did a new build with the latest commits from @imbcmdth and found out something interesting. Tested again with Firefox 42, MSE disabled in about:config. - First, loading the page; - Hit play; - The loading spinners stays visible, stream stops after the first .m3u8 file ends (3 .ts files); - Hit pause, hit play again; - Video reloads, loading spinner disappears this time and stream will keep on playing without stopping; <issue_comment>username_1: Hmm. We have a couple fixes queued up for Flash and they sound similar to the symptoms you're describing. I'm hoping we'll finally get this one nailed in the next release. Stay tuned.<issue_closed> <issue_comment>username_0: Latest commits did indeed fix this problem. But now it's the HTML5 one that's breaking after a few segments (#483).
<issue_start><issue_comment>Title: Tutorial Page hcp-java-weatherapp-part5.md Issue username_0: Tutorial issue found: [https://github.com/SAPDocuments/Tutorials/blob/master/tutorials/hcp-java-weatherapp-part5/hcp-java-weatherapp-part5.md](https://github.com/SAPDocuments/Tutorials/blob/master/tutorials/hcp-java-weatherapp-part5/hcp-java-weatherapp-part5.md) contains invalid tags. Even though your tutorial was created, the invalid tags listed below were disregarded. Please double-check the following tags: - topic>cloud - topic>java - tutorial>intermediate<issue_closed>
<issue_start><issue_comment>Title: Test files lack verbose comments username_0: When utilizing test driven development, you should document what each test case accomplishes. This allows you to nail down three things about your software: high-level documentation (an English description of what the code accomplishes), low-level documentation (examples of HOW to use the functionality in code), and comprehensive testing to ensure the software works as expected.<issue_closed>
<issue_start><issue_comment>Title: Logging questions username_0: While working on `sudo nerdctl` port forwarding I found myself in need of capturing logs. This is when I noticed some inconsistencies. I wanted to do a PR to make some more of the logging useful, but I wasn't sure what was supposed to happen, and I have some opinions on what I'd like to do. So, here are my questions...

- I noticed both logrus and the standard library log package. Should the log package statements be moved to logrus?
- The guest agent logs go to stderr. It's run as a daemon, so there is no logging. Is this intentional, or should the logs be moved to a file? If they go to a file, I would set most output to debug level and then run the guest agent at info level so most data would not be logged by default. This would be to avoid filling up VM disk space.
- logrus is in maintenance mode (see the README). The project recommends other logging packages. I was wondering if you would be open to a logging API with logrus as the implementation being used. This would make it easier to change later if needed. I would go with https://github.com/masterminds/log-go. Note, I wrote that, so this is a shameless plug.

When I get a little time this fall I'm happy to clean up the logging if you're interested.
<issue_comment>username_1: What are the pros and cons of `masterminds/log-go`?
<issue_comment>username_0: There are two pros...

First, Go packages often include a specific logging library. These packages are regularly imported into other applications. When different packages using different logging libraries are imported into one main application, we can run into some problems:

1. There can be binary bloat as more than one logging implementation is imported.
2. If one of the loggers is not configured, then logging information (especially at debug or trace levels) can be missed.
3. An application needs more complex setup logic to instantiate all of the logging implementations.

Kubernetes is a great example of this. If I read the dependencies right, it includes 8 different logging implementations. Many of these are due to dependencies.

log-go provides an interface that can be used everywhere, both at the package level and as an interface passed around. The application itself (main) can then set up the logger to use everywhere. When a package is imported by someone, they can set up the logger of their choice.

Second, changing loggers can be a pain. It means modifying the entire application. If a logger becomes deprecated or doesn't implement a feature you need, it's hard to change. Using an interface, and being able to change out the logger in just one place, makes this much easier.

There are two cons...

1. Using an interface means you end up using just the subset of features in the interface. Some loggers have features not in log-go. It was built to satisfy the major loggers at the time it was written, but I'm sure some features are missing.
2. log-go does not have major usage at this point. It's newer.

I'm ok with using it or not. I'm happy to do the work of transitioning if you are open to it. Thanks for reading through this.
<issue_comment>username_0: Using an interface and configuring it would also solve issues like #112
<issue_start><issue_comment>Title: [igxPivot] Row selection for some row dimensions doesn't trigger the grid css pipe after scroll username_0: ## Description

Row selection for some row dimensions doesn't trigger the grid css pipe that handles row selection styles after the user has scrolled down.

* igniteui-angular version: 13.0.x
* browser:

## Steps to reproduce

1. Open the pivot grid sample
2. Scroll to the bottom
3. Click on "01/06/2020 -> Bikes"

## Result

The header is selected, but the row does not receive the selection style until the mouse is moved, which triggers the pipes.

## Expected result

The row should receive the correct selection styles immediately after clicking.

## Attachments

![PivotGridSelection](https://user-images.githubusercontent.com/3768136/150236168-9d04f916-499d-4a19-a588-96e2d5ffbc7c.gif)<issue_closed>
<issue_start><issue_comment>Title: KVM: Fix plugging ip addresses to the wrong interface username_0: Fix for KVM, where the public ip would be added to the guest network. The reason is that 'broadcastUrl' was used as an index, which turned out not to be unique in all cases, so entries were overwritten. This resulted in the public ip address being added to the wrong interface (see screenshots).

Problem:

![kvm-wrong-interface-1](https://cloud.githubusercontent.com/assets/1630096/14937805/fe3c472e-0f12-11e6-90af-a7c1df4b8310.png)

After patch:

![kvm-wrong-interface-2](https://cloud.githubusercontent.com/assets/1630096/14937807/044291e6-0f13-11e6-85a5-fe776c8db638.png)

The `192.168.23.0/24` range is used as 'public' in this lab.
<issue_comment>username_1: Strangely enough, I just ran into this problem yesterday. I'll pull it in. Thanks Remi!
<issue_comment>username_0: The Jenkins error says `Unable to instrument project`, which seems like a workspace issue or some such.
<issue_comment>username_0: This time around Jenkins is happy and Travis times out :-s

```
No output has been received in the last 10m0s, this potentially indicates a stalled build or something wrong with the build itself.
```
<issue_comment>username_0: Force pushed a final time, else I give up on this.
<issue_comment>username_0: @username_1 Sorry, but this seems to have issues. We run this in prod with Cosmic, but apparently some other patch is missing here in order for this to work properly. Unfortunately I have no more time to work on this now.
<issue_comment>username_2: This does not work for me either. If there are multiple public interfaces in the VR, only the last one works.
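The failure mode described in the thread above is easy to reproduce in isolation. The sketch below is not CloudStack code; the `Nic` record and its fields are made-up stand-ins. It only illustrates why keying interfaces by `broadcastUrl` silently drops entries when two NICs share a broadcast URL, while keying by a value that is unique per NIC (such as a device id) keeps both.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NicIndexDemo {
    // Illustrative stand-in for a NIC definition; not an actual CloudStack class.
    record Nic(int deviceId, String broadcastUrl, String ip) {}

    public static void main(String[] args) {
        Nic guest = new Nic(0, "vlan://untagged", "10.1.1.10");
        Nic pub   = new Nic(1, "vlan://untagged", "192.168.23.5"); // same broadcast URL

        // Problematic pattern: broadcastUrl as the map key, so the second put overwrites the first.
        Map<String, Nic> byBroadcastUrl = new LinkedHashMap<>();
        byBroadcastUrl.put(guest.broadcastUrl(), guest);
        byBroadcastUrl.put(pub.broadcastUrl(), pub);
        System.out.println("keyed by broadcastUrl -> " + byBroadcastUrl.size() + " entry kept");

        // Safer pattern: a key that is unique per NIC, e.g. its device id.
        Map<Integer, Nic> byDeviceId = new LinkedHashMap<>();
        byDeviceId.put(guest.deviceId(), guest);
        byDeviceId.put(pub.deviceId(), pub);
        System.out.println("keyed by deviceId     -> " + byDeviceId.size() + " entries kept");
    }
}
```

Running this prints 1 entry for the broadcast-URL map and 2 for the device-id map, which is the overwrite behaviour the screenshots show at the interface level.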
<issue_start><issue_comment>Title: (#5368) - Remove references to index.html to fix excerpts username_0: Similar to the issue we had before, the new Jekyll build doesn't work when referencing index.html, so the post details were not getting excerpts on the homepage and the first page of the blog.

[skip ci]
<issue_comment>username_1: Ah, I missed that one, good catch, thanks
<issue_start><issue_comment>Title: Performance increase on create branch username_0: When creating a new branch, committing a change and pushing that branch to remote, each file is actually seen as a new change in the commit. Therefore each file needs to be scanned. This takes some time and really shouldn't.

The presumption was that only the files that were changed would be included in the commit; however, this doesn't appear to be so.

A performance improvement would be to determine whether the file has changed from the start of the branch to the latest commit, and only then validate the file, as it has changed on the branch.
<issue_comment>username_0: Update: behaviour that I've seen so far:

Test # | Change description | RefChange | File ChangeType
-------|--------------------|-----------|----------------
1 | New branch, changed file on first push | ADD | MODIFY
2 | New branch, unchanged file on first push | ADD | ADD
3 | New branch, new file on first push | ADD | COPY
4 | Edited branch after push reject | UPDATE | MODIFY

It's number 2 we want to avoid above. When creating a new branch, we don't want to check the files that have been added, as this denotes that they haven't changed on the branch.
<issue_comment>username_0: I think you would have to do something like this: https://answers.atlassian.com/questions/223143/how-to-get-only-new-changesets-in-pre-receive-hook-on-new-branch-push

Effectively, exclude all branches (or refs to those branches) when building the CommitsBetweenRequest.<issue_closed>
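A rough sketch of the approach from the last comment, for orientation only: include just the pushed ref and exclude the heads of all existing branches, so an ADD ref change only walks commits that are genuinely new to the repository. The class and method names (`CommitsBetweenRequest`, `CommitService`, `PageUtils`) are recalled from the Stash / Bitbucket Server Java API; the exact package and signatures vary by version, so treat them as assumptions to verify against the target API.

```java
// Sketch only: verify class/package names against the Stash or Bitbucket Server
// API version the plugin targets; they are assumptions here.
import java.util.Collection;

import com.atlassian.bitbucket.commit.Commit;
import com.atlassian.bitbucket.commit.CommitService;
import com.atlassian.bitbucket.commit.CommitsBetweenRequest;
import com.atlassian.bitbucket.repository.RefChange;
import com.atlassian.bitbucket.repository.Repository;
import com.atlassian.bitbucket.util.Page;
import com.atlassian.bitbucket.util.PageUtils;

public class NewBranchCommitResolver {

    private final CommitService commitService;

    public NewBranchCommitResolver(CommitService commitService) {
        this.commitService = commitService;
    }

    /**
     * For an ADD ref change (new branch), walk only the commits that are not
     * already reachable from the existing branch heads, so files that merely
     * "arrived" with the branch are not re-scanned.
     */
    public Iterable<Commit> commitsNewToRepository(Repository repository,
                                                   RefChange refChange,
                                                   Collection<String> existingBranchHeads) {
        CommitsBetweenRequest.Builder builder = new CommitsBetweenRequest.Builder(repository)
                .include(refChange.getToHash());          // tip of the pushed branch
        for (String head : existingBranchHeads) {
            builder.exclude(head);                        // drop everything already reachable elsewhere
        }
        Page<Commit> page = commitService.getCommitsBetween(builder.build(),
                PageUtils.newRequest(0, 500));
        return page.getValues();
    }
}
```

With this shape, test case 2 from the table above (an unchanged file pushed on a new branch) would yield no commits to validate, while cases 1, 3 and 4 would still be scanned.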
<issue_start><issue_comment>Title: Fix for TIKA-1882 username_0: The following mime magic has been added to tika-mimetypes.xml to better detect the mime types below:

1. **application/vnd.ms-cab-compressed (.cab files)** - pattern "MCSF" in the first 4 bytes
2. **application/vnd.xara (.xar files)** - pattern "xar!" in the first 4 bytes
3. **application/x-mobipocket-ebook (.mobi files)** - pattern "BOOKMOBI" starting at byte position 60
4. **video/quicktime (.mov files)** - patterns "free" and "wide" seen starting at byte position 4
<issue_comment>username_1: To make it easier to read, would you be able to re-do the first 3 patterns as text rather than hex?
<issue_comment>username_0: Please find the requested changes in eb2d06b. After merging with the latest file, I found that the mime magic for *.cab and *.mobi files had already been updated, hence I did not update the entry for *.cab files.

However, the offset for the "BOOKMOBI" pattern of *.mobi files was set to 23. During my analysis, I found some files with the pattern at offsets of 58 and 60. Therefore, I updated the mime magic to use a range of 0:60.

Tested the updated tika-mimetypes.xml file for the mp4 and quicktime formats as well.
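One way to sanity-check magic additions like these is to run Tika's detection facade over sample files and confirm the expected types come back. A minimal sketch; the file names below are placeholders for local test samples, not files shipped with Tika.

```java
import java.io.File;
import java.io.IOException;

import org.apache.tika.Tika;

public class DetectSamples {
    public static void main(String[] args) throws IOException {
        // The Tika facade loads the bundled tika-mimetypes.xml, including any new magic entries.
        Tika tika = new Tika();
        String[] samples = {"sample.cab", "sample.xar", "sample.mobi", "sample.mov"};
        for (String path : samples) {
            System.out.println(path + " -> " + tika.detect(new File(path)));
        }
    }
}
```

If the new magic matches, the output should show application/vnd.ms-cab-compressed, application/vnd.xara, application/x-mobipocket-ebook and video/quicktime for the respective samples.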
<issue_start><issue_comment>Title: Add buildDepend for gi-gst. username_0: ###### Motivation for this change

Final fix for the gi-gst work.

###### Things done

- [X] Tested using sandboxing ([nix.useSandbox](http://nixos.org/nixos/manual/options.html#opt-nix.useSandbox) on NixOS, or option `build-use-sandbox` in [`nix.conf`](http://nixos.org/nix/manual/#sec-conf-file) on non-NixOS)
- Built on platform(s)
  - [X] NixOS
  - [ ] OS X
  - [ ] Linux
- [ ] Tested compilation of all pkgs that depend on this change using `nix-shell -p nox --run "nox-review wip"`
- [ ] Tested execution of all binary files (usually in `./result/bin/`)
- [ ] Fits [CONTRIBUTING.md](https://github.com/NixOS/nixpkgs/blob/master/.github/CONTRIBUTING.md).

---
<issue_comment>username_1: Fixed by https://github.com/NixOS/cabal2nix/commit/22311078120b9531c9185328fecb130824d1ddc9. Thank you very much for your efforts!
<issue_comment>username_0: Certainly a reasonable alternative. Thanks!
<issue_start><issue_comment>Title: Update change-cloud-flow-owner.md username_0: Point to full licensing guidance instead of including one specific part of it that might not apply. <issue_comment>username_1: @username_0 : Thanks for your contribution! The author(s) have been notified to review your proposed change. <issue_comment>username_2: @username_0 : Thanks for your contribution! The author(s) have been notified to review your proposed change. <issue_comment>username_3: @MSFTMan : Thanks for your contribution! The author(s) have been notified to review your proposed change.
<issue_start><issue_comment>Title: [New Feature]: Add `-SkipPRCheck` switch username_0: ### Description of the new feature/enhancement

For packages like KDE.Kate, the version stays the same and only the InstallerUrl is changed (along with the InstallerSha256). If a user has set `ContinueWithExistingPRs: never`, the user is required to change the setting to `always/ask` and then revert it back again. Therefore, we should add a `-SkipPRCheck` switch so that the script continues, ignoring the setting.

![image](https://user-images.githubusercontent.com/83997633/140276253-305aecfc-c05c-42c4-8bcb-97ee576bc78d.png)

### Proposed technical implementation details (optional)

if `-SkipPRCheck` is passed, then
&nbsp;&nbsp;skip the PR check and ignore the setting
else
&nbsp;&nbsp;do nothing, continue normally<issue_closed>
<issue_start><issue_comment>Title: Open for collab - to convert DM_Control to full c++ for speed username_0: I enjoy this library and have used it in research. I'm currently writing my graduate paper, where I'll be showing my benchmark for the RGB tracking problem.

Looking through the codebase, I see a lot of room for optimization if a high-performance language like C++ were used instead; it can directly interface with MuJoCo since that's in C. I am looking to collaborate with other C++ developers to create a C++ version and maybe write a Python wrapper over that instead.

I get really low FPS during training, and it's not feasible to train on just a few cores. I have a private repo where I implemented PPO, A2C and IMPALA with full multi-threading, MPI and template support, using the libtorch API. I am open to sharing that with any developer who is willing to collaborate too.
<issue_start><issue_comment>Title: Unify spelling of `TypeScript` username_0: ### What are the changes and their implications?

Unify the spelling of [TypeScript](https://www.typescriptlang.org/).

### Checklist

- [ ] Tests added for changes
- [ ] PR submitted to [blitzjs.com](https://github.com/blitz-js/blitzjs.com) for any user facing changes

<!-- IMPORTANT: Make sure to check the "Allow edits from maintainers" box below this window -->
<issue_comment>username_1: Thank you @username_0. @all-contributors add @username_0 for code
<issue_comment>username_2: @koolii @username_1 Looks like it hadn't been approved, so I approved it! @username_0 Thanks, good change! :)
<issue_start><issue_comment>Title: Lib NOT WORKING at all ! username_0: Hello!

I use your lib inside a Docker image: https://github.com/username_0/docker-satis

Actually, a JSON schema validation was added to composer/satis (dev-master) a few days ago: [https://github.com/composer/satis/blob/9e8ef8e906f204d01ce468dcc123306919488ddc/src/Composer/Satis/Command/BuildCommand.php#L282](https://github.com/composer/satis/blob/9e8ef8e906f204d01ce468dcc123306919488ddc/src/Composer/Satis/Command/BuildCommand.php#L282) AND [https://github.com/composer/satis/blob/master/res/satis-schema.json](https://github.com/composer/satis/blob/master/res/satis-schema.json)

The require entry in the JSON file should be an object, not an array!

```shell
{
    ...
    require: {}
    ...
}
```

```shell
[Composer\Json\JsonValidationException]
The json config file does not match the expected JSON schema
```

Please fix it, or change the dependency to the previous version. Thx! :smile:
<issue_comment>username_1: It took me a few minutes to realize what we were talking about. After a quick check, I think it comes from the serialization process here: https://github.com/username_1/satisfy/blob/master/src/Playbloom/Satisfy/Model/Configuration.php#L40

If @username_2 or you @username_0 have a quick fix, that would help, as I won't be able to update this repo for 2-3 weeks.<issue_closed>
<issue_comment>username_2: @username_0 Can you remove the line `require: {}` and test with the latest dev version?
<issue_comment>username_0: Hi @username_2! It's **working** with the latest dev version! Thx! :+1: :smile:
<issue_comment>username_2: You're welcome ;) I will create the next minor version
<issue_comment>username_1: :clap: :clap: Thanks guys, amazing.
<issue_start><issue_comment>Title: feat(reference-guide): add Deis v1 Migration Guide username_0:
<issue_comment>username_1: If this waits until 2.4, we can update a lot of these to be Deployments and get rid of much of the "scale down / up again" steps, because you already use `patch`.
<issue_comment>username_2: Overall, looks pretty good. Left some line notes. This isn't too far off from an LGTM. :)
<issue_comment>username_3: Are there any plans to include instructions on how to actually migrate applications and configuration? Preferably a script which reads the config from an existing V1 cluster and writes it back into the V2 cluster.
<issue_comment>username_0: There's no data migration plan that I'm aware of, since there are API-breaking changes between the two. You could use `deis config:pull` and `deis config:push` to pull configuration from your old apps, however.
<issue_comment>username_3: @username_0 That still leaves you with domains and certificates. Also, trying to remember to do this for every app is a bit of a pain (although the git push step should still be done manually, I guess).

In theory one can hook up a script similar to https://github.com/deis/deis/blob/master/contrib/util/reset-ps-all-apps.sh which would use plain curl requests for the V1 interaction and could use the app for the V2 interaction. Once I consider a move to V2, I might write such a script myself. I am not sure how much effort it would be to write such a script, or if it's actually worth the effort (as this is something one should only do once).
<issue_comment>username_0: Simple script, assuming `deis1` is your v1 client and `deis2` is for v2:

```
for app in `deis1 apps | grep -v "==="`; do
  rm -f .env
  deis1 config:pull -a $app
  deis2 config:push -a $app
done
```
<issue_comment>username_0: This is starting to get off-topic from the original, so I'd suggest opening another ticket for a data migration strategy, but the configuration/domains/certs are the easy part to migrate. The application builds, health checks, etcd configuration and the processes themselves are the harder parts to migrate over, since those are either all re-written or thrown away (referring to etcd).
<issue_comment>username_2: Hit-and-run comment on any hypothetical effort toward a comprehensive migration guide or script... Even if the CLIs seem strikingly similar, there are so many important differences between V1 and Workflow that, although easily overlooked, would make a migration process non-trivial. Consider, for instance, how radically different domain and certificate management are now. Also consider that more than just apps need to be moved-- users do too-- and so do permissions / app ownership / collaborator info. Portions of that cannot be accomplished by scripting the old and new CLIs and would have to be tackled by a process with insight into the databases.
<issue_comment>username_0: whoops, forgot to add links to kubectl, fleetctl, deisctl and helm.
<issue_comment>username_5: Very helpful docs--thanks for writing this (and being patient)!