nlampi/SwiftGridView
209094595
Title: Bug with wrong height used for grouping

Question: username_0:

```swift
case SwiftGridElementKindGroupedHeader:
    let grouping = self.groupedColumns[indexPath.item]

    for column in grouping[0] ... grouping[1] {
        colWidth += self.delegate!.dataGridView(self, widthOfColumnAtIndex: column)
    }

    if let delegateHeight = self.delegate?.dataGridView?(self, heightOfHeaderInSection: indexPath.section) {
        if delegateHeight > 0 {
            rowHeight = delegateHeight
        }
    }
    break
```

This code can cause problems: it uses `heightOfHeaderInSection:`, but the grouping applies to the grid header, not to the section header.

Answers: username_1: Another good catch. Updated to work properly using the grid header height. Please test using 0.3.3.

Status: Issue closed
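The corrected sizing logic (per username_1's fix in 0.3.3) can be sketched language-agnostically. Below is a hedged Python sketch with hypothetical names (`grouping`, `column_widths`, `grid_header_height`); it illustrates the idea of sourcing the height from the grid header, not SwiftGridView's actual API:

```python
def grouped_header_size(grouping, column_widths, grid_header_height, default_height):
    """Compute the size of a grouped header cell.

    Width spans all grouped columns; height must come from the *grid*
    header, not from a section header (the bug reported above).
    """
    # grouping = (first_column, last_column), inclusive on both ends
    col_width = sum(column_widths[c] for c in range(grouping[0], grouping[1] + 1))
    row_height = grid_header_height if grid_header_height > 0 else default_height
    return col_width, row_height

print(grouped_header_size((0, 2), [40, 55, 60, 30], 24, 32))  # (155, 24)
```

The point of the fix is only which delegate value feeds `row_height`; the width loop over the grouped columns is unchanged.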
dapr/docs
816279463
Title: Security - minor wording error

Question: username_0: In https://docs.dapr.io/concepts/security-concept/#mtls-in-kubernetes, "as" should be changed to "and".

Current text: The diagram below shows how the Sentry system service issues certificates for applications based on the root/issuer certificate that is provided by an operator or generated by the Sentry service as stored as a Kubernetes secret

Proposed change: The diagram below shows how the Sentry system service issues certificates for applications based on the root/issuer certificate that is provided by an operator or generated by the Sentry service **and** stored as a Kubernetes secret

Status: Issue closed
lotabout/skim
288715779
Title: diff-so-fancy in preview is not rendered properly

Question: username_0: I have some commands that preview git diffs through [diff-so-fancy](https://github.com/so-fancy/diff-so-fancy), but they are not rendered properly. Example command:

```
fshow () {
  git log --color=always --format="%C(auto)%h%d %s %C(black)%C(bold)%cr %C(auto)%an" "$@" |
    sk --ansi --reverse --tiebreak=index --no-multi \
       --preview "$_viewGitLogLine" \
       --header "enter to view, alt-y to copy hash" \
       --bind "enter:execute:$_viewGitLogLine | less -R" \
       --bind "alt-y:execute:$_gitLogLineToHash | xclip"
}

_viewGitLogLine="echo {} | grep -o '[a-f0-9]\{7\}' | head -1 | xargs -I % sh -c 'git show --color=always %' | diff-so-fancy"
_gitLogLineToHash="echo {} | grep -o '[a-f0-9]\{7\}' | head -1"
```

Output with fzf vs with sk: https://imgur.com/a/CL1XO

Answers: username_1: @username_0 I checked the image you attached, and I think `skim` renders properly while `fzf` does not. If possible, please attach the raw output (image) by `diff-so-fancy`.

username_0: @username_1 I don't understand. The lines above and below the modified file are interrupted by skim. fzf renders them like the [screenshot](https://user-images.githubusercontent.com/3429760/32387617-44c873da-c082-11e7-829c-6160b853adcb.png) on the project page. In the images I originally linked, the first one is fzf, the second skim.

username_1: @username_0 Oh, now I see the problem. I thought it was the highlighted content. I'll have a check.

username_1: The root cause of this issue is the width calculation of the character `─`. Some characters are ambiguous in the number of columns they occupy, according to [Unicode® Standard Annex #11](http://www.unicode.org/reports/tr11/). skim uses the crate [unicode-width](https://unicode-rs.github.io/unicode-width/unicode_width/trait.UnicodeWidthChar.html) to calculate the columns, and previously it used `width_cjk()`, which treats ambiguous characters as 2 columns, thus causing the "separation". Now I have decided to use `width()`, which treats them as 1 column. I think it makes sense, since the normal use case will be a non-CJK context.

Status: Issue closed

username_0: Great, thanks!
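The ambiguous-width behaviour username_1 describes can be reproduced with Python's stdlib `unicodedata`. This is a sketch of the `width()` vs `width_cjk()` distinction from the Rust unicode-width crate, not skim's actual code:

```python
import unicodedata

def char_width(ch, cjk_context=False):
    """Approximate terminal column width of a character.

    East Asian Wide/Fullwidth characters take 2 columns; Ambiguous
    characters take 2 columns only in a CJK context, otherwise 1
    (mirroring width_cjk() vs width() in the unicode-width crate).
    """
    eaw = unicodedata.east_asian_width(ch)
    if eaw in ("W", "F"):
        return 2
    if eaw == "A":
        return 2 if cjk_context else 1
    return 1

print(unicodedata.east_asian_width("─"))  # "A": the box-drawing char is Ambiguous
print(char_width("─"))                    # 1, like width()
print(char_width("─", cjk_context=True))  # 2, like width_cjk() -- the "separation" bug
```

Since diff-so-fancy draws its separators with `─` (U+2500), treating ambiguous characters as 2 columns overestimates line widths and breaks the ruled lines, which matches the symptom in the screenshots.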
Nesqwik/Dungeons
105793368
Title: Smart input recognition

Question: username_0: Allow the user to enter a decision slightly different from the one expected.

Example: "go top" for "go north"

Also accept a few spelling or wording variations.

Example: "read the book" or "read book" gives the same result
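A minimal sketch of such forgiving input handling, using Python's stdlib `difflib`. The command list, synonym table, and stop words are hypothetical, not from the game's code:

```python
import difflib

# Hypothetical command set and synonym table for a text-adventure parser.
COMMANDS = ["go north", "go south", "read book", "open door"]
SYNONYMS = {"top": "north", "bottom": "south"}
STOP_WORDS = {"the", "a", "an"}

def interpret(user_input):
    """Map a slightly-off user command onto a known command, or None."""
    words = [SYNONYMS.get(w, w) for w in user_input.lower().split()
             if w not in STOP_WORDS]
    candidate = " ".join(words)
    # cutoff=0.6 tolerates small spelling mistakes
    matches = difflib.get_close_matches(candidate, COMMANDS, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(interpret("go top"))         # "go north"
print(interpret("read the book"))  # "read book"
print(interpret("reed book"))      # "read book", despite the typo
```

Synonym substitution covers the "go top" case, stop-word stripping covers "read the book", and the fuzzy match covers spelling deviations.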
swe-ms-boun/2018fall-swe574-g2
394885602
Title: Travis failed test problem should be solved

Question: username_0: The problem:

```
Double requirement given: Flask-RESTful==0.3.6 (from -r requirements.txt (line 15)) (already in flask_restful (from -r requirements.txt (line 13)), name='Flask-RESTful')
```

Answers: username_0: This PR should solve the problem: https://github.com/swe-ms-boun/2018fall-swe574-g2/pull/134

username_0: Passed the test.

Status: Issue closed
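The collision arises because pip normalizes distribution names (PEP 503), so `flask_restful` and `Flask-RESTful` refer to the same package. A small sketch (a hypothetical helper, not pip's internals) that spots such duplicates in a requirements file:

```python
import re

def normalize(name):
    """PEP 503 name normalization: runs of -, _, . collapse to a single
    '-' and the name is lowercased, so Flask-RESTful == flask_restful."""
    return re.sub(r"[-_.]+", "-", name).lower()

def find_duplicates(requirement_lines):
    """Return (first, second) name pairs listed more than once."""
    seen, dups = {}, []
    for line in requirement_lines:
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        m = re.match(r"[A-Za-z0-9][A-Za-z0-9._-]*", line)  # name before any specifier
        if not m:
            continue
        name = m.group(0)
        key = normalize(name)
        if key in seen:
            dups.append((seen[key], name))
        else:
            seen[key] = name
    return dups

reqs = ["flask_restful", "Flask==1.0", "Flask-RESTful==0.3.6"]
print(find_duplicates(reqs))  # [('flask_restful', 'Flask-RESTful')]
```

Running a check like this in CI before `pip install -r requirements.txt` would have caught the duplicate the PR removed.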
catalyst/moodle-tool_etl
414890607
Title: Add a configurable data mapping processor

Question: username_0: This processor will take the following config:

* [ ] a source column
* [ ] a target column
* [ ] an 'ignore case' boolean
* [ ] an option to either 'stop at first key match' or 'apply all search and replace in order'
* [ ] a textarea of key/value pairs separated by a comma

This plugin will:

* [ ] auto-create the column if it doesn't exist
* [ ] process and replace the key values in the order they are specified
* [ ] parse the pairs as CSV, so the key and/or value can contain an escaped `,` character via `"..."` quoting
* [ ] if the key starts and ends with a `/`, interpret it as a regex; the value can then contain regex `$1` `$2` vars
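The parsing rules in the checklist above can be sketched as follows. This is a hypothetical Python illustration of the proposed config semantics (the actual plugin would be PHP, and Python's `re` uses `\1`-style backreferences rather than the `$1` syntax mentioned):

```python
import csv
import io
import re

def build_rules(textarea, ignore_case=False):
    """Parse 'key,value' lines as CSV so quoted keys/values may contain
    commas. A key written as /pattern/ is treated as a regex."""
    rules = []
    flags = re.IGNORECASE if ignore_case else 0
    for key, value in csv.reader(io.StringIO(textarea)):
        if len(key) > 2 and key.startswith("/") and key.endswith("/"):
            rules.append((re.compile(key[1:-1], flags), value))
        else:
            rules.append((re.compile(re.escape(key), flags), value))
    return rules

def apply_mapping(cell, rules, stop_at_first=False):
    """Apply the replace rules in order; optionally stop at the first match."""
    for pattern, value in rules:
        cell, n = pattern.subn(value, cell)
        if n and stop_at_first:
            break
    return cell

rules = build_rules('cat,feline\n"a,b",pair\n/gr(ey|ay)/,colour')
print(apply_mapping("cat grey a,b", rules))  # "feline colour pair"
```

The `csv` module handles the `"..."` escaping requirement for free, and compiling plain keys through `re.escape` lets a single code path apply both literal and regex rules in the configured order.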
gravitystorm/openstreetmap-carto
128910963
Title: Render disused:railway=rail like railway=disused

Question: username_0: Hello there. Hope you're not growing tired of my suggestions :wink:

Shouldn't we render `disused:railway=rail` like `railway=disused`? This new scheme seems to be expanding, but, as the default rendering does not render it, it doesn't support wide adoption by naive contributors, who may think "It does not display if I tag it that way, only if tagged the old way, so let's go with the old way". Besides, not rendering it means that disused railways converted to the new scheme are no longer rendered, which seems a regression to me: this feature is often an important landmark when present. Regards.

Answers: username_1: I think you may actually be seriously **misinterpreting** the whole purpose of a **"disused:x=y"** versus an ***"x=disused"*** tag. As I understood from e.g. the *"Lifecycle prefix"* and *"Comparison of life cycle concepts"* wiki pages:

http://wiki.openstreetmap.org/wiki/Lifecycle_prefix
http://wiki.openstreetmap.org/wiki/Comparison_of_life_cycle_concepts

the whole purpose of using **"disused:x=y"** rather than ***"x=disused"*** is exactly the result you are now getting: ***no*** rendering of features that users have explicitly set to **"disused:x=y"**, so as to avoid showing objects / functions that may have previously existed but are no longer there. E.g., to better understand this, see the reference to the **disused:amenity=pub** example, where it would be ludicrous to show a former pub as a pub on the map if it has been converted to e.g. a normal living house or shop.

Based on this, I think the current rendering ***shouldn't*** be changed, but rather kept. This also means that the **disused:railway=rail** tag is *not* a replacement for the **railway=disused** tag, but rather that these tags should live next to each other and be used appropriately, based on the local mappers' desire to show or hide certain disused railways for applications.
username_2: No, the *=disused tagging is a bad scheme and should not be used. A data consumer shouldn't need to filter out disused objects by checking the disused tag. I think the suggestion by @username_0 makes sense.

username_1: It only makes sense if you would ***explicitly desire*** to render previous / former functions of objects in openstreetmap-carto / any application, not if you think it is a replacement for **railway=disused**. E.g. do you suggest rendering stuff like **disused:amenity=pub** and possibly thousands of other **disused:x=y** objects as well in carto? Of course, this may be a custom rendering decision made by the openstreetmap-carto team for a specific case. There is nothing wrong with that (any rendering of OpenStreetMap data is to a large extent interpretation), but don't make the decision light-heartedly.

username_3: The [disused](http://wiki.openstreetmap.org/wiki/Key:disused:) features still exist on the ground and are valuable to show on a map for navigational purposes.

username_1: I never suggested to **not** render **railway=disused**, which is the current and most used tag for this. In fact, I suggest quite the opposite. I only suggest not rendering **disused:railway=rail**, as rendering it is a wrong interpretation of OpenStreetMap lifecycle tagging IMO. If you read carefully, the fact that it "is valuable to show on a map" is exactly why I compared the **railway=disused** tag with **amenity=pub**. Contrary to what the **disused:x=y** tag (e.g. disused:railway=rail or disused:amenity=pub) was designed for, people ***actually want to see the*** **railway=disused** ***tag rendered***. In contrast, **disused:x=y**, as per the http://wiki.openstreetmap.org/wiki/Comparison_of_life_cycle_concepts page, was actually designed to register a *previous* function of an object, which should not normally be rendered by default (but could be in a specialized renderer targeted at disused objects, which I don't think openstreetmap-carto is...).
In addition, since **railway=disused** uses the main key **railway=x** (which is also comparable with the amenity=pub example), and not the much more problematic and deprecated **disused=yes** tag, which uses a secondary key to signify the disused status and thus causes problems with rendering of only a main key, it is also less of problem to decide whether or not to include it in rendering. You can simply include or exclude the **railway=disused** class from rendering, contrary to having to examine a secondary key as with the disused=yes deprecating tagging. username_3: @username_1 ah, thanks for clarifying. username_4: For start: this is tagging for renderer approach. disused:x=* makes sense to use if disused object is vastly different from one that is not disused. For example shop=florist is POI where one may buy flowers, it is open at some times etc. disused:shop=florist has none of this functions and it is (if at all) used for completely different purposes. With railways situation is different also for additional reason - here railway=disused scheme is widely used ([73 176 times](http://taginfo.openstreetmap.org/tags/railway=disused)) unlike [5594 usages worldwide for disused:railway](http://taginfo.openstreetmap.org/keys/disused%3Arailway). In that situation I propose to start from discussion on tagging mailing list and/or with railway mappers whatever using railway:disused instead/in addition to railway=disused is a desirable idea. In that situation starting from rendering is a poor idea. On the other hand I see that OpenRailwayMap [is already rendering this tag] (https://github.com/rurseekatze/OpenRailwayMap/search?utf8=%E2%9C%93&q=disused%3Arailway&type=Code) (I expected opposite), so it may be accepted more than I expected. username_5: sent from a phone I understand the shop tag as tagging a function: it is not the physical structure (space) tagged like this, but it's the business, someone selling flowers. 
The tag disused:shop=florist is something I'd maybe use in case the florist has moved/closed but there are still traces (e.g. signage). If there's a different business now in that shop, it feels like unnecessary clutter to add references to the history (i.e. what was there before).

username_6: Yes, the problem here is that, contrary to a disused:shop (which now sells something else or nothing), for disused:railway=* the rails do still exist on the ground and can be rendered. Also, the wiki for key:railway itself proposes disused:railway as an alternative tagging.

username_1: Which is -

1) A possible reason to re-tag them to their former **railway=disused** _if you desire to see them rendered_, using the current, IMO correct, render rules of Carto. _Or(!)_
2) To **_double_** tag them with both **railway=disused** and **disused:railway=x**, if you seek to get them rendered and at the same time also register the previous _type of the railway_.

But the rendering should only be based on the **railway=disused** tag as - as I tried to explain before - rendering based on **disused:railway=x** is a misinterpretation of OpenStreetMap lifecycle tagging. Using **disused:railway=x** _alone_ as the basis for rendering is really not recommended (unless in some specialized renderer targeted at disused objects). **disused:railway=x** is not a replacement for **railway=disused**; these two tags should co-exist.

username_4: That is tagging for the renderer.

username_1: Carto IMO **justly** does not render **disused:railway=x**, and justly renders only **railway=disused**. Both tags should co-exist, and it is up to the local OpenStreetMap community to decide if they want to use one or the other tagging scheme (or double tag with both), and thus either decide to show disused railways on the map (which by far the majority of railway mapping enthusiasts seem to want), or to hide the railways but just have the features in the OSM database for specialized applications.
username_4: once somebody adds a correct railway=disused tag? Remove a correct tag because they want to change the rendering? That would be pure tagging for the renderer ("Don't deliberately enter data incorrectly for the renderer").

username_1: However, I **_do_** think entering **disused:railway=x** in the database, **_and then asking for it to be rendered by Carto_**, is incorrect, because it violates the OpenStreetMap lifecycle tagging proposal.

username_5: sent from a phone. What are you referring to? Surely not this concept: http://wiki.openstreetmap.org/wiki/Lifecycle_prefix

username_1: Yes. Unless you can point me to another page describing **disused:railway=x** and, more generally, **disused:x=y** lifecycle tagging. Yes, the page may be thin in some aspects, but it is better than nothing and does make sense (well, to me at least...). Of course, there is the other page as well: http://wiki.openstreetmap.org/wiki/Comparison_of_life_cycle_concepts

username_5: sent from a phone. Then I might have misunderstood you: how is that page in contradiction to tagging disused:railway=rail?

username_5: sent from a phone. I disagree: the purpose is not to hide those features generally, but to avoid confusion by showing them like an in-use feature when you don't check for (a theoretically infinite list of) additional modifiers that change the meaning of other tags. It is a safety measure to show these different features only if you want to, and not by accident.

username_1: It's not. I have tried to explain this: I am _not_ against tagging either **railway=disused** or **disused:railway=rail**. Both have their function; they should co-exist. In my opinion, though, only **railway=disused** should be rendered by a general map like Carto, and **disused:railway=rail** used either as a secondary tag to document and specify the type of the disused railway in the OSM database, or used in specialized renderers / styles that target **disused:x=y** tagging.
username_1: That is exactly what I wanted to say, but wrote in different words.

username_0: @username_1: I understand your arguments, at least I think so, but I still don't understand why rendering `disused:railway=*` is a problem. I mean, I agree that the `disused:` —and related life cycle— tagging scheme is supposed to ease the masking of unnecessary, not-yet or no-longer used amenities and features, but I don't understand why `railway=disused` should be added to a `disused:railway=*`: IMO, these tags mean the same thing, but the second gives more freedom to tag the railway characteristics, for example if it was a narrow gauge railway, while the first essentially only tells that the railway is disused. Used this way, the life cycle prefix tagging scheme makes sense, but I still do not understand why, in this specific case of railway life cycle, your approach is not tagging for the renderer, because I still don't understand why we should use two tags which essentially say the same thing: the railway is disused. On this point, I think @username_5 is right, but, again, I may have missed your point despite your numerous attempts to make it clear.

username_1: @username_0 Yes, unfortunately it seems difficult. My main objections or points regarding rendering based on **disused:railway=x** are the following:

- If Carto starts rendering disused:x=y tags, where do you stop? There are potentially thousands of ordinarily used tags that could just as well be rendered from a **disused:x=y** tag. None of these represent current functions. Bottom line: what makes **disused:railway=x** so special that it would deserve this special treatment over something like **disused:amenity=pub** or any of the other thousands of disused:x=y tags? **Note:** for the sake of the argument I am trying to take a completely neutral position / stance here, which isn't easy, as I love trains too... but hey, there may be big historic pub lovers too!
- **railway=disused** is currently rendered in Carto and is generally used in the sense of **amenity=pub**. That is, **_people actually expect to see it rendered_**. This is contrary to what the **disused:x=y** lifecycle tagging scheme describes. The **disused:x=y** tagging scheme was proposed as a way to describe former functions of objects, while avoiding the pitfall of requiring each and every current renderer to filter out objects with functions that may no longer exist. It would be a real problem if the map showed thousands of former shops...
- As per the above, I really consider rendering based on **disused:x=y** a misinterpretation of the OpenStreetMap lifecycle tagging _for any general-purpose style_...
- There is nothing wrong with specialized styles designed to show former functions using **disused:x=y** (like OpenRailwayMap seems to be doing, according to some posts here)!
- I don't say: tag one and not the other. I say: tag **railway=disused** if your local railway enthusiasts community wants to see the disused railway tracks rendered in the Standard Map / Carto. If not, use **disused:railway=x**, _or_ use it to add extra detail / information about the type to a **railway=disused** (just like we use secondary keys in many cases in OSM to add extra detail or refine keys, e.g. **natural=water**, **water=x**).
- In essence (in my opinion, and probably my most controversial statement here), based on the current and past tagging practices of railway mappers, **railway=disused** + **disused:railway=x** is very much equivalent to **natural=water** + **water=x**.
- I also think **disused:railway=x** is **not** a replacement for **railway=disused**, as most people seem to think, just like **water=x** is not a replacement for **natural=water**. These tags should co-exist. Thinking disused:x=y is a replacement is the basis of the problem in understanding my argument.
- I propose no change to the current rendering, which is correct in my opinion.
Lastly: I don't say there couldn't be an exception for the railway, but personally, I don't see that the advantage outweighs the disadvantage of the clash with OpenStreetMap lifecycle tagging. There is very little to gain by switching the rendering from **railway=disused** to **disused:railway=x**, while there is clearly something to lose (compliance with lifecycle tagging, and setting a precedent for many more requests for rendering of disused:x=y in Carto). I just say: think well before making this decision. (And this will be my last ramble about this subject here...)

username_0: OK, so this is where I got lost in your argumentation. Indeed, your point of view makes sense on this: `disused:railway=x` can be seen as a refinement of `railway=disused`, just as `water=*` is a refinement of `natural=water`. I don't think that is the case, though, as I never read that the life cycle prefixed tags should be added to the current tags, but I understand your reasoning.

username_5: sent from a phone. I believe it hasn't yet been pointed out that railway=disused is also not a nice tag from a semantic point of view, as the key railway alone does not say "rails" but "railway related", and besides rail the key has values like platform, level_crossing, signal, station, tram, funicular, monorail, miniature etc.: http://wiki.openstreetmap.org/wiki/Key:railway It would be much better if it were railway=disused_rail (or track). Also see the common values for the disused key: http://taginfo.osm.org/keys/disused#values E.g. disused=station is quite common (if you leave yes/no out), and while we might not care whether a disused railway is actual rails or a tram, we will surely not want to display disused stations the same as a disused track.

username_7: following up on @username_5: Exactly. As far as I understand the documentation, railway=disused is "legal" only for rail values, but not for the many other values like station, stop etc.
So for a disused railway=station I think it would be more correct to use disused:railway=station than railway=disused. Whether or how such objects should be rendered at all is a different question.

username_8: From the data perspective, *=disused is the same as disused:*=*. As pointed out, railway has its semantic problems with "disused", but this is not enough to break the rule above.

username_9: Any progress on this? Having rail that is not used for any traffic shown the same as a regular rail line is not a good thing. It might lead people to think there is a scheduled service.

username_10: Depends on #2533. This PR is one of the changes that has quite some impact for OSM. Help there is much appreciated.

username_1: I still don't agree, but I think I have given enough reasons for this in several posts here, and seem to stand alone on this... (Especially see https://github.com/gravitystorm/openstreetmap-carto/issues/2030#issuecomment-202154226; for additional arguments, please read the whole thread.)

username_9: I agree with you. I see no reason why it can't just be railway=disused. I believe this was rendered before? With the current rendering I am more inclined to just go with abandoned, to avoid it being rendered as regular rail.

username_11: Unless something has changed dramatically, railway=disused is still rendered and should be used for disused railways ("A section of railway which is no longer used but where the track and infrastructure remain in place." according to the wiki). If there are no passenger, freight or other services, then re-tag it as railway=disused.

username_9: It is rendered like regular rail, that's the problem. Maybe it always has been?

username_10: Two points:

- Please keep this issue to `disused:railway=rail`, as this is the topic.
- I'm not aware that `railway=disused` is rendered like regular railway. See e.g. https://www.openstreetmap.org/way/32531887.
username_1: So, to recap:

- If you want to tag disused railways **_and want to see them displayed on the map_**, use **railway=disused**, which is rendered on the Carto / Standard map.
- If you want to tag disused railways **_and DON'T want them to show up on the map_**, use **disused:railway=x** _ONLY_.

In case you used **railway=disused**, you CAN potentially additionally add disused:railway=x to add detail about the railway type, like water=x adds detail to natural=water; adding this tag _won't_ affect the rendering, as the current rendering is solely based on railway=disused.

In case you used **disused:railway=x** to **_hide_** the features according to OpenStreetMap lifecycle tagging, you SHOULDN'T add railway=disused, as it doesn't add extra info, and it will make the feature show up on the map.

username_12: @username_1 - you seem to be suggesting the choice between "disused:x=y" and "x=disused" is "whether the mapper wants it rendered or not". I'm not convinced by that idea, but this github issue surely isn't the best place for that discussion :) Maybe write a diary entry about it?

username_1: No, this is [OpenStreetMap lifecycle tagging](http://wiki.openstreetmap.org/wiki/Comparison_of_life_cycle_concepts). But yes, the consequence is that you need to make a choice. A choice that ultimately results in the feature being visible or not in current Carto / Standard map rendering. And I do think this is utterly relevant for this discussion and issue thread. It hits the core of the (non-existing, IMO) problem...

username_12: To be clear, https://wiki.openstreetmap.org/wiki/Comparison_of_life_cycle_concepts is "an overview of various methods used to tag the life-cycle of features" - it's not a hard-and-fast series of rules. I could give plenty of edge cases of whether something is still there or not (e.g. a canal that still has water in it but isn't used for transport - whether it's disused depends on what you think it is used for).
I don't think that this is the place for that discussion though.

username_1: I just don't get it: people want to see disused railways rendered, and then use a tagging scheme designed to hide disused objects from existing applications and rendering implementations, _while there is a tag that does render disused railways_ and that doesn't really clash with OpenStreetMap lifecycle tagging because it is used like amenity=pub: **railway=disused**. Would you also find it logical to tag your favorite pub with the tag **this:is:my:favourite:amenity=pub** and ask the Carto team to render it because **amenity=pub** also renders?

username_12: As I understand it, the tagging argument for "disused:railway=rail" rather than "railway=disused" is so that you can have e.g. "disused:railway=narrow_gauge". It's absolutely nothing to do with whether or not something renders - that's a map style decision on a feature-by-feature basis.

username_1: You still **_can_**, by refining the tagging with the additional tag. This is even recommended on the official [railway=disused](http://wiki.openstreetmap.org/wiki/Tag:railway%3Ddisused) Wiki page (and no: I have not written that page). It is also what I recommended in https://github.com/gravitystorm/openstreetmap-carto/issues/2030#issuecomment-295188287 and https://github.com/gravitystorm/openstreetmap-carto/issues/2030#issuecomment-202154226. If the whole issue just revolves around "we are not willing to tag any additional keys"... then I am flogging a dead horse.

username_9: Looks like the problem is that adding disused:railway=rail will make it render as regular rail, regardless of railway=disused. Similar with abandoned:railway=rail/light_rail/tram: add that to a way and it will render as regular rail, even if railway=abandoned is there. Another, similar issue is the newer tagging of preserved railways. Instead of railway=preserved you use railway:preserved=yes + railway=rail/light_rail/tram.
Using just railway=preserved gives no option to tell what kind of railway it is. Basically there is a need to update the rendering for the newer tagging system.

username_1: Do you have a concrete example of this? Because these features don't show up at all; they are not rendered.

**railway=abandoned + abandoned:railway=rail**:

- https://www.openstreetmap.org/way/424531418
- https://www.openstreetmap.org/way/470030570
- https://www.openstreetmap.org/way/409948227

The Overpass Turbo query to get these results: http://overpass-turbo.eu/s/osi

username_9: Ok, after checking again it appears someone thought it was smart to add railway=rail to the relation for some of these. This will mess up rendering.

username_4: I agree that data consumers should not need to check the `disused=*` tag - for example `shop=supermarket disused=yes` is horrible tagging. A disused value may also be problematic - for example `shop=vacant` is a poor idea, as "render all shops" now requires more complicated queries. But in the case of railways I think that `railway=disused` is OK. The simplest "render all railway lines" is not broken here, and to get any more detailed information it is necessary anyway to check the value of the `railway` tag.

-----

Overall, I think that supporting `disused:railway=rail` would just encourage tag fragmentation and is not desirable.

username_13: I support the rendering of life cycle prefixes for certain tags. Railways are one of them; the other would be buildings. I never read anywhere in the wiki that disused:* is for when you don't want it rendered and *=disused is for having it rendered. That would also go against one of the main principles of OSM... DON'T TAG FOR THE RENDERER! *=disused/construction/... is just the legacy form of tagging, which is also still widely adopted.
disused:* and other prefixes are an extension and simplification of the legacy life cycle concept (as far as I understand). And from a data creator perspective, I can tell you it's the much nicer concept to work with. If I have roadworks, or works on a railway in this case, I simply add the prefix, and when it's done, I delete it. All sub-information about the highway/railway type just stays the same the whole time. If I on the other hand work with railway=construction, information gets lost and I need to add another tag construction=* to add it again. Same in reverse: if one forgets to add the construction/disused tag, the previously available information gets lost (I know you can look into the history to recover it, but this can go unnoticed for quite a while).

I don't understand why disused or abandoned things tagged with the prefix shouldn't be rendered. For railways =disused might still be more common, but for example for buildings [abandoned:building=*](https://taginfo.openstreetmap.org/keys/abandoned%3Abuilding#overview) is much more common than [building=abandoned](https://taginfo.openstreetmap.org/tags/building=abandoned).

Status: Issue closed

username_14: I checked the current trends. From mid-2018 to now, there have been 11000 `railway=disused` ways added, versus 5000 disused:railway=* features (all values, including ways).
There are also over 6 times as many `railway=disused` features overall (from [Taghistory](https://taghistory.raifer.tech)):

![way-railway-disused-vs-disused-railway](https://user-images.githubusercontent.com/42757252/65373925-d9f32b80-dcbe-11e9-9081-45f2fb5767f9.png)

The length of ways added was about equal at 10,000 kilometers, but it looks like most of the disused:railway kilometers were added all at once, probably by one user or one import (from [ohsome.org](https://ohsome.org/apps/dashboard/)):

![disused-railway-length-12-months](https://user-images.githubusercontent.com/42757252/65373956-3e15ef80-dcbf-11e9-8d41-e6af35c7aa39.png)

(Note the non-zero Y axis, unfortunately not adjustable on the ohsome dashboard.)

![railway-disused-12-months-ohsome](https://user-images.githubusercontent.com/42757252/65373989-b381c000-dcbf-11e9-9048-bd99824db960.png)

Over 10,000 ways are tagged with both disused:railway and railway=disused, so only about 5000 ways have only disused:railway. See https://overpass-turbo.eu/s/MtY

I will close this issue for now, because railway=disused is still being used by more mappers. If this trend reverses, and disused:railway becomes more common than railway=disused, then we should reopen this issue.

username_5: I believe osm-carto should render alternative tags equally when there is agreement that the alternatives are equivalent, have been around for some time and are used in significant numbers (e.g. here they are used significantly).

username_14: @username_5, this would conflict with 2 of the main purposes of this style:

- It's an important feedback mechanism for mappers to validate their edits and helps to prevent unfavorable fragmentation of tag use. ...
- It's an exemplar stylesheet for rendering OSM data.

https://github.com/gravitystorm/openstreetmap-carto/blob/master/CARTOGRAPHY.md

Usually this means it is best to only render one way of tagging a certain feature.
username_9: Keep in mind that railway=disused blocks the ability to tell what kind of railway it was (light_rail, tram...). That is not a problem with disused:railway=*.

username_15: Sorry for a technical interruption, but how does one flag a comment as off-topic? (Of course you can also tag this comment as such :smiley:.)

username_16: "Hide" under the `...` menu, then select a reason.

username_9: The key is to encourage use of the lifecycle prefixes, as those will not conflict with others: https://wiki.openstreetmap.org/wiki/Lifecycle_prefix Yes, we should not tag for the renderer. But like it or not, this project/style greatly impacts how people tag.

username_1: No, quite on the contrary: we encourage good tagging practices. Starting to render **disused:x=y** tags is actually what would encourage bad tagging practices and essentially open up Pandora's box, _because where do we stop_??? While **disused:railway=x** is maybe your personal pet project, other users might request theirs to be rendered. There could be thousands of them. Would you personally like to see every **disused:amenity=fuel** on the map, sending you off the main road into some tiny hamlet with a near-empty tank, only to find out the petrol station you saw rendered on the map has long been abandoned and the edifice converted to a living space or just derelict? **_Likely not..._**

These tags and lifecycle prefixes are there to record a past state, WITHOUT affecting current renderings and rendering engines for OpenStreetMap data. By starting to render them, we would encourage a plethora of other requests for rendering of **disused:x=y** or even **abandoned/razed:x=y**.
The problem is that people simply do not recognize **railway=disused** for what it is: a clearly visible stretch of land with some former railway usage **that people see as a structural topographic element** preferentially needing to be displayed on any map having the pretenses of being a "topographic" style map. As such, I would dearly have wished that this tag had never been called **railway=disused**, but rather something like **railway=trackbed**, but alas, that is history...
username_11: While I agree with everything else you've said, this bit is incorrect - you are getting confused with railway=abandoned. 😃 That is different to railway=disused, which is for tagging a section of railway where the rails and infrastructure are still in place but are not being used.
username_1: This description still leaves the option open of tracks being present or not (and I intend it to be read as such ;-) ).
username_9: This is proof that because disused:railway=* is not showing, people will not use it, and instead opt for railway=disused because that is rendered.
username_1: Which is _exactly_ my point: people WANT disused railway to render, **hence using lifecycle tagging and disused:railway=x is not appropriate**.
username_9: And why is that not appropriate? It is exactly what the lifecycle prefix is for. It removes the need for multiple tags to describe an object.
username_1: I think we **all** will start losing our hair when people start massively requesting rendering of hundreds of **possible disused/abandoned/razed:x=y** lifecycle tags. I also do not see what the problem is for the OpenRailwayMap people: if people tag **railway=disused & disused:railway=rail**, they CAN render the appropriate type. That people are lazy enough to refuse to add the appropriate double tag is another issue altogether.
username_9: People want disused rail to render, and because the default renderer is not showing disused:railway= they have to use railway=disused. No idea why you can't see this. Having to use multiple tags to describe an object when it has a perfectly valid single tag is not a good thing IMO. It's kind of like people using the blanket access=no for a road with no through traffic for motor vehicles, and then adding a long list of tags with access for different modes. They could simply have done motor_vehicle=destination and be done with it. Why do they do this?
Because the default renderer will render access=no and access=destination differently, but not motor_vehicle=destination. This is kind of tagging for the renderer, even though they are using documented tags.
username_13: I don't understand. Could you point me to a source where it is documented that railway=disused and disused:railway= are not the exact same thing? The lifecycle prefixes are, as far as I know, exactly a replacement for the existing tags. For example, construction:highway is the same as highway=construction, and so on.
username_17: – https://wiki.openstreetmap.org/wiki/Key:disused: While disused pubs are of no importance for OpenStreetMap, disused railways, highways and buildings stand out in reality and hence are of importance to OpenStreetMap and henceforth openstreetmap-carto. Because we would never map for the renderer, the idea that `disused:` might tell the renderer to not render something is bs. This tagging scheme just provides the possibility of marking specific tags as describing something that was, without blocking the value part of the key-value pair (a.k.a. tag). TL;DR: `disused:` does not mean it shouldn't be rendered. It means it still exists, but is unused and would need repair.
username_14: I appreciate the interesting comments about how disused railways should be tagged, but this is not the best place to discuss how features should be tagged. The [tagging mailing list](https://lists.openstreetmap.org/listinfo/tagging) or [forum.openstreetmap.org](https://forum.openstreetmap.org) would be good places for discussion. If someone wishes to deprecate railway=disused and replace this tag with disused:railway=* the process is described at https://wiki.openstreetmap.org/wiki/Proposal_process
username_5: sent from a phone
the term “disused” in the OpenStreetMap context implies that the tracks are still there and in a reasonable state. Abandoned also implies the tracks are still there, but in a degenerate state.
If you speak explicitly about trackbed as the determining characteristic, it seems to imply that the tracks aren't there anymore.
username_1: That is your personal opinion. There may well be someone interested in the history of a region who **is** interested in locations of disused historic pubs. In fact, I have heard about one project that did exactly this: try to extract locations of historic (disused / abandoned / razed) pubs from OpenStreetMap data for the publication of a book about a town's history.
username_5: On Wed., Sep. 25, 2019 at 01:22, <NAME> wrote:
So logically, and from the wiki, disused railway tracks could well be railway=rail disused=yes https://wiki.openstreetmap.org/wiki/Tag:railway%3Drail Or it could be railway=disused disused=rail or disused:railway=rail (synonymous), for example if the legal status should also be accounted for (and they aren't legally active rails anymore).
username_9: This issue is not about that, but the fact that carto does not render disused:railway=* as railway=disused if the latter is missing. And username_1 seems to be incapable of seeing that the lifecycle prefix is better than adding multiple tags to describe this. Thus (s)he is not willing to accept that a simple single tag like this should be rendered.
username_18: As @username_14 already said: Please stop discussing tagging questions here. There are exactly two ideas w.r.t. disused railways we could discuss here:
* ceasing to render railway=disused
* rendering disused:railway=* instead of railway=disused.
The latter is currently out of the question given the use numbers presented. The former would of course be an option, but I would suggest opening a new issue for that if you think it is a serious consideration. As @username_14 explained, the idea of rendering both tags as synonyms would not be compatible with the goals of this style.
username_9: This is exactly why new tags are never adopted.
People running this seem to be unable to accept new tags before they have very high usage, thus causing people to use older tagging to make the object render.
username_13: I thought this style was meant especially for mappers. So displaying commonly used tags and tags that the community has agreed upon. Lifecycle tags are agreed upon and documented as an alternative to the old tagging scheme and are already adopted, even if not as widely as the old scheme, which is not a surprise if it doesn't get rendered.
username_18: I completely understand that there are people who would like to see this style take a more active role in steering mappers to map the 'right' way. But trying to initiate such a change in overall direction by lobbying for making individual changes against established and documented goals is not going to work. Discussions of ideas for overall policy changes are welcome (in a separate issue of course) as long as they focus on generic arguments and are not just pushed as an instrument to facilitate specific individual changes.
username_1: It is a misconception to think of lifecycle tags as being a "new" way to tag things; they are an additional attribute of an object instead. The ultimate consequence of such reasoning would be that we would need to drop all existing tags and to introduce a whole new lifecycle prefix for **_all_** OpenStreetMap tags, e.g. something named like "current", to denote that objects are current and in active use according to tag:
**current:amenity=pub**
**current:power=line**
...
**current:railway=rail**
etc. I hope you agree this is absolutely ludicrous.
username_9: Now you're being a silly nitpicker. Where is "current" mentioned? Double tagging is a terrible idea and is the whole reason to use only the prefix. It is clear you're not a fan of these, and that is fine.
username_13: I would have to dive deeper on who initiated the lifecycle prefixes. But they're meant to unify tagging of life cycles. But that doesn't even really matter here.
What matters is that both schemes describe the same thing and can even be used combined (because there is not a clear way of adding detail to *=disused). And the prefix is widely established: https://wiki.openstreetmap.org/wiki/Lifecycle_prefix So why would you close yourself off to so much data? Why not give the community the chance to figure out which of the schemes would be preferred? By not showing an established second way of tagging, you are taking that decision out of the hands of the people. Is it that big of a resource problem to add the lifecycle variants for tags?
username_14: Please, if anyone would like to replace railway=disused with disused:railway=*, the place to do that would be at a wider, tagging-oriented discussion like the [tagging mailing list](https://lists.openstreetmap.org/listinfo/tagging). To deprecate a tag there is a process at https://wiki.openstreetmap.org/wiki/Proposal_process - if the proposal is approved, then first it would be best to request that editors like JOSM support the newly approved tag. This has more influence than what tags are rendered in this style.
I've searched through the history of the relevant wiki pages, and from what I can tell the `disused:<key>` namespace was introduced to the [Key:disused=* wiki page in 2011](https://wiki.openstreetmap.org/w/index.php?title=Key%3Adisused%3A&type=revision&diff=642985&oldid=567060) without a proposal. I don't know if it was widely discussed. This is not the place to change common ways of tagging features, but as I said above, we will happily reopen this issue if the wider OpenStreetMap community decides to use `disused:railway=*` instead of `railway=disused`.
username_5: On Wed., Sep. 25, 2019 at 12:19, <NAME> wrote:
IMHO this is already what the style does.
The project goal to act against tag fragmentation and support only one tag for one thing, even if the competing tag is used in significant numbers, is an example of this active role that osm-carto plays in the tagging discussion.
username_15: The most surprising fact for me is that @username_18 is the most active person doing this (and trying to make it even more than it is today)... Christoph, I don't understand your position, it does not look consistent to me, what did you mean?
username_15: There is no such rule in our goals, and if you mean "helps to prevent unfavorable fragmentation of tag use", it does not imply that.
username_4: I would be happy to discuss the issues raised here - both related to the meaning of tags and the more general strategy and properties of this map style or process of influencing tagging. But all of that is either off-topic here, because it does not belong on an issue tracker, or is more general than this specific issue. How to handle competing tag schemes, and at which point a new scheme duplicating an older one should be supported, can be discussed, but I think that it would be better to not hide discussion on this issue and to base it on a wider number of cases.
username_15: Could you please open a specific ticket then, to separate the problems as much as you see it'd be useful?
username_18: As already said - if anyone wants to discuss changing the overall goals and policy of this style, this is welcome, but there is a right place for that - which is not here, in a closed issue on a specific rendering question.
downshiftorg/prophoto7-issues
327846743
Title: Use reselect in containers where appropriate Question: username_0: <a href="https://github.com/brianium"><img src="https://avatars3.githubusercontent.com/u/636651?v=4" align="left" width="96" height="96" hspace="10"></img></a> **Issue by [brianium](https://github.com/brianium)** _Friday Sep 15, 2017 at 16:39 GMT_ _Originally opened as https://github.com/downshiftorg/prophoto/issues/1607_ ---- None<issue_closed> Status: Issue closed
ornladios/ADIOS2
347107078
Title: 'adios2-config --libs' displays incorrect list of libraries Question: username_0: Result of `adios2_config --libs` is:
{ADIOS_INSTALL_DIR}/lib -ladios2 -L/usr/lib64 **-ldataman** -lbz2
The correct output should be:
{ADIOS_INSTALL_DIR}/lib -ladios2 -L/usr/lib64 -lbz2
If SST is enabled it should be:
{ADIOS_INSTALL_DIR}/lib -ladios2 -L/usr/lib64 -lbz2 -ladios2_sst -ladios2_evpath -ladios2_enet -ladios2_dill -ladios2_ffs -ladios2_dill -ladios2_atl
Answers: username_1: @eisenhauer @username_3 can you document how to link with the sst external dependencies? I still don't understand the added complexity to the developer.
username_2: Is there any update on this? I am getting this problem too.
Status: Issue closed
username_3: Fixed by #1081
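A quick way to check whether a rebuilt config script still leaks the internal library is to grep its emitted link line. This is only a sketch: the `libs` string below is hard-coded to the fixed output from the report so the check runs without an ADIOS2 install; in a real check it would come from `libs="$(adios2-config --libs)"`.

```shell
#!/usr/bin/env bash
# Stand-in for `libs="$(adios2-config --libs)"` on a fixed install;
# hard-coded here so the check can run without ADIOS2 present.
libs="{ADIOS_INSTALL_DIR}/lib -ladios2 -L/usr/lib64 -lbz2"

# The internal DataMan library should no longer leak into the link flags.
if printf '%s\n' "$libs" | grep -q -- '-ldataman'; then
  echo "BUG: -ldataman leaked into --libs output"
else
  echo "OK: no internal libraries in --libs output"
fi
```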
lkjcalc/nTxt
236643956
Title: Crash on 4.4 Question: username_0: Hi! When I run ntxt on my Ti-nspire cx cas (OS 4.4) a pop up opens saying "Ndless: Activating compatibility mode. This application hasn't been updated to work with your hardware. You may run into weird issues!" It might be related to #2 I'm running the latest os 4.4 and nTxt 2.7 freshly downloaded. Thanks!! Answers: username_1: That pop up is normal. Does it work after that or does it crash? username_1: This should probably be fixed now, I rewrote the graphics part to use the new Ndless lcd_blit API. Status: Issue closed
gitbls/pistrong
777417573
Title: Site to Site VPN on same subnet Question: username_0: Running the `makeTunnel` script says the following: "NOTE: LocalNet1 and LocalNet2 cannot be the same subnet (e.g., 192.168.1.0/24 on both networks)" I'm trying to implement a site-to-site tunnel with both ends on the same subnet 192.168.1.0/24. Would it be possible if the routers on both ends used non-colliding IP ranges? e.g.
LocalNet1 Router dhcp range 192.168.1.2-128
LocalNet2 Router dhcp range 192.168.1.129-255
Answers: username_1: I have never tried this. The key issue in the case of both ends on the same subnet is the iptables rules in the nat POSTROUTING table. These are used to route packets back to the remote end of the VPN. It _might_ be possible to change the rules created by makeTunnel to make this work, but I'm a bit skeptical. The best solution, of course, is to re-IP one of the networks so you can use standard netmasks and everything will just work. If that's not possible, my suggestion would be to use one of the IP calculators on the net to see if you can convince yourself that this is viable. If you go down this path, I'll be happy to consult with you on it here.
username_0: Thanks, due to your skepticism and suggestion, I will try to re-IP one of the networks.
Status: Issue closed
Respect/Config
19299433
Title: Constant evaluation Question: username_0: We have some problems with constant parsing, since plain constants on the application side do not get evaluated. Plus, we need to support class constants inside a namespace too. Answers: username_1: I think part of this issue was fixed in the past; now #56 completely fixes this. Status: Issue closed
silverstripe/silverstripe-graphql
758041001
Title: GraphQL 4: filter fields include custom getters Question: username_0:
```yaml
Page:
  fields:
    '*': true
    link: true
```
Should omit "link" because it is not queryable. Status: Issue closed Answers: username_0: Fixed in 4.0.0-alpha1 https://github.com/silverstripe/silverstripe-graphql/releases/tag/4.0.0-alpha1
abhi1693/yii2-system-info
242532996
Title: Add Mac OS support to test on Apple Question: username_0: I wrote code to make this work on Apple (macOS), but there is a problem with composer.json. If you want this code to work on a Mac machine, you need to:

1. Add a condition to SystemInfo.php:

```php
if (strpos($name, 'darwin') !== FALSE) {
    return __NAMESPACE__ . '\os\Mac';
}
```

2. Add Mac.php to the php folder. I extended the Mac class from Linux. A separate file needs to be added: [Mac.php.zip](https://github.com/abhi1693/yii2-system-info/files/1143495/Mac.php.zip)
wandersonwhcr/balance
116178012
Title: [Module] Trial Balance Books Question: username_0: Create the ability to register balance books. Include a selector on the journal entry form, indicating which book the entry is being added to, and make it a required field. I believe this is also necessary when registering accounts. This makes it possible to manage several trial balances within the same system. Once users exist, these users can be associated with books, thus creating a single system with multiple balance sheets. I don't know whether this should live at the root of the system. I'm still thinking about it.
BlurEngine/Blur
124887307
Title: Broken maven dependencies Question: username_0: All of them, in particular your bukkit commons lib (which is built with gradle so I'm unable to automatically deploy it to a working mvn repo). Answers: username_1: Which dependencies?
username_0: All of them, in particular your bukkit commons lib (which is built with gradle so I'm unable to automatically deploy it to a working mvn repo).
username_0: I managed to fix the other deps on my fork.
username_0: Whoops, I forgot to push changes xD sorry
username_1: I'll need some time to figure out the repo for my commons lib. For now, if you have Gradle installed you can install it to your maven repo by typing `./gradlew install` in the root dir of SupaCommons.
username_0: Yeah, but this scope doesn't work with the jenkins built-in mvn repo :/
username_1: Sorry, this was assuming you were willing to git pull the repo from `https://github.com/supaham/supacommons`.
username_0: output of a clean blur build: http://ci.xephi.fr/job/Blur/
username_1: Yeah, so sadly you'll have to wait for me to figure this out.
username_0: spigot api is no longer available from the selected repos
username_1: Spigot-API is something you have to build yourself, and that's not something I can control. Check out <https://www.spigotmc.org/wiki/buildtools/>.
username_0: spigotapi is 100% legal, open source and free. Only the implementation can't be distributed.
username_1: Oops, I accidentally forgot to add the spigot repo, my bad.
username_0: Lol ;)
username_0: Ok, new log: http://ci.xephi.fr/job/Blur/3/console Why 1.8.7 and not 1.8.8?
username_1: No specific reason, it's what I started with. As far as I know there's no difference between 1.8.7 and 1.8.8, but if it's an issue I can bump it up.
username_0: Ok, I was just curious ;)
username_1: Alright, commons library should now be available to you, just rebuild maven, or refresh in your IDE.
Status: Issue closed
username_0: Thanks ;)
username_0: Nope, still unavailable
username_0: @username_1
username_0: I purged the entire maven cache but nothing, it doesn't work
username_1: I was just testing you! It should really work now :P
username_0: :P
globalwordnet/english-wordnet
1146256973
Title: oewn-09912954-n "captive" (change dc:subject) Question: username_0: **Affected synsets** oewn-09912954-n Lemma: "captive" Definition: "an **animal** that is confined" Topic: noun.**person** **Proposed changes** Topic: noun.person --> noun.animal **Motivation** Wrong Topic (dc:subject)
ropensci/rfishbase
222280419
Title: most functions returning error Question: username_0:
[1] "Pomacanthus annularis" "Pomacanthus arcuatus" "Pomacanthus asfur" "Pomacanthus chrysurus"
[5] "Pomacanthus imperator" "Pomacanthus maculosus" "Pomacanthus navarchus" "Pomacanthus paru"
[9] "Pomacanthus rhomboides" "Pomacanthus semicirculatus" "Pomacanthus sexstriatus" "Pomacanthus xanthometopon"
[13] "Pomacanthus zonipectus"
Any thoughts? Thanks in advance <NAME>
Answers: username_1: thanks for the issue. problems with the server - tracking it down now
Status: Issue closed
username_1: @username_0 should be working again now.
username_0: Working like a charm. Thank you very much!
electron/electron
690327932
Title: Document failureReasons of print callback Question: username_0: ### Preflight Checklist * [x] I have read the [Contributing Guidelines](https://github.com/electron/electron/blob/master/CONTRIBUTING.md) for this project. * [x] I agree to follow the [Code of Conduct](https://github.com/electron/electron/blob/master/CODE_OF_CONDUCT.md) that this project adheres to. * [x] I have searched the issue tracker for an issue that matches the one I want to file, without success. ### Issue Details * **Electron Version:** Electron 9 The possible failure reasons in the print callback ([docs](https://www.electronjs.org/docs/api/web-contents#contentsprintoptions-callback)) have no documentation. I think having a list of failures could help. I just noticed that I was getting a failure (and was showing an error popup) when the user cancelled out of the print dialog (failure reason: "cancelled"), which I didn't expect.
GregStephen/temperature-converter
421822709
Title: Temperature Conversion Status: Issue closed Question: username_0: ## User Story
As a user, when I click the convert button, the correct temperature should display in the output.
## AC
When I click convert,
Then the correct converted temp should display.
## Development
* `determineConverter` function should call either `toCelsius` or `toFahrenheit` depending on which radio button is selected
* `toCelsius` and `toFahrenheit` should each accept 1 input - `temp`
* `toCelsius` and `toFahrenheit` should convert the temperature
* `toCelsius` and `toFahrenheit` should call the `domStringBuilder` function and pass the final number and C or F
* `domStringBuilder` should have 2 inputs - `finalTemp` and `unit`
* `domStringBuilder` should build an h2 tag that looks something like: `27 degrees F`
* `domStringBuilder` should call `printToDom` and pass the id `tempOutput` and the h2 string created
fission/fission
378621348
Title: Use fission-support to gather necessary information for troubleshooting Question: username_0: Feedback from @erwinvaneyk
Users can use `fission support` to gather and generate issue reports for GitHub issues.
```
That we have an issue template which explains how to use `fission-support` to get info. The user then can run the support tool with maybe a flag indicating that it is information for the issue (e.g. `fission support --short`). The output should then contain just some vital information, such as versions, plugins, etc, which is just printed to stdout. The user can then simply copy this output to the right place in the issue.
```
Answers: username_1: We don't know in advance how much information we need, so we might as well get a full support dump. If at some point the size of these things becomes too big, this will make sense, but until then it's not really needed.
santigarcor/laratrust
367433289
Title: Change column 'user_type' name Question: username_0: ### Steps To Reproduce:
Change the name on the migration:

```php
// Create table for associating permissions to users (Many To Many Polymorphic)
Schema::create('seg_usuario_permissao', function (Blueprint $table) {
    $table->unsignedInteger('seg_usuario_id');
    $table->unsignedInteger('seg_permissao_id');
    $table->string('NewName');

    $table->foreign('seg_usuario_id')->references('id')->on('seg_usuario')->onUpdate('cascade')->onDelete('cascade');
    $table->foreign('seg_permissao_id')->references('id')->on('seg_permissao')->onUpdate('cascade')->onDelete('cascade');

    $table->primary(['seg_usuario_id', 'seg_permissao_id', 'NewName'])->name('idx-primaria');
});
```

Answers: username_1: It comes from the polymorphic relationships; you can't change the name of that column as far as I know. Status: Issue closed
MicrosoftDocs/OfficeDocs-SharePoint
541408854
Title: You or your Question: username_0: Please check whether you have mixed up the words "you" and "your". A grammar check facility might help you. --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: a9f3baf1-62a6-348a-8f7d-25ba27d7b292 * Version Independent ID: 520aaaa6-1d41-9712-0dd6-67a8ff586f3b * Content: [How SharePoint Online and OneDrive safeguard your data in the cloud - SharePoint Online](https://docs.microsoft.com/en-us/sharepoint/safeguarding-your-data#feedback) * Content Source: [SharePoint/SharePointOnline/safeguarding-your-data.md](https://github.com/MicrosoftDocs/OfficeDocs-SharePoint/blob/live/SharePoint/SharePointOnline/safeguarding-your-data.md) * Service: **sharepoint-online** * GitHub Login: @kaarins * Microsoft Alias: **kaarins** Answers: username_1: @username_0 Thank you for submitting feedback and contributing to the docs. We are currently investigating this. username_1: @username_0 Thank you for submitting feedback. We understand that this issue has been resolved. Please feel free to re-open this issue if there is a specific area of the docs that we can improve or make better. Thank you. Status: Issue closed username_0: Thank you.
rhempel/umm_malloc
872441209
Title: Allow umm_malloc to use multiple heaps Question: username_0: In some use scenarios, it is helpful to have multiple heaps to choose from. For example, we can avoid fragmentation if we know that there are some objects that are always a certain size - we trade off a specific area for these fixed-size allocations, knowing that this area will never become fragmented such that a new allocation will not fit. We may run out of entries, but if an entry is available it will be the right size. On extremely memory-constrained systems this is impractical, so whatever we do must be backwards compatible with the existing implementation. To implement this successfully we will need the following incremental changes:
- [ ] Add a way to either auto-initialize a heap or call an error trap if any allocations are called on an uninitialized heap
- [ ] Implement the concept of one or more heap control blocks to allow multiple heaps
- [ ] Improve the umm_malloc compile-time configuration system
- [ ] Implement heap control block initialization for integration into Rust
Answers: username_0: This could be a nice way of avoiding the need to track which pool a piece of memory came from :-) There will always be a tradeoff between performance and convenience - for use cases that include frequent allocation and free of small blocks, a single heap is best. If you need super performance and the objects are always the same size, a pool strategy might be better. I'll work on the multi-pool allocation and free API this weekend if possible - I appreciate any pointers to existing API prototypes that you feel would be helpful to examine!
Status: Issue closed username_0: For now we have an implementation that supports the basic needs of the Rust community which include: - Ability to explicitly initialize the heap using supplied buffer and size - Ability to configure how the heap gets initialized - Ability to configure how an uninitialized heap is handled Unless there is a significant pull from the community to have umm_malloc support multiple heaps, I'll close this issue. If we need to support multiple heaps that will be a new issue, and should probably be implemented as a separate API in its own file.
dnephin/pre-commit-golang
872331519
Title: The go-build check does not support arguments. Question: username_0: I would like to use something like this:
```yaml
- id: go-build
  args: ["-o bin/exec"]
```
From looking at the code, it looks like we can change `run-go-build.sh` to be:
```bash
#!/usr/bin/env bash
FILES=$(go list ./... | grep -v /vendor/)
exec go build "$@" $FILES
```
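The key to the proposed fix is that `"$@"` forwards each hook argument as its own word before the package list. The sketch below demonstrates that forwarding pattern with a stub in place of `go build`; the `fake_build` function and the package paths are invented for illustration and are not part of the hook.

```shell
#!/usr/bin/env bash
# Stub standing in for `go build`, so the argument-forwarding pattern can be
# shown without a Go toolchain: it prints each argument on its own line.
fake_build() {
  printf '%s\n' "$@"
}

# Stand-in for the result of `go list ./... | grep -v /vendor/`.
FILES="./cmd/app ./pkg/util"

# Mirrors `exec go build "$@" $FILES` from the proposed run-go-build.sh:
# hook arguments first, then the (word-split) package list.
run_hook() {
  fake_build "$@" $FILES
}

out="$(run_hook -o bin/exec)"
printf '%s\n' "$out"
```

One caveat worth noting: pre-commit passes each element of `args` as one argument, so `args: ["-o bin/exec"]` would arrive as the single word `-o bin/exec`; splitting it as `args: ["-o", "bin/exec"]` is likely what `go build` expects.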
matterpoll/matterpoll
475042409
Title: Add a ProgressBar Option to polls Question: username_0: #### Summary
Hi, we use polls a lot in our organization. One problem that we have with the otherwise excellent polls is that it is always difficult to see where a poll is currently going, visually. Additionally, even when a vote is over, it is hard to tell what the outcome is when scrolling by.
#### Describe alternatives you've considered
One way to improve this would be to add bars for each option. With this, it would be much easier to tell the outcome of a poll at a glance.
![Mockup](https://imgur.com/pVIJE1j.png)
Answers: username_1: Hey @username_0, Thanks for the suggestion. While I like the idea, it's hard to implement. Matterpoll is limited by the possibilities given by Mattermost. In the future we might be able to implement this via custom posts, but this will take time. See #160 for some details.
opensourceantibiotics/murligase
856382699
Title: Mur Ligase Online Research Meeting 2pm April 13th 2021 Question: username_0: _The below is draft until this text removed._ This meeting follows on from #40. **When**: 2pm GMT April 13th 2021. **Where**: https://ucl.zoom.us/j/93981347451 **Recording**: **Actions from Last Time** - [ ] @username_4 to complete the synthesis of the AZ alcohol compound - [ ] <NAME> to secure approval from Warwick to ship Atomwise compounds, then @username_6 to ship compounds to Warwick and Diamond. - [ ] @Rebecca-Steventon to screen @LizbeK SGC fragments and then @username_6 elaborated fragments - [ ] Laura to set up crystallisation trials of MurC and MurD in Warwick, with the plan to take crystals to Diamond in mid-April - [ ] @username_3, Chris and Bart to pursue AZ compound shipment for crystallisation trials. Was this done [here](https://github.com/opensourceantibiotics/murligase/issues/40#issuecomment-810316697)? - [ ] <NAME> and @username_0 to investigate MRC scheme for funding - [ ] @username_1 to extract in vitro inhibition data from published literature and use towards the construction of a predictive model. - [ ] <NAME> to work on the crystallisation of e.g. MurC with Enamine compounds sent by Joe. **Agenda** 1) Update on **Atomwise**. 2) Update on **AZ compounds** 3) **Evaluation of existing fragments** in Diamond and Warwick, and plan for next fragment variants or screens. 4) Aims remain: i) Structures of two mur ligases bound to the same small molecule ii) Identify new starting points for inhibitors 5) Funding AOB Next meeting: Location: **Actions** Answers: username_1: @username_0 I won't be able to make the meeting today, but I don't have anything to report anyway. The docking is slow because of limited resources, teaching, and technical issues. I've been focussing on [OSM](https://github.com/OpenSourceMalaria/Series4_PredictiveModel/issues/32) so haven't looked for inhibition data in the literature. 
username_0: Hi @username_1 - no problem at all, thanks for letting us know. Just wanted to make sure you had everything you need. @username_2 - fantastic, thanks. If you could briefly mention what this means to people who are unfamiliar with this, then that'd help - including the fact that it's possible to filter by series (if we do it right). I'm assuming we should install a link, or instructions, somewhere on the wiki of this repo, or somewhere similar?
username_3: [AZ murC desMe analogs.docx](https://github.com/opensourceantibiotics/murligase/files/6304925/AZ.murC.desMe.analogs.docx)
username_0: Converting @username_3 's suggestion from the above Word file to markdown: "From a target potency viewpoint is it worth making one of the des-Me analogs like compound 3 or 4? I understand why we are trying to make the N-Me t-butyl pyrazole, but to work through the chemistry and have a compound that is potent that could be tried against a murD isozyme would be just as valuable at this point." ![<NAME> Suggestion](https://user-images.githubusercontent.com/4386101/114620048-17554a80-9ca3-11eb-88a9-ad21c590d667.png) @username_4 - what do you think? Good suggestion? We'd access potent compounds, likely to bind? Joe says that the relevant S/M (below) is 26 GBP for 1 g from Fluorochem, 60 GBP for 5 g. ![56367-24-9](https://user-images.githubusercontent.com/4386101/114620508-8df24800-9ca3-11eb-88a6-bcd188d716c0.png)
username_4: Sure, I think it would be great! I will put it on the synthetic list. The price of (S)-2-Amino-2-cyclohexylethanol for the last-step coupling is 250 mg/GBP 84 or 1 g/GBP 206 on Fluorochem. It would be even better if we have any potential donations of this reagent. Also, (R)-2-Amino-2-cyclohexylethanol at 1 g/GBP 82 could be used as well.
username_5: Compounds that were shipped to Peter from Northeastern.
@username_3 @bartrum ![UCB_Compounds](https://user-images.githubusercontent.com/65637708/115164205-4dffdc00-a07a-11eb-97b9-878271c07803.png) username_6: Meeting link has been installed in the wiki; Issue is closing. Status: Issue closed
cekit/cekit
391640938
Title: release notes for cekit releases Question: username_0: It would be good to write short release notes for cekit releases, so it's easy to see what has changed since the last release (w/o relying on constructing a "git log" command and reading the results!) Answers: username_1: You mean this https://github.com/cekit/cekit/releases ? username_0: That's exactly it, yeah. it would be nice if those notes are exposed somewhere on cekit.readthedocs.io and/or in the image sources too, but it's good enough that they exist at all. Status: Issue closed
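Until the notes are curated by hand, the "git log" construction the issue mentions is easy to script - e.g. piping `git log --oneline old..new` through a small formatter that emits markdown bullets. A rough sketch (the tag names and helper names are invented for illustration, not part of cekit):

```python
import subprocess

def release_notes(prev_tag, new_tag):
    """Turn `git log --oneline prev..new` into markdown bullet points (sketch)."""
    out = subprocess.run(
        ["git", "log", "--oneline", f"{prev_tag}..{new_tag}"],
        capture_output=True, text=True, check=True,
    ).stdout
    # each --oneline line is "shortsha subject"
    commits = [line.split(" ", 1) for line in out.splitlines() if " " in line]
    return format_notes(commits)

def format_notes(commits):
    """commits: iterable of (short_sha, subject) pairs -> markdown bullet list."""
    return "\n".join(f"- {sha} {subject}" for sha, subject in commits)
```

The output of `format_notes` can be pasted straight into a GitHub release description.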
etal/cnvkit
942763987
Title: Call function returns infinite number without log2 and abs CN. Question: username_0: Got an issue when I used the "call" function to calculate the absolute copy number with CI filter, tumor purity, and mode "clonal". I applied segmetrics with CI to generate the CNS file. The cnvkit version is 0.9.8. The commands I used are:
```
cnvkit.py segment -p 30 --drop-low-coverage -o CNS_file CNR_file
cnvkit.py segmetrics CNR_file -s CNS_file --drop-low-coverage -o Segmetrics_CNS --ci --bootstrap 100
cnvkit.py call Segmetrics_CNS --filter ci -m clonal --purity 0.6 --drop-low-coverage
```
Here is an example of the output, and I have also attached the whole chr7 file for testing:
Part of CNR file
```
chromosome start end gene depth log2 weight
chr7 55142130 55142501 EGFR 270.456 0.708584 0.948729
chr7 55143245 55143516 EGFR 224.114 0.42487 0.930992
chr7 55146501 55146797 EGFR 213.872 0.90014 0.902366
chr7 55151199 55151407 EGFR 255.567 0.896409 0.930861
chr7 55152412 55152490 EGFR 297.462 1.14185 0.886307
chr7 55152492 55152694 EGFR 356.391 0.961422 0.92668
chr7 55153921 55154196 EGFR 206.844 0.446968 0.60015
chr7 55155739 55155982 EGFR 14177.8 5.93676 0.898039
chr7 55156458 55156538 EGFR 9044.49 6.28239 0.802126
chr7 55156556 55156631 EGFR 11817.4 6.30456 0.825121
chr7 55156658 55156867 EGFR 12254 6.28782 0.92162
chr7 55157596 55157772 EGFR 12223.5 6.41487 0.844637
chr7 55160097 55160289 EGFR 13647.7 6.61096 0.940684
chr7 55160291 55160391 EGFR 14064.7 6.54615 0.907539
chr7 55161417 55161665 EGFR 10694.9 6.21642 0.612706
chr7 55163677 55163829 EGFR 6728.44 6.22588 0.830213
chr7 55165247 55165364 EGFR 11537 6.62981 0.902731
chr7 55165371 55165513 EGFR 14056.2 6.7279 0.89137
chr7 55166188 55166353 EGFR 11421.7 6.25913 0.801735
chr7 55168413 55168609 EGFR 9115.29 6.2084 0.893011
chr7 55170198 55170401 EGFR 10596.3 6.21157 0.850611
chr7 55170423 55170631 EGFR 10044.1 6.20809 0.697185
chr7 55171092 55171272 EGFR 9704.37 6.42278 0.826387
chr7 55172882 55173156 EGFR 11007.5 6.12229 0.891947
chr7 55173779 55174097 EGFR 12955.6 6.50511 0.85122
chr7 55174683 55174860 EGFR 11168.8 6.42879 0.803283
chr7 55181183 55181542 EGFR-AS1 10416.9 6.36304 0.715062
chr7 55191654 55191774 EGFR 11614 6.53411 0.854608
chr7 55191797 55191940 EGFR 13835.6 6.5693 0.862205
chr7 55192693 55192884 EGFR 10807.4 6.33958 0.886439
chr7 55198649 55198933 EGFR 8280.11 6.07231 0.865077
chr7 55200232 55200463 EGFR 16582.2 6.07382 0.838151
chr7 55201129 55201298 EGFR 8808.38 6.1591 0.877357
chr7 55201650 55201844 EGFR 13407.9 6.29408 0.905556
chr7 55202531 55202837 EGFR 13588.1 6.31954 0.785208
chr7 55205187 55205435 EGFR 13262.6 6.47704 0.898544
chr7 55205439 55205513 EGFR 13652.4 6.38079 0.802199
chr7 55205559 55205673 EGFR 14608.6 6.46327 0.823021
chr7 55205719 55205896 EGFR 20892.3 6.29991 0.904913
```
Part of CNS
```
chromosome start end gene log2 depth probes weight
chr7 55142130 55154196 EGFR 0.797524 263.491 7 6.12608
chr7 55155739 55431638 EGFR;EGFR-AS 6.31171 11736.8 45 38.5273
```
Part of segmetrics CNS
```
chromosome start end gene log2 depth probes weight ci_lo ci_hi
chr7 55142130 55154196 EGFR 0.797524 263.491 7 6.12608 0.604312 0.954002
chr7 55155739 55431638 EGFR;EGFR-AS 6.31171 11736.8 45 38.5273 6.25559 6.36341
```
Part of call CNS
```
chromosome start end gene log2 cn depth probes weight
chr7 44034105 72278898 EGFR;EGFR-AS -9223372036854775808 1748 1422.59
```
[example.cnr.txt](https://github.com/etal/cnvkit/files/6806061/example.cnr.txt)
[example.cns.txt](https://github.com/etal/cnvkit/files/6806062/example.cns.txt)
[example.segmetrics.call.cns.txt](https://github.com/etal/cnvkit/files/6806063/example.segmetrics.call.cns.txt)
[example.segmetrics.cns.txt](https://github.com/etal/cnvkit/files/6806064/example.segmetrics.cns.txt)
Answers: username_1: @username_0 , to sum up my above answer: looks like you have a **few segments with no (empty) "weight" values** in your ".cns" files => They are found also within your shared ".cnr" (as they have been "propagated") Could you please give us **details about
how you obtained this ".cnr"** ? => Some info about your dataset => CNVkit commands you ran with their parameters Because I have no idea how they ended up here... <br> Thanks again. Kind regards. Felix. username_0: Hi @username_1, Thank you for the investigation. I used "batch" method to generate cnr file. When I checked the cnr file, only antitarget regions didn't have weight values. This also happened when I checked the full cnr file, it looks like all my cnr antitarget regions didn't have weight values. Here is the command I used: cnvkit.py batch [all_tumor_bam] -n [all_normal_bam] --drop-low-coverage -p 30 -f [hg38.fasta] -t [SeqCab liftover hg38 captured target bed] --annotate [UCSC hg38 reflat bed] -g [5k access file] --output-reference [pool normal output] Best, Henan username_1: Hi @username_0, thanks for your feedback ! A few more questions if you do not bother: <br> ### About your current files 1. Regarding ".cnr" you shared, all "Antitarget" with empty "weights" have also `depth == 0` => Is it also the case for rest of your ".cnr" ? => Because it should not, you should also have some "Antitarget" with `depth != 0` (even if very low depth) 2. Also as you are running on "[all_tumor_bam]", could you tell us if you are experiencing this "empty-weights" issue on *all* your "Sample-tumor.cnr"? Or only on *several*? Or only on a *single* sample? 3. Could you check for **any empty values** among each of your "sample.antitargetcoverage.cnn" files? And also among "Antitarget" regions from your "[pool normal output].cnn" ? (like in "depth" or "log2" columns) ### Keep aside your current files and then 1. If you can, please start by **updating to CNVkit v0.9.9**, to be sure to exlude something fixed since v0.9.8 => Then maybe **re-run same `batch` command** and see if produced ".cnr" files still have these empty weight values ? => And if this does not take forever, could be ideal to **run with `batch -p 1`** (some errors are hard to catch when parallelized) 2. 
If all your ".cnn" files are sane (no empty values somewhere), it is most likely **something coming from `fix` step** => Try to simply run: `cnvkit.py fix sample.targetcoverage.cnn sample.antitargetcoverage.cnn [pool normal output].cnn` => With a sample that showed empty-weights before and check produced ".cnr" for empty-weights at "Antitargets" ? <br> Thanks again ! Best, Felix. username_0: Hi @username_1, I figure out why my antitarget regions only showed 0 in all cnr file. Somehow, my pipeline only printed reads in the target regions into the bam file during the base quality score recalibration. Thank you so much for answering my post. Best, Henan username_1: Hi @username_0, <br> Thanks for your feedback! However I am not sure I well understand your answer * Where are the "0" in all your ".cnr" files? In the "depth" column? You mean that **all** your "Antitarget" regions had `depth == 0` ? * The problem was in upstream step you used to generate your BAM ? Your BAM had absolutely 0 reads overlapping "Antitarget" regions? * Is the fact of correcting your BAM allowed you to have proper CNVkit results? <br> Thanks again for your answer. Have a nice day. Felix. username_0: Hi @username_1, Sorry for the late reply. I just finished the rerun of all my bam files. Now I have proper CNVkit results. I used GATK best practices pipeline for the bam file. I applied the sequencing target file as an interval list as all my samples are WES. So during the base quality score recalibration step, the pipeline only print reads on the interval list. This causes no coverage of the anti-target region issue. However, I don't know why some of my samples have anti-target region coverage but some of them do not. But I don't have time to diagnose further now. Thank you for helping me with this issue, you can close this post now. Best, Henan username_1: Hi @username_0, <br> Thanks for your feedback, I am glad it helped ! 
*To conclude:* This was due to an upstream step (BQSR) that caused **exclusion of any read not overlapping your "panel.bed"**. Then:
1) These "depth==0" at antitarget regions became "NaN" values in weight and log2 (this part is still a bit unclear)
2) The "log2==NaN" situation further caused [`absolute_*()` functions to return an infinite-like value during the rounding operation](https://github.com/etal/cnvkit/blob/25d91cdb962639c98d041c3b05c7de659c41dc70/cnvlib/call.py#L53), without any error
Maybe someone can try to reproduce this, running your **`batch` command above on a BAM with reads overlapping antitargets filtered out first** (something like `samtools view -L vendor.bed sample.bam > for_CNVkit_DEBUG.bam`) <br> Hope this helps. Have a nice day. Felix.
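The NaN chain described in point 2) is easy to reproduce in isolation. Below is a deliberately simplified sketch of a purity-adjusted ("clonal") rescaling - not CNVkit's actual `absolute_clonal` code - showing that a NaN log2 value flows through the arithmetic silently; it is a later cast of that NaN to a 64-bit integer that yields sentinel values like -9223372036854775808 on most platforms:

```python
import math

def absolute_clonal(log2_ratio, purity, ploidy=2.0):
    """Simplified purity-adjusted copy-number rescaling (illustrative sketch only)."""
    # observed ratio = purity * (tumor CN / ploidy) + (1 - purity)
    # solving for tumor CN given the observed log2 ratio:
    return ((2 ** log2_ratio) * ploidy - (1 - purity) * ploidy) / purity

cn = absolute_clonal(float("nan"), purity=0.6)
assert math.isnan(cn)  # NaN propagates silently through the arithmetic
```

No exception is raised anywhere along the way, which matches the "without any error" behavior observed in the issue.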
LeaVerou/awesomplete
58943302
Title: ★서울대핸플」「69OP8.COM」ᕞ 69OP ᕞ 신사핸플 ᕞ 신천핸플 ᕞ 신촌핸플 Question: username_0: ★서울대핸플」「69OP8.COM」ᕞ 69OP ᕞ 신사핸플 ᕞ 신천핸플 ᕞ 신촌핸플
razorpay/razorpay-magento
312911100
Title: collection.js:279 Uncaught TypeError: this.elems is not a function Question: username_0: After installing this module I'm getting the "collection.js:279 Uncaught TypeError: this.elems is not a function" error in "/en_US/Magento_Ui/js/lib/core/collection.js:279". I get the error when I switch from the checkout shipping method to the payment method. Check the screenshot below for reference. ![screenshot from 2018-04-10 18 26 31](https://user-images.githubusercontent.com/36266281/38558094-bf21ba92-3cec-11e8-9d63-0a9163550512.png) Please help me out with this issue. Thank you ! Answers: username_1: The issue is not due to the Razorpay files. Status: Issue closed username_0: Okay, thank you for the response. But if I disable the Razorpay module, I'm not getting that error. vendor/razorpay/magento/view/frontend/web/js/view/payment/method-renderer/razorpay-method.js - the initObservable function in this js file is giving an error. If I comment that function out, it works fine, but then I can't place an order with the Razorpay payment method. username_2: Getting the same issue reported by @username_0 when the Razorpay module is enabled:
```
collection.min.js:8 Uncaught TypeError: this.elems is not a function
    at UiClass._updateCollection (collection.min.js:8)
    at UiClass._insert (collection.min.js:8)
    at Registry._resolveRequest (registry.min.js:16)
    at Array.forEach (<anonymous>)
    at Registry._updateRequests (registry.min.js:16)
    at later (underscore.min.js:41)
```
Please acknowledge and provide a solution.
![image](https://user-images.githubusercontent.com/25875568/39999125-d6b08e80-57a5-11e8-8282-21ac5a1caeb2.png) username_3: Hi Mukhund, Try the below in "app/design/frontend/[Pakage]/[theme]/Magento_Razorpay/web/js/view/payment/method-renderer/razorpay-method.js", or you can directly edit the vendor Razorpay js as well:
```
initObservable: function() {
    var self = this._super();
    if (!self.razorpayDataFrameLoaded) {
        $.getScript("https://checkout.razorpay.com/v1/checkout.js", function() {
            self.razorpayDataFrameLoaded = true;
        });
    }
    return self;
},
```
It got fixed for me. username_2: Thanks @username_3, your suggestion worked for me as well. username_4: @username_2 what version of magento are you using? username_5: I have created a pull request for this issue. https://github.com/razorpay/razorpay-magento/pull/124 username_6: Thanks, @username_3, your suggestion worked for me too. username_7: Keeping this open till #124 is merged. username_7: Fix merged in #124 Status: Issue closed
raphaelcoutu/rhpharma
326170742
Title: [Plan] September 2018 Question: username_0:
# Bugs/Improvements
## High priority
- Build statistics per sector AND per person
  - Per sector: historical %, target %, % generated by the schedule. (Hours worked in the sector / hours available in the schedule)
  - Per person: hours worked in clinical-type sectors / hours available in the schedule
- Improve the Analyzer:
  - Combine consecutive days of conflicts (if > 3 days: severe, >= 2: moderate, > 1: low)
  - Show pharmacists with 2 different clinical sectors in the same week (e.g. SIM -> SIC)
  - Show pharmacists doing (2 + malus_weeks) consecutive weeks (too many weeks in a row)
- Create a page for assigning sector percentages (department_user) to make adjustments easier.
- Calendar:
  - Ability to hide/show (weak/strong constraints, generated shifts or not)
  - Assign several shifts at once to the same person (show an error if there is one)
- Improve the UserModal interface
  - Reduce the number of clicks needed to add a sector (right click with a context menu showing only that person's shifts, per department?)
- Lock shifts (manual or generated) and take them into account during the next schedule generation
## Eventually
- Rethink the random generation of sectors (keep the best 50-100 sequences per 4-week block), and thereby fix the 3-consecutive-weeks bug.
- Schedule: show the Console during schedule generation
- A better-defined constraint system (currently hard-coded) in the builder.
- Create a "real" builder for oncology (1 Department instead of several)
- Pregnant women: must not work more than 4 consecutive days
- When generating the schedule, create a Build model to keep/log several parameters:
  - Generation date, generation settings (% per sector, bonus/malus points for continuity per sector, sector generation order)
- Take the previous weeks into account for generation / continuity of care.
- Attributes for the number of days per week (with a start and an end)
- Attributes for pregnant women
- Assignment of the day off for people working 3-4 days per week: for a week with a single statutory holiday, it should be the holiday itself!
dart-lang/sdk
342093120
Title: Problem with StreamBuilder and Firebase Question: username_0: The StreamBuilder widget with Firebase is not rendered when it is nested inside a `children` list, although it works as a plain `child`. I have found this post which describes the same problem: https://stackoverflow.com/questions/50792605/streambuilder-widget-is-not-rendered-if-nested-in-children-widget Hope someone will find a solution! Thanks! Answers: username_1: This issue was moved to flutter/flutter#19487 Status: Issue closed
city41/node-sql-fixtures
83041138
Title: Can't insert data into three related tables Question: username_0: I have the following tables in a MySQL database:
```
A(
  id tinyint unsigned auto_increment primary key,
  title varchar(30) not null,
  unique(title)
),
B(
  id smallint unsigned auto_increment primary key,
  title varchar(30) not null,
  description varchar(500) not null,
  created datetime not null,
  a_id tinyint unsigned not null,
  unique(title),
  constraint B_A_a_id_id foreign key (a_id) references A (id) on delete cascade on update restrict
),
C(
  id smallint unsigned not null,
  b_id smallint unsigned not null,
  data varchar(1000) not null,
  primary key (id, b_id),
  constraint C_B_b_id_id foreign key (b_id) references B (id) on delete cascade on update restrict
)
```
When I try to insert the following data:
```
dataSpec = {
  A: [
    { title: 'A1' },
    { title: 'A2' },
    { title: 'A3' }
  ],
  B: [
    { title: 'B1', description: 'D1', created: new Date(), a_id: 'A:0' },
    { title: 'B2', description: 'D2', created: new Date(), a_id: 'A:0' },
    { title: 'B3', description: 'D3', created: new Date(), a_id: 'A:1' },
    { title: 'B4', description: 'D4', created: new Date(), a_id: 'A:1' },
    { title: 'B5', description: 'D5', created: new Date(), a_id: 'A:2' }
  ],
  C: [
    { id: 1, b_id: 'B:0', data: 'Test 1' },
    { id: 2, b_id: 'B:0', data: 'Test 2' },
    { id: 3, b_id: 'B:0', data: 'Test 3' },
    { id: 4, b_id: 'B:0', data: 'Test 4' },
    { id: 5, b_id: 'B:0', data: 'Test 5' }
  ]
}
```
I get the error:
```
Unhandled rejection Error: ER_NO_REFERENCED_ROW_2: Cannot add or update a child row: a foreign key constraint fails (`some_database`.`C`, CONSTRAINT `C_B_b_id_id` FOREIGN KEY (`b_id`) REFERENCES `B` (`id`) ON DELETE CASCADE)
```
When I remove the "created" column in table "B", everything works fine except that the program doesn't exit. If I just remove the data related to the "C" table from the dataSpec object, without removing the "created" column, the data is inserted as expected without an error and the program exits fine. I'm using sql-fixtures version 0.10.1. Why is this happening?
Is this an issue or expected behaviour? Answers: username_1: Could you add `debug: true` to your knex config and include its output here? It should be a bunch of statements like this: ``` { __cid: '__cid2', method: 'insert', options: undefined, bindings: [ 'a value' ], sql: 'insert into "simple_table" ("string_column") values (?)' } ``` ``` var myKnex = knex({ debug: true, // all the rest }); ``` username_0: The output after adding the "debug" property: { __cid: '__cid1', method: 'insert', options: undefined, bindings: [ 'A1' ], sql: 'insert into `A` (`title`) values (?)' } { __cid: '__cid2', method: 'select', options: undefined, bindings: [ 'A1', 1 ], sql: 'select * from `A` where `title` = ? order by `id` DESC limit ?' } { __cid: '__cid3', method: 'insert', options: undefined, bindings: [ 'A2' ], sql: 'insert into `A` (`title`) values (?)' } { __cid: '__cid1', method: 'select', options: undefined, bindings: [ 'A2', 1 ], sql: 'select * from `A` where `title` = ? order by `id` DESC limit ?' } { __cid: '__cid2', method: 'insert', options: undefined, bindings: [ 'A3' ], sql: 'insert into `A` (`title`) values (?)' } { __cid: '__cid3', method: 'select', options: undefined, bindings: [ 'A3', 1 ], sql: 'select * from `A` where `title` = ? order by `id` DESC limit ?' } { __cid: '__cid1', method: 'insert', options: undefined, bindings: [ 4, Sun May 31 2015 18:54:24 GMT+0200 (CEST), 'D1', 'B1' ], sql: 'insert into `B` (`a_id`, `created`, `description`, `title`) values (?, ?, ?, ?)' } { __cid: '__cid2', method: 'select', options: undefined, bindings: [ 'B1', 'D1', Sun May 31 2015 18:54:24 GMT+0200 (CEST), 4, 1 ], sql: 'select * from `B` where `title` = ? and `description` = ? and `created` = ? and `a_id` = ? order by `id` DESC limit ?' 
} { __cid: '__cid3', method: 'insert', options: undefined, bindings: [ 4, Sun May 31 2015 18:54:24 GMT+0200 (CEST), 'D2', 'B2' ], sql: 'insert into `B` (`a_id`, `created`, `description`, `title`) values (?, ?, ?, ?)' } { __cid: '__cid1', method: 'select', options: undefined, bindings: [ 'B2', 'D2', Sun May 31 2015 18:54:24 GMT+0200 (CEST), 4, 1 ], sql: 'select * from `B` where `title` = ? and `description` = ? and `created` = ? and `a_id` = ? order by `id` DESC limit ?' } { __cid: '__cid2', method: 'insert', options: undefined, bindings: [ 5, Sun May 31 2015 18:54:24 GMT+0200 (CEST), 'D3', 'B3' ], sql: 'insert into `B` (`a_id`, `created`, `description`, `title`) values (?, ?, ?, ?)' } { __cid: '__cid3', method: 'select', options: undefined, [Truncated] { __cid: '__cid2', method: 'select', options: undefined, bindings: [ 'B4', 'D4', Sun May 31 2015 18:54:24 GMT+0200 (CEST), 5, 1 ], sql: 'select * from `B` where `title` = ? and `description` = ? and `created` = ? and `a_id` = ? order by `id` DESC limit ?' } { __cid: '__cid3', method: 'insert', options: undefined, bindings: [ 6, Sun May 31 2015 18:54:24 GMT+0200 (CEST), 'D5', 'B5' ], sql: 'insert into `B` (`a_id`, `created`, `description`, `title`) values (?, ?, ?, ?)' } { __cid: '__cid1', method: 'select', options: undefined, bindings: [ 'B5', 'D5', Sun May 31 2015 18:54:24 GMT+0200 (CEST), 6, 1 ], sql: 'select * from `B` where `title` = ? and `description` = ? and `created` = ? and `a_id` = ? order by `id` DESC limit ?' } { __cid: '__cid2', method: 'insert', options: undefined, bindings: [ 'B:__genned_specId_3', 'Test 1', 1 ], sql: 'insert into `C` (`b_id`, `data`, `id`) values (?, ?, ?)' } username_1: Thanks. I've been able to reproduce the issue and I've created a failing test case for it. Your situation hits upon sql-fixture's achilles heal in a new way. I need to think about this and see if I can make that part of the code better once and for all. 
username_1: Removing the created column (or at least not populating it) will work around your issue, although you probably don't want to do that. I can't find any reason the program shouldn't end. The only thing I can think of is you need to call `sqlFixtures.destroy()` at some point, otherwise internally Knex will maintain a connection to your database, preventing the app from closing. With mocha, I usually do: ``` after(function(done) { sqlFixtures.destroy(done); }); ``` username_1: This is now fixed in `0.10.2`, I added [this spec](https://github.com/username_1/node-sql-fixtures/blob/master/test/integration/maria-integration-spec.js#L24) which is your situation exactly, so feeling confident about the fix. MySQL/Maria still have some situations where sql-fixtures can fail, thankfully those situations are getting more and more obscure. Not sure I'll ever get 100% perfect MySql/Maria support. I now know how to get 100% perfect support for sqlite, which I have as a todo. Status: Issue closed username_0: Ok, thanks for the effort.
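For context, the core problem sql-fixtures is solving in this thread is insertion ordering under foreign-key dependencies: the C rows need B's generated ids, which in turn need A's. The ordering part can be sketched as a topological sort over table dependencies - a toy illustration in Python (not sql-fixtures' actual JavaScript implementation, which also has to resolve the `'B:0'`-style placeholders against generated ids):

```python
def insertion_order(deps):
    """deps: {table: set of tables it references via FK} -> safe insert order.

    Kahn's algorithm: repeatedly emit tables with no unresolved dependencies.
    """
    deps = {t: set(d) for t, d in deps.items()}  # defensive copy
    order = []
    while deps:
        ready = [t for t, d in deps.items() if not d]
        if not ready:
            raise ValueError("circular foreign-key dependency")
        for t in sorted(ready):  # sorted only for deterministic output
            order.append(t)
            del deps[t]
        for d in deps.values():
            d.difference_update(ready)
    return order

print(insertion_order({"A": set(), "B": {"A"}, "C": {"B"}}))  # -> ['A', 'B', 'C']
```

With this ordering, all A rows are inserted (and their auto-increment ids known) before any B row is attempted, and so on down the chain - which is why the `ER_NO_REFERENCED_ROW_2` error above points at a resolution bug rather than a schema problem.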
straks/straks-node
282703286
Title: Upgrade path Question: username_0: Is it possible to include or document how to upgrade to the latest versions if you used the fast quick start shell? I've tried to manually remove the instance and pull the latest and rerun the script but it seems to still be running 1.14.5<issue_closed> Status: Issue closed
jupyterlab/jupyterlab
460097330
Title: Cannot Start Jupyter lab. Question: username_0: Jupyter Lab will not start. Jupyter notebook does run. When I launch the browser, the icon rotates twice then stops and then I get the message: "Loading... This loading screen is taking a long time. Would you like to clear the workspace or keep waiting." I have deleted the conda environment that Jupyter runs in and created a new Conda and Jupyter install, but the issue with launching Jupyter lab persists. The debug output is as follows: prompt>Jupyter lab --debug master ✭ [D 16:00:01.144 LabApp] Searching ['/Users/mcoburn/Oasis/oasis-modeling/notebooks/mcoburn', '/Users/mcoburn/.jupyter', '/Users/mcoburn/anaconda3/envs/lab/etc/jupyter', '/usr/local/etc/jupyter', '/etc/jupyter'] for config files [D 16:00:01.145 LabApp] Looking for jupyter_config in /etc/jupyter [D 16:00:01.145 LabApp] Looking for jupyter_config in /usr/local/etc/jupyter [D 16:00:01.145 LabApp] Looking for jupyter_config in /Users/mcoburn/anaconda3/envs/lab/etc/jupyter [D 16:00:01.145 LabApp] Looking for jupyter_config in /Users/mcoburn/.jupyter [D 16:00:01.145 LabApp] Looking for jupyter_config in /Users/mcoburn/Oasis/oasis-modeling/notebooks/mcoburn [D 16:00:01.146 LabApp] Looking for jupyter_notebook_config in /etc/jupyter [D 16:00:01.146 LabApp] Looking for jupyter_notebook_config in /usr/local/etc/jupyter [D 16:00:01.146 LabApp] Looking for jupyter_notebook_config in /Users/mcoburn/anaconda3/envs/lab/etc/jupyter [D 16:00:01.146 LabApp] Loaded config file: /Users/mcoburn/anaconda3/envs/lab/etc/jupyter/jupyter_notebook_config.json [D 16:00:01.146 LabApp] Looking for jupyter_notebook_config in /Users/mcoburn/.jupyter [D 16:00:01.146 LabApp] Loaded config file: /Users/mcoburn/.jupyter/jupyter_notebook_config.json [D 16:00:01.146 LabApp] Looking for jupyter_notebook_config in /Users/mcoburn/Oasis/oasis-modeling/notebooks/mcoburn [D 16:00:01.925 LabApp] Paths used for configuration of jupyter_notebook_config:
/etc/jupyter/jupyter_notebook_config.json [D 16:00:01.925 LabApp] Paths used for configuration of jupyter_notebook_config: /usr/local/etc/jupyter/jupyter_notebook_config.json [D 16:00:01.926 LabApp] Paths used for configuration of jupyter_notebook_config: /Users/mcoburn/anaconda3/envs/lab/etc/jupyter/jupyter_notebook_config.d/jupyterlab.json /Users/mcoburn/anaconda3/envs/lab/etc/jupyter/jupyter_notebook_config.json [D 16:00:01.926 LabApp] Paths used for configuration of jupyter_notebook_config: /Users/mcoburn/.jupyter/jupyter_notebook_config.json [I 16:00:01.929 LabApp] JupyterLab extension loaded from /Users/mcoburn/anaconda3/envs/lab/lib/python3.7/site-packages/jupyterlab [I 16:00:01.929 LabApp] JupyterLab application directory is /Users/mcoburn/anaconda3/envs/lab/share/jupyter/lab [W 16:00:01.930 LabApp] JupyterLab server extension not enabled, manually loading... [I 16:00:01.933 LabApp] JupyterLab extension loaded from /Users/mcoburn/anaconda3/envs/lab/lib/python3.7/site-packages/jupyterlab [I 16:00:01.933 LabApp] JupyterLab application directory is /Users/mcoburn/anaconda3/envs/lab/share/jupyter/lab [I 16:00:01.933 LabApp] Serving notebooks from local directory: /Users/mcoburn/Oasis/oasis-modeling/notebooks/mcoburn [I 16:00:01.933 LabApp] The Jupyter Notebook is running at: [I 16:00:01.933 LabApp] http://localhost:8888/?token=5d64bd4fe9a59094bf4165d8e8a254f26e0662befc2f31f9 [I 16:00:01.933 LabApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation). 
[C 16:00:01.939 LabApp] To access the notebook, open this file in a browser: file:///Users/mcoburn/Library/Jupyter/runtime/nbserver-1612-open.html Or copy and paste one of these URLs: http://localhost:8888/?token=5d64bd4fe9a59094bf4165d8e8a254f26e0662befc2f31f9 [D 16:00:03.247 LabApp] Accepting token-authenticated connection from ::1 [D 16:00:03.249 LabApp] Using contents: services/contents [D 16:00:03.253 LabApp] 200 GET /lab?token=5d64bd4fe9a59094bf4165d8e8a254f26e0662befc2f31f9 (::1) 7.01ms [D 16:00:03.288 LabApp] Path main.3f271f97c0e5dc62b5af.js served from /Users/mcoburn/anaconda3/envs/lab/share/jupyter/lab/static/main.3f271f97c0e5dc62b5af.js [D 16:00:03.289 LabApp] 304 GET /lab/static/main.3f271f97c0e5dc62b5af.js (::1) 2.12ms [D 16:00:03.291 LabApp] Path vendors~main.9337105e0f78c79d860a.js served from /Users/mcoburn/anaconda3/envs/lab/share/jupyter/lab/static/vendors~main.9337105e0f78c79d860a.js [D 16:00:03.292 LabApp] 304 GET /lab/static/vendors~main.9337105e0f78c79d860a.js (::1) 1.16ms [D 16:00:03.791 LabApp] Accepting token-authenticated connection from ::1 [D 16:00:03.792 LabApp] Found kernel test_star_el in /Users/mcoburn/Library/Jupyter/kernels [D 16:00:03.792 LabApp] Found kernel spacy in /Users/mcoburn/Library/Jupyter/kernels [D 16:00:03.792 LabApp] Found kernel python3 in /Users/mcoburn/anaconda3/envs/lab/share/jupyter/kernels [D 16:00:03.795 LabApp] 200 GET /api/kernelspecs?1561410003789 (::1) 4.17ms [D 16:00:03.796 LabApp] Accepting token-authenticated connection from ::1 [D 16:00:03.796 LabApp] 200 GET /api/terminals?1561410003789 (::1) 0.66ms [Truncated] [D 16:00:31.926 LabApp] Path vendors~main.9337105e0f78c79d860a.js.map served from /Users/mcoburn/anaconda3/envs/lab/share/jupyter/lab/static/vendors~main.9337105e0f78c79d860a.js.map [D 16:00:31.928 LabApp] 304 GET /lab/api/themes/@jupyterlab/theme-light-extension/index.css (::1) 0.72ms [D 16:00:31.930 LabApp] Path main.3f271f97c0e5dc62b5af.js.map served from 
/Users/mcoburn/anaconda3/envs/lab/share/jupyter/lab/static/main.3f271f97c0e5dc62b5af.js.map [D 16:00:31.931 LabApp] 200 GET /lab/static/main.3f271f97c0e5dc62b5af.js.map (::1) 1.64ms [D 16:00:32.089 LabApp] 200 GET /lab/static/vendors~main.9337105e0f78c79d860a.js.map (::1) 164.18ms [D 16:00:34.748 LabApp] Accepting token-authenticated connection from ::1 [D 16:00:34.749 LabApp] 200 GET /api/sessions?1561410034745 (::1) 1.21ms [D 16:00:34.750 LabApp] Accepting token-authenticated connection from ::1 [D 16:00:34.751 LabApp] 200 GET /api/terminals?1561410034746 (::1) 0.66ms [D 16:00:35.137 LabApp] Accepting token-authenticated connection from ::1 [D 16:00:35.146 LabApp] 200 GET /api/contents/?content=1&1561410035135 (::1) 8.42ms [D 16:00:44.749 LabApp] Accepting token-authenticated connection from ::1 [D 16:00:44.750 LabApp] 200 GET /api/sessions?1561410044745 (::1) 1.10ms [D 16:00:44.751 LabApp] Accepting token-authenticated connection from ::1 [D 16:00:44.751 LabApp] 200 GET /api/terminals?1561410044746 (::1) 0.79ms [D 16:00:45.140 LabApp] Accepting token-authenticated connection from ::1 [D 16:00:45.149 LabApp] 200 GET /api/contents/?content=1&1561410045137 (::1) 8.73ms Any assistance is appreciated. Answers: username_1: I didn't look too closely at the logs, but it might be an incompatible extension since you said core mode works. I'd recommend trying `jupyter lab clean` and `jupyter lab build` to see if you get some more informative errors. username_1: Other than that: When it load hangs in the splash-screen stage, the browser console is more likely to have something relevant (typically accessed by pressing F12 key). username_1: Could you try with 1.0.0? username_2: I have this issue as well. Nothing illuminating in the `jupyter lab` logs, nor the browser console log. 
In the network tab, however, these requests are stuck as (pending): - `/api/sessions` - `/api/kernelspecs` - `/api/contents` Other requests: - `/api/workspaces/lab` returns 204 - `/api/metrics` and `/api/terminals` run periodically, returning 200 username_3: This is an old issue that looks like it may have been a configuration issue. Can you open a new issue and follow the diagnosis and reporting guidelines at https://jupyterlab.readthedocs.io/en/stable/getting_started/issue.html to help us narrow down where the problem might be and reproduce it? username_3: Closing the original issue, as it seems the author never followed up. @username_2, please open a new issue with information from your system and some instructions for reproducing what you are seeing (see the diagnosis guidelines mentioned above). Status: Issue closed username_2: Figures that I can't reproduce the issue today. Oh well, at least it works now? 🤔 😅 username_2: Spoke too soon; I ran across this again after shuffling around some pip packages. It would _seem_ that this was due to having an old version of tornado (4.x) installed. When I went to `pip install notebook` to try to repro in classic notebook, it also installed tornado 6.x. I then ran `jupyter lab`, and the web app was able to start up and connect without a problem. ¯\\\_(ツ)_/¯
aesophor/wmderland
924749434
Title: Starting wmderland with external monitor connected results in black screen when switching to laptop monitor
Question:
username_0: Hi @aesophor,

I am experiencing a problem where, if I start wmderland with my external monitor connected to my laptop, it results in a black screen when I switch the output to my laptop display. The output will first be shown on both monitors when starting wmderland. I then use the command `xrandr --output eDP-1-1 --auto --output DP-2 --off` to switch to only my laptop monitor. If I use the command `xrandr --output eDP-1-1 --off --output DP-2 --auto` to switch to only my external monitor, I do not get a black screen on my external monitor.

The black screen does have a cursor, and when I hover the cursor over e.g. where polybar should be, the cursor changes to a hand, so it still seems to be there, just totally black.

I am not having the issues with the black screen if I change my .xinitrc file to start e.g. xterm instead. I do not have issues either if I have my external monitor disconnected when starting wmderland, then plug in my external monitor, and switch between the two monitors.

My laptop is a Legion Y540-15IRH Laptop (Lenovo) - Type 81SX (81SX0008MX). It uses the NVIDIA Optimus technology. I am only having the issue with the black screen in wmderland if I use my NVIDIA GPU only. In hybrid mode I am not experiencing the issue. In this case, however, my laptop does not output to both monitors at first, but only my laptop monitor, which might be the reason why it works in this case.
arvego/mm-randbot
313855726
Title: Rework the logging system
Question:
username_0: Completely rework the logging from prints to the logging module.
Answers:
username_1: I don't think this is strictly necessary, but it can be done while keeping the current functions as wrappers over logging. And of course, add the ability to pass arguments through to the logging calls, explicitly via `*args, **kwargs`
unipept/unipept
480713597
Title: add undocumented support for input to the API Question: username_0: The API now requires peptides to be passed with the `input[]` parameter (the only way to support multiple inputs). When a user searches for a single peptide and uses `input`, the controller crashes. Maybe we can try to parse `input` after `input[]` fails. _[Original issue](https://github.ugent.be/unipept/unipept/issues/406) by @username_0 on Fri Aug 22 2014 at 15:19._ _Closed by @username_0 on Wed Aug 27 2014 at 14:08._<issue_closed> Status: Issue closed
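The suggested fallback—try `input[]` first, then fall back to a bare `input`—is easy to pin down. The sketch below is Python illustrating only the parsing logic (the real controller is Ruby on Rails, and the function name is made up):

```python
def extract_peptides(params):
    # Prefer the documented multi-value form "input[]"; fall back to a
    # single "input" value wrapped in a list, so a single-peptide
    # request no longer crashes the controller.
    value = params.get("input[]")
    if value is None:
        value = params.get("input")
        if value is None:
            return []
        if not isinstance(value, list):
            value = [value]
    return value
```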
MicrosoftDocs/feedback
682464347
Title: NoIndex meta data tag not working
Question:
username_0: **Describe the bug**
I am part of LinkedIn's Partner Engineering team. I have added the NoIndex metadata tag as shown below, to the top of the MS Docs markdown page.

---
title: abc
description: abc
ms.date: 11/20/2019
ms.topic: abc
ms.prod: abc
author: xyz
ROBOTS: NOINDEX
---

But even after adding this "ROBOTS: NOINDEX" tag, the page is still searchable. Is this the correct way to do this?
Answers:
username_1: Couple of things to consider:
1. What search are you referring to? Google, Bing, Docs search, another site like LinkedIn search? Each is plumbed differently.
2. Search indexes are not real time. The search reindex may crawl once a month or daily, depending on the algorithm and the popularity of the page. Could it be that the old indexed value is still there but will be removed in the next crawl?

If you have a specific page and need to know more about the crawl information, there is an SEO champs team. CC: @Khairunj Thanks, Jason
username_1: @username_0 see my last message [here](https://github.com/MicrosoftDocs/feedback/issues/3043). Do you have additional context on the problem?
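For context, a `ROBOTS: NOINDEX` front-matter key only takes effect once the build pipeline turns it into a robots meta tag in the rendered HTML—crawlers never read the markdown itself. A rough sketch of that mapping (illustrative only, not MS Docs' actual build code):

```python
def robots_meta(front_matter):
    # Emit the meta tag corresponding to a ROBOTS front-matter key;
    # search engines only honor the directive once it appears in the
    # rendered page's <head>.
    value = front_matter.get("ROBOTS")
    if not value:
        return ""
    return '<meta name="robots" content="%s" />' % value
```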
ibm-openbmc/dev
613606783
Title: eBMC GUI : Field Core Override
Question:
username_0: ## SMEs
**BMC**: @ojayanth
**PHYP**: <NAME>
**RAS**: <NAME>

## Overview
This is a new navigation section. The only features it includes are the Design and FED stories.

## References/Resources
- eBMC Feature Item: 4.7
- feature discovery folder:
- user research notes*:
- user research synthesis:

* This folder is restricted in accordance with GDPR guidelines.
Answers:
username_1: refresh
ikedaosushi/tech-news
360599125
Title: ROS Tips (for beginners)
Question:
username_0: ROS Tips (for beginners)
ROSConJP 2018 took place the other day, and it inspired me to write this article. Thanks to everyone who attended ROSConJP 2018 for your hard work.
twitter : https://twitter.com/yukky_saito
https://ift.tt/2MzJIKv
DmitryTsepelev/store_model
538027682
Title: Array of enums
Question:
username_0: Hi!

We often have the situation where a model has an array of strings that needs to be validated to only include certain values, and we need some predicate methods to check if a model has a certain value inside this array:

```
class User < ApplicationRecord
  after_initialize { self.roles = Array.wrap(roles) }

  ROLES = %w[admin user reporter].freeze

  validate do
    errors.add(:roles, :invalid_roles) if (roles - ROLES).any?
  end

  ROLES.each do |role|
    define_method("#{role}?") do
      roles.include?(role)
    end
  end
end
```

And IMHO, that looks suspiciously like a StoreModel use case: an array of enums. The only difference would be that a wrong role would raise an exception instead of a validation error (but in my case this would be ok).

As far as I can tell, this isn't possible right now with `StoreModel`, am I right? Would it be a feature you would consider including? I'd be happy to try for a PR if you could give me some hints on how to implement a feature like this.
Answers:
username_1: Hi, @username_0! Right now `StoreModel` focuses on isolating JSON(B) data from the parent model. In your example an array (where is it stored?) is used inside the `ApplicationRecord` subclass. Probably a [custom Rails validator](https://stackoverflow.com/a/12744945) will help here:

```ruby
class User < ApplicationRecord
  ...
  validates :roles, array: { inclusion: { in: ROLES } }
end
```
username_2: I guess that the `enumerize` gem could deal with it. I didn't try it, though :)
username_0: @username_1 `#roles` is a JSON column on the `users` table. I thought because you have support for enum types on single attributes (in the example here https://github.com/username_1/store_model/blob/master/docs/enums.md it is the key `status`), it would be a good fit to extend this to support an array of enums.
Consider this example, maybe it's clearer this way: ``` class Configuration include StoreModel::Model enum :role, %i[admin user reporter], default: :user end class User < ApplicationRecord attribute :configuration, Configuration.to_type end ``` My user has a configuration with an enum of `role` that defaults to `:user`. What if I want multiple roles for a user? I thought an array of enums would be a good solution for this. What do you think? username_1: Aha, in this case having the built-in inclusion validator for arrays makes sense to me! username_0: @username_2 you are right, enumerize has built in support for Arrays when you serialize them or use mongodb. I still have to find out if it works with json array columns though. And because we already have the store_model in our Gemfile adding another dependency for this use case isn't something I'm very happy about username_0: @username_1 How would you implement this? If you can give me a push in the right direction, I'm happy to try for a PR. username_1: @username_0 This is what should be done to make it work: 1. Take a look at [docs](https://guides.rubyonrails.org/active_record_validations.html#performing-custom-validations) about custom validators 2. Add a new custom validator (we already have [one](https://github.com/username_1/store_model/blob/master/lib/active_model/validations/store_model_validator.rb)) called `ArrayValidator` 3. Implement the validator (there is a [good example](https://stackoverflow.com/questions/5669496/how-do-i-validate-members-of-an-array-field/12744945#12744945)) 4. Add specs 5. 
Bonus point: add docs 🙂

As a result, it will be possible to add a validation to the `StoreModel` in the following way:

```ruby
class Configuration
  include StoreModel::Model

  ROLES = %i[admin user reporter]

  enum :role, ROLES, default: :user

  validates :roles, array: { inclusion: { in: ROLES } }
end

class User < ApplicationRecord
  attribute :configuration, Configuration.to_type
end
```

Bonus point: it might be helpful to have a shortcut for the `enum` method to use this validation by default: `enum :role, ROLES, default: :user, validate: true`
username_0: @username_1 thanks for the kick-off! I'm not sure I understand the need for a validator when using enums. The current implementation raises an error when an invalid value is assigned, e.g. from your examples:

```
class Configuration
  include StoreModel::Model

  enum :status, %i[active archived], default: :active
end

config = Configuration.new
config.status = "foo"
# ArgumentError: invalid value 'foo' is assigned
# from /usr/local/bundle/gems/store_model-0.7.0/lib/store_model/types/enum_type.rb:56:in `raise_invalid_value!'
```

I would suggest keeping the same behaviour for an array of enums: when you push an invalid value into the array, it should raise an error. Do you know what I mean?
username_1: @username_0 Sorry for the long response, that's a valid point! We don't need to use a validator in the current implementation (however, there is a chance that someone would need to make this behaviour optional in the future)
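The inclusion check at the heart of the proposed `ArrayValidator` is simple to state outside Rails as well; a minimal Python analogue (hypothetical names, shown only to pin down the intended semantics):

```python
ROLES = ["admin", "user", "reporter"]

def invalid_entries(values, allowed=ROLES):
    # Return the entries that are not in the allowed set; an empty
    # result means the array passes the inclusion validation, while a
    # non-empty one would either be reported as a validation error or
    # raised, matching the enum behaviour discussed above.
    return [v for v in values if v not in allowed]
```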
mjyergin/f1-3-c2p1-colmar-academy
253394422
Title: reusable css Question: username_0: This height styling is making this class very rigid and not reusable. Without the height limitation this class could be used much more freely. https://github.com/mjyergin/f1-3-c2p1-colmar-academy/blob/master/ColmarAcademy2/resources/css/style.css#L29
dyjakan/osx-syscalls-list
1174075559
Title: MacOS 12 (x86_64) seems to use R10 instead of RCX as the fourth arg register
Question:
username_0: Stumbled upon this just a moment ago. Trying to issue the posix_spawn system call. Here's the schematic:

rdi: pointer to an int where to store the spawned process' pid.
rsi: path
rdx: pointer to a struct of settings, can be null
rcx: pointer to argv
r8: pointer to envp

And here's the code:

```
global start
start:
mov r9, [rsp] ; argc
lea rcx, [rsp + 8] ; argv: {"./syscall2\0", "/bin/test\0"}
lea r8, [rsp + 8 + r9*8 + 8] ; envp
push 0 ; pid
mov rax, 0x020000F4 ; posix_spawn syscall
mov rdi, rsp ; pointer to pid
mov rsi, [rcx+8] ; argv[1]
mov rdx, 0
syscall
mov rax, 0x02000001
syscall
```

However, trying this out doesn't work. Spying on the syscall in another terminal window, by:

`sudo dtrace -n 'syscall::posix_spawn*:entry { printf("%s %p %s %p %p %p",execname,arg0,copyinstr(arg1),arg2,arg3,arg4); }'`

And launching the assembly program with `nasm -f macho64 syscall2.asm && ld syscall2.o -static -o syscall2 && ./syscall2 /bin/test`

dtrace finds that the call looks slightly off:

`posix_spawn:entry syscall2 7ff7bfeff6a0 /bin/test 0 0 7ff7bfeff6c8`

The arg2 after /bin/test is supposed to be zero, but the arg3 is not! Clearly it's expecting arg3 in some other register! After trial and error, I noticed that this code works:

```
global start
start:
mov r9, [rsp] ; argc
lea r10, [rsp + 8] ; argv: {"./syscall2\0", "/bin/test\0"}
lea r8, [rsp + 8 + r9*8 + 8] ; envp
push 0 ; pid
mov rax, 0x020000F4 ; posix_spawn syscall
mov rdi, rsp ; pointer to pid
mov rsi, [r10+8] ; argv[1]
mov rdx, 0
syscall
mov rax, 0x02000001
syscall
```

The only difference is that rcx is changed to r10. I don't have a clue when this change took place, or whether it only happens on specific versions / hardware.
Answers:
username_0: Found an old blog post (https://filippo.io/making-system-calls-from-assembly-in-mac-os-x/) that states: "OS X (and GNU/Linux and everyone except Windows) on 64 architectures adopt the System V AMD64 ABI reference. Jump to section A.2.1 for the syscall calling convention."

The reference (https://refspecs.linuxbase.org/elf/x86_64-abi-0.99.pdf ) says: "1. User-level applications use as integer registers for passing the sequence %rdi, %rsi, %rdx, %rcx, %r8 and %r9. The kernel interface uses %rdi, %rsi, %rdx, %r10, %r8 and %r9."

Indeed, %rdi, %rsi, %rdx, %r10, %r8 and %r9 is the sequence that really works. This also makes sense: the `syscall` instruction itself clobbers %rcx (it is used to save the return address), so the kernel interface substitutes %r10 in that slot. Maybe this cheat sheet needs fixing?
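Spelled out, the two argument-passing sequences quoted from the ABI reference differ only in the fourth slot; the tables below simply restate the quoted text:

```python
# System V AMD64 ABI integer-argument registers, as quoted above.
# The kernel interface swaps rcx for r10 because the syscall
# instruction itself clobbers rcx (it saves the return address
# there), so rcx cannot carry an argument across the transition.
USER_ABI = ["rdi", "rsi", "rdx", "rcx", "r8", "r9"]
KERNEL_ABI = ["rdi", "rsi", "rdx", "r10", "r8", "r9"]
```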
ant-design/ant-design
451758273
Title: Pin the first row (or a given row) of the Table component
Question:
username_0: - [ ] I have searched the [issues](https://github.com/ant-design/ant-design/issues) of this repository and believe that this is not a duplicate.

### What problem does this feature solve?
If I want the first row of data (or some particular row) in the Table component to stay pinned just below the table header while the table scrolls down—the same way the fixed table header works—is there a way to set this up?

### What does the proposed API look like?
Allow setting a fixed/pinned property on a row.

<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
Status: Issue closed
Answers:
username_1: This would seriously hurt the Table's layout-calculation performance, so it probably won't be supported. But you can use CSS to implement the pinning: https://codesandbox.io/s/white-microservice-t4cl4
kubernetes/kubernetes
273820292
Title: kubectl plugin - reference to current directory Question: username_0: **Is this a BUG REPORT or FEATURE REQUEST?**: feature request /kind feature When executing a plugin with a "kubectl plugin" command the current directory is changed to the directory of the plugin, but there is no reference to the directory the user was in before. It would be nice to have a KUBECTL_PLUGIN variables set to the directory the user was in to be able to access files in there. **Environment**: - Kubernetes version (use `kubectl version`): 1.8.0 - Kubectl version: 1.8.2 - Cloud provider or hardware configuration: minikube - OS (e.g. from /etc/os-release): macOS Sierra 10.12.6 Answers: username_0: /area kubectl username_0: /sig cli username_1: +1 ...I've run into the same issue. It must be an issue like subshell inheritance. Only when running a command directly from the plugin.yaml file (i.e., echo $PWD) does it return the user's directory. Once the plugin.yml launches the script/binary, then the user's current environment is lost. Without access to this, the plugin feature becomes unsuitable for many (if not all) of our use cases. username_0: A quick experiment shows you can do something like `command: "./run_plugin $PWD"` and have the current directory on the command line to your plugin. 👍 for that. (Unfortunately it splits the command line on spaces, regardless of quoting, so if your current path has a space in it you need to paste it back together again in the plugin, but it makes things possible until this is fixed.) username_1: It's nasty, but one other workaround is to put a flag in your plugin and have your bashrc or bash_profile pass it the user's path. E.g.: function kubectl() { case $* in "deploy"*) shift 1; command kubectl plugin deploy --path=$PWD "$@" ;; * ) command kubectl "$@" ;; esac } ' >> ~/.bash_profile Thats a snippet of what I'm currently using. Using this method is a bad approach, but will have to use until a fix is found. 
I put that in an install.sh script, inside of a tar archive of my plugins directory. Note that the function makes it so you don't need to pass it the 'plugin' command when executing.
username_1: @username_0 regarding the spacing issue, you should be able to fix that by quoting the $PWD you're passing to your script, or changing the IFS variable to handle it.
username_0: @username_1 I tried quoting $PWD but my script ended up being called with $1 being `'/tmp/a` and $2 being `dir'`. I think it's line 51 of https://github.com/kubernetes/kubernetes/blob/master/pkg/kubectl/plugins/runner.go that just does a simple split(). I also tried setting command to be `CURDIR=$PWD ./myplugin` to see if I could get a new variable into the plugin. But that wouldn't execute. At the moment it looks like it's very simplistic: take the `command`, split it on spaces, the first value is the command to execute and the rest are the arguments.
username_2: /remove-lifecycle rotten
username_2: With the 1.12 plugin mechanism, the current directory is now preserved when a plugin executable is invoked
/close
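The quoting failure described above is reproducible in isolation: a naive split on spaces (which is what runner.go's simple `split()` amounts to) tears a quoted path in two, while shell-style splitting keeps it whole. A quick Python illustration using `shlex`:

```python
import shlex

command = "./run_plugin '/tmp/a dir'"

# Naive whitespace split, as the plugin runner currently does:
# the quoted path is broken into two arguments.
naive = command.split()

# Shell-aware splitting keeps the quoted argument intact.
correct = shlex.split(command)
```

With `naive`, the plugin sees `$1` as `'/tmp/a` and `$2` as `dir'`, exactly as reported above.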
mahsiaoko/backend
499919329
Title: xadmin global configuration
Question:
username_0: # Change the default theme

In adminx.py under the user app:

```python
import xadmin
from xadmin import views

class BaseSetting:
    enable_themes = True
    use_bootswatch = True

xadmin.site.register(views.BaseAdminView, BaseSetting)
```

# Global default configuration

In adminx.py under the user app:

```python
import xadmin
from xadmin import views

class GlobalSettings:
    set_title = '慕学后台管理系统'
    set_footer = '慕学在线网'

xadmin.site.register(views.CommAdminView, GlobalSettings)
```
final-form/react-final-form
305680819
Title: Massive performance issue with validation + missing access to meta data in validation Question: username_0: ### Are you submitting a **bug report** or a **feature request**? Bug Report ### What is the current behavior? When using field level validation the validation method is called independently from whether a message is shown at all (because it is only rendered when the form field was touched. This is how our custom Field wrapper looks like: ```js return (<Field name={props.name} validate={(value, values) => { if ((!value || value === '') && props.required) { return ( <FormattedMessage id='order.form.validation.text.empty' /> ); } if (props.validate) { return props.validate(value, props.name, values); } }}> {({input, meta}) => ( <FormGroup controlId={props.name} className={classNames({ 'has-error': meta.touched && meta.error, })}> <ControlLabel> <FormattedMessage id={props.labelId} /> {props.required && ' *'} </ControlLabel> <FormControl type={props.type} componentClass={props.componentClass} placeholder={ props.placeholderId ? formatMessage({ id: props.placeholderId, }) : '' } {...input} /> {meta.error && meta.touched && ( <span className='text-danger show'>{meta.error}</span> )} </FormGroup> )} </Field>) ``` It shows that the validation method runs whenever any of the fields changes. With our form which used `final-form-array` it's pretty easy to reach 50+ form fields. In this case each of the fields is rendered whenever any of the fields are changing. And not only once, but as often as the number of fields visible at each moment. The second issue is that this change leads to a re-rendering of all fields (a few times) as the `Field` itself is unable to know what the content is doing with the updated validation results as meta data. I just found the `validationFields` config which seems like some manual optimization. (I figure that for most users the current default might be wrongly chosen here. 
That said I was happy to see this behavior is available at all - which is super useful for cross-field checks - but it should probably not be the default for performance reasons.)
[Truncated]
Next, I figure it would definitely be useful to offer access to meta data in the validation method. It would be helpful to have access to `meta.touched`. Otherwise, I figure it would also be a good default behavior to call validation only when a field was touched before (which happens anyway once the user tries submitting the form). It could also be a new prop, e.g. `validateUntouched`, which defaults to `false`. Unfortunately there are some breaking changes here, which would probably be on the road to 4.x.

### Sandbox Link

Not yet. Hopefully the mentioned example scenarios are a good start for now.

### What's your environment?

Chrome v64
Mac OS 10.13.3

$ grep final package.json
"final-form": "^4.3.1",
"final-form-arrays": "^1.0.4",
"final-form-calculate": "^1.0.2",
"react-final-form": "^3.1.4",
"react-final-form-arrays": "^1.0.4",
Answers:
username_1: https://github.com/final-form/final-form/pull/125
username_2: Meta argument is still missing in 4.18.7
fossasia/open-event-frontend
480606970
Title: Event invoices are not fetched properly in /review route
Question:
username_0: **Describe the bug**
Currently, we use a filter to get a particular event invoice.

`let filterOptions = [ { name : 'identifier', op : 'eq', val : params.invoice_identifier } ];`

**Expected behaviour**
We can use the proper GET API to fetch the invoice.

**Additional context**
On it
Answers: username_1: @username_0 I've already fixed this in my payment PR.
Status: Issue closed
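The difference between the two fetch strategies amounts to how the request URL is built; a rough sketch (the endpoint paths are illustrative, not the actual Open Event API routes):

```python
import json
from urllib.parse import urlencode

def filter_url(identifier):
    # Current approach: query the collection with a filter expression.
    filters = [{"name": "identifier", "op": "eq", "val": identifier}]
    return "/v1/event-invoices?" + urlencode({"filter": json.dumps(filters)})

def direct_url(identifier):
    # Proposed approach: address the invoice resource directly via GET.
    return "/v1/event-invoices/" + identifier
```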
sebas77/Svelto.ECS
275583614
Title: [Ask] Why using C# Reflection?
Question:
username_0: Hello @username_1, I just got started using your framework. I like it. I have a question in my mind: why are you using C# reflection? As we know, reflection is slow.
Answers:
username_1: C# is built with reflection in mind, so not all reflection operations are slow; for example, `is` and `as` are not that slow. However, functions that find members on a class are slower. In Svelto, reflection is used only during the building of entities. Building of entities should happen rarely, or just during initialization time. If it happens rarely, it won't impact your frame rate. If you have to allocate hundreds of entities at run time, then there is something wrong to start with. You have two options at that point: you can enable/disable the entities, or pool them. Disabling and enabling nodes is a built-in option in the framework, while pooling is something I will add in the future. I will try to explain this better in my next article.
username_0: I think it is clear enough. I am waiting for your next article :) Thanks
Status: Issue closed
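The answer's point—reflection confined to entity building, with its results reused afterwards—can be illustrated outside C# too. In the Python sketch below (hypothetical names), member discovery happens once at "build" time, and the hot path only uses the cached names:

```python
class Position:
    def __init__(self):
        self.x = 0.0
        self.y = 0.0

# "Build" time: discover the entity's fields reflectively, once,
# and cache the result for later reuse.
FIELD_NAMES = [name for name in vars(Position()) if not name.startswith("_")]

def read_fields(entity, names=FIELD_NAMES):
    # Hot path: no member discovery anymore; only the cached names
    # are used to read the values each frame.
    return [getattr(entity, name) for name in names]
```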
gravitee-io/issues
834678296
Title: [management] Duplicate tooltip on main menu
Question:
username_0: Currently, when hovering on the menu, we see a duplicate tooltip

![Screenshot 2021-03-18 at 11.06.31.png](https://images.zenhubusercontent.com/6037d57eba9d3cc0b0412549/bb2443b4-db15-4a49-bdf0-8be1addc1ad6)

### Possible Solution
We actually don't need a tooltip on the menu, because it does not add any value.

### Steps to Reproduce (for bugs)
1. Go to APIM
2. Hover on the main menu on the left side

### Context
A tooltip is a brief, informative message that appears when a user interacts with an element in a graphical user interface (GUI). At the moment we are just providing the same information on hover, so it does not actually add any value.

### Your Environment
* Version used: local
dirtyvagabond/jackalope-github-test
136830034
Title: first issue Question: username_0: this is the first issue! Answers: username_1: commenting on 1! username_1: I'm going to close this due to extreme age. If you'd like to keep it open: re-open, make sure there is an assignee, and explain why you are re-opening. If you're not sure who the assignee should be then please assign to 'username_0'. Yours, Ava p.s.: Happy Winter Holiday Times! :snowflake: :snowman: :gift: :snowman: :snowflake: Status: Issue closed username_0: this is the first issue! username_1: I'm closing this as stale. If you'd like to keep it open, please: 1. add a comment to explain why you'd like to keep it open 2. specify an appropriate assignee 3. re-open the issue Status: Issue closed
VSCodeVim/Vim
351567060
Title: `gf` (to go to file under cursor) produces "Vim: The file ... does not exist." even though file clearly exists
Question:
username_0: **Describe the bug**
Using the `gf` Vim command inside of a C (`.c`) file under a (valid) `#include` directive produces the `Vim: The file <FILEPATH> does not exist` error, even though the file clearly exists. This happens for absolute file paths as well.

**To Reproduce**
Steps to reproduce the behavior:
1. Create a new, empty `.c` or `.h` file
2. Write an include directive including a valid, existing file with either an absolute or relative path
3. Go into `Normal Mode`
4. Hover the path inside the specified include directive
5. Press `gf`
6. See error in bottom right corner

**Expected behavior**
A new tab with the given (existing) file is opened.

**Environment (please complete the following information):**
- Extension (VsCodeVim) version: 0.16.0
- VSCode version: 1.26.1
- OS: Linux (Ubuntu 18.04)<issue_closed>
Status: Issue closed
robertfeldt/BlackBoxOptim.jl
605104817
Title: strftime not defined at opt_controller.jl:362
Question:
username_0: I got an error message when I tried to save the optimization result. This is the original optimization code:

-------------------
opt1=bbsetup(functionAR1; Method=:dxnes,
SearchRange = [(-5.0, 5.0), (0.1, 0.99),(0.1, 2.0)], lambda = 50,
MaxFuncEvals=10, SaveTrace = true, SaveFitnessTraceToCsv = true, SaveParameters = true)
el1 = @elapsed res1 = bboptimize(opt1)
cand1 = best_candidate(res1)
fit1 = best_fitness(res1)
------------------------

Basically I set **SaveTrace = true, SaveFitnessTraceToCsv = true, SaveParameters = true** to obtain the optimization process results, and I got the error message below:

LoadError: UndefVarError: strftime not defined
in expression starting at untitled-ac9baa1509b4e4ac68e7c1d8ef6ff215:78
write_result(::BlackBoxOptim.OptRunController{BlackBoxOptim.DXNESOpt{Float64,RandomBound{ContinuousRectSearchSpace}},BlackBoxOptim.ProblemEvaluator{Float64,Float64,TopListArchive{Float64,ScalarFitnessScheme{true}},FunctionBasedProblem{typeof(functionAR1),ScalarFitnessScheme{true},ContinuousRectSearchSpace,Nothing}}}, ::String) at opt_controller.jl:363
write_result(::BlackBoxOptim.OptRunController{BlackBoxOptim.DXNESOpt{Float64,RandomBound{ContinuousRectSearchSpace}},BlackBoxOptim.ProblemEvaluator{Float64,Float64,TopListArchive{Float64,ScalarFitnessScheme{true}},FunctionBasedProblem{typeof(functionAR1),ScalarFitnessScheme{true},ContinuousRectSearchSpace,Nothing}}}) at opt_controller.jl:362
run!(::BlackBoxOptim.OptController{BlackBoxOptim.DXNESOpt{Float64,RandomBound{ContinuousRectSearchSpace}},FunctionBasedProblem{typeof(functionAR1),ScalarFitnessScheme{true},ContinuousRectSearchSpace,Nothing}}) at opt_controller.jl:464
#bboptimize#119 at bboptimize.jl:66 [inlined]
bboptimize(::BlackBoxOptim.OptController{BlackBoxOptim.DXNESOpt{Float64,RandomBound{ContinuousRectSearchSpace}},FunctionBasedProblem{typeof(functionAR1),ScalarFitnessScheme{true},ContinuousRectSearchSpace,Nothing}}) at
bboptimize.jl:63
top-level scope at util.jl:234 [inlined]
top-level scope at untitled-ac9baa1509b4e4ac68e7c1d8ef6ff215:0

If you go to opt_controller.jl:362 (BlackBoxOptim\YCrqH\src\opt_controller.jl:362), there is a function write_result, and strftime is not defined. strftime only works in Julia versions before 1.0.

function write_result(ctrl::OptRunController, filename = "")
    if isempty(filename)
        timestamp = **strftime**("%y%m%d_%H%M%S", floor(Int, ctrl.start_time))
        filename = "$(timestamp)_$(problem_summary(ctrl.evaluator))_$(name(ctrl.optimizer)).csv"
        filename = replace(replace(filename, r"\s+", "_"), r"/", "_")
    end
    save_fitness_history_to_csv_file(ctrl.evaluator.archive, filename;
        header_prefix = "Problem,Dimension,Optimizer",
        line_prefix = "$(name(problem(ctrl.evaluator))),$(numdims(ctrl.evaluator)),$(name(ctrl.optimizer))",
        bestfitness = opt_value(problem(ctrl.evaluator)))
end
Answers:
username_1: Sorry for this. Thanks, I made the simple fix. Please check if the latest master branch fixes your problem.
Status: Issue closed
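For reference, the filename that write_result tries to build is a `yymmdd_HHMMSS` stamp followed by a problem/optimizer summary, with spaces and slashes replaced. The Python sketch below only mirrors that intent (the actual fix, of course, belongs in the Julia code above):

```python
import time

def result_filename(start_time, problem, optimizer):
    # Mirror write_result's intent: a timestamp prefix, then a summary,
    # with spaces and slashes replaced so the name is filesystem-safe.
    stamp = time.strftime("%y%m%d_%H%M%S", time.localtime(int(start_time)))
    name = "%s_%s_%s.csv" % (stamp, problem, optimizer)
    return name.replace(" ", "_").replace("/", "_")
```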
cla-assistant/cla-assistant
641215702
Title: Enabling multiple CLAs to be associated with an org
Question:
username_0: Another specific suggestion from my use case is the following: under an organization, a few companies might be active which will require their own variation of the CLA. A possible solution would be to link multiple CLAs to an org and have CLA-Assistant choose the right one based on a label/tag/file originating from a particular repo in that org.

I understand this might be a bit more involved than a quick feature, but it is certainly a viable problem for bigger corporations. Please let me know what you think.

<sub> <NAME> <<EMAIL>>, Daimler TSS GmbH, [legal info/Impressum](https://github.com/Daimler/daimler-foss/blob/master/LEGAL_IMPRINT.md)</sub>
Answers:
username_1: It sounds like a viable problem, but you are the first person who brings it up... So I believe that this is rather an edge case. With the current state of the code you can link separate repos with different CLAs.
username_0: Indeed, and that is what I have started to do. As you mentioned, it might be an edge case, and this is what it looks like:

- 1 single org for multiple business units with different CLAs
- each unit will have to manage its own repositories under the org by knowing which repo + CLA is the correct choice
- decentralized approach: to avoid central FOSS office bottlenecks, require that the business units take care of their own repos

Now a manual approach is error-prone and throws a wrench into my ambitions of automating everything. Hence I was hoping to tag repos or include a file indicating their business unit origin and map that to the correct CLA in the assistant. The workflow could, for example, be:

```
CLA Assistant ---> add org --> add multiple CLAs and map them to labels:
CLA1 -> BU1
CLA2 -> BU2
```

And the webhook would then be handled via the following workflow:

```
Incoming webhook event -> get repo labels -> find mapped CLA to the label -> check signature status....
``` I am open to any other proposal on how I can automate the aforementioned setup. p.s. I am unable to influence the org / CLA/ BU structure itself sadly username_1: or you go to CLA assistant and separate the business units by linking different repos with different CLA files, both sounds to me like a comparable amount of work. Please, don't understand me wrong. If you want to go forward with your idea and implement it you can still count on our support. We will help you as much as possible by guiding you through the code or pointing you to the right places in the codebase. username_0: thanks @username_1 , much appreciated. Still trying to see if we can simplify the process, and if I cant succeed, I will definitely take you up on your offer. Thank you, I guess this issue can be closed. Status: Issue closed
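The label-to-CLA dispatch sketched above is essentially a dictionary lookup. A hypothetical Python illustration (the mapping, label names, and function are made up here, not CLA assistant code):

```python
# Hypothetical mapping from business-unit labels to CLA documents.
CLA_BY_LABEL = {"BU1": "CLA1", "BU2": "CLA2"}

def cla_for_repo(repo_labels, default_cla=None):
    """Pick the CLA mapped to the first recognised business-unit label.

    Falls back to `default_cla` when no label on the repo is known,
    mirroring the 'find mapped CLA to the label' webhook step above.
    """
    for label in repo_labels:
        if label in CLA_BY_LABEL:
            return CLA_BY_LABEL[label]
    return default_cla
```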
TheNewNormal/corectl
129285235
Title: sometimes one VM crashes for no reason Question: username_0: I was running `kube-cluster` and one of the VMs just crashed after using the cluster for 40 min
Answers: username_1: @username_0 would need way more data, and ideally a deterministic way to induce this... in the meantime - anything on the host's logs? running any specific payload inside the deceased VM?
username_0: I was using alpha there, let's see if it happens again on stable/beta
Status: Issue closed
username_0: I was not able to reproduce this one anymore, closing it
arif98741/laravelbdsms
1145367279
Title: Bulk SMS BD API mobile number and message format not correct.
Question: username_0: Update this in use Xenon\LaravelBDSms\Provider\BulkSmsBD;
'username' => $config['username'],
'password' => $config['<PASSWORD>'],
'number' => $number,
'message' => $text,
Answers: username_1: Can you share the exact response please? Which version of the laravelbdsms package are you using?
username_0: I got a problem with this line (phone number):
https://github.com/username_1/laravelbdsms/blob/2f14e733fd43e3e4cf0dba3b0c2d347c07b73d9a/src/Provider/BulkSmsBD.php#L49
$response = $client->request('GET', '', [
    'query' => [
        'username' => $config['username'],
        'password' => $<PASSWORD>'],
        'contacts' => $number,
        'msg' => $text,
    ]
]);
![image](https://user-images.githubusercontent.com/7997828/155010722-23748262-2d00-460d-b544-07bc544ba5ce.png)
![image](https://user-images.githubusercontent.com/7997828/155010804-b29d2308-9ae3-4b5c-b74c-97bd758c673b.png)
These query keys are not the same as in the documentation; that's why I didn't get the response properly.
username_0: <img width="1048" alt="Screenshot 2022-02-22 at 12 40 32 AM" src="https://user-images.githubusercontent.com/7997828/155011178-e8834452-b9c2-4804-96c2-0b04f21da841.png">
username_1: Thanks for your great contribution in finding this bug. I am updating this query param to _number_ soon and pushing a new release. Please update your package.
username_1: New version _v1.0.38.2_ is released after fixing that bug. @username_0 Special thanks for your nice contribution. Hope you will continue to participate in the open source world in the future. I am closing this issue soon. If you need to mention something new, you can reopen it.
Status: Issue closed
Deftaudio/Midi-boards
1071418746
Title: Programming 4.1 breakout with Visual Studio / PlatformIO possible, error message.... Question: username_0: Do you have experience with coding in Visual Studio Code and can you see whats wrong? Code: ... /* Create a "class compliant " USB to 8 MIDI IN and 8 MIDI OUT interface. MIDI receive (6N138 optocoupler) input circuit and series resistor outputs need to be connected to Serial1, Serial2 and Serial3. You must select MIDIx4 from the "Tools > USB Type" menu This example code is in the public domain. */ #include <Arduino.h> #include <MIDI.h> #include <MIDI.hpp> #include <usb_midi.h> #include <USBHost_t36.h> // access to USB MIDI devices (plugged into 2nd USB port) // Create the Serial MIDI ports MIDI_CREATE_INSTANCE(HardwareSerial, Serial1, MIDI1); MIDI_CREATE_INSTANCE(HardwareSerial, Serial2, MIDI2); MIDI_CREATE_INSTANCE(HardwareSerial, Serial3, MIDI3); MIDI_CREATE_INSTANCE(HardwareSerial, Serial4, MIDI4); MIDI_CREATE_INSTANCE(HardwareSerial, Serial5, MIDI5); MIDI_CREATE_INSTANCE(HardwareSerial, Serial6, MIDI6); MIDI_CREATE_INSTANCE(HardwareSerial, Serial7, MIDI7); MIDI_CREATE_INSTANCE(HardwareSerial, Serial8, MIDI8); // Create the ports for USB devices plugged into Teensy's 2nd USB port (via hubs) USBHost myusb; USBHub hub1(myusb); USBHub hub2(myusb); USBHub hub3(myusb); USBHub hub4(myusb); MIDIDevice midi01(myusb); MIDIDevice midi02(myusb); MIDIDevice midi03(myusb); MIDIDevice midi04(myusb); MIDIDevice midi05(myusb); MIDIDevice midi06(myusb); MIDIDevice midi07(myusb); MIDIDevice midi08(myusb); MIDIDevice * midilist[8] = { &midi01, &midi02, &midi03, &midi04, &midi05, &midi06, &midi07, &midi08, }; // A variable to know how long the LED has been turned on elapsedMillis ledOnMillis; void setup() { Serial.begin(115200); pinMode(13, OUTPUT); // LED pin MIDI1.begin(MIDI_CHANNEL_OMNI); MIDI2.begin(MIDI_CHANNEL_OMNI); MIDI3.begin(MIDI_CHANNEL_OMNI); MIDI4.begin(MIDI_CHANNEL_OMNI); [Truncated] if (channel == 3) { MIDI8.send(type, data1, data2, 1); activity = true; } 
} // blink the LED when any activity has happened if (activity) { digitalWriteFast(13, HIGH); // LED on ledOnMillis = 0; } if (ledOnMillis > 15) { digitalWriteFast(13, LOW); // LED off } } ... Cheers, Kees Answers: username_1: Sorry, I'm not using Visual Studio. But for 8x8 interface you must select MIDIx16 in the configuration. The comment is left over from 3x3 sample code. Let me fix it.
cloudfoundry/cf-deployment
689208009
Title: skip one or more cf-deployment minor releases
Question: username_0: ### What is this issue about?
Is there a way to skip one or more cf-deployment minor releases? For example, can I upgrade from version 13.10.0 directly to version 13.15.0? We have a large cloud installation and every rollout takes a lot of time.

### What version of [cf-deployment](https://github.com/cloudfoundry/cf-deployment/releases) are you using?
cf-deployment v13.15.0

### Please include the `bosh deploy...` command, including all the operations files (plus any experimental operation files you're using):
`bosh -e prod`
` -d cf deploy cf-deployment/cf-deployment.yml`
` -o cf-deployment/operations/use-compiled-releases.yml`
` -o custom_opsfiles/cf/scale.yml`
` -o custom_opsfiles/cf/gorouter-timeout.yml`
` -o custom_opsfiles/cf/networkpolicy-disable.yml`
` -v system_domain=XXX.XX`
` -o cf-deployment/operations/enable-nfs-volume-service.yml`
` -o cf-deployment/operations/enable-smb-volume-service.yml`
` -o cf-deployment/operations/use-trusted-ca-cert-for-apps.yml`
` -o cf-deployment/operations/set-router-static-ips.yml`
` -o cf-deployment/operations/enable-service-discovery.yml`
` -o custom_opsfiles/cf/set-scheduler-static-ips.yml`
` -o custom_opsfiles/cf/uaa-smtp.yml`
` -o custom_opsfiles/cf/blobstore-cleanup-enable.yml`
` -o custom_opsfiles/cf/resize-blobstore-disk.yml`
` -o custom_opsfiles/cf/scale-database-cluster.yml`
` -o cf-deployment/operations/backup-and-restore/enable-backup-restore.yml`
` -o custom_opsfiles/cf/memory-capacity.yml`
` -v router_static_ips=[XX.XXX.XXX.XX,….]`
` -v scheduler_static_ips=[XX.XXX.XXX.XX,….]`
` -l trusted-certs.vars.yml`

### What IaaS is this issue occurring on?
VMware vSphere (6.7.0)

### Tag your pair, your PM, and/or team!
/cc @ukleeberger, @TimonB
Answers: username_1: Hi @username_0,
Upgrading across multiple minor releases of cf-deployment should be possible.
You should be able to simply check out the `v13.15.0` tag of cf-deployment and run your normal `bosh deploy` command. Although this will increase the size of each upgrade (and therefore the time the upgrade takes), we try to do a good job of cutting releases fairly frequently to mitigate this. Looking at the [changeset](https://github.com/cloudfoundry/cf-deployment/compare/v13.10.0...v13.15.0) between those two releases, I don't see anything that gives me cause for concern, but that obviously depends on the ops-files you are using in your deployment. Hope this helps, Dave Status: Issue closed username_0: Hi @username_1, Thanks for answering our issue. That helps us a lot. Regards, Markus
venicegeo/bf-api
300404391
Title: Bug: Piazza Integration Postman tests fail if no existing API Key Question: username_0: The integration Postman tests for Piazza call the GET v2/key endpoint on the Gateway. This call succeeds if a key already exists and has been generated. However, in an entirely new environment, this GET v2/key call will fail because no key exists. The tests will then cascade errors for every single request because no API key could be found.
The correct behavior would be for the tests to attempt to GET v2/key, and if that returns an error stating that no existing key is found (and a new one should be created!), then POST v2/key should be called instead. This will allow the tests to initialize properly in a new/fresh environment.<issue_closed>
Status: Issue closed
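The proposed GET-then-POST bootstrap could look roughly like this. A hypothetical sketch only: the paths, response shape, and `client` object are placeholders, not bf-api's or Piazza's real interface:

```python
def ensure_api_key(client):
    """Return an API key, creating one first if the environment is fresh."""
    resp = client.get("/v2/key")
    if resp.status_code == 200:
        return resp.json()["key"]
    # No existing key (fresh environment): create one instead of
    # letting every subsequent request cascade into failures.
    resp = client.post("/v2/key")
    resp.raise_for_status()
    return resp.json()["key"]
```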
Materials-Consortia/optimade-python-tools
529951483
Title: List properties and HAS _ operators missing Question: username_0: This is known, but just so we have an issue for it, we are still missing this functionality. A discussion was started in @fekad's [PR](https://github.com/Materials-Consortia/optimade-python-tools/pull/66#issuecomment-546913952).<issue_closed> Status: Issue closed
cmg7063/IGME480FinalProject
432656891
Title: Add Ability to Close Blurbs Question: username_0: Closing blurbs prompts directions to the next location. Answers: username_0: Not sure how the functionality with moving from location to location works, but we'll probably have to add stuff in the BlurbController script to set the blurbBlock back to active when the user approaches the next location. (When the user clicks "close" setActive is set to false). Status: Issue closed
kanboard/kanboard
219458932
Title: minimise icon reversed on swimlane Question: username_0: When you minimise the swimlane, the icon points up (when surely it should point downwards?) and vice versa.
E.g. here's a swimlane that is minimised; surely the icon should point upwards to indicate that you click it to maximise it again?
![icon_reversed](https://cloud.githubusercontent.com/assets/302031/24688759/01440094-19ba-11e7-8d89-4dd0c7552660.png)<issue_closed>
Status: Issue closed
hjwylde/git-fmt
120937191
Title: Bash completion Question: username_0: https://github.com/pcapriotti/optparse-applicative/wiki/Bash-Completion
Answers: username_0: Spent a while on this one... Best bet for `--operate-on` ref completion was `git rev-parse --symbolic --branches --remotes --tags`.
Unfortunately autocompletion was not easy to do using optparse-applicative. It fails to autocomplete for git's special macro, i.e., it won't autocomplete on `git fmt <TAB>`. Further, I couldn't get the `completer` working properly with prefixes for `--operate-on`. It seems that it didn't handle forward slashes well inside the options. It kept trying to autocomplete `origin/` straight to the final element, e.g., `origin/0.1.0` rather than `origin/release/0.1.0`.
username_0: I'll merge what I have, but leave this issue open to try to find a new way to include Bash completion.
vasturiano/3d-force-graph
621532990
Title: Using pauseAnimation() then resumeAnimation() makes the glow (three.js EffectComposer) disappear Question: username_0: I added a click glow effect to the nodes; how to use it is shown in this link.
[example](https://threejs.org/examples/#webgl_postprocessing_unreal_bloom_selective)
When I use graph.pauseAnimation() then resumeAnimation(), the node model's layer is still set as the glow layer, but the glow effect cannot be seen.
Status: Issue closed
Answers: username_0: The old version I used before had no postProcessingComposer. Changing to the new version and using postProcessingComposer solved the problem.
entrepreneur-interet-general/gobelins
785196725
Title: Encyclopedia - Object grid - ordering Question: username_0: **Description of the defect**
The object grid sorts itself by inventory number order, not by selection order. Is it difficult to change this? Otherwise we will load the images one by one, like an image from an external source. This is a problem when you want to build an object grid in chronological order, for example.
![C1](https://user-images.githubusercontent.com/76126991/104471723-7bd90e80-55bb-11eb-96ea-d570bd889d43.JPG)
![C2](https://user-images.githubusercontent.com/76126991/104471727-7c71a500-55bb-11eb-8611-2ce4ff898c03.JPG)
**Steps to reproduce**
The steps to reproduce the faulty behavior:
1. Go to page '...'
2. Click on '....'
3. Scroll the page down to '....'
4. See the error
**Expected behavior**
A clear and concise description of what should have happened.
**Screenshots**
Optional: attach screenshots to demonstrate the problem.
**Viewing conditions**
If possible, specify:
- The hardware: desktop computer, tablet, mobile… (for mobile devices, specify the manufacturer)
- The operating system (Windows 10, MacOS 10.13…)
- The web browser (Firefox, Safari, Chrome…)
SherClockHolmes/webpush-go
411701481
Title: HTTPClient --> httpClient in Options prevents customizing the client
Question: username_0: I was setting the HTTPClient to provide a client that had a transport and client pool controlled by my code, and would pass this client into webpush.Options. Is there a reason this got lower-cased? Can a method be provided to override the transport?
The problem later occurs in SendNotification:
```
// Send the request
if options.httpClient == nil {
    options.httpClient = &http.Client{}
}
```
So now, every time a push is initiated, a new client is created instead of reusing an established client.
Answers: username_1: @username_0 Sorry about that. I was under the impression that no one would use the http client option. I'll add that back in.
Status: Issue closed
username_1: @username_0 Thanks for the ticket, please upgrade to v1.0.1.
https://github.com/username_1/webpush-go/releases/tag/v1.0.1
docker/compose
430420641
Title: docker-compose down --remove-orphans doesn't delete stopped containers Question: username_0: ## Description of the issue
The --remove-orphans flag of docker-compose down allows the user to remove containers which were created in a previous run of docker-compose up, but which have since been deleted from the docker-compose.yml file. However, if a container is in a stopped state, then --remove-orphans will have no effect.

## Context information (for bug reports)

**Output of `docker-compose version`**
```
docker-compose version 1.20.1, build 5d8c71b
docker-py version: 3.1.4
CPython version: 3.6.4
OpenSSL version: OpenSSL 1.0.1t 3 May 2016
```

**Output of `docker version`**
```
Client:
 Version: 18.06.1-ce
 API version: 1.38
 Go version: go1.10.3
 Git commit: e68fc7a
 Built: Tue Aug 21 17:23:03 2018
 OS/Arch: linux/amd64
 Experimental: false
```

## Steps to reproduce the issue
1. Create a docker-compose.yml file with any service you'd like
2. Run docker-compose up -d
3. Stop the newly created service
4. Remove the service from the docker-compose.yml file
5. Run docker-compose down --remove-orphans

### Observed result
The stopped container is still present

### Expected result
The container should have been deleted

### Stacktrace / full error message
N/A
Status: Issue closed
Answers: username_1: Hi @username_0
Thanks for the report. I believe this is a duplicate of https://github.com/docker/compose/issues/5547 and will be fixed by https://github.com/docker/compose/pull/6342. As such, I'll close this.
aduros/wasm4
1117835794
Title: Add Controls Icons to Font and Match Current Inputs Question: username_0: ## Problem
Currently each game either uses text or custom graphics to spell out what the controls are, generally by mentioning the keyboard keys to press. This works as long as you are on a PC and have the same keyboard layout, but it quickly becomes wrong if you play through retroarch, mobile, or just don't have a standard layout. The current alternative is to use WASM4's naming standard (1, 2, Arrow Keys, etc.), but this is confusing to people who haven't memorized WASM4's buttons and their mappings.

## Proposal
We should add icons to the font that indicate each control (as in #195), and create a system that can swap these icons out with more appropriate icons at runtime. For web/desktop, this means that people can remap the controls to their preference and get correct instructions on what to press. On touch screens we can add names (1, 2) to the on-screen buttons and use those in the icons. In the retroarch builds we can use the retropad as our basis. For implementations of WASM4 on constrained devices this system can be ignored entirely and the icons hardcoded into the game.
Answers: username_1: I think this is a great idea, and would kinda be tied into solving #195.
username_1: Now that we have icons for keyboard, there are a few approaches to switching out the layouts. These are some I came up with:
### Native
#### For keyboard:
Set during runtime depending on keyboard layout. AFAIK has to be pretty much hardcoded to X and Z for QWERTY, J and Q for Dvorak, etc.
#### For libretro and MCUs:
Compile-time flags to set to different common layouts, such as Nintendo-style (A, B), Sony-style (X, O), etc. with an additional flag to swap 1 and 2.
### Web
#### For keyboard:
Same as native.
#### For gamepad overlay:
Either:
1. Label as **1** and **2** and have the font say **1** and **2**.
2. Label as **X** and **Z** and have the same font as when on QWERTY keyboard.
#### For gamepad input:
👻
username_1: For netplay, an easy fix is to set the layout for each player when the session starts. Example:
- Player 1: QWERTY: Z and X
- Player 2: Also QWERTY: Z and X

This would actually also help to display the correct layout in the future. Perhaps Player 2 will want to use the input layout for Gamepad 1 (showing arrow-keys and Z and X) despite being represented as Gamepad 2 on netplay.
username_1: I just realised with the comment above that we would require a section of the font for each gamepad (which in hindsight would be good to have anyways).
`0x80` -> `0x87` for gamepad 1
`0x88` -> `0x8f` for gamepad 2
`0x90` -> `0x97` for gamepad 3
`0x98` -> `0x9f` for gamepad 4
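Assuming the layout proposed above (eight reserved font slots per gamepad starting at `0x80`), the slot-to-code-point mapping is just an offset computation. An illustrative Python sketch, not WASM-4's actual font code:

```python
ICON_BASE = 0x80        # first reserved icon code point (assumed layout)
SLOTS_PER_GAMEPAD = 8   # eight icon slots per gamepad

def icon_code_point(gamepad: int, slot: int) -> int:
    """Code point for icon `slot` (0-7) of `gamepad` (0-3).

    Gamepad 0 maps to 0x80-0x87, gamepad 1 to 0x88-0x8f, and so on
    up to 0x98-0x9f for gamepad 3.
    """
    if not (0 <= gamepad < 4 and 0 <= slot < SLOTS_PER_GAMEPAD):
        raise ValueError("gamepad must be 0-3 and slot 0-7")
    return ICON_BASE + gamepad * SLOTS_PER_GAMEPAD + slot
```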
lashd/cic
380664527
Title: Conditionals Question: username_0: There will come a time, as in life, where not everything will be straightforward. Required actions could depend on certain conditions, and you could be required to iterate over different data sets. Ansible provides conditional and control constructs for just such occasions. In this exercise we will look at the features and functions that Ansible has to help you with decision making in your Playbooks.
[Click here](../tree/master/exercises/IaC/ansible/conditionals/README.md) to read the Conditionals exercise
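As a taste of what the exercise covers, a typical Ansible conditional and loop look like this (a representative sketch, not taken from the exercise itself):

```yaml
- name: Install Apache only on Debian-family hosts
  apt:
    name: apache2
    state: present
  when: ansible_facts['os_family'] == 'Debian'

- name: Create one user per entry in a data set
  user:
    name: "{{ item }}"
    state: present
  loop:
    - alice
    - bob
```

The `when:` keyword gates a task on a condition, and `loop:` iterates a task over a data set, which are exactly the two situations the exercise describes.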
pionl/laravel-chunk-upload
760379884
Title: PHP with files larger than 2 Gigabyte
Question: username_0: Do you have issues with files larger than 2 GB? I am uploading files of e.g. 3 or 6 GB, but after the conversion of the chunk files I always get a final file of about 2 GB. 32-bit PHP seems to have issues creating larger files (its file offsets are signed 32-bit integers, which caps a file at 2^31 bytes, about 2 GB).
https://github.com/jkuchar/BigFileTools
This "BigFileTools" could help but I don't know how to implement it in pionl/laravel-chunk-upload
Any suggestions?
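If the 32-bit explanation is right (an assumption worth verifying for the specific PHP build), the ~2 GB ceiling is just the largest signed 32-bit file offset:

```python
# Largest file offset representable in a signed 32-bit integer.
max_offset = 2**31 - 1
print(max_offset)                       # 2147483647 bytes
print(round(max_offset / 1024**3, 3))   # 2.0 (GiB)
```

Which matches the ~2 GB final file size reported above regardless of the real input size.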
forcedotcom/SalesforceMobileSDK-CordovaPlugin
383382282
Title: Rest response in force.js not available in case of error status codes Question: username_0: I've migrated the hybrid project from SDK 4.3 to v6.2 and using SDK 6.2 with the force.js apexrest method for rest API calls in an iOS hybrid app. I'm able to get the custom JSON response sent from API in successful cases but in case of error codes like 400 or 501 etc, the JSON response which was sent from API is not available in apexrest error handler method, instead, a general error log is available: "_The operation couldn’t be completed. (https://xxxxxxxxx/services/apexrest/mobileSyncService error 400.)_" Could you please confirm if this is an implementation issue or an issue with new SDK because in older SDK version 4.3, it is working as expected. Please find attached screenshot for your reference. Success Response from API: JSON object with one key 'status' ![screen shot 2018-11-19 at 6 08 15 pm](https://user-images.githubusercontent.com/17747628/48882726-db0ac080-ee41-11e8-87de-c2b5658de47f.png) Error Response: JSON object not available: only standard string, 'status' key is sent in JSON object in this case too: ![screen shot 2018-11-20 at 1 08 56 pm](https://user-images.githubusercontent.com/17747628/48882734-e958dc80-ee41-11e8-9276-6b1a38609752.png) Please help clarify this. Answers: username_0: 1. I'm using the 6.2 version which is shipped with Cordova plugin of Salesforce Mobile SDK 6.2 in Cordova project created with forcehybrid (same as above link given by @username_1 ). 2. 400 is sent by my custom code using RestContext.response in Apex Class explicitly. Actually, the request is properly formed and class executes just fine but the issue is if I send custom JSON in response with error codes 4XX or 5XX, the JSON object is masked and is not available in the response received in the error handler of force.apexrest. Although it works fine and JSON object is available in response in older SDK version (tested with 4.3.1). 3. 
In new SDK, JSON object is available in API response only if status codes are specified 2XX like 200 but in older SDK, the JSON response is available for all error codes. 4. These are all the cases where request completes and based on business logic or validations, explicitly exception is thrown in the API which then gets captured in force.apexrest by means of status codes but JSON responses are not available for 4XX and 5XX in new SDK. username_1: Unfortunately that is the behavior of the underlying network plugin (since 5.0): see https://github.com/forcedotcom/SalesforceMobileSDK-iOS/blob/master/libs/SalesforceHybridSDK/SalesforceHybridSDK/Classes/Plugins/SFNetworkPlugin/SFNetworkPlugin.m#L120 username_0: The response handling behavior of the plugin can be overridden or is it implemented this way due to some constraint? username_1: The body of the response for non 2xx responses is not passed through by the underlying code (SFRestAPI). That would need to change to support your use case. It was working in 4.x because the web view was doing the network call directly instead of going through the network plugin. That's no longer an option because same origin policy is enforced by WkWebView (and wasn't with UIWebView). Maybe you should look at https://github.com/silkimen/cordova-plugin-advanced-http to do your network calls. You will have to set the request header yourself though. username_1: The error callback is called for non successful requests (requests that don't have a 2xx status code or requests that fail to get a response). In the next version of the mobile sdk (8.2) soon to be released, we are sending back more than just the error through the error callback. ```json { "response": { "headers": { ... }, "statusCode": 400, "body": "response-body" }, "error": "error message - when no response was received e.g. 
when the server could not be reached" } ``` For more information see: * https://github.com/forcedotcom/SalesforceMobileSDK-iOS/pull/3237 * https://github.com/forcedotcom/SalesforceMobileSDK-Android/pull/2095 Status: Issue closed
NCI-Thesaurus/thesaurus-obo-edition
376619413
Title: kidney, ureter and renal pelvis in NCIT Question: username_0: My understanding is that
* 'kidney + ureter' = the collective structure of the kidneys plus ureter (thus includes all kidney parts, eg nephrons)
* 'renal pelvis + ureter' = the collective structure of the renal pelvis plus ureter (excludes structures like nephrons)

However, when we look at K+U: http://purl.obolibrary.org/obo/NCIT_C61107 we see it only has one part, RP+U. Single parts are usually a bad sign. I would expect the kidney to be a part of K+U.
Here is the graph:
![image](https://user-images.githubusercontent.com/50745/47886896-f6ae1880-ddf9-11e8-8f84-cdfd53ad5692.png)
Answers: username_1: From the editor in charge: [I] made "kidney" : "physical part_of kidney and ureter". Since "kidney and ureter" is already "part_of urinary system", and in order to preserve the part_of logical structure, I removed the existing relationship "kidney": "part_of urinary system".
Status: Issue closed
mapbox/batfish
243467207
Title: Complete examples/basic/ Question: username_0: It would be nice to be able to point to a single example that exemplifies the basic features. What I'm currently thinking of as The Basic Features is anything in `README.md`, instead of `docs/advanced-usage.md`. I started `examples/basic/`, but it's not quite there yet. @username_1 or @jfurrow, could one of you carry this basic example through to its natural end?<issue_closed> Status: Issue closed
web-scrobbler/web-scrobbler
1007125601
Title: The web scrobbler seems to have stopped scrobbling; I am playing music but it stopped scrobbling 17 minutes ago and is just stuck there
Question: username_0: **Describe the bug**
A clear and concise description of what the bug is.

**How to reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Debug logs**
It's strongly recommended to grab [debug logs](https://github.com/web-scrobbler/web-scrobbler/wiki/Debug-the-extension) after reproducing the issue.

**Environment (please complete the following information)**
- OS: (e.g. Windows, Mac)
- Browser: (e.g. Chrome, Safari)
- Extension version: (e.g. v2.10.0)

**Additional context**
Add any other context about the problem here.
Answers: username_0: Fix my account soon!
TechnionYP5779/team2
378666952
Title: Use TDD to develop a function iterables.alternate that takes two iterables, and returns an iterable that returns elements in these in alternating order, until the shortest Question: username_0: _From @username_0 on November 8, 2018 10:18_ Use TDD to develop a function iterables.alternate that takes two iterables over elements of the same kind, and returns an iterable that returns elements in these in alternating order, until the shortest one is exhausted, and then continues with the other. use TDD to design and implement specification for the case that one or both arguments are null. be sure to break tests before you write any line of code. _Copied from original issue: TechnionYP5779/team6#60_ Answers: username_1: Looks good Status: Issue closed
udacity/ud851-Sunshine
235031670
Title: Zip File Unzipping Improperly Question: username_0: Not all of the files are working. It could be my comp. Is anyone else having issues with this? Answers: username_1: @username_0 We don't actively monitor this issue list enough to guarantee timely help. If you are enrolled in the Nanodegree program, I encourage you to consult a mentor. If you are enrolled in the free course, you can ask your question in the forums here: https://discussions.udacity.com/c/standalone-courses/developing-android-apps Status: Issue closed
NVyIo/begin_to_nvy
293762211
Title: Add [Doc] Code of Conduct Question: username_0: # Issue Submission Template [ begin_2_nvy ] ## Issue Description [ Add CoC ] ## Select Issue Type Input an X on the appropriate reason for your submission: [ x ] Add | Request to add a Document, Feature, Function or Structure [ ] Change | Request to change a Document, Feature, Function or Structure [ ] RFI | Request for Information on a specific Change Feature, Function or Structure [ ] Change [[ BUG ]] Report a bug for investigation ## System Info Supported systems include Windows, Linux and Android. Other systems are not supported. Please provide details on your specific setup. System Type: [ ] PC [ ] Laptop [ ] Mobile Device Operating System: [ ] Windows [ Version: ] [ ] Linux [ Version: ] [ ] Android [ Version: ] #### For Bugs Behavior you were doing when you first experienced the BUG: [ ] #### Code Editor [ x ] Of course, I am using VS Code [ ] oh no, I am not using VS Code ## Additional Information Please add additional information as needed for the resolution of this issue: ## Visual Aides Please post screenshots, video links [YouTube or Vimeo Only] or other visual aides to support your request. ## Thank you Thank you for your submission. We will get back to your shortly via the Github UI. If you would prefer to be contacted regarding this ticket in some manner other than GitHub, please let us know on the Additional Info Section. [** Be sure to give us a star on GitHub! ** ](https://github.com/NVyIo/begin_to_nvy)<issue_closed> Status: Issue closed
PlanBCode/hypha
304175867
Title: Better UI for Hypha Question: username_0: Attached one step further: result from pencil project. I find that is looks very "dull" now, but that is because this is only a (Bootstrap basic inspired) wireframe. Styling is to be added later, in order to fit into a general default design. [hypha-boxes.pdf](https://github.com/PlanBCode/hypha/files/1800625/hypha-boxes.pdf) Status: Issue closed Answers: username_0: Outdated
NorthernMan54/homebridge-alexa
304168614
Title: Install of MDNS module is problematic for some people Question: username_0: alex101087 is having this ( hangs ) Lampi gets error -3008 nightmare28 had problems with install Answers: username_0: This is resolved in the try2 branch, I replaced MDNS with bonjour sudo npm install -g northernMan54/homebridge-alexa#try2 Status: Issue closed
stlehmann/Flask-MQTT
338977629
Title: Mqtt Instance runs twice on application run Question: username_0: First off, great implementation of the paho mqtt library, it has saved me a lot of time implementing it into the flask ecosystem. However, an issue with the library that is not directly linked to its functionality is that when the flask server is running with the example code snippet provided, the MQTT code seems to run twice. `socketio.run(app, host='0.0.0.0', port=5000, use_reloader=True, debug=True)` This is only the case when the application has the perimeters `use_reloader=True, debug=True`. Maybe for future users of the library a small note could be made to the ReadMe file about this characteristic of Flask in debug mode. For Flask newbies like me. [Flask functionality with these settings](https://stackoverflow.com/questions/28585033/why-does-a-flask-app-create-two-process?lq=1) Answers: username_1: Thanks for the feedback. I will see to add some hint for the user in the documentation addressing this issue. Status: Issue closed
vuetifyjs/vuetify
272941039
Title: [Feature Request] v-select not open on focus Question: username_0: ### New Functionality Allow developer like me to migrate from old angularJS or angular >= 2.0 to vuetify <!-- generated by vuetify-issue-helper. DO NOT REMOVE --> Answers: username_1: I think you know what you did wrong here. Status: Issue closed
Kitteh6660/Corruption-of-Champions-Mod
237241594
Title: Kitsune Tails Won't Change Color Question: username_0: While trying to get "metallic golden" hair on my kitsune character, I noticed that going into the appearance tab and scrolling to the text that describes your tails after your hair changes color, still says that they are white. Running: CoC_1.0.2_mod_1.4.7 The items I've tested to dye my hair include: fox jewels, fox berries, and pink, green, blue, rainbow, gray, and black dyes. Unless I'm missing some specific way to dye my tail hair, I'd guess that the new individual dye usage for hair, fur, and under fur is interfering with the original dyeing process, which dyed Answers: username_0: Accidentally clicked submit, meant to end with: "which dyed both hair and tails" username_1: I'll look into this. I already have some idea how to solve this. username_0: Ok thanks, I think it's messing with the kitsune score too, it stays at 7 despite me changing hair color to a "non-kitsune" color, such as pink or green username_1: Ok, I should have handled the latter, too. If you care to test it: You can download a test build here: https://drive.google.com/file/d/0B6ayFYXDcvsASEtmUWNrajZPaHc/view?usp=sharing Just make sure to backup your saves first. They shouldn't break, but better be safe, than sorry. username_0: Thanks for the fix, the tails description changed when I loaded the saves. Maybe next update there can be a dye tails option, I kind of liked the idea of having two separate colors. BTW: I noticed that when I go to the dye options, there is an option to dye your characters wings, did you add that, or was my game bugged? username_1: Nope, thats from cockatrices implementation. Written by MissBlackthorne and implemented by me. In the next version you can dye feathered wings. Status: Issue closed
Locastic/ApiPlatformTranslationBundle
536462329
Title: No locale has been set and current locale is undefined. Question: username_0: I did everything according to the instructions at https://github.com/Locastic/ApiPlatformTranslationBundle

And I get this error after a POST request:

```json
{
    "@context": "/api/contexts/Error",
    "@type": "hydra:Error",
    "hydra:title": "An error occurred",
    "hydra:description": "No locale has been set and current locale is undefined.",
    "trace": [
        {
            "namespace": "",
            "short_class": "",
            "class": "",
            "type": "",
            "function": "",
            "file": "/site/vendor/locastic/api-platform-translation-bundle/src/Model/TranslatableTrait.php",
            "line": 64,
            "args": []
        },
        ....
    ]
}
```

(I use Symfony 5.) Answers: username_1: Hi @username_0, sorry for the slow response, I've been really busy with projects and conferences lately.

Can you take a look at https://github.com/Locastic/ApiPlatformTranslationBundle/issues/18? Maybe you are missing a parent constructor call too.

If that's not the case, can you post your code for the classes using the Translation and Translatable classes so we can detect the problem?
jonmorehouse/multisort-median
152944964
Title: better algorithm Question: username_0: Thinking through some of the complexities in this code, it seems like we could optimize the `BulkWrite` path by using two heaps (MaxHeap on the left and MinHeap on the right).

Specifically, by adding the additional constraint that we'd like to _not_ require sorted input, _one_ approach to updating the median on BulkWrites would be to iterate through each `BulkMetric` type and compare its value to the current median. If it's less than the current median, then we insert it on the left (and update our offset); if it is greater than the current median, we insert the element on the right and update the offset accordingly.

By adding a `BulkMetricHeap` type we could simplify the implementation in several places, notably:

* inserting metrics is simplified
* no need to sort metrics
* simplified "balancing"
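The two-heap scheme described above is a standard running-median structure. Below is an illustrative TypeScript sketch (names are invented; it does not model the repo's actual `BulkMetric` types): a max-heap holds the lower half of the values, a min-heap holds the upper half, incoming values are routed by comparing against the lower heap's top, and a rebalance step keeps the halves within one element of each other so the median is always readable from the heap tops.

```typescript
// Minimal binary heap; `cmp(a, b)` is true when `a` should sit above `b`.
class BinaryHeap {
  private items: number[] = [];
  constructor(private cmp: (a: number, b: number) => boolean) {}
  get size(): number { return this.items.length; }
  peek(): number { return this.items[0]; }
  push(v: number): void {
    this.items.push(v);
    let i = this.items.length - 1;
    while (i > 0) {
      const parent = (i - 1) >> 1;
      if (!this.cmp(this.items[i], this.items[parent])) break;
      [this.items[i], this.items[parent]] = [this.items[parent], this.items[i]];
      i = parent;
    }
  }
  pop(): number {
    const top = this.items[0];
    const last = this.items.pop()!;
    if (this.items.length > 0) {
      this.items[0] = last;
      let i = 0;
      for (;;) {
        const l = 2 * i + 1;
        const r = 2 * i + 2;
        let best = i;
        if (l < this.items.length && this.cmp(this.items[l], this.items[best])) best = l;
        if (r < this.items.length && this.cmp(this.items[r], this.items[best])) best = r;
        if (best === i) break;
        [this.items[i], this.items[best]] = [this.items[best], this.items[i]];
        i = best;
      }
    }
    return top;
  }
}

class RunningMedian {
  private lower = new BinaryHeap((a, b) => a > b); // max-heap: lower half
  private upper = new BinaryHeap((a, b) => a < b); // min-heap: upper half

  push(v: number): void {
    // Route relative to the current "middle", as the issue suggests.
    if (this.lower.size === 0 || v <= this.lower.peek()) this.lower.push(v);
    else this.upper.push(v);
    // Rebalance so the halves never differ by more than one element.
    if (this.lower.size > this.upper.size + 1) this.upper.push(this.lower.pop());
    else if (this.upper.size > this.lower.size + 1) this.lower.push(this.upper.pop());
  }

  median(): number {
    if (this.lower.size === this.upper.size) {
      return (this.lower.peek() + this.upper.peek()) / 2;
    }
    return this.lower.size > this.upper.size ? this.lower.peek() : this.upper.peek();
  }
}

const rm = new RunningMedian();
for (const v of [5, 15, 1, 3]) rm.push(v);
console.log(rm.median()); // 4 (the average of 3 and 5)
```

This keeps insertion at O(log n) without requiring sorted input, and reading the median at O(1), which matches the simplified "balancing" point above.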
posva/vue-promised
434840684
Title: [Typescript] Cannot register component per README Question: username_0: The readme has you do:

```ts
import { Promised } from 'vue-promised'
Vue.component('Promised', Promised)
```

But the types are incompatible:

```
Argument of type 'ComponentOptions<never, Data, DefaultMethods<never>, DefaultComputed, Props, Record<string, any>>' is not assignable to parameter of type 'ComponentOptions<Vue, DefaultData<Vue>, DefaultMethods<Vue>, DefaultComputed, PropsDefinition<Record<string, any>>, Record<string, any>>'.
Type 'Vue' is not assignable to type 'never'.ts(2345)
```

Answers: username_0: I'm working on a PR for this now, btw. I'm newish to TypeScript, so it will take me a bit to get it figured out.
Status: Issue closed
ngageoint/anti-piracy-android-app
94302051
Title: Add subregion map to advanced query Question: username_0: The current advanced filter allows selecting just one subregion from a select field based on the region number. Since the region number does not mean much to most people, we should add the ability to select multiple regions from a map.
space-wizards/space-station-14
687896254
Title: Disposals can send entities down invalid tubes Question: username_0: A disposal trunk can potentially send someone down an invalid tube as the validity of the next tube is only checked from the next tube's perspective.

For example, if a transit tube faced the trunk it would consider it connectable even if the trunk was not facing it, as the transit pipe doesn't take into account what the trunk considers valid when validating a connection.

This is now an issue as the next direction does not always match connectable directions anymore, and connection logic was programmed before with the assumption of all possible next directions being a subset of the connectable directions.

_Originally posted by @username_0 in https://github.com/space-wizards/space-station-14/pull/1942#issuecomment-682389917_
Status: Issue closed
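A sketch of the symmetric validation the issue implies (hypothetical TypeScript, not the actual SS14 API): a connection should only count as valid when *both* ends accept it, i.e. the current tube's connectable directions contain the outgoing direction, and the candidate tube's contain the opposite, incoming direction.

```typescript
type Direction = "north" | "south" | "east" | "west";

const opposite: Record<Direction, Direction> = {
  north: "south",
  south: "north",
  east: "west",
  west: "east",
};

interface Tube {
  connectableDirections: Direction[];
}

// Ask both ends, instead of only checking from the next tube's perspective.
function canConnect(current: Tube, next: Tube, dir: Direction): boolean {
  return (
    current.connectableDirections.includes(dir) &&
    next.connectableDirections.includes(opposite[dir])
  );
}

// A trunk that only connects to the north vs. a straight east-west transit tube:
const trunk: Tube = { connectableDirections: ["north"] };
const transit: Tube = { connectableDirections: ["east", "west"] };
console.log(canConnect(trunk, transit, "west")); // false (the trunk cannot exit west)
console.log(canConnect(trunk, { connectableDirections: ["south"] }, "north")); // true
```

A check made only from the next tube's perspective would accept the first case, since the transit tube does face the trunk; checking both ends rejects it.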
JimmyLv/reading
360844547
Title: Bear, Close to My Ideal Note-Taking App – Ian Yang – Medium Question: username_0: ## Bear, close to my idea of the perfect note-taking app – Ian Yang – Medium
I have recently made Bear my primary note-taking app; before that I was using Day One and Ulysses at the same time. I would still occasionally use Ulysses for exporting and publishing to Medium, and for converting rich text or web content into Markdown, while Day One had basically fallen out of use. Day One and Ulysses…

September 17, 2018 at 08:25PM
via Instapaper https://medium.com/@doitian/bear-%E6%8E%A5%E8%BF%91%E6%88%91%E5%BF%83%E7%9B%AE%E4%B8%AD%E5%AE%8C%E7%BE%8E%E7%9A%84%E7%AC%94%E8%AE%B0%E5%BA%94%E7%94%A8-27c511af778c
Status: Issue closed
bazelbuild/bazel-gazelle
873450345
Title: Gazelle runner fails with "error: could not locate gazelle binary" with custom binary on windows Question: username_0: Obviously this reproduction of the issue is not actually useful for the end user, but specifying a local gazelle binary is required for implementing a custom language. I implemented my own language extension and wanted to run it with `gazelle_binary` and got this error on windows, but not on mac or linux [azure pipelines build](https://dev.azure.com/username_0/my_rules_dotnet/_build/results?buildId=239&view=results) [Permalink to specific error in CI](https://dev.azure.com/username_0/my_rules_dotnet/_build/results?buildId=239&view=logs&j=2d2b3007-3c5c-5840-9bb0-2b1ea49925f3&t=0f0b056b-e9f7-5ce7-8817-a8ca2fd77699&l=833) Answers: username_0: @jayconrod This issue has been open for a while, with an open pull request to fix it, yet no feedback on the PR or this issue. anything I can do to assist?
kubernetes-sigs/kubespray
602201056
Title: Kubespray offline (on-premise) installation support Question: username_0: **What would you like to be added**:
Offline support for Kubespray requires RPMs to be installed and Docker images to be downloaded from the internet first, then made available on a cluster that is cut off from internet access.
Other than the generalized steps mentioned in https://github.com/kubernetes-sigs/kubespray/blob/master/docs/downloads.md, the document does not guide the user through defining the exact variables needed for an offline installation.
There are also steps that need to be performed for an offline installation, which according to my research are, at a high level:
1. Downloading RPMs (OS dependent)
2. Getting Docker images (dependent on the tags of the corresponding images)
These steps could be included as a block in one of the config files, defined by variables that the user populates; the complete setup would then run accordingly once the RPM and Docker image content is present in some repository.

**Why is this needed**:
This is needed to perform an offline on-premise cluster installation for one of our clients.
Please let me know what you think about this use case and how to proceed from here. We actively work on production on-prem cluster installs.

Answers:
username_1: There are many ways to do an offline environment, depending on the requirements and services available in said environment. Anyway, I'll flag @username_2 who showed some interest in offline environments.
username_0: Thank you @username_1 .
username_2: This is my current list I'm using for offline installation:
```yaml
# Registry overrides
gcr_image_repo: "{{ registry_host }}"
docker_image_repo: "{{ registry_host }}"
quay_image_repo: "{{ registry_host }}"
kubeadm_download_url: "{{ files_repo }}/kubernetes/{{ kube_version }}/kubeadm"
kubectl_download_url: "{{ files_repo }}/kubernetes/{{ kube_version }}/kubectl"
kubelet_download_url: "{{ files_repo }}/kubernetes/{{ kube_version }}/kubelet"
etcd_download_url: "{{ files_repo }}/kubernetes/etcd/etcd-{{ etcd_version }}-linux-amd64.tar.gz"
cni_download_url: "{{ files_repo }}/kubernetes/cni/cni-plugins-linux-{{ image_arch }}-{{ cni_version }}.tgz"
argocd_url: "{{ files_repo }}/kubernetes/argocd/argocd-{{ argocd_version }}-amd64"
crictl_download_url: "{{ files_repo }}/kubernetes/cri-tools/crictl-{{ crictl_version }}-{{ ansible_system | lower }}-{{ image_arch }}.tar.gz"
sonobuoy_url: "{{ files_repo }}/kubernetes/sonobuoy/sonobuoy_{{ sonobuoy_version }}-{{ ansible_system | lower }}-{{ sonobuoy_arch }}.tar.gz"
# CentOS/RedHat docker-ce repo
docker_rh_repo_base_url: "{{ yum_repo }}/docker-ce/7/x86_64/"
docker_rh_repo_gpgkey: "{{ yum_repo }}/repo-key.gpg"
```
Bottom line, the requirements for a full offline installation:
- A container registry that contains all the required images listed in `roles/download/defaults/main.yml` (depends on your setup)
- A YUM repository for your container runtime RPMs (CRI-O, Docker or containerd)
- A file server (HTTP or HTTPS) that can serve the different tarballs and binaries
- A Python PyPI server that hosts all Python packages required by Kubespray and their dependencies (installed with `pip install -r requirements.txt -i http://yourinternalpypiserver/simple/`)

I will update the docs accordingly to make this clear.
username_0: I propose defining a variable: `offline_install: true`, followed by:
```yaml
gcr_image_repo: "{% if offline_install %}{{ image_repo }}{% else %}gcr.io{% endif %}"
kube_image_repo: "{% if offline_install %}{{ image_repo }}{% else %}k8s.gcr.io{% endif %}"
docker_image_repo: "{% if offline_install %}{{ image_repo }}{% else %}docker.io{% endif %}"
quay_image_repo: "{% if offline_install %}{{ image_repo }}{% else %}quay.io{% endif %}"
```
A similar proposal holds for yum_repo as well.
From my research, I found that for the PyPI server, Jinja templating is the major package needed, along with maybe a couple of others, if any.
Also, binaries need to be introduced to the offline cluster manually, and they mostly consist of kubectl, kubelet and kubeadm only.
Since binaries and PyPI packages are limited in number, just carrying those in predefined dirs may be a more feasible option. We can define a dir like binaries/ and copy from that folder.
For PyPI we can similarly just install with `pip install *.whl` from a predefined dir.
Also, all variables for the offline install should be defined in one place only, e.g. in `roles/downloads/defaults/main.yml`.
Please suggest.
username_4: Hello, I have been testing offline install a few times and agree with the introduction of the var `offline_install: true`.
It helps with the following issues:
1. Control where images are being pulled from: the internet, a local repo, or a local cache on disk (the default behavior in Kubespray, which needs improvement).
2. Control whether docker re-pulls images or not. The challenge with offline images is that docker tries to connect to re-pull an image, even though it's already loaded on the node, if the image URL still points to an online resource because the image was not re-tagged. This seems to be a docker issue and Kubespray has some hacks in the playbooks. Example:
```
Failed to pull image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.10.1": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
```
Typically, one would download all Kubernetes images into <some_internal_image_repo>, re-tag them and push them to a local repo that is then accessed during the Kubespray installation. We do that for our application images successfully.
In my case, I wanted to use Kubespray's native "download_only" method with a dedicated "download" host, as I wanted to have images available locally on masters/nodes in case my "registry_host" goes kaput, or to have some level of control:
```
ansible-playbook -i ./inventory/mycluster/$my_inventory_file -b --become-user=root cluster.yml -e download_cache_dir="$my_root/kubespray_cache" -e download_keep_remote_cache=true -e download_localhost=true -e download_run_once=true -e download_force_cache=false -vv -e ignore_assert_errors=yes --flush-cache
```
I also agree with the introduction of these inventory items, as I have them as well.
Then in your inventory, you need to define the following variables:
```
registry_host
files_repo
yum_repo
```
I am not using files_repo, but the other two help: yum_repo points to my YUM repo with RPMs and Python files, and registry_host (there can be many) basically fetches the host for these group_vars:
```
repo_host_ip: "{{ hostvars[groups['registry_host'][0]]['ansible_host'] }}"
docker_insecure_registries: [ "{{ repo_host_ip }}:5000" ]
```
I will be happy to contribute my findings on the offline installer.
pwlin/cordova-plugin-file-opener2
71906079
Title: iOS :: "success" callback happens too quickly Question: username_0: This is called as soon as the *menu* is opened - not when the file has been opened. This makes it hard to clean up files, but also to know if a file was really opened or not.

Would it be possible to use [UIDocumentInteractionControllerDelegate](https://developer.apple.com/library/prerelease/ios/documentation/UIKit/Reference/UIDocumentInteractionControllerDelegate_protocol/index.html#//apple_ref/doc/uid/TP40009305) to get more information about what happened during the callout? This could also be used to give more informative error conditions.
Answers: username_1: @username_0 you are more than welcome to submit a pull request :)
username_0: I'd love to! Unfortunately my ObjC foo isn't up to the task. Apologies - my best way to help is by popping in requests.
username_1: There have been a couple of new releases since then. Can you please upgrade and test if you still have issues? Thanks.
Status: Issue closed
SpeciesFileGroup/taxonworks
673872380
Title: Objects details "locked" selections disappear randomly Question: username_0: **Describe the bug** Occasionally, when using the "lock" feature in comprehensive, the selected options in Comprehensive will just disappear and will need to be reselected. **To Reproduce** Steps to reproduce the behavior: 1. When I am using comprehensive to create lots of records in a row. 2. Then I select an option in "Object Details" such as "Adult" or another biocuration class. 3. And I select the lock button to enable. 4. Eventually, the locked input will disappear. Not predictable. **Expected behavior** Locked Object details should not randomly disappear. **Screenshots** If applicable, add screenshots to help explain your problem. **Environment (please identify where you experience this bug:** - Chrome - Production Answers: username_0: Suspicion: may occur when someone else creates a Collection Object that does not share these Object Details, resetting the lock feature, but not sure. username_1: I can't replicate it, if get this error again please add a screenshot. The lock button should be always present on the first row. username_0: Will do.
domenic/wpt-runner
378015977
Title: Support .any.js tests Question: username_0: ... and ideally then port the [streams tests](https://github.com/web-platform-tests/wpt/tree/master/streams) to [.any.js format](https://web-platform-tests.org/writing-tests/testharness.html), removing their [custom-generated wrappers](https://github.com/web-platform-tests/wpt/blob/24373750793a107c28dbfde50d61cfae192fb485/streams/generate-test-wrappers.js) The easiest way to do this is probably to copy the logic from https://github.com/web-platform-tests/wpt/blob/24373750793a107c28dbfde50d61cfae192fb485/tools/serve/serve.py#L219 into this project. /cc @username_1 Forked from https://github.com/nodejs/whatwg-stream/issues/1#issuecomment-436360523 Status: Issue closed Answers: username_1: PR for porting the streams tests: web-platform-tests/wpt#14172
360netlab/DGA
488447966
Title: The DGA of Enviserv Question: username_0: - MD5 4328048f82811146c0fd9e18faff7155
- [VT](https://www.virustotal.com/gui/file/51167a79b2f1a9afa55abaf32a6387b265fb6db2efd178cf3482547aaf4bfb59) analysis
- Domains generated on 2019/08/06
fe28753777.com
9dcd84b090.net
02261e64b3.org
20c97d8c3d.info
5ae4d66001.biz
e3bea872ae.in
150d064880.com
34636b0b94.net
4e8414394d.org
d84a6a7a28.info
...
- The threat report from [Microsoft](https://www.microsoft.com/en-us/wdsi/threats/malware-encyclopedia-description?Name=Trojan:Win32/Enviserv.A).
Answers: username_0: - Thanks to my colleague Jinye for helping reverse-engineer the binary file.
- TLDs: ['com', 'net', 'org', 'info', 'biz', 'in']
- Number of domains: 500 in total
- Test:
```
$ python dga.py -n 500
fe28753777.com
9dcd84b090.net
02261e64b3.org
20c97d8c3d.info
5ae4d66001.biz
e3bea872ae.in
150d064880.com
34636b0b94.net
4e8414394d.org
d84a6a7a28.info
......
```
The output is well matched to the domains generated by the sample.
[dga.py](https://github.com/360netlab/DGA/blob/master/code/enviserv/dga.py) is here.
reportportal/reportportal
393028258
Title: Launch not finishing even after its completion, it's still showing In Progress Question: username_0: **Describe the bug**
...

**Expected behavior**
...

**Screenshots**
If applicable

**Versions:**
- OS, Browser
- Version of RP [find it under Login form, copy as is]
Answers: username_1: This situation appears when your test runner tries to finish execution, but some test cases are still in progress at that moment. Take a look at the test cases which have the IN_PROGRESS state; they cause this situation. Find out why they are still in progress at the moment the runner finalizes the execution. This will also help you keep the code clean.
**Resolution:** We will remove this constraint in 5.0 on the API side. ETA for 5.0 is Jan-Feb 2019.
Status: Issue closed
username_0: No, the test case failed and it's still not finishing the launch at the suite level; check the screenshots:
![screen_082](https://user-images.githubusercontent.com/39002876/50286715-4cac4d80-0486-11e9-944d-6464dd7d6c2d.png)
![screen_083](https://user-images.githubusercontent.com/39002876/50286723-53d35b80-0486-11e9-8c51-9f098dd13efe.png)
username_1: @username_0 then it can be one of 2 reasons:
1) `FinishLaunchRQ` was sent before `FinishTestItemRQ`, or with an equal timestamp.
2) For some reason, the runner did not send `FinishLaunchRQ`.
Check the logs of your test automation. There should be a message returned by ReportPortal. This will give us a clue.
username_2: Fixed in 5.0
Status: Issue closed
username_3: Hi, I am using version 5.3.5 at the moment and still see this behavior. Can you please advise how to fix or debug it?
postmanlabs/postman-app-support
130978447
Title: cURL commands differ in Windows and Mac Question: username_0: For a given instance in Postman, the cURL command created by Postman is different in Windows and Mac (working fine in Mac, but not in Windows). What is the reason? Answers: username_1: @username_0 it would be related to #1655; we have fixed the issue in Mac `4.0`, which will soon be reflected in the Chrome versions as well. Status: Issue closed username_1: @username_0 we released our latest version `4.1.0`, can you update and check it?
rmosolgo/graphql-ruby
450941561
Title: Argument is defined in snake_case but field resolves into camelCase Question: username_0: graphql version: 1.9.6

See example below:

Ruby definition
```ruby
class Types::UserType < Types::BaseObject
  field :car_names, Types::CarType, null: false, resolve: ->(obj, args, ctx) {
    args[:car_color]  # Does not exist but should, since we implemented in snake case.
    args[:carColor]   # Exists
    args["carColor"]  # Also exists
  }
end

class Types::CarType < Types::BaseObject
  argument :car_color, String, required: true
end
```

GraphQL callsite
```graphql
{
  users(username: "my-username") {
    car_names(carColor: "blue") {
      ...
    }
  }
}
```
Answers: username_1: Hi @username_0, I believe the default behavior of GraphQL Ruby is to generate the schema in camel case. https://graphql-ruby.org/fields/arguments.html (search for "To disable auto-camelization"), read upwards and downwards a little. Sorry, there are no anchors around that area to give you a direct link.

If you'd like to disable camelization altogether, I believe you can do it by setting the default value in the args. https://graphql-ruby.org/fields/introduction.html#field-parameter-default-values

Let me know if this solves your issue.
username_2: Thanks for addressing this, @username_1. This is a "feature" :P In old graphql-ruby, it wasn't camelized, and many people requested automatic camelization. Please see the links shared above for disabling camelization and reopen if you have any trouble. Status: Issue closed
username_0: I think there's a misunderstanding here. This has nothing to do with the GraphQL schema, but rather how arguments are resolved on the Ruby side. In this example from the link provided:
```ruby
field :posts, [PostType], null: false do
  argument :start_date, String, required: true,
    prepare: ->(startDate, ctx) {
      # return the prepared argument.
      # raise a GraphQL::ExecutionError to halt the execution of the field and
      # add the exception's message to the `errors` key.
    }
end

def posts(start_date:)
  # use prepared start_date
end
```

the argument `start_date` is both defined in snake_case and resolved in snake_case. However, in my initial example, I cannot use `args[:start_date]` (or `args[:car_color]`). Instead, I have to use `args[:startDate]`. Shouldn't this be consistent?
username_2: Oh 🤦‍♂ I see what you mean, I'm sorry I didn't read closely! I see now that your example is using `resolve: ->(obj, args, ctx) { ... }`. Resolve procs are supported for backwards compatibility. They use camel-cased arguments just like old graphql-ruby versions did. This was to make it easier for people to migrate gradually. I recommend using a method instead!
username_0: Got it. I didn't know resolve was deprecated. Thanks!
majicmaj/legendsapi
481182457
Title: Return a single object for show Question: username_0: This is returning an array with just a single object. Can you make it so that it only returns one object? https://github.com/majicmaj/legendsapi/blob/aed02139aed0eabc294bc4ae413897e1b3dd1c04/controllers/item.js#L7-L8 Answers: username_0: For the map controller too: https://github.com/majicmaj/legendsapi/blob/aed02139aed0eabc294bc4ae413897e1b3dd1c04/controllers/map.js#L7-L8
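A hedged sketch of the likely fix (the controller source isn't shown here, so the Mongoose-style `find`/`findOne` calls are mocked and the names are assumptions): `Model.find()` resolves to an array even when the query matches a single document, so a show endpoint built on it responds with `[ { ... } ]`; switching to `findOne` (or `findById`) yields the bare object, or `null` when nothing matches.

```typescript
interface Doc { _id: string; name: string; }

const docs: Doc[] = [
  { _id: "1", name: "sword" },
  { _id: "2", name: "map" },
];

// Stand-ins for Mongoose's Model.find / Model.findOne
const Item = {
  find: async (q: { _id: string }): Promise<Doc[]> =>
    docs.filter((d) => d._id === q._id),
  findOne: async (q: { _id: string }): Promise<Doc | null> =>
    docs.find((d) => d._id === q._id) ?? null,
};

// Before: a show handler built on find() sends an array even for one match
async function showAsArray(id: string) {
  return Item.find({ _id: id });
}

// After: a show handler built on findOne() sends the single object (or null)
async function showAsObject(id: string) {
  return Item.findOne({ _id: id });
}

showAsArray("1").then((r) => console.log(Array.isArray(r))); // true
showAsObject("1").then((r) => console.log(r?.name)); // "sword"
```

The same one-line change would apply to the map controller mentioned above.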