<issue_start><issue_comment>Title: Add better copy+paste error handling (again) username_0: Summary of changes from commits:

* Fix a small error I forgot about in Firefox
* Return a promise when stopping a connection (add it to the actual core methods and then propagate those promises to the UI)
* General error revamp
* Specific error messages for different cases
* Do a better job detecting error cases (mostly pasting links at bad times) and provide reasonable handling
* Switch to share mode if pasting a link while trying to set up a get connection
* Do not allow any links to be pasted while a connection is established
* Do not allow unexpected links to be handled in any other cases
* Ending a connection takes the user to a copy+paste-specific page instead of the general one
* Errors are displayed in bubbles near relevant buttons (or the bar) instead of a random span
* Going back is now considered to be ending the connection instead of leaving the connection open while on a different page
* Switching to copy-paste mode is now done through a core-signal instead of the previous way of calling a global function
* Require the user to click a button to start using the proxy after successfully establishing the connection

As usual, check commit messages for more in-depth explanations of what's going on in any commit.

[Review on Reviewable](https://reviewable.io/reviews/uproxy/uproxy/1114)
<issue_comment>username_1: Comments from the [review on Reviewable.io](https://reviewable.io:443/reviews/uproxy/uproxy/1114)

---

<sup>**[src/generic_core/remote-connection.ts, line 115 \[r1\]](https://reviewable.io:443/reviews/uproxy/uproxy/1114#-JkndqTBgRppJa9VfsPS)** ([raw file](https://github.com/uproxy/uproxy/blob/75b72982a48d4b7d5abe8854fe53cf0d7d8c18b6/src/generic_core/remote-connection.ts#L115)):</sup> I'm confused about why these lines were switched. I believe rtcToNet_.close() is async, so shouldn't this.stateRefresh_ do the same thing whether it's called before or after rtcToNet_.close?

---

<sup>**[src/generic_ui/polymer/copypaste.ts, line 51 \[r1\]](https://reviewable.io:443/reviews/uproxy/uproxy/1114#-JknVaZzC2GUAjO88-HQ)** ([raw file](https://github.com/uproxy/uproxy/blob/75b72982a48d4b7d5abe8854fe53cf0d7d8c18b6/src/generic_ui/polymer/copypaste.ts#L51)):</sup> Can we change the names of prev and doBack to be a little more descriptive? prev means something more like handleBackArrowClick and doBack means something like stopCopyAndPasteAndGoBack... There might be more concise ways of saying this while still doing a bit more to explain the differences

---

<sup>**[src/generic_ui/polymer/copypaste.ts, line 98 \[r1\]](https://reviewable.io:443/reviews/uproxy/uproxy/1114#-JkneDRo0U6M2ZHdS0As)** ([raw file](https://github.com/uproxy/uproxy/blob/75b72982a48d4b7d5abe8854fe53cf0d7d8c18b6/src/generic_ui/polymer/copypaste.ts#L98)):</sup> is it possible that this would ever not be NONE? Would we want to print an error in that case?

---

<sup>**[src/generic_ui/scripts/ui.ts, line 161 \[r1\]](https://reviewable.io:443/reviews/uproxy/uproxy/1114#-JknRUXkaROGkIFXR9us)** ([raw file](https://github.com/uproxy/uproxy/blob/75b72982a48d4b7d5abe8854fe53cf0d7d8c18b6/src/generic_ui/scripts/ui.ts#L161)):</sup> Can you add a comment explaining what this is used for?
---

<sup>**[src/generic_ui/scripts/ui.ts, line 265 \[r1\]](https://reviewable.io:443/reviews/uproxy/uproxy/1114#-JkneQZFpllyJzNe7fcG)** ([raw file](https://github.com/uproxy/uproxy/blob/75b72982a48d4b7d5abe8854fe53cf0d7d8c18b6/src/generic_ui/scripts/ui.ts#L265)):</sup> can you add comments on which of these methods are only used for copy+paste? Thanks

---

<issue_comment>username_1: Comments from the [review on Reviewable.io](https://reviewable.io:443/reviews/uproxy/uproxy/1114)

---

Once those comments are addressed it's OK to merge

---

<issue_comment>username_0: Comments from the [review on Reviewable.io](https://reviewable.io:443/reviews/uproxy/uproxy/1114)

---

<sup>**[src/generic_ui/polymer/copypaste.ts, line 51 \[r1\]](https://reviewable.io:443/reviews/uproxy/uproxy/1114#-JknVaZzC2GUAjO88-HQ)** ([raw file](https://github.com/uproxy/uproxy/blob/75b72982a48d4b7d5abe8854fe53cf0d7d8c18b6/src/generic_ui/polymer/copypaste.ts#L51)):</sup> How would `handleBackClick` and `exitMode` work?

---

<issue_comment>username_0: Comments from the [review on Reviewable.io](https://reviewable.io:443/reviews/uproxy/uproxy/1114)

---

I'm going to push the changes but not merge, feel free to do that tomorrow if it looks good.

---

<sup>**[src/generic_core/remote-connection.ts, line 115 \[r1\]](https://reviewable.io:443/reviews/uproxy/uproxy/1114#-JkndqTBgRppJa9VfsPS)** ([raw file](https://github.com/uproxy/uproxy/blob/75b72982a48d4b7d5abe8854fe53cf0d7d8c18b6/src/generic_core/remote-connection.ts#L115)):</sup> I don't think there is any reason to have them in a specific order, that was fewer lines than doing

```
var closed = this.rtcToNet_.close();
this.stateRefresh_();
return closed;
```

though

---

<sup>**[src/generic_ui/polymer/copypaste.ts, line 51 \[r1\]](https://reviewable.io:443/reviews/uproxy/uproxy/1114#-JknVaZzC2GUAjO88-HQ)** ([raw file](https://github.com/uproxy/uproxy/blob/75b72982a48d4b7d5abe8854fe53cf0d7d8c18b6/src/generic_ui/polymer/copypaste.ts#L51)):</sup> Okay, going with these names for now. I'm not totally happy with either of these, but it's better than what I had.

---

<sup>**[src/generic_ui/polymer/copypaste.ts, line 98 \[r1\]](https://reviewable.io:443/reviews/uproxy/uproxy/1114#-JkneDRo0U6M2ZHdS0As)** ([raw file](https://github.com/uproxy/uproxy/blob/75b72982a48d4b7d5abe8854fe53cf0d7d8c18b6/src/generic_ui/polymer/copypaste.ts#L98)):</sup> Yes, it would be if you had navigated to copy+paste mode before pasting the request link.

---

<sup>**[src/generic_ui/scripts/ui.ts, line 114 \[r1\]](https://reviewable.io:443/reviews/uproxy/uproxy/1114#-JkoZRcl_jW1tTsQBjwk)** ([raw file](https://github.com/uproxy/uproxy/blob/75b72982a48d4b7d5abe8854fe53cf0d7d8c18b6/src/generic_ui/scripts/ui.ts#L114)):</sup> Done.

---

<sup>**[src/generic_ui/scripts/ui.ts, line 161 \[r1\]](https://reviewable.io:443/reviews/uproxy/uproxy/1114#-JknRUXkaROGkIFXR9us)** ([raw file](https://github.com/uproxy/uproxy/blob/75b72982a48d4b7d5abe8854fe53cf0d7d8c18b6/src/generic_ui/scripts/ui.ts#L161)):</sup> Done.

---

<sup>**[src/generic_ui/scripts/ui.ts, line 265 \[r1\]](https://reviewable.io:443/reviews/uproxy/uproxy/1114#-JkneQZFpllyJzNe7fcG)** ([raw file](https://github.com/uproxy/uproxy/blob/75b72982a48d4b7d5abe8854fe53cf0d7d8c18b6/src/generic_ui/scripts/ui.ts#L265)):</sup> Done.
---

<issue_comment>username_1: Comments from the [review on Reviewable.io](https://reviewable.io:443/reviews/uproxy/uproxy/1114)

---

:+1:
<issue_start><issue_comment>Title: Screenshots should be cacheable but use timestamps/hashes username_0: The ideal solution is to use the dragonfly gem for handling image uploads. Then the designer can request just the image sizes he needs dynamically. These images will automatically be assigned a hash in order to bust the cache.<issue_closed>
<issue_start><issue_comment>Title: Increase scheduled_test timeout on dart2js transformer tests username_0: Those tests are intermittently flaking on Windows. Since running dart2js takes a while, I think it's scheduled_test that's killing the test before it completes. We should probably bump up the timeout on those tests.
<issue_start><issue_comment>Title: initial support for storcli username_0: install storcli, remove megaraid
use storcli for monitoring
check matching PR for checks - https://github.com/blueboxgroup/ursula-monitoring/pull/1
<issue_comment>username_1: This part looks okay, other than I don't like adding more external package sources. <issue_comment>username_1: Tests pass, have you been able to test this out on real hardware? <issue_comment>username_1: Merging for test deploys. We can always revert if it's no good.
<issue_start><issue_comment>Title: List of files/blob username_0: How do I get the list? <issue_comment>username_1: ...or I might expand on what @username_0 is asking; I am just learning Feathers, so forgive me if this is a noob question. Would this be considered a "typical" Feathers "service"? Is there really any method other than `create`? You wouldn't be applying other methods such as `update` or `find`, or even query this service in any way, but rather go directly to the filesystem? The reason that I am confused is that I believe I read somewhere that every feathers service has specific methods;

````js
// reference:
var myService = {
  find: function(params, callback) {},
  get: function(id, params, callback) {},
  create: function(data, params, callback) {},
  update: function(id, data, params, callback) {},
  updateMany: function(data, params, callback) {},
  patch: function(id, data, params, callback) {},
  patchMany: function(data, params, callback) {},
  remove: function(id, params, callback) {},
  removeMany: function(params, callback) {},
  setup: function(app) {}
}
````

Would it be correct to say that a second service, such as a `media` db service, is needed to add information, such as the owner, and the upload file "service" would be invoked when creating a new `media` object? <issue_comment>username_2: Anything that has at least one of those methods is considered a service but yes, if you want to store additional metadata you create a separate service to store it (as well as for getting a list of media). To answer the original question, the [abstract blob store](https://github.com/maxogden/abstract-blob-store) does not include any functionality to list all files (probably because some of the supported storages don't). As mentioned, the solution would be to store and list metadata in a separate database backed service.<issue_closed>
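To illustrate the split username_2 describes, here is a minimal sketch. The service name `media`, the field names, and the use of `feathers-memory` are illustrative assumptions, not the actual wiring of feathers-blob:

```js
// Hypothetical wiring: a blob service stores the bytes, while a second
// 'media' service stores queryable metadata (owner, name, blob id).
var feathers = require('feathers');
var memory = require('feathers-memory');

var app = feathers()
  .use('/media', memory()); // metadata service with full find/get/create/...

// after the blob service stores an upload, record its metadata:
app.service('media').create({
  owner: 'user-42',
  name: 'cat.png',
  blobId: 'sha256-abc123.png' // id returned by the blob/upload service
}).then(function () {
  // "getting the list" is now just a query on the metadata service:
  return app.service('media').find({ query: { owner: 'user-42' } });
}).then(function (results) {
  console.log(results);
});
```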
<issue_start><issue_comment>Title: Clean up GerritCheckoutProvider.getCloneBaseUrl() username_0: This is to track that [this code](https://github.com/uwolfer/gerrit-intellij-plugin/blob/master/src/main/java/com/urswolfer/intellij/plugin/gerrit/extension/GerritCheckoutProvider.java#L144) can be cleaned up now as [Gerrit issue 2208](https://code.google.com/p/gerrit/issues/detail?id=2208) is fixed.
<issue_start><issue_comment>Title: Filters With Exclamation Mark username_0: It would be nice if Liquid supported filters that end with an exclamation mark. The current support is a little odd though, as the strict parser allows it, but the lax does not.

Lax
```ruby
$ > template = ::Liquid::Template.parse(%q({{'yep' | test!}}))
=> #<Liquid::Template:0x007fc93a8c1ad0 ... @filters=[["test", []]]>]>, @warnings=nil>
```

Strict
```ruby
$ > template = ::Liquid::Template.parse(%q({{'yep' | test!}}))
=> #<Liquid::Template:0x007fc93a8fbf78 ... @filters=[["test!", []]]>]>, @warnings=nil>
```

I have been able to patch it for a long time, but I'm not sure what other implications there would be, or whether other non-alphanumeric characters would be desired. I can put in a PR if desired. <issue_comment>username_1: In Liquid 3 the strict parser no longer supports this, for consistency with the lax parser. At some point we decided that filters (being idempotent) shouldn't have a use for `!` suffixes. Feel free to reopen this issue if you disagree.<issue_closed>
<issue_start><issue_comment>Title: Order of val in relation to children? username_0: I don't know if this is an issue because I'm not 100% familiar with XML specifications, but I don't seem to be able to tell where the val is in relation to the children. For example, this xml string

```
<some>
  <child> first child val </child>
  some val
  <child>second child val</child>
</some>
```

seems to give no indication of where some val is, and if the string is like this

```
<some>
  <child>first child val</child>
  some val
  <child>second child val</child>
  some more val
</some>
```

some val is completely ignored in favor of some more val. <issue_comment>username_1: So you are correct, this is valid XML. But `xmldoc` doesn't support this kind of "mixed content" in favor of the simplicity of the `val` property. Otherwise you'd have to write `node.children[0].val` where the child is a text node, instead of the more intuitive `node.val`. Are you parsing HTML or XML? <issue_comment>username_0: I'm parsing XML from an open document format file. To be clear, I haven't actually run into mixed content, but I don't know if I will get mixed content too, though I think it's unlikely. What I'm more worried about is the first case, where I don't know if some val comes after the first child.<issue_closed> <issue_comment>username_1: Yeah, that information will unfortunately be lost by `xmldoc`. You could probably make some simple modifications to the library to retain the text nodes as additional children, or you might investigate a more full-featured library like [node-elementtree](https://github.com/racker/node-elementtree). I hope this answers your original question - closing this issue for now!
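A small sketch of the limitation discussed above, using xmldoc's `XmlDocument` API; the comments describe what the thread implies about mixed content, so treat them as assumptions rather than documented behavior:

```js
var XmlDocument = require('xmldoc').XmlDocument;

var doc = new XmlDocument(
  '<some><child>first child val</child> some val <child>second child val</child></some>'
);

// Mixed content is collapsed into the element's single `val`, so the
// position of "some val" relative to the <child> elements is lost.
console.log(doc.val);

// The element children are still all there, in order:
console.log(doc.childrenNamed('child').map(function (c) { return c.val; }));
```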
<issue_start><issue_comment>Title: Update to npm@next username_0: Work I did a while ago and didn’t care to PR, sorry. :smile: I can update the commit messages to be a bit more descriptive if you like, they weren’t intended for public eyes when I wrote them :P cc @username_1 <issue_comment>username_1: Merged as 1f197e4
<issue_start><issue_comment>Title: Setting polymorphic field just uses base class username_0: If I have a POJO with a field whose type is the base class of a polymorphic hierarchy (defined using Jackson's @JsonTypeInfo to deserialize to different sub-classes based, e.g., on an enum field of that base class), then when I do an update setting the field to an instance of one of the base class's subclasses, the update operates as if I had specified just the base class.

In the below program I get the output:

    Updated value: ShapeAndString [string=square-to-circle, shape=Circle [radius=0, Shape [type=CIRCLE]]]
    Expected value: ShapeAndString [string=square-to-circle, shape=Circle [radius=6, Shape [type=CIRCLE]]]
    NOT MATCHED

The value of the 'radius' field has not been set in the database because the findAndModify operation has treated the value being set merely as a Shape and not a Circle. This is odd, because inserts and finds serialize and deserialize the polymorphic type fine.

This seems to be a problem in `org.mongojack.internal.util.SerializationUtils.serializeDBUpdate`, or maybe `findUpdateSerializer`. The BeanPropertyWriter for the shape field found in findUpdateSerializer has a null _serializer field, and so it calls `findValueSerializer` for the type of the BeanPropertyWriter, i.e. the base class. I'm not sure what the fix is here to teach serializeDBUpdate to serialize the whole POJO specified in the DBUpdate.

Here's my test program:

```java
package bugs.mongojack;

import org.mongojack.DBQuery;
import org.mongojack.DBUpdate;
import org.mongojack.Id;
import org.mongojack.JacksonDBCollection;

import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import com.fasterxml.jackson.annotation.JsonSubTypes.Type;

import com.mongodb.DB;
import com.mongodb.MongoClient;
import com.mongodb.ServerAddress;
import com.mongodb.WriteConcern;

public class PolymorphicMember {

    public static enum ShapeType {
        SQUARE, CIRCLE;
    }

    @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.EXISTING_PROPERTY, property = "type")
    @JsonSubTypes({
        @Type(name = "SQUARE", value = Square.class),
        @Type(name = "CIRCLE", value = Circle.class),
    })
    public static class Shape {
        @JsonProperty
        public ShapeType type;

        @Override
        public boolean equals(Object othat) {
            if (othat == null) return false;
            if (!(othat instanceof Shape)) return false;
            Shape that = (Shape)othat;
            return that.type.equals(type);
        }

        @Override
        public String toString() {
            return "Shape [type=" + type + "]";
        }
    }

    public static class Square extends Shape {
        @JsonProperty
        public long length;

        public Square() {
            super.type = ShapeType.SQUARE;
        }

        @Override
        public boolean equals(Object othat) {
            if (this == othat) return true;
            if (othat == null) return false;
            if (!(othat instanceof Square)) return false;
            Square that = (Square)othat;
            return super.equals(that) && that.length == length;

[Truncated]

            System.err.println("No updated object returned");
            System.exit(1);
        }

        ShapeAndString expected = new ShapeAndString();
        expected.string = inserted.string;
        expected.shape = circle;

        System.out.println("Updated value: " + updated);
        System.out.println("Expected value: " + expected);
        if (updated.equals(expected)) {
            System.out.println("MATCHED");
        } else {
            System.out.println("NOT MATCHED");
        }
    }
}
```

<issue_comment>username_1: I am experiencing this exact same issue.
<issue_start><issue_comment>Title: Update CocoaPods to 3.1.3 username_0: Hi Erik, The 3.1.3 release is not available yet on CocoaPods. Would you mind updating the podspec? That would be super helpful :) Let me know if there's any way that I can help. <issue_comment>username_1: yes, that would be great :-) <issue_comment>username_2:

    -> OCMock (3.1.3)
     - ERROR | [watchOS] Returned an unsuccessful exit code. You can use `--verbose` for more information.
     - NOTE  | [BEROR]error: There is no SDK with the name or path 'watchos'
     - NOTE  | [watchOS] error: There is no SDK with the name or path 'watchos'

    Analyzed 1 podspec.

    [!] The spec did not pass validation, due to 1 error.

Any idea what I need to do to fix this? <issue_comment>username_0: I guess it's because of [this](https://github.com/CocoaPods/CocoaPods/issues/3925), so for now it would be necessary to add `s.platform` to the spec to validate with Xcode < 7. From the [documentation](https://guides.cocoapods.org/syntax/podspec.html#platform), I didn't figure out how to disable only watchOS. I'm happy to take a look, but it's gonna be only by the end of next week. Unfortunately I'm on vacation and didn't bring my computer with me :) <issue_comment>username_1: You have to define the platform in your podspec file. I have created a pull request which fixes this issue: https://github.com/username_2/ocmock/pull/226 With that fix, pod spec lint passes the validation :) <issue_comment>username_2: Should be in the pod repo later today, see #224/#226. As far as I am aware the changes from #206 are merged into the master branch. What makes you think they aren't? <issue_comment>username_1: Thanks for pushing the new pod spec :)<issue_closed>
<issue_start><issue_comment>Title: Error from using shapefile with multiple points username_0: Getting the following error in the console from the web example: http://leaflet.username_1.com/#3/32.62/10.63 `shit RangeError: Offset is outside the bounds of the DataView(…)` I used the shapefile from: http://lynxlynx.info/gis/multipoint.shape.zip <issue_comment>username_1: it is, from what I can tell, also breaking cartodb when I tried to upload it there, do you per-chance have some non-ascii characters in some fields as that it doesn't handle well <issue_comment>username_0: Yes, shp2pgsql will only decode this file when -W latin1 is passed in. `shp2pgsql -t 2D -W LATIN1 ~/Downloads/multipoint.shp` ``` shp2pgsql -t 2D -W LATIN1 ~/Downloads/multipoint.shp | more Shapefile type: MultiPointZ Postgis type: MULTIPOINT[2] SET CLIENT_ENCODING TO UTF8; SET STANDARD_CONFORMING_STRINGS TO ON; BEGIN; CREATE TABLE "multipoint" (gid serial, "o�i�ï¿" varchar(1), "naziv" varchar(89), "dostop" float8, "oddaljenos" float8, "lega" float8, "povr�ina" numeric, "prostornin" float8, "organski o" float8, "gradbeni o" float8, "komunalni" float8, "kosovni od" float8, "pnevmatike" float8, "motorna vo" float8, "salonitne" float8, "nevarni od" float8, "sodi z nev" varchar(1), "opis in ko" varchar(255), "velik del" varchar(1), "opombe" varchar(255), "ob�ina" float8, "parcela" float8, "izpisi" varchar(254), "katastrska" float8, "odgovornos" varchar(1), "datum vnos" varchar(10), "datum zadn" varchar(10), "tiskaj" varchar(114), "ocena pome" float8, "vrstni red" float8, "irsop" float8); ALTER TABLE "multipoint" ADD PRIMARY KEY (gid); SELECT AddGeometryColumn('','multipoint','geom','0','MULTIPOINT',2); INSERT INTO "multipoint" ("o�i�ï¿","naziv","dostop","oddaljenos","lega","povr�ina","prostornin","organski o","gradbeni o","komunalni","kosovni od","pnevmatike","motorna vo","salonitne","nevarni od","sodi z nev","opis in ko","velik de l","opombe","ob�ina","parcela","izpisi","katastrska","odgovornos","datum vnos","datum zadn","tiskaj","ocena pome","vrstni red","irsop",geom) VALUES ('F',NULL,'3','50','2','20.0000000000000000','1','0','0','100','0','0','0','0','0','F',NU LL,'F',' <p>Pripeljano s prikolico in stre?eno po pobo?ju proti akumulacijskemu jezeru. <br/> Precej neprijeten dostop zaradi nestabilnosti pobo?ja, v bli?ino se da pripeljati z avtom.</p> <p><font size="2"><strong>Po celotni obali ob travniku je v obr','41','3808699',NULL,'2597','F','2010/03/28','2011/02/04','<a target="_blank" href="http://register.ocistimo.si/RegisterDivjihOdlagalisc/PrintOdlagalisce?F=765">Tiskaj</a>','1 ','60','41','010400000001000000010100000000000000D9551A41295C8FC21F520141'); INSERT INTO "multipoint" ("o�i�ï¿","naziv","dostop","oddaljenos","lega","povr�ina","prostornin","organski o","gradbeni o","komunalni","kosovni od","pnevmatike","motorna vo","salonitne","nevarni od","sodi z nev","opis in ko","velik de l","opombe","ob�ina","parcela","izpisi","katastrska","odgovornos","datum vnos","datum zadn","tiskaj","ocena pome","vrstni red","irsop",geom) VALUES ('F',NULL,'3','100','2','10.0000000000000000','3','0','100','0','0','0','0','0','0','F',N ULL,'F',' <p>Dostop je sicer lahko tudi npr. s traktorjem - saj je bil material tako tudi pripeljan, vendar je verjetno parcela privatna (travnik).<br/> Prkolica ali dve gradbenega materiala, stre?ena po pobo?ju. 
Vmes tudi nekaj komunalnih odpadkov (vidi se j','41','3808888',NULL,'2597','F','2010/03/28','2011/02/04','<a target="_blank" href="http://register.ocistimo.si/RegisterDivjihOdlagalisc/PrintOdlagalisce?F=766">Tiskaj</a>','1','35','41','01040000000100000001 0100000000000000B5581A41295C8FC2EF520141'); <issue_comment>username_1: see username_1/parseDBF#9<issue_closed>
<issue_start><issue_comment>Title: Please explain how to get NH township versus county level results via the Python API username_0: ![of course i've slept why do you ask](https://media.giphy.com/media/YRHUxUsud3200/giphy.gif) <issue_comment>username_1: Township results are just results where `level="township"` and county results are those where `level="county"`. The command to load results in NH will actually load all three levels -- state, county, and township. You can query out the ones you want after loading. <issue_comment>username_1:
```
e = Election(electiondate='2016-02-09', datafile=self.data_url,
             testresults=False, liveresults=True, is_test=False)
township_results = [z for z in e.results if z.level == 'township']
```
<issue_comment>username_1: @username_0 Does the explanation above work for you?<issue_closed> <issue_comment>username_0: Yes. You are a hero.
<issue_start><issue_comment>Title: Use icon to open md-select (or dynamically open it) username_0: Hi, I would like to know if it's possible to have an icon that triggers a md-select. Look at this image, this is the desired effect that I would like to have. [Select triggered by icon](http://i.stack.imgur.com/JZ1qk.jpg) Does anyone have some advice on how to achieve it? It would be nice to have a service like $mdSelect to dynamically open/close the select like how it happens with sidenavs. Thank you! <issue_comment>username_0: If anyone is interested, for the moment I ended up using a md-menu with a (Angular) repeater inside it.<issue_closed> <issue_comment>username_1: This is possible. See the [demo page](https://material.angularjs.org/latest/#/demo/material.components.menu) which shows `md-position-mode` and `md-offset`. [The docs](https://material.angularjs.org/latest/#/api/material.components.menu/directive/mdMenu) should also clarify this.
<issue_start><issue_comment>Title: Use /batch for multiple tracking ids username_0: Referring to [Batching multiple hits in a single request](https://developers.google.com/analytics/devguides/collection/protocol/v1/devguide#batch), it looks like we can send just one request when dealing with multiple tracking ids rather than creating a new roUrlTransfer for each tracking id. <issue_comment>username_0: Closing. No further improvements since Roku announced native support for google analytics.<issue_closed>
<issue_start><issue_comment>Title: Cannot display the response if it cannot format it username_0: - VSCode Version: 1.3
- OS Version: Win 7

When the response contains an element that cannot be parsed (a PHP error, for example), it displays the last preview or displays an error message: "Unable to open '\response-preview': Unexpected token <." It would be great to display the raw data to be able to read the error. Cheers
<issue_comment>username_1: @username_0, could you please describe the error sample in more detail - is the '<' in the response header, or the body?
<issue_comment>username_2: I get this error too, but I'm not sure about the exact circumstances.
VSCode: 1.3.1
OS: Win 7
Here is the error from the console:

    shell.ts:416 data.replace is not a function: TypeError: data.replace is not a function
    at Function.HttpResponseTextDocumentContentProvider.escase (C:\Users\user1\.vscode\extensions\humao.rest-client-0.5.3\out\src\views\httpResponseTextDocumentContentProvider.js:48:21)
    at Function.HttpResponseTextDocumentContentProvider.formatHeaders (C:\Users\user1\.vscode\extensions\humao.rest-client-0.5.3\out\src\views\httpResponseTextDocumentContentProvider.js:29:69)
    at HttpResponseTextDocumentContentProvider.provideTextDocumentContent (C:\Users\user1\.vscode\extensions\humao.rest-client-0.5.3\out\src\views\httpResponseTextDocumentContentProvider.js:12:699)
    at c:\Program Files (x86)\Microsoft VS Code\resources\app\out\vs\workbench\node\extensionHostProcess.js:10:22159
    at c:\Program Files (x86)\Microsoft VS Code\resources\app\out\vs\workbench\node\extensionHostProcess.js:7:441
    at new n.Class.derive._oncancel (c:\Program Files (x86)\Microsoft VS Code\resources\app\out\vs\workbench\node\extensionHostProcess.js:5:16489)
    at Object.l [as asWinJsPromise] (c:\Program Files (x86)\Microsoft VS Code\resources\app\out\vs\workbench\node\extensionHostProcess.js:7:406)
    at e.$provideTextDocumentContent (c:\Program Files (x86)\Microsoft VS Code\resources\app\out\vs\workbench\node\extensionHostProcess.js:10:22123)
    at c:\Program Files (x86)\Microsoft VS Code\resources\app\out\vs\workbench\node\extensionHostProcess.js:10:21758
    at e.invoke (c:\Program Files (x86)\Microsoft VS Code\resources\app\out\vs\workbench\node\extensionHostProcess.js:6:23783)
    at e.fire (c:\Program Files (x86)\Microsoft VS Code\resources\app\out\vs\workbench\node\extensionHostProcess.js:6:25565)
    at HttpResponseTextDocumentContentProvider.update (C:\Users\user1\.vscode\extensions\humao.rest-client-0.5.3\out\src\views\httpResponseTextDocumentContentProvider.js:23:27)
    at C:\Users\user1\.vscode\extensions\humao.rest-client-0.5.3\out\src\controllers\requestController.js:48:41
    at process._tickCallback (internal/process/next_tick.js:103:7)
    e.onUnexpectedError @ shell.ts:416
    (anonymous function) @ shell.ts:318
    e.onUnexpectedError @ errors.ts:73
    u @ errors.ts:88
    e.onUnexpectedExtHostError @ mainThreadErrors.ts:12
    e.handle @ abstractThreadService.ts:34
    s @ ipcRemoteCom.ts:269
    f @ ipcRemoteCom.ts:226
    _combinedTickCallback @ internal/process/next_tick.js:67
    _tickCallback @ internal/process/next_tick.js:98

<issue_comment>username_1: @username_2 it's very strange that string.replace is not a function, can you reproduce this?
<issue_comment>username_2: I made a gif with the bug: ![rest-bug](https://cloud.githubusercontent.com/assets/4302588/16794376/2cad2e64-48e0-11e6-9bcd-a8e599db4837.gif)
<issue_comment>username_1: @username_2, I can't view your image
<issue_comment>username_2: Steps to reproduce:
1. save a file with the rest extension (test.rest)
2. do a GET http://example.com HTTP/1.1
3. close the response-view pane
4. now change the domain to google.com (GET http://google.com HTTP/1.1)
5. this is when I receive the error in the console, and the response-view pane pops up with the response data of the previous request (from example.com)
<issue_comment>username_0: I can view it. But it's from a different source for me. Now I get another error: Unable to open '\response-preview': data.replace is not a function. Here is the answer I get in a browser (as RAW): \<br /> \<b>Notice\</b>: Undefined variable: params in \<b>C:\var\www\******.php\</b> on line \<b>58\</b>\<br /> {"error":"Nothing has been found"} It is this '\<br/>' tag that caused the previous error => Unexpected token <. username_2, it displays the previous data because it can't display the new one (is there a cache?). If I close VSC, open it, write "GET http://google.com HTTP/1.1" into a new file and run the command, I get the error: Unable to open '\response-preview': data.replace is not a function. I just copied the url into the browser, and here are 2 pictures from Firefox developer tools that have a better display. Chrome only displays the raw message. ![2016-07-13 09_07_28-http___gateway local cine intra_rest_bypass_auth yes method getcrmsrbyid sr_id 3](https://cloud.githubusercontent.com/assets/1171858/16794626/55679b98-48d9-11e6-9a51-2ff05813570f.png) ![2016-07-13 09_07_43-http___gateway local cine intra_rest_bypass_auth yes method getcrmsrbyid sr_id 3](https://cloud.githubusercontent.com/assets/1171858/16794627/55695dc0-48d9-11e6-9151-d8101ef1553a.png)
<issue_comment>username_2: @username_1, this bug was not present before the updates (of VSCode 1.3.1 and REST Client 0.5.3)
<issue_comment>username_1: @username_0 @username_2 I think this may be related to a change in v0.5.3, where I call the escape method for each response header value
<issue_comment>username_1: @username_0 @username_2 I have created the new version 0.5.4, you can update to it and the bug should be solved
<issue_comment>username_0: GET http://google.com HTTP/1.1 is working. My other link is back to the previous error: "Unable to open '\response-preview': Unexpected token <." The parse shouldn't work, but is it possible to see the RAW data?
<issue_comment>username_2: @username_1 Yes, for me, this issue is no longer present. Thanks!
<issue_comment>username_1: @username_0, could you please provide me the raw request that you issued so that I can debug? Thanks in advance
<issue_comment>username_0: Check my previous comment. Here is the answer I get in a browser (as RAW): <br /> <b>Notice</b>: Undefined variable: params in <b>C:\var\www******.php</b> on line <b>58</b><br /> {"error":"Nothing has been found"} It is this '<br/>' tag that caused the previous error => Unexpected token <.
<issue_comment>username_1: @username_0, sorry for bothering you to repeat again, but is this the response body? If it is, I can't repro this error, since my code escapes the response body before returning it.
<issue_comment>username_0: It's alright :) Yes, this is the response body. Firefox developer edition has a JSON viewer: https://developer.mozilla.org/en-US/docs/Tools/JSON_viewer I used it to render the rest response body from my rest api. When I make a mistake in my PHP code, it displays an error code: \<br /> \<b>Notice\</b>: Undefined variable: params in \<b>C:\var\www******.php\</b> on line \<b>58\</b>\<br /> (This is an example.) The problem is that the browser and your extension can't render/format the JSON code because it's not JSON (but you know that).
Btw, the browser and other apps allow displaying the raw/pure data without formatting it (as shown in my previous post's pictures). That allows me to see where the error comes from and fix it. Your extension doesn't allow viewing raw data because it forces the JSON format and errors out when that fails (I suppose). Maybe you should parse the JSON and, if you get an error, just display the data without parsing it. Here is the PHP function that allows that: http://php.net/manual/en/function.json-last-error.php I don't know the extension language or VSC code, so this is just a PHP example :( I hope you understand what I'm trying to say :)

ps: the XML format doesn't trigger the VSC error, because the \<br/> tag or "<" token can be parsed as XML:

\<br /> \<b>Notice\</b>: Undefined variable: params in \<b>C:\var\www\******.php\</b> on line \<b>58\</b> \<br /> \<?xml version="1.0" encoding="UTF-8"?> \<data> \<error>Nothing has been found</error> \</data>

<issue_comment>username_1: @username_0, I know your pain now. Do you mean that you have a response body whose Content-Type is application/json, while the result is not real JSON (due to some bug), so my code fails at the JSON parse? And the expected behavior is that, if the parse fails, we just display the original (raw) response body?
<issue_comment>username_0: True. I used to call the result the response body (after the CRLF, after the header); not sure about the standard term. The content-type is application/json and the result isn't real JSON because there is an error message. It would be nice to display the raw data when the parse fails, with an error message saying that the JSON parse didn't work.
<issue_comment>username_1: @username_0, nice suggestion to add an error/warning message as well when displaying the raw response. I will fix it ASAP<issue_closed>
<issue_comment>username_1: @username_0 you can try the latest version 0.5.5 :sweat_smile:
<issue_comment>username_0: Awesome, thanks :)
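A minimal sketch of the fallback agreed on above. This is not the extension's actual implementation; the function name is hypothetical:

```js
// Hypothetical helper: pretty-print the body when it really is JSON,
// otherwise fall back to the raw text with a warning instead of throwing.
function formatResponseBody(rawBody) {
  try {
    return JSON.stringify(JSON.parse(rawBody), null, 2);
  } catch (e) {
    // e.g. a PHP notice (<br /><b>Notice</b>: ...) prepended to the payload
    // makes JSON.parse throw, so show the raw body plus a warning.
    return '// Warning: body is not valid JSON (' + e.message + ')\n' + rawBody;
  }
}
```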
<issue_start><issue_comment>Title: FAQ example is incorrect username_0: [The example in FAQ](https://github.com/go-gomail/gomail#faq) is incorrect: ``` d := gomail.NewPlainDialer("smtp.example.com", "user", "123456", 587) ``` Should be: ``` d := gomail.NewPlainDialer("smtp.example.com", 587, "user", "123456") ``` <issue_comment>username_1: Thanks! I'm currently in holidays so I will fix it when I return.<issue_closed>
<issue_start><issue_comment>Title: ActiveRel factories will build CREATE String username_0: Fixes the bug documented in specs: when using ActiveRel to create nodes in addition to rels, node properties that look like Cypher params were being interpreted literally. This modifies ActiveRel's query factory to build a CREATE string. There will be a performance boost, too, since the entire hash of properties will be parameterized instead of just the values. <issue_comment>username_1: This looks good to me! I'm not 100% sure I understood the bug. What do you mean for "being interpreted literally". You mean that `{}` in string parameters are interpreted like parameters bindings? <issue_comment>username_0: @username_1 That is correct. If you have a value that is `{my actual value}`, the gem interprets it as a parameter and it can cause an error.
<issue_start><issue_comment>Title: Wrong IPv6 returned username_0: JS Source:
```js
ip.address( 'public', 'ipv6' ) // returns fe80::ad42:39c7:ff81:733
```
OS:
```zsh
$ ip addr
inet6 2003:6a:6808:3f01:64d6:e507:513e:71ec/64 scope global mngtmpaddr noprefixroute dynamic
```
I shortened the output just because the rest is not important :) Thank you in advance! <issue_comment>username_1: This is probably related to #61; if you do private instead of public you should get the public address. <issue_comment>username_0: Any update on this?
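For anyone hitting this before a fix lands, a workaround sketch using Node's `os` module directly; the function name is illustrative and this is not part of the `ip` package's API:

```js
var os = require('os');

// Walk the interfaces and skip internal and link-local (fe80::/10)
// addresses, which is what ip.address('public', 'ipv6') appears to be
// returning by mistake here.
function globalIPv6() {
  var ifaces = os.networkInterfaces();
  for (var name in ifaces) {
    for (var i = 0; i < ifaces[name].length; i++) {
      var addr = ifaces[name][i];
      if (addr.family === 'IPv6' && !addr.internal &&
          addr.address.toLowerCase().indexOf('fe80') !== 0) {
        return addr.address; // e.g. the 2003:... global address above
      }
    }
  }
  return null;
}
```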
<issue_start><issue_comment>Title: JdkMongoSessionConverter supports custom ClassLoader username_0: <issue_comment>username_1: Thanks for the PR! This was merged via 9014ac906026094131f42c61e1dc40f7f5ac515f I also applied a bit of polish for your review b58ea03a3b712f764eac1371fe7496a42cf91715 I went ahead with the release so that we could stay on schedule. If you have feedback for changes, we can include that in the next release.
<issue_start><issue_comment>Title: Reserved characters should not be percent-encoded username_0: purell should not normalize reserved characters, as per [RFC3986](http://tools.ietf.org/html/rfc3986#section-2.2):

    reserved    = gen-delims / sub-delims
    gen-delims  = ":" / "/" / "?" / "#" / "[" / "]" / "@"
    sub-delims  = "!" / "$" / "&" / "'" / "(" / ")" / "*" / "+" / "," / ";" / "="

```go
package main

import (
	"fmt"

	"github.com/username_1/purell"
)

func main() {
	fmt.Println(purell.MustNormalizeURLString("my_(url)", purell.FlagsSafe))
}
```

The above code outputs `my_%28url%29`, whereas it should be `my_(url)`. This is due to a bug in Go stdlib ([issue 5684](https://github.com/golang/go/issues/5684)).<issue_closed> <issue_comment>username_1: Totally agree, but as you mention, this is due to the parsing and escaping done by Go's stdlib. Once/if the bug is fixed in Go, this will be fixed too. Not a purell bug per se. <issue_comment>username_0: I believe that purell should provide its own implementation of Parse. <issue_comment>username_1: Pull requests welcome.
<issue_start><issue_comment>Title: MatchHistoryRequest returns Lol::InvalidAPIResponse username_0: I believe the matchhistory endpoint has been deprecated. Would it be possible to deploy a new gem version with the changes in master (match_list)? <issue_comment>username_1: Hey Jared, I am currently on holiday without access to my dev laptop. I'll do it in January. For now in your Gemfile pull directly from Github. <issue_comment>username_0: Thanks, @username_1. I'll give that a try. Also, is there a way to pass options to the match_list_request, e.g. ?rankedQueues=RANKED_SOLO_5x5?<issue_closed>
<issue_start><issue_comment>Title: Meta bug for sdo-phobos release (as v2.2) username_0: As of 2015-10-02 sdo-phobos is looking pretty complete. It is closed for substantive schema/example changes except those that arise here from final review by the steering group. /cc @pmika @username_2 @ajax-als @scor @username_5 @tmarshbing @tilid @username_1 @rvguha

See http://sdo-phobos.appspot.com/docs/releases.html for all substantial changes needing review.

## Known TODOs before we publish:

* [ ] respond to #772 by updating about.html or adding a 'how we work' page.
* [ ] add to FAQ: per #747, clarify that we consider examples and /docs/* documentation to be under the same CC license as the other schemas (rather than licensed as the opensource software, i.e. Apache2), and to contact us if this isn't enough. Add a corresponding note to README.md in the github repo.
* [ ] add to FAQ: note that https:// should be considered acceptable in markup and that we expect in future / over time these will become default and canonical, but that http:// will continue to be fine too.
* [ ] (website infrastructure) add more IDs to examples, and more properties to TYPES: config so that examples show up there. (optional but good to do)
* [ ] Blog post. Should cover the last release and Extensions status too.
* [ ] Update overview text for /docs/releases.html to summarize the release. Add a section on the status of the live extensions (bib, auto), grouping recent changes to them.

## Possible additional changes

* actionPlatform and openingHoursStatus proposals from @username_1 - https://github.com/schemaorg/schemaorg/pull/824 https://github.com/schemaorg/schemaorg/pull/778

<issue_comment>username_0: * ongoing review discussion includes https://github.com/schemaorg/schemaorg/pull/821 around OfferCatalog <issue_comment>username_1: There is also a discussion going on in Issue #818 re: OfferCatalog. Can people comment there so there are not any cross conversations? /cc @pmika @username_2 @ajax-als @scor @username_5 @tmarshbing @tilid @username_0 @rvguha <issue_comment>username_2: As for #810 ("postalCode and addressCountry are now an expected property for GeoShape and GeoCoordinates"): I think "expected" is too strong. There are many relevant places like mountain summits that do not have a postal code. <issue_comment>username_2: On http://sdo-phobos.appspot.com/areaServed there are extra characters after "sub-properties": ineligibleRegion. ''">eligibleRegion <issue_comment>username_2: As for #733 "Added example showing library availability for Book. Also added example for Library and linked it to openingHoursSpecification as a good example for that too.":
1. The example does not show up for http://sdo-phobos.appspot.com/openingHoursSpecification.
2. The RDFa and Microdata examples for http://sdo-phobos.appspot.com/Library have very long lines. Limit the width of the examples to 65 or so characters.
<issue_comment>username_0: @username_2 - thanks re bug. And regarding 'expected', ah yes, I see that the word is ambiguous. We don't expect it always to be there, but if we see it, we are not surprised because "it isn't unexpected". Let's find a better phrase. <issue_comment>username_2: Besides these points made in here, I am fine with sdo-phobos as it stands - but I have a strong opinion on the need for https://github.com/schemaorg/schemaorg/pull/828 ;-) <issue_comment>username_2: @username_0 re "expected" - what about "allowed", "permitted", "supported" or "applicable" property ;-)?
<issue_comment>username_3: could change to *An Organization (or ProgramMembership) to which this Person belongs. Also valid for relating an Organization with a ProgramMembership; for relating Organizations with one another, please use [parentOrganization](http://schema.org/parentOrganization) and [subOrganization](http://schema.org/subOrganization)* <issue_comment>username_4: Re #733 "Added example showing library availability for Book":
* Added as an example to openingHoursSpecification
* Long lines are now wrapped in the example

Code updated - needs pushing to sdo-phobos /cc @username_0
<issue_comment>username_1: I am trying to tie together the final details of OfferCatalog on issue #818. I would appreciate input there. /cc @pmika @username_2 @ajax-als @scor @username_5 @tmarshbing @tilid @username_0 @rvguha <issue_comment>username_2: As for signing off: I am fine with sdo-phobos as it stands, assuming that
- OfferCatalog will be finalized as agreed in #818 (see there) and
- openingHoursStatus (#824) is not included without additional discussion, because of potential conflicts

/cc @username_0
<issue_comment>username_5: @username_0 For signing off, I agree with the proposal outlined in #818 by @username_1 and have responded on the thread. Please let me know if there is any other thread that needs attention. Thanks! <issue_comment>username_0: * I have just published some hopefully-final changes to #818 following discussion there - I believe we have reasonable consensus and a workable design. See http://sdo-phobos.appspot.com/OfferCatalog http://sdo-phobos.appspot.com/hasOfferCatalog (thanks @username_1 for getting this wrapped up)
* Confirming that openingHoursStatus is not going into this release - needs more discussion.
* @username_3 I'm missing the context for the memberOf tweak. We also have 'department'. Is there an associated issue? It seems worth investigating.
* I'm at the W3C TPAC event currently, trying to review all comments on this release to make sure we didn't miss anything. It looks to me like we're pretty close, although there are some checklist items above that still need attention.
<issue_comment>username_1: Yes, openingHoursStatus should be postponed to another release. Vicki Tardif Holland | Ontologist | vtardif@google.com <issue_comment>username_0: I think we're done, except for the blog post and about page, and https://github.com/schemaorg/schemaorg/issues/445# needs another look w.r.t. adding an example for http://sdo-phobos.appspot.com/ExhibitionEvent <issue_comment>username_0: see also https://groups.google.com/forum/#!topic/schema-org-sg/or9tUyM7yqo and https://groups.google.com/forum/#!topic/schema-org-sg/s1km5-aYBQ8 ... I have circulated a (hopefully final) release candidate, http://sdo-phobos.appspot.com/docs/releases.html I even more think that we are done now! Will throw together a quick blog post to match. PTAL.<issue_closed> <issue_comment>username_0: Released 2015-11-05, thanks everyone! We packed a lot into this one, even if there were no "big headline" additions. Blogged at http://blog.schema.org/2015/11/schemaorg-whats-new.html
<issue_start><issue_comment>Title: electronic-gradeable-with-no-TA-grading grades released date username_0: The edit gradeable form currently hides the grades released date when no ta grading is selected. It should still be visible. Because we don't have a grades released date, these items seem to be stuck in the "closed" category. They should be in the "graded" section (after the released date). <issue_comment>username_0: This was kind of a mess. I changed things in PR #635, but this needs work. <issue_comment>username_0: (because of active version incorrectly set and/or not set in database bugs)<issue_closed> <issue_comment>username_0: Rewrote and replaced with issue #707
<issue_start><issue_comment>Title: Tensorflow in Windows username_0: Can I compile and execute this code on Windows? I tried "pip install -r requirement.txt" and then got the error: "Could not open requirements file: [Errno 2] No such file or directory: 'requirement.txt'". I think I may not be using the right command. I am very much a beginner at this....... :( <issue_comment>username_1: Your error message means you don't have the txt file, that is all. But that txt only mentions that you need to have wget and numpy installed. <issue_comment>username_1: I realized that your message is from Dec 2016. I guess you solved your problem a long time ago. <issue_comment>username_0: Hello, Yeah, I did solve the error, indeed. Thanks for your concern, I appreciate it. Thanks Hosna<issue_closed>
<issue_start><issue_comment>Title: [docker] test ways to not need root username_0: This issue is a migration from singularityware/singularity [to be deleted]. @chrisfilo had an idea to somehow "tar.gz the entire docker image (and then not need root)", and this could be further thought out, so possibly some day we can create singularity images from docker without needing it - or minimally, have a base OS template (created using root) that another image can be bootstrapped into (without root).
<issue_start><issue_comment>Title: [Payum] Exception for NotifyRequest when using Omnipay Bridge username_0: I have been using Payum and the Payum OmnipayBridge in production for some time now, and most of the payments (95%) go through properly. But sometimes I get a message from the payment provider Mollie that the webhook returns a 500 error. Now it turns out that the `notify` url returns the following error: ![d](https://puu.sh/rrMTP/fac0213b99.png) I have been trying to find out what is missing but don't know if it's a bug in Sylius or Payum itself. @username_1 Do you maybe know why the notify url is failing? <issue_comment>username_1: The omnipay bridge does not support the notify request, though it should be easy to add. We need a Payum action which supports the Notify request and correctly proxies data to the related Omnipay gateway methods. https://github.com/Payum/OmnipayBridge/tree/master/src/Action <issue_comment>username_1: The issue is not related to Sylius. Could you please re-open it here: https://github.com/Payum/OmnipayBridge<issue_closed>
<issue_start><issue_comment>Title: 🛑 Docs is down username_0: In [`7b374c7`](https://github.com/Warwick-Engineering-Society/uptime/commit/7b374c7180ef0780b2e2784222cd01c2147dd7b8 ), Docs (https://docs.engsoc.uk) was **down**: - HTTP code: 0 - Response time: 0 ms <issue_comment>username_0: **Resolved:** Docs is back up in [`24dfeab`](https://github.com/Warwick-Engineering-Society/uptime/commit/24dfeab60031e7208dbb8fc82a56fa09d75c60b0 ).<issue_closed>
<issue_start><issue_comment>Title: Feature: Preview Handler for Google Spreadsheets username_0: Nice to have: inline preview (iframe) for data sets that have google spreadsheet as type Example: http://datahub.io/dataset/fc-basel/resource/3094a953-f1ab-4f69-816b-a4b76c5603ed<issue_closed> <issue_comment>username_1: Using ckanext-googledocs or ckanext-officedocs should fulfill this. This issue is being closed as it is more than 18 months old. If you are still experiencing this problem and wish to help investigate further, or submit a PR, please feel free to re-open it.
<issue_start><issue_comment>Title: Update uuid to version 3.0.0 username_0: ## Version 3.0.0 of [uuid](https://github.com/kelektiv/node-uuid) just got published. Hi there, This week a new version of the [uuid](https://github.com/kelektiv/node-uuid) module got released. The old module which was available by the name [node-uuid](https://www.npmjs.com/package/node-uuid) got deprecated and will show error message upon every install of the module. To get rid of those install errors I've created this **automated pull request**. I've run some checks against your code base to test whether you're using one of the [deprecated apis](https://github.com/kelektiv/node-uuid/commit/5ae7287fc935eb55ef39133e4be17ef623ca000e), but this isn't the case. Please test the changes against your code. I didn't run any tests and therefore can't guarantee that it isn't breaking. You can also just close this pr if you don't want to update your module. In case there's already another pr open to upgrade uuid, I'm sorry for the effort I'm causing. <issue_comment>username_1: Sorry for late review. Could you rebase please? <issue_comment>username_0: done 👍
<issue_start><issue_comment>Title: Add DER keys username_0: Adds DER conversions of various existing keys. The docs list exactly what has been added. These vectors are needed for #1573. Depends on #1606 (and will need a rebase once it lands) <issue_comment>username_1: Other than the paths in the docs being wrong this looks fine to me. I've checked every key against the existing PEM version using 1.0.1h I compiled myself. <issue_comment>username_1: LGTM
<issue_start><issue_comment>Title: New line after comments in CSS files username_0: Whenever I use atom-beautify on CSS files, all `/* */` line or block comments get an empty line after them. ![empty-line-after-comments](https://cloud.githubusercontent.com/assets/1562939/10450983/7953bf62-7195-11e5-9e16-560f1be3e8f4.png) I do not have this issue with HTML or JS comments. <issue_comment>username_1: Please follow the instructions found here: https://github.com/username_1/atom-beautify/blob/master/CONTRIBUTING.md#new-issues-bugs-questions-etc Specifically regarding the command `Atom Beautify: Help Debug Editor` and putting that into a Gist and linking back here. This will help us debug your issue. <issue_comment>username_0: Thanks for the reply Gavin. Here is the gist with the debug information: https://gist.github.com/username_0/684c9d858199d54f4cb9
<issue_start><issue_comment>Title: Removed Active Support deprecations username_0: ### Summary Hey, in this PR I removed most of the deprecations announced for Rails 5.1. I left a couple more because I need more context to remove/clean up the code, especially for the ActiveSupport `halt_callback_chains_on_return_false` config. <issue_comment>username_0: Looks like there's an odd failure; I've re-run the failing specs on my end and everything went fine. <issue_comment>username_1: Thank you for the pull request, but we don't accept pull requests with deprecation removals. <issue_comment>username_0: @username_1 Sorry, I didn't know that. I'm wondering about the reasons behind this; maybe it should appear somewhere in the contribution guide. <issue_comment>username_1: The release managers are the only people that have the context behind the deprecations, so they are the best people to remove the deprecations and clean up the code.
<issue_start><issue_comment>Title: Support for a data store that is implemented via promises username_0: I'm implementing a redis data store and have all of the methods on the data store resolving via promises. I'd like to adjust all of the methods that interact with the datastore to support potentially resolving a promise if one is present. If I get a +1 I'll submit a PR. <issue_comment>username_1: closing as duplicate of #410. @username_0, if you're still interested, i'd love your feedback and continued help in that issue.<issue_closed>
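A rough sketch of what "resolving a promise if one is present" could look like inside the library. The function and store names are hypothetical; the technique is just thenable detection:

```js
// Call the store; if it returned a thenable instead of invoking the
// callback itself, adapt the promise back to the callback style.
function callStore(store, key, callback) {
  var result = store.get(key, callback);
  if (result && typeof result.then === 'function') {
    result.then(
      function (value) { callback(null, value); },
      function (err) { callback(err); }
    );
  }
}
```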
<issue_start><issue_comment>Title: Support alternate clock sources username_0: It would be a nice feature if the source of current time was pluggable, rather than always obtained from `new Date()`. There should be a simple function that can be overwritten to provide custom functionality. For example: - I might want to get the current time from a server, rather than trusting the client's clock. - It would be helpful during unit tests to set up a fake clock with fixed values. I envision an interface similar to: ```javascript // default implementation moment.now = function() { return moment(new Date()); } // an alternate implementation, using a starting value from elsewhere var now = moment("2015-01-01T00:00:00Z"); setInterval(function(){ now.add(1,'s'); }, 1000); moment.now = function() { return now; } // a fixed value implementation, for use in unit tests moment.now = function() { return moment("2015-01-01T00:00:00Z"); } ``` This seems like it would be very easy to implement. Thoughts? <issue_comment>username_0: Or, a better way to implement the second one might be: ```javascript var offset = moment("2015-01-01T00:00:00Z").valueOf() - Date.now(); moment.now = function() { return moment(Date.now() + offset); } ``` <issue_comment>username_1: +1 <issue_comment>username_2: I assume it's very useful for testing. +1 <issue_comment>username_3: +1 <issue_comment>username_4: The function should return UNIX time, not some fancy moment/Date object, to break possible circular dependencies. <issue_comment>username_0: @username_4 - good point. <issue_comment>username_5: @username_0 @username_4 @username_3, @atogata has developed a proposed solution here https://github.com/moment/moment/pull/2766. Would you mind taking a look at it? Thanks!<issue_closed> <issue_comment>username_0: Closed, completed in #2766
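A sketch of username_4's point: the pluggable source returns plain epoch milliseconds rather than a moment/Date wrapper, which is also the shape the eventual fix in #2766 settled on. The offset logic and `serverEpochMs` below are illustrative assumptions:

```js
// default: just the system clock, as UNIX time in ms
moment.now = function () {
  return Date.now();
};

// server-synced clock: remember the offset once; no moment calls inside,
// so there is no circular dependency. serverEpochMs is fetched elsewhere.
var offset = serverEpochMs - Date.now();
moment.now = function () {
  return Date.now() + offset;
};

// fixed clock for unit tests
moment.now = function () {
  return 1420070400000; // 2015-01-01T00:00:00Z
};
```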
<issue_start><issue_comment>Title: plugins v2 (sectioned db, with hooks) username_0: Basically, instead of hooking new behaviour into the main database, you can create sub sections:

``` js
SubLevel(db)

var fooDb = db.sublevel('foo', '~')
```

`fooDb` is an object with the levelup api, except when you do a `fooDb.put('bar')` the key is prefixed with `~foo~` so that it's separated from the other keys in the main db. This is great, because if you want to build a section of the database with special behaviour in it, you can create a subsection, and extend the behaviour in any way you like -- but it will only affect that section! So, you can monkeypatch it - whatever, you won't introduce bugs into other parts of the program.

Most of the plugins I built needed some sort of interception, where a value is inserted into one range, which triggers an action which inserts something into a different section. To get reliably consistent data this needs to be atomic. So, you can set hooks on a subsection that trigger an insert into another subsection: when a key is inserted into the main db, index that write with a timestamp, saved into another subsection.

``` js
var SubLevel = require('level-sublevel');
SubLevel(db)

var sub = db.sublevel('SEQ')

db.pre(function (ch, add) {
  add({
    key: ''+Date.now(),
    value: ch.key,
    type: 'put'
  }, sub) //NOTE pass the destination db to add
          //and the value will end up in that subsection!
})

db.put('key', 'VALUE', function (err) {
  //read all the records inserted by the hook!
  sub.createReadStream()
    .on('data', console.log)
})
```

`db.pre(function hook (ch, add) {...})` registers a function to be called whenever a key is inserted into that section. `ch` is a change, like a row argument to `db.batch`: `{key: k, value: v, type: 'put' || 'del'}`. If the hook function calls `add(ch)`, ch is added to the batch (a regular put/del is turned into a batch), but if a subsection is passed in, `add(ch, sub)`, then that put will be added to _that subsection_.

Compare subsection code for queue/trigger https://github.com/username_0/level-sublevel/blob/e2d27cc8e8356cde6ecf4d50c980c2ba93d87b95/examples/queue.js with the old code - https://github.com/username_0/level-queue/blob/master/index.js and https://github.com/username_0/level-trigger/blob/18d0a1daa21aab1cbc1d0f7ff3690b91c1e0291d/index.js

The new version is only ~ 60 lines, down from about ~ 200, and it's now possible to use multiple different queue/trigger libs within the same db. Also, there is no tricky code that refers to ranges or prefixes in the subsection based code!

in summary,

- create subsections, add any features to your subsection.
- use pre(fun) to trigger atomic inserts into your subsection.<issue_closed>
<issue_start><issue_comment>Title: Update .coveragerc file username_0: The options of ``.coveragerc`` have changed with ``coverage`` 4.0. The ``exclude`` option seems to have been renamed: ``` py27 installed: coverage==4.0,iPOPO==0.6.3,jsonrpclib-pelix==0.2.6,nose==1.3.7,wheel==0.24.0 [...] coverage.misc.CoverageException: Unrecognized option '[report] exclude=' in config file .coveragerc ``` This causes Travis-CI builds to fail. <issue_comment>username_0: Fixed in 323baf095b71f0ec895a3db3f1131642b666b017<issue_closed>
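For anyone hitting the same error: a minimal sketch of the rename, assuming the replacement option is `exclude_lines` (the name used in the coverage.py documentation):

```ini
# .coveragerc — coverage 4.0 rejects the old "[report] exclude" option
[report]
exclude_lines =
    pragma: no cover
    if __name__ == .__main__.:
```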
<issue_start><issue_comment>Title: clock plugin breaks wox
username_0: Wox version: 1.3.67
OS Version: Microsoft Windows NT 10.0.10240.0
Date: 05/24/2016 22:53:32
Exception:
Wox.Plugin.SimpleClock
System.TypeLoadException
Could not load type 'Wox.Infrastructure.Storage.BaseStorage`1' from assembly 'Wox.Infrastructure, Version=1.3.67.0, Culture=neutral, PublicKeyToken=null'.
at Wox.Plugin.SimpleClock.Commands.AlarmCommand.InitializeStorage(PluginInitContext context)
at Wox.Plugin.SimpleClock.Commands.AlarmCommand..ctor(PluginInitContext context, CommandHandlerBase parent)
at Wox.Plugin.SimpleClock.ClockPlugin.Init(PluginInitContext context)
at Wox.Infrastructure.Stopwatch.Normal(String name, Action action) in C:\projects\wox\Wox.Infrastructure\Stopwatch.cs:line 28
at Wox.Core.Plugin.PluginManager.<>c.<InitializePlugins>b__18_0(PluginPair pair) in C:\projects\wox\Wox.Core\Plugin\PluginManager.cs:line 90
at System.Threading.Tasks.Parallel.<>c__DisplayClass17_0`1.<ForWorker>b__1()
at System.Threading.Tasks.Task.InnerInvokeWithArg(Task childTask)
at System.Threading.Tasks.Task.<>c__DisplayClass176_0.<ExecuteSelfReplicating>b__0(Object )<issue_closed>
<issue_comment>username_1: That's because @lances101 used an undocumented API (storage)... ask him to update https://github.com/lances101/Wox.Plugin.SimpleClock
<issue_comment>username_1: @username_0 SimpleClock has been updated
<issue_start><issue_comment>Title: No 0.3.x versions available in npm registry.
username_0: Checking the versions available in the npm registry, it skips from v0.1.12 to v0.4.1

Are any v0.3.x versions available somewhere else, since that's considered the present stable version according to http://meanjs.org/?

Maybe the "Download" button on the front page should also be pointing to the stable version? That way people who are just trying to use it can go ahead and do so, without having to figure out why they can't get the latest development versions to work, or be concerned that the docs for v0.1.12 aren't available either.

http://registry.npmjs.org/generator-meanjs
<issue_comment>username_1: 0.1.12 is the 0.3.x generator.
<issue_comment>username_0: Cool. I'm not sure if there's a way to update or clarify that in the npm registry, but I'd be happy to submit a PR sometime this week to help clarify that in the documentation if that's something you'd be interested in.
<issue_comment>username_2: @username_0 go ahead and submit a PR! The documentation lives at meanjs/meanjs.github.io<issue_closed>
<issue_start><issue_comment>Title: Postgres password secret regenerated on upgrade but not updated in postgres server username_0: Not specifying the password for the postgres password secret causes a random alphanumeric value to be generated for the password, this works fine the first time the chart is installed but when the chart is run again (in my case as part of a `helm upgrade` of a chart depending on the postgres chart) the secret is regenerated but not updated in the postgres DB, making it impossible to log in using the secret. Setting a password in `Values.postgresPassword` fixes the issue. This looks like it's because the `POSTGRES_PASSWORD` environment variable is only used by the postgres container to set the password if the data directory has not been created (i.e the container is starting up for the first time) see https://github.com/docker-library/postgres/blob/a00e979002aaa80840d58a5f8cc541342e06788f/9.6/docker-entrypoint.sh#L40 <issue_comment>username_1: Hitting the same issue. Yes indeed, the `POSTGRES_PASSWORD` is only used the first time. But this is not the issue, we should find a way to not rerun the `rand` in every upgrade. I'd say it is a bug, and we should find a solution. This way, it is not usable, and then, we shouldn't use rand at all. <issue_comment>username_2: Hey folks, is this still an issue? <issue_comment>username_3: It's still an issue. It's pretty much a problem for every chart that generates a secret. https://github.com/kubernetes/charts/blob/89f291797efeeb8668da0007a04871611555de04/stable/postgresql/templates/secrets.yaml#L15 <issue_comment>username_4: No solution so far? PostgreSQL Chart is unusable. <issue_comment>username_5: Just bumped into this as well. <issue_comment>username_6: Same here. This is messed up for sure. Please fix!
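A sketch of one way to guard the template so the random value is only generated once. This relies on Helm 3's `lookup` function, which was not available when this issue was filed, and the secret name and key here are assumptions, not necessarily the chart's actual names:

```yaml
# templates/secrets.yaml (sketch): reuse the existing secret value on
# upgrade instead of re-running randAlphaNum on every render
{{- $existing := lookup "v1" "Secret" .Release.Namespace "my-release-postgresql" }}
apiVersion: v1
kind: Secret
metadata:
  name: my-release-postgresql
type: Opaque
data:
  postgres-password: {{ if $existing }}{{ index $existing.data "postgres-password" | quote }}{{ else }}{{ randAlphaNum 10 | b64enc | quote }}{{ end }}
```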
<issue_start><issue_comment>Title: Ubuntu 16.04
username_0: Hello,

I still get "==> default: mesg: ttyname failed: Inappropriate ioctl for device" every time vagrant starts a provision script. I have the latest vagrant version, 1.8.5, on Mac OS X El Capitan. Can I solve this? Thanks!
<issue_comment>username_1: Hello,

We're going to need a bit more detail to assist here. That bug, iirc, was related to issues with a specific Vagrant version. Perhaps you could post a more complete log and provide the version of the box you are using (`vagrant box list`).
<issue_comment>username_0: I installed it today. Can you tell me which other logs you need and where to find them?
<issue_comment>username_1: What version of virtualbox and vagrant please? (just to verify the CLI output as vbox doesn't always uninstall properly)
<issue_comment>username_0: Latest virtualbox-extension is also installed.
<issue_comment>username_1: Try with the latest boxes (2.3.0) which are built with Vbox 5.1
<issue_comment>username_1: Have you tried upgrading to vagrant 1.8.6? I'm not sure what the other error has to do with this, if anything.
<issue_comment>username_0: Thx for your reply. Upgrading to vagrant 1.8.6 did not change this. I will have a look at the second part soon.
<issue_comment>username_2: Have this issue too on MacOS X Sierra
```
MacBook-Pro-user:bento plugin73$ vagrant box list
bento/ubuntu-16.04 (virtualbox, 2.3.0)
MacBook-Pro-user:bento plugin73$ vagrant version
Installed Version: 1.8.6
Latest Version: 1.8.6
MacBook-Pro-user:bento plugin73$ VBoxManage --version
5.1.8r111374
```
This is the log of vagrant up:
```
ERROR vagrant: The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!

set -e
mkdir -p /vagrant
mount -o vers=3,udp 192.168.139.1:/Users/plugin73/vagrant/hexlet-box/bento /vagrant
if command -v /sbin/init && /sbin/init --version | grep upstart; then
  /sbin/initctl emit --no-wait vagrant-mounted MOUNTPOINT=/vagrant
fi

Stdout from the command:

Stderr from the command:

mesg: ttyname failed: Inappropriate ioctl for device
mount.nfs: Connection timed out
```
<issue_comment>username_0: Hmm, I had this sometimes when destroying the VM and provisioning it again. It seems that the "exports" under "/etc/exports" were not cleared correctly. After removing everything from there and rebooting my Mac it works again. I don't think your "mesg: ttyname failed: Inappropriate ioctl for device" is related to the NFS error.
<issue_comment>username_2: @username_0 thank you! I just cleaned /etc/exports on my Mac and restarted it. Afterwards I tried to start vagrant and it still got stuck with the same logs...
<issue_comment>username_1: So can I consider this closed as it's external to bento?
<issue_comment>username_2: I think my issue was related to vagrant 1.8.6, because after I reverted to 1.8.5 and re-initialized my box I can't reproduce it. Thanks all!
<issue_comment>username_0: @username_1 I can't test at the moment whether the problem is solved by downgrading vagrant
<issue_comment>username_1: Closing this one as it doesn't seem specific to the bento boxes, more vagrant configuration and NFS. Feel free to open another issue if it's something specific to a bento build.<issue_closed>
<issue_comment>username_3: Funny, Vagrant says just the opposite. https://github.com/mitchellh/vagrant/issues/7155
<issue_comment>username_4: I think this is an Ubuntu 'feature' and not a problem with Vagrant/VirtualBox/Bento.
I fixed it in my Vagrantfile with this:
```
config.vm.provision "fix-no-tty", type: "shell" do |s|
  s.privileged = true
  s.inline = "sed -i '/tty/!s/mesg n/tty -s \\&\\& mesg n/' /root/.profile"
end
```
<issue_comment>username_4: I added this to my Vagrantfile to fix this:
```
config.vm.provision "fix-no-tty", type: "shell" do |s|
  s.privileged = true
  s.inline = "sed -ri s'/(mesg n)/tty -s \1 || true /' /root/.profile"
end
```
<issue_comment>username_5: This happens to me when I call scripts with #!/usr/bin/env bash instead of #!/bin/bash

`s.vm.provision :shell, path: "scripts/sbin/bootstrap.sh"`

If the shebang in this bootstrap.sh file is #!/usr/bin/env bash then I can reproduce the above error with `Inappropriate ioctl for device` on my MBP
<issue_comment>username_6: I had the "mesg: ttyname failed: Inappropriate ioctl for device" issue with Vagrant 1.9.1 but it was fixed by upgrading to 1.9.4, in case anyone is googling this.
<issue_comment>username_7: Nope, 1.9.7, still happening
<issue_comment>username_8: ```DIFF
Vagrant.configure("2") do |config|
+ config.ssh.shell="bash"
  config.vm.box = "ubuntu/xenial64"
  config.vm.provider "virtualbox" do |v|
    v.customize [ "modifyvm", :id, "--memory", 2048 ]
    v.customize [ "modifyvm", :id, "--cpus", 2 ]
  end
end
```
solves this for me on Mac OS X.
<issue_start><issue_comment>Title: Revert from expressions to reflection in Portable40 username_0: The change to using expressions for reflection in portable builds (57ccaf0) requires the use of dynamic code generation. This adds a size overhead of ~2MB per architecture to Xamarin.iOS apps because it requires Mono.Dynamic.Interpreter to be included. This PR reverts the change to expressions for the Portable40 build. Xamarin projects can use either the Portable or Portable40 assemblies and thereby choose whether to optimize for speed or size. <issue_comment>username_0: I'm going to take a different approach.
<issue_start><issue_comment>Title: Spinner with ReactJS ignores options username_0: I'm currently seeing the options I set for the spinner being ignored when used in parallel with ReactJS. Here are the set options in the context of the ReactJS class ```javascript var ArtistForm = React.createClass({ baseClassName : "col-md-5 col-md-offset-4 form-group", spinnerOpts: { lines: 13, // The number of lines to draw length: 2, // The length of each line width: 8, // The line thickness radius: 0, // The radius of the inner circle corners: 1, // Corner roundness (0..1) rotate: 0, // The rotation offset direction: 1, // 1: clockwise, -1: counterclockwise color: '#000', // #rgb or #rrggbb or array of colors speed: 1, // Rounds per second trail: 60, // Afterglow percentage shadow: false, // Whether to render a shadow hwaccel: false, // Whether to use hardware acceleration className: 'spinner', // The CSS class to assign to the spinner zIndex: 2e9, // The z-index (defaults to 2000000000) top: '50%', // Top position relative to parent left: '50%' // Left position relative to parent }, spinner: new Spinner(this.spinnerOpts).spin(), ``` And the spinner call: ```javascript document.getElementById("email-validate-cont").appendChild(this.spinner.el); ``` However, the spinner I see is the default: ![default-size](https://cloud.githubusercontent.com/assets/6409660/6652149/61e2e134-ca3b-11e4-8547-0893ef95c0a9.png)<issue_closed> <issue_comment>username_1: You are actually passing `undefined` to the Spinner constructor, `this` inside an object literal does not refer to the object being created. <issue_comment>username_0: I totally missed that. My mistake. Thanks!
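For readers landing here: a sketch of one way to restructure the component so the options object is actually in scope when the spinner is constructed (the element id is the one from the original snippet; the render body is just a placeholder):

```javascript
// options live in a plain variable, not in the object literal,
// so they are defined when the Spinner constructor runs
var spinnerOpts = {
  lines: 13,
  length: 2,
  width: 8,
  radius: 0
  // ...remaining options as in the original snippet
};

var ArtistForm = React.createClass({
  componentDidMount: function () {
    // here Spinner no longer receives undefined, unlike the
    // original `this.spinnerOpts` inside the object literal
    this.spinner = new Spinner(spinnerOpts).spin();
    document.getElementById("email-validate-cont").appendChild(this.spinner.el);
  },
  render: function () {
    return React.createElement("div", { className: "col-md-5 col-md-offset-4 form-group" });
  }
});
```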
<issue_start><issue_comment>Title: Threads broken on msys2/mingw32? username_0: Sorry if this is just a mistake on my end! After following the setup instructions over at [oh-god-windows.md](https://gist.github.com/username_2/df7b9a88167de53636fc) everything works fine, except any program using the Thread class crashes with ![Fatal error in GC: Collecting from unknown thread](https://cloud.githubusercontent.com/assets/569607/9370532/fbf23ea0-46c8-11e5-9ca6-66aa6fd5234f.png) even if no allocation occurred inside the threads. I tested using the example from the docs: ```ooc import structs/ArrayList import threading/Thread counter := 0 mutex := Mutex new() threads := ArrayList<Thread> new() main: func { for (i in 0..10) { threads add(Thread new(|| for (i in 0..1000) { mutex lock() counter += 1 mutex unlock() Thread yield() } )) } for (t in threads) t start() for (t in threads) t wait() // prints counter = ??? "counter = %d" printfln(counter) } ``` <issue_comment>username_1: I can confirm, I get the same error on my compilation. <issue_comment>username_2: I don't think the GC is compiled with threading support anymore on Windows. You can try doing ``` # in rock's folder make boehmgc-clean make boehmgc GC_FLAGS="--enable-threads=win32" # back to ~/ooc/tests rock -x rock threads_test ``` <issue_comment>username_0: Nope, no luck for me, but wasn't that flag already specified when we ran `GC_FLAGS="--build=i686-pc-mingw32 --enable-threads=win32" make rescue`? <issue_comment>username_2: Ahh right, forgot about that. Perhaps it's the GC's fault then, not registering threads properly? <issue_comment>username_1: Apparently calling GC_endthreadex before the thread exits makes this work! Here is my sample code that runs: ```ooc import structs/ArrayList import threading/Thread endthread: extern(GC_endthreadex) func(Int) counter := 0 mutex := Mutex new() threads := ArrayList<Thread> new() main: func { for (i in 0..10) { threads add(Thread new(|| for (i in 0..1000) { mutex lock() counter += 1 mutex unlock() Thread yield() endthread(0) } )) } for (t in threads) t start() for (t in threads) t wait() // prints counter = ??? "counter = %d" printfln(counter) } ``` It seems that for some reason the GC doesn't unregister threads properly, although I think this shouldn't be the case. Quoting a part of GC_CreateThread documentation in gc.h: ``` Currently the collector expects all threads to fall through and terminate normally, or call GC_endthreadex() or GC_ExitThread, so that the thread is properly unregistered ``` It does seem that the threads in the code above should "fall through and terminate normally" though, so I don't see why they cause the GC to complain. I will see if I can find out some other solution. Otherwise wrapping the user's function in a closure that calls it then endthreadex is the only way I can see fixing this but it just feels and probably is wrong. <issue_comment>username_1: Actually, the GC unregistering the thread may not be the issue, GC_endthreadex is defined as: ```c GC_API void GC_CALL GC_endthreadex(unsigned retval) { GC_unregister_my_thread(); _endthreadex(retval); } ``` Calling GC_unregister_my_thread at the end of the thread still makes the program crash, while calling windows' _endthreadex fixes it. 
<issue_comment>username_1: And for some reason I rebuilt boehmgc with -DDEBUG_THREADS and now it doesn't crash anymore >_> <issue_comment>username_1: This is kind of crazy, I rebuilt the GC with: ``` make boehmgc-clean ARCH=32 GC_FLAGS="CFLAGS='-DROCK_BUILD' --build=i686-pc-mingw32 --enable-threads=win32" make boehmgc ``` The program above no longer crashes. Apparently for some reason if you build with any define, it works, while it doesn't without one (I tried adding additional flags like --enable-thread-local-alloc and --enable-parallel-mark as well and the program only stopped crashing if I added a define in the CFLAGS), so I'm guessing it's an issue with the Makefile not passing the GC_FLAGS correctly (?). <issue_comment>username_1: I finally got it! Boehm GC is compiled with -O2 by default but that seems to optimize something important out, leading to this bug. Passing CFLAGS must override the default CFLAGS (which I think are -g -O2). I tried compiling with -O1, it still crashes, passing CFLAGS='-g -O0' seems to do the trick. Obviously, this is a Boehm GC bug, however it may have been fixed in more recent versions. As a temporary workaround, I suggest fixing our makefile to pass the necessary flags on windows (which would also fix make rescue). @username_0 I would appreciate if you could test this too (just make boehmgc-clean && ARCH=32 GC_FLAGS="CFLAGS='-g -O0' --build=i686-pc-mingw32 --enable-threads=win32" make boehmgc) @username_2 What do you think? <issue_comment>username_2: Oof, that's going to seriously impact performance of windows apps imho :( <issue_comment>username_1: I know, should I try with a more recent version of Boehm GC? It may have been fixed (although I didn't find anything similar in their issue tracker after a quick look) <issue_comment>username_3: @username_1 Try it from Git: https://github.com/ivmai/bdwgc/. <issue_comment>username_0: I thought that we switched to an old gc because we wanted to drop the windows pthreads dependency but the newer version didn't support win32-threads? How did you test with a newer version? Or am I talking nonsense? xD <issue_comment>username_1: I haven't managed to compile from git yet (autotools on windows suck, I had some trouble so I moved on to another issue for now) but at least up to 7.4.2 they are available (and they seem to be available on the latest version on git too, look at win32_threads.c) Also tried to recompile with gcc 5.2 (was using 4.9 before), still having the same issues.
<issue_start><issue_comment>Title: Dygraph 1.1.1 not working on WebKit-based browser username_0: Dygraph v1.1.0 and v1.1.1 is not working on WebKit based browser e.g [Midori](http://midori-browser.org/), [Uzbl](http://www.uzbl.org/), [DWB](http://portix.bitbucket.org/dwb/) and [Surf](http://portix.bitbucket.org/dwb/); but v1.0.1 and prior is working. To reproduce, open [Dygraph website](http://dygraphs.com/) in WebKit based browser. I've tested it on Linux Mint 17.2 64-bit using following browser : - Midori 0.5.11 - Uzbl commit 228bc38 - DWB commit dda5aa7 - Surf 0.7. On all mentioned browser, graph in Dygraph home page is not working : ![Graph not working](https://cloud.githubusercontent.com/assets/6129042/15990191/75b7cafc-30b6-11e6-9d39-1bc565ee43e1.png) But it's working on [example page](http://dygraphs.com/gallery/#g/annotations) : ![Graph is working](https://cloud.githubusercontent.com/assets/6129042/15990210/ea96c6ca-30b6-11e6-9e64-29b658931d59.png) <issue_comment>username_1: Are you seeing errors in the JS console? To be honest, I've never heard of any of those browsers. <issue_comment>username_2: FYI @username_1 these browsers are based on WebKitGTK+ (https://webkitgtk.org/). @username_0 you could check rstudio/dygraphs repository to see if there were some similar issues (and fixes) since RStudio IDE itself uses WebKit (not GTK+ but QT) for web view. <issue_comment>username_0: Sorry for late reply. I've created a Dygraph chart following [tutorials](http://dygraphs.com/tutorial.html) : ```html <html> <head> <script type="text/javascript" src="dygraph-combined-dev.js"></script> </head> <body> <div id="graphdiv2" style="width:500px; height:300px;"></div> <script type="text/javascript"> g2 = new Dygraph( document.getElementById("graphdiv2"), "temperatures.csv", {} ); </script> </body> </html> ``` When tested in Surf browser, console shows these errors : ![dygraph issue](https://cloud.githubusercontent.com/assets/6129042/19102525/1228412a-8afb-11e6-93ed-680a2ae12ffa.png)
<issue_start><issue_comment>Title: Router doesn't work anymore after switching active shell when using configureRouter username_0: **I'm submitting a bug report** * **Library Version:** 1.0.1 **Please tell us about your environment:** * **Operating System:** Windows 10 * **Node Version:** 4.4.7 * **NPM Version:** 3.10.5 * **JSPM OR Webpack AND Version** JSPM 0.16.16 but right now I work with npm only... * **Browser:** Chrome 52 * **Language:** ESNext **Current behavior:** I have multiple shells in my app (`login` & `app`) and initialize the router in the main app via `configureRouter`. The app initially loads the `login` shell from `main.js`. When I go from the `login` shell to the `app`, everything is fine. `configureRouter` is executed and the correct views for my current route are shown. Once I go back to the `login` shell and then again to the `app`, routing doesn't work anymore. I guess that's because `configureRouter` is not called anymore... If I initialize the router using the "old" way by injecting the router into the app and calling it's `configure` method from within the app's `activate` method everything is working as expected... <issue_comment>username_1: Closing this since we are tracking this issue already.<issue_closed> <issue_comment>username_0: Ok, is there any place where I can track the progress/updates on this? <issue_comment>username_1: https://github.com/aurelia/framework/issues/400 <issue_comment>username_0: Thks!
<issue_start><issue_comment>Title: Support importing markdown files
username_0: This is part of the epic #4605

----

We now have zip support in the uploader, meaning that it is possible to import from a set of files, rather than a single file. This issue describes the basic markdown file support that should be built into the importer in order to handle Roon imports. There are likely more features that could/should be added (like handling frontmatter), and it is also likely that the importer will get restructured to be an app or series of apps in future.

When importing markdown files the following rules apply:

- If a single markdown file is imported, this should become a post
- If multiple markdown files are imported via a zip, each one should become a post

The content of the markdown file is handled as follows:

- If it has a level one heading marked up with `# My Heading`, this should be treated as the post's title (i.e. `post.title`) and removed from the content
- If it has an image with no alt before a level one heading, this should be treated as a featured image (i.e. `post.image`) and removed from the content
- The remaining content of the markdown file should become the content for the post (i.e. `post.markdown`)

The filename is handled as follows:

- If it contains a status (published or draft) then this status should be used for the post and the status should be removed from the filename \*
- If it contains a date or date and time, then this should be used as the created_at date for drafts, or the published_at date for published posts and removed from the filename
- The remaining filename should be used as the slug
- If there is no title in the markdown, the slug should be used as the title as-is

A hypothetical illustration of the filename rules follows below.

\* Note: Roon exports include deleted images, these should be explicitly ignored.<issue_closed>
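For illustration only, a hypothetical mapping of filenames to the fields above (the exact filename grammar isn't pinned down in this issue, so these names and orderings are assumptions):

```
2015-06-01-published-welcome.md  ->  status: published, published_at: 2015-06-01, slug: welcome
2015-06-01-draft-an-idea.md      ->  status: draft, created_at: 2015-06-01, slug: an-idea
notes.md                         ->  slug: notes, title: "notes" (when the markdown has no # heading)
```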
<issue_start><issue_comment>Title: update migrations to match documentation
username_0: updating migrations to match documentation for issue #28
<issue_comment>username_1: :cool: Can you please add the `return`s in the migrations?
<issue_comment>username_0: Added. It did work for some reason (I mean, the migration was applied), even in one more place; good thing it's fixed now so we can be sure it's working properly. Thanks!
<issue_comment>username_1: :clap:
<issue_comment>username_1: It ran correctly because it actually executes the migration, but it will not wait for it to be finished
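To make that last point concrete, a sketch assuming knex-style migrations (the project's actual migration API may differ): without the `return`, the query still fires, but the runner does not wait for it to finish.

```javascript
// migrations/xxxx_add_email.js (hypothetical file and column names)
exports.up = function (knex) {
  // returning the promise is what lets the migration runner
  // wait for the schema change before moving on
  return knex.schema.table('users', function (table) {
    table.string('email');
  });
};

exports.down = function (knex) {
  return knex.schema.table('users', function (table) {
    table.dropColumn('email');
  });
};
```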
<issue_start><issue_comment>Title: v2.16.3.13 Crash: com.jecelyin.editor2.core.text.MetaKeyKeyListenerCompat.getMetaState username_0: ====================================================== App Version: v2.16.3.13 Phone: OPPO R2017 Android Version: 4.3 Memory: 106 MB / 848 MB Stacktrace: java.lang.NoSuchMethodError: com.jecelyin.editor2.core.text.MetaKeyKeyListenerCompat.getMetaState at com.jecelyin.editor2.core.text.method.BaseMovementMethod.getMovementMetaState(BaseMovementMethod.java:143) at com.jecelyin.editor2.core.text.method.BaseMovementMethod.onKeyDown(BaseMovementMethod.java:46) at com.jecelyin.editor2.core.widget.TextView.doKeyDown(TextView.java:5869) at com.jecelyin.editor2.core.widget.TextView.onKeyDown(TextView.java:5655) at android.view.KeyEvent.dispatch(KeyEvent.java:2623) at android.view.View.dispatchKeyEvent(View.java:7343) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at com.jecelyin.editor2.view.TabViewPager.dispatchKeyEvent(TabViewPager.java:1846) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at android.view.ViewGroup.dispatchKeyEvent(ViewGroup.java:1393) at com.android.internal.policy.impl.PhoneWindow$DecorView.superDispatchKeyEvent(PhoneWindow.java:1965) at com.android.internal.policy.impl.PhoneWindow.superDispatchKeyEvent(PhoneWindow.java:1421) at android.app.Activity.dispatchKeyEvent(Activity.java:2444) at android.support.v7.app.AppCompatActivity.dispatchKeyEvent(AppCompatActivity.java:513) at android.support.v7.view.WindowCallbackWrapper.dispatchKeyEvent(WindowCallbackWrapper.java:50) at android.support.v7.app.AppCompatDelegateImplBase$AppCompatWindowCallbackBase.dispatchKeyEvent(AppCompatDelegateImplBase.java:241) at com.android.internal.policy.impl.PhoneWindow$DecorView.dispatchKeyEvent(PhoneWindow.java:1877) at android.view.ViewRootImpl$ViewPostImeInputStage.processKeyEvent(ViewRootImpl.java:3805) at android.view.ViewRootImpl$ViewPostImeInputStage.onProcess(ViewRootImpl.java:3788) at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:3387) at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:3437) at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:3406) at android.view.ViewRootImpl$AsyncInputStage.forward(ViewRootImpl.java:3491) at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:3414) at android.view.ViewRootImpl$AsyncInputStage.apply(ViewRootImpl.java:3548) at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:3387) at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:3437) at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:3406) at android.view.ViewRootImpl$InputStage.apply(ViewRootImpl.java:3414) at android.view.ViewRootImpl$InputStage.deliver(ViewRootImpl.java:3387) at android.view.ViewRootImpl$InputStage.onDeliverToNext(ViewRootImpl.java:3437) at android.view.ViewRootImpl$InputStage.forward(ViewRootImpl.java:3406) at android.view.ViewRootImpl$AsyncInputStage.forward(ViewRootImpl.java:3524) at android.view.ViewRootImpl$ImeInputStage.onFinishedInputEvent(ViewRootImpl.java:3680) at 
android.view.inputmethod.InputMethodManager$PendingEvent.run(InputMethodManager.java:2000) at android.view.inputmethod.InputMethodManager.invokeFinishedInputEventCallback(InputMethodManager.java:1706) at android.view.inputmethod.InputMethodManager.finishedInputEvent(InputMethodManager.java:1697) at android.view.inputmethod.InputMethodManager$ImeInputEventSender.onInputEventFinished(InputMethodManager.java:1977) at android.view.InputEventSender.dispatchInputEventFinished(InputEventSender.java:141) at android.os.MessageQueue.nativePollOnce(Native Method) at android.os.MessageQueue.next(MessageQueue.java:132) at android.os.Looper.loop(Looper.java:124) at android.app.ActivityThread.main(ActivityThread.java:5178) at java.lang.reflect.Method.invokeNative(Native Method) at java.lang.reflect.Method.invoke(Method.java:525) at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:745) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:561) at dalvik.system.NativeStart.main(Native Method)
<issue_start><issue_comment>Title: Move Symfony2 to src/Symfony2/ and link the required ruleset.xml and Sniffs/ directory to the root
username_0: This will allow users to check out the repository directly to the Standards directory (i.e. "the old way"):

git clone git://github.com/escapestudios/Symfony2-coding-standard.git Symfony2

and still allow Composer to work as required.
<issue_comment>username_1: Hi @username_0,

I might be missing a trick here, but I still don't really get why we would have to wrap the Symfony2 dir in an additional src/ - could you shed some light on this PR? Thanks in advance!

Kind regards,
David
<issue_comment>username_0: Mostly for cleanliness. It's common convention to put your source under a ./src directory. It keeps the root clean. Right now we only have code to put under Symfony2, but in some hypothetical future, you could have a few other source files as well. Probably not for this project, but it's a good habit to be in.
<issue_comment>username_1: Hi @username_0,

thanks for clarifying that - in that case I'd prefer not to get this PR merged in... When looking at other coding standards, eg. the [Wordpress coding standards](https://github.com/WordPress-Coding-Standards/WordPress-Coding-Standards), they have multiple standards but they just live in the root of the repo.

Thanks though for opening the PR!

Kind regards,
David
<issue_start><issue_comment>Title: Execution of phing with vagrant failed.
username_0: At least phpunit and phpmd are no longer supported by the pear installation, so the phing build inside vagrant fails. With these changes I got it to work again.

By the way, installing dependencies with composer inside the VM would not work if the host is a Windows machine.
<issue_comment>username_1: @username_0 Thanks, someone will review your pull request soon
<issue_comment>username_2: @username_3 merge pls
<issue_comment>username_3: @username_2 OK, I'll try to merge now. You can check the progress of the merge [here](http://www.username_3.com/t/3610-97582129)
<issue_comment>username_3: @username_2 Done! FYI, the full log is [here](http://www.username_3.com/t/3610-97582129) (took me 7min)
<issue_comment>username_2: @username_0 many thanks!
<issue_comment>username_1: @username_3 deploy
<issue_comment>username_3: @username_1 OK, I'll try to deploy now. You can check the progress [here](http://www.username_3.com/t/3610-98072201)
<issue_comment>username_3: @username_1 Done! FYI, the full log is [here](http://www.username_3.com/t/3610-98072201) (took me 6min)
<issue_start><issue_comment>Title: Nodes are not reading hub's configuration during re-registering
username_0: Steps to reproduce:

**1) Start Selenium Grid. Let's say, only 2 stations: HUB and NODE.**

For the HUB, set timeouts as:
"timeout": 20000
"browserTimeout": 20000

For the NODE, do not set any timeouts explicitly.

**2) Run this code against the Grid:**
```java
driver.get(url);
sleep(30000);
driver.get(url);
```
It fails with `org.openqa.selenium.remote.SessionNotFoundException`. This line is added to the NODE log:
`INFO org.openqa.selenium.remote.server.DriverServlet - Session 7f5fffec-4882-4c4c-b091-c780c66d379d deleted due to client timeout`

**3) So, trying to resolve the issue, increase the timeouts on the HUB:**
"timeout": 40000
"browserTimeout": 40000

Restart the HUB. The NODE will re-register automatically.

**Defect:** The code still fails. The timeout is still set at 20 seconds. But if you look at the Grid console, it shows the new (updated) configuration, which is misleading. The reason: the NODE hasn't read the new timeouts from the HUB. The NODE will only read that new configuration when restarted.

**Expected:** The NODE would read the configuration from the HUB when re-registering.
<issue_comment>username_1: why are you setting your hub timeout to `20000` if you are going to sleep for `30000`?
<issue_comment>username_0: That was an example. You are right, it's not the best. Actually, the issue I had was that after I installed Selenium with [Chef](https://supermarket.chef.io/cookbooks/selenium) the default was [30 seconds](https://github.com/dhoer/chef-selenium/blob/master/resources/hub.rb), and I needed it to be longer.

So, the defect is that the HUB configuration is not propagated to the NODE when it is re-registering.
<issue_comment>username_2: Assigning this to myself for follow up... Will apply any changes (if needed) to Se3.
<issue_comment>username_3: Observed the same issue when fixing misunderstood timeouts (seconds vs. milliseconds) on hub configuration (docker-selenium). We restarted the hub only but with no result. Restarting the nodes helped, though.<issue_closed>
<issue_start><issue_comment>Title: Categorize POIs to facilitate result filtering [PL-PG06]
username_0: Categories will make reverse place geocoding a useful service. This could be done in conjunction with adding popularity scores/tags.
<issue_comment>username_1: ESRI's geocoder has a broad [selection of categories](https://developers.arcgis.com/rest/geocode/api-reference/geocoding-category-filtering.htm).
<issue_comment>username_2: Hey @username_1, I've done some [research in to place taxonomies](https://github.com/pelias/pelias/wiki/Taxonomy-v1), I'm looking to implement a first version of it this week. If you have any comments or suggestions please let me know ;)<issue_closed>
<issue_comment>username_0: Still need to add category mapping to all import pipelines.
<issue_comment>username_1: @username_2 Your research is excellent and I support your proposed categories in a Pelias-specific taxonomy, but I think that you would be reinventing the wheel to create a new taxonomy.
<issue_comment>username_2: hey @username_1 thanks for the feedback, so in the end I decided to flip-flop on my "start small and learn from it" idea, I think it's going to be really hard to learn anything profound from a small taxonomy so it's [grown a bit bigger](https://github.com/pelias/openstreetmap/blob/master/config/category_map.js) than the original proposal. The above linked code is live now in production and we are categorising the whole planet using that mapping. We will have to keep it as a private API for now until the dust settles on this feature, I expect to have the query logic exposed in the RESTful API in the next week or so and I'll share the API parameters with you along with the disclaimer that it's an undocumented feature and the categories will no doubt change or go away without warning until we finalize them. The really good news is that since we are foodies I've added cuisine categorisation so you can use the `/reverse` API to find the 10 nearest Korean BBQ places :)
<issue_comment>username_0: TBD:
- return categories in geojson? all endpoints?
- implement on /search for all endpoints
- [wait till es@2.0] implement for /suggest
<issue_comment>username_2: FYI @username_1 this feature is now live for `/reverse` if you would like to have a play, it's undocumented but pretty self explanatory, you can use a query such as:
```
http://pelias.mapzen.com/reverse?lat=52.516701&lon=13.400000&size=40&categories=food:cuisine:chinese
```
The `?categories` param takes a comma-separated list of [categories](https://github.com/pelias/openstreetmap/blob/master/config/category_map.js) which are treated as `OR` conditions, any POIs which do not match *any* of the conditions will be excluded.

Oh, also we rolled out a performance fix for bbox queries recently which should have resulted in a 3x speed up for your pelias queries. Enjoy!
<issue_comment>username_0: We will be getting categories taxonomy from our data team in the near future.<issue_closed>
<issue_start><issue_comment>Title: Only first component is shown if application has no element username_0: If my application is a component without a tagName, only the first component rendered will show in the view tree. Is there any way to fix this? <issue_comment>username_1: This should be resolved as of the new component tree stuff.<issue_closed>
<issue_start><issue_comment>Title: Correctly use tabindex attributes error for news username_0: From #12: URL: http://m.live.bbc.co.uk/news ( I have used 'm.live' instead of 'm.' as it forces the mobile site to load for the tests. 'm.' redirects to 'www' on desktop browser widths) This looks more like a genuine accessibility issue on the page. @username_1 could you please confirm. (On the page in 2nd column the Most - Read tab - 'most-popular__header__tabs--read open' has tabindex="0"). Correctly use tabindex attributes Scenario: Check all tabindex values ✗ expected [#<Capybara::Element tag="a">].empty? to return true, got false /Library/Ruby/Gems/2.0.0/gems/rspec-expectations-3.2.0/lib/rspec/expectations/fail_with.rb:29:in fail_with' /Library/Ruby/Gems/2.0.0/gems/rspec-expectations-3.2.0/lib/rspec/expectations/handler.rb:40:inhandle_failure' /Library/Ruby/Gems/2.0.0/gems/rspec-expectations-3.2.0/lib/rspec/expectations/handler.rb:50:in block in handle_matcher' /Library/Ruby/Gems/2.0.0/gems/rspec-expectations-3.2.0/lib/rspec/expectations/handler.rb:27:inwith_matcher' /Library/Ruby/Gems/2.0.0/gems/rspec-expectations-3.2.0/lib/rspec/expectations/handler.rb:48:in handle_matcher' /Library/Ruby/Gems/2.0.0/gems/rspec-expectations-3.2.0/lib/rspec/expectations/expectation_target.rb:54:into' /Library/Ruby/Gems/2.0.0/gems/bbc-a11y-0.0.4/lib/bbc/a11y/cucumber_support/page.rb:62:in must_not_have_any_positive_tabindex_values' /Library/Ruby/Gems/2.0.0/gems/bbc-a11y-0.0.4/features/step_definitions/page_steps.rb:45:in/^there should be no elements with a tabindex attribte of 0 or greater$/' <issue_comment>username_1: tabindex values of 0 are acceptable on <a>, <button>, or <input>[type=checkbox,color,date,datetime,datetime-local,email,file,month,number,password,radio,range,search,tel,text,time,url,week] elements, and should therefore pass this test.<issue_closed>
<issue_start><issue_comment>Title: Adds systemd init style (support for arch platform) username_0: I have added a systemd unit file which can be used by setting `node[consul][init_style] = 'systemd'`. I have tested this init style on Arch Linux and everything seems to work as expected. It is probably best to use this init style as the default on `platform_family?("arch")`. I apologize for not adding the platform to `.kitchen.yml`, but I do not have Vagrant set up at the moment, so I must leave that up to someone else.
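For context, a minimal sketch of what such a unit file might look like; the binary path and config directory here are assumptions, not necessarily what the cookbook templates:

```ini
# /etc/systemd/system/consul.service (sketch)
[Unit]
Description=Consul agent
After=network.target

[Service]
ExecStart=/usr/local/bin/consul agent -config-dir /etc/consul.d
Restart=on-failure

[Install]
WantedBy=multi-user.target
```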
<issue_start><issue_comment>Title: Can't (correctly) generate a deb with a file in /etc/sudoers.d username_0: When you try to generate a deb with a file in /etc/sudoers.d, you can choose between a lintian error bad-perm-for-file-in-etc-sudoers.d 0660 != 0440, or generate the file with the expected `withPerms "0440"` and have the second build fail with a (misleading) java.io.FileNotFoundException: ... (Permission denied). Looks like the target directory gets reused without cleanup between builds. <issue_comment>username_1: Our target dir need to be refactored I think. Now it seems quite messy and unstructured. Also, cleaning this directory (not whole target) before creating package is a good idea IMHO (I've got some bugs with this). WDYT @username_2 ? <issue_comment>username_2: @username_1 you mean the created debian target directory structure? What particularly do you mean? I think the code base could always use a bit of a facelifting (there are too many debian related parts spread in the code base). Cleaning `target in Debian` before each packaging sounds good to me. <issue_comment>username_1: Yes, I mean dir structure created during packaging in target. <issue_comment>username_2: IMHO the messy part is where all the bin, configure scripts are placed :/
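For reference, a sketch of the kind of mapping that triggers this; it assumes the Linux/Debian packaging plugin is enabled, and the source file and package path are hypothetical. `withPerms "0440"` is the form that then fails on the second build:

```scala
// build.sbt fragment (sketch, sbt-native-packager)
linuxPackageMappings += packageMapping(
  (sourceDirectory.value / "sudoers-myapp") -> "/etc/sudoers.d/myapp"
) withPerms "0440"
```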
<issue_start><issue_comment>Title: The domain name tornadoweb.org seems to be expired
username_0: Afternoon,

I believe the domain name tornadoweb.org is expired. Here is what I get when I visit it

<img width="1440" alt="screen shot 2016-09-13 at 17 01 45" src="https://cloud.githubusercontent.com/assets/17003643/18481509/4e328988-79d4-11e6-9f12-fbb5e8612189.png">

Cheers,
Florent
<issue_comment>username_0: For those looking urgently for the documentation, here's a link: http://tornado.readthedocs.io/en/stable/
<issue_comment>username_1: thanks @username_0 !!
<issue_comment>username_2: Thanks for the report; I'm working on it.
<issue_comment>username_3: +1
![image](https://cloud.githubusercontent.com/assets/1590268/18499490/03ecd79a-7a72-11e6-84e9-2d0b35279890.png)
<issue_comment>username_2: Domain is renewed, but DNS is a little messed up so it's redirecting to tornado.readthedocs.io until we can get that sorted out.
<issue_comment>username_4: There is still a problem.
![image](https://cloud.githubusercontent.com/assets/949232/18519640/57c1b7e2-7aad-11e6-891c-f8b686026be8.png)
<issue_comment>username_5: Seems to be up and working now (9/15/2016 ~2:00 MT)
<issue_comment>username_6: The website's restored now.<issue_closed>
<issue_start><issue_comment>Title: Fix bug in the sample code in the docs username_0: This bug actually confused a beginner, by making them believe that the stream was somehow mutable. <issue_comment>username_1: In this case, because GenEvent is a process, consuming messages from the stream *is* mutable. In any case, the example is still wrong because it should be using `take` instead of `drop` (i.e. consume the first three messages, then consumes the rest), otherwise it will consume the whole stream, which is why your change works, but remaining will always be a list. I will change it locally, thank you! :heart: :green_heart: :blue_heart: :yellow_heart: :purple_heart: <issue_comment>username_1: Actually, the example today uses `take`, I got confused. So to me, the current example is correct, I have also ran it locally and it is fine. Which behaviour are you seeing? <issue_comment>username_0: Oh man, you're totally right. The code I was looking at had taken this, applied it to a normal Stream, and expected similar results. Sorry about that!
<issue_start><issue_comment>Title: Incorrect decoding of paletted PNG username_0: [This PNG](https://github.com/servo/servo/blob/master/tests/wpt/web-platform-tests/images/green.png) should be solid green but the library decodes a 4-pixel stripe of black on the left edge. RGB data: ``` 00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 ff 00 00 00000010 ff 00 00 ff 00 00 ff 00 00 ff 00 00 ff 00 00 ff ... ``` <issue_comment>username_1: Thanks you, fixed and waiting for travis…<issue_closed>
<issue_start><issue_comment>Title: pull request for e2e test at 12/18/2016 00:26:19 username_0: <issue_comment>username_0: ### :white_check_mark: Validation status: passed For more details, please refer to the [build report](https://opbuildstoragesandbox2.blob.core.windows.net/report/2016%5C12%5C18%5C623dccef-8d46-20fb-c28a-9374b0737101%5CPullRequest%5C201612180026269710-16764%5Cworkflow_report.html). **Note:** If you changed an existing file name or deleted a file, broken links in other files to the deleted or renamed file are listed only in the full build report.
<issue_start><issue_comment>Title: Test fixes
username_0: Hi,

I made two adjustments to the tests:

* All tests currently failed on Windows because the cleanUp() tried to remove the current working directory, so I added a chdir to the project directory before deleting.
* I removed the UTF-8 path handling tests. These fail on Windows (probably because my locale is not Asian) and these tests are really not relevant to homura. The user should simply always use unicode strings. UTF-8-encoded files still work correctly, though, so I left them in.

The UTF-8 encoded directory created by os.mkdir is jumbled on an unsuitable locale:
![homura_dirs](https://cloud.githubusercontent.com/assets/1778160/11540356/fcf28440-9934-11e5-985a-a4f09fc4e98b.png)

All tests pass on Linux Python 2.7, but Linux Python 3.4 and both Python versions on Windows have problems with the SSL certificate bundle being used by PyCurl by default (at least on my setup).

Best regards,
Martin
<issue_comment>username_1: Hi Martin,

Thanks for the pull requests. I think the utf-8 issue involves many parts in the code and I don't want to break other users in future releases. I want to keep the tests at this time. I think maybe a better way is to toggle off those unittest functions on Windows by checking the platform?

The chdir patch is fine. Can you rebase the commits so that I can merge only the chdir fix?
<issue_comment>username_0: Okay, that is reasonable. I deleted the commit involving the utf-8 tests. I will probably modify it as you suggested and submit another pull request later.
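The chdir fix from the first bullet, as a sketch (the class and attribute names here are invented for illustration, not homura's actual test code):

```python
import os
import shutil
import tempfile
import unittest


class DownloadTestCase(unittest.TestCase):  # hypothetical test case
    def setUp(self):
        self.old_cwd = os.getcwd()
        self.tmp_dir = tempfile.mkdtemp()
        os.chdir(self.tmp_dir)

    def tearDown(self):
        # chdir out first: on Windows you cannot remove the current
        # working directory, which is what broke the test cleanup
        os.chdir(self.old_cwd)
        shutil.rmtree(self.tmp_dir)
```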
<issue_start><issue_comment>Title: Very likely bug in error correction constants
username_0: I've been experiencing a bug that only affects QR code version 27 with error correction set to Q; all other combinations don't seem affected. I believe that the source of this bug is an incorrect value in `DATA_BYTES_PER_BLOCK` for that combination (file `ec.rs`, line `415`). Table 9 of the standard states that the number of blocks should be 26 for `24`, not 27.

Your input on this matter would be greatly appreciated. Thank you.<issue_closed>
<issue_comment>username_1: Fixed in fee9deda25a19f6b659a8fb14493d52473eba1f2. Thanks!
<issue_start><issue_comment>Title: Surface plot has incorrect ranges returned by getMinGlobal() and getMaxGlobal()
username_0: Related to this issue: https://github.com/username_0/LavaVu/issues/9

In libUnderworld/gLucifer/DrawingObjects/src/CrossSection.cpp
Line: 551

The functions getMinGlobal() and getMaxGlobal() are returning an incorrect range of values for a field.

When a Surface object is plotted the range returned is [290,310], resulting in incorrect colour mapping as seen in the edge surface plot in the posted image on the above issue. When called after sampling a volume it gets a global range of [293,529].

Inspecting the data exported from either plot shows the actual range of values is [293,529], so the volume plot is getting the correct range.

These both use the same CrossSection.cpp sampling code; for a volume it is called repeatedly to sample slices across the domain, whereas Surface only samples on the edges.

@jmansour any ideas why this would be happening?<issue_closed>
<issue_comment>username_0: Fixed with 19254958bd81a3982b331f9511b5d4123605eda9 and https://github.com/username_0/LavaVu/commit/aa3172475e8bafd40024b72e70dbeeea821d32e1, now calculating range in LavaVu instead
<issue_start><issue_comment>Title: User authentication username_0: At present, we only support application authentication. We should also support user authentication. See https://dev.twitter.com/oauth/overview/introduction <issue_comment>username_1: I'll do this. <issue_comment>username_0: https://dev.twitter.com/oauth/reference <issue_comment>username_0: https://oauth.net/core/1.0/#anchor9<issue_closed>
<issue_start><issue_comment>Title: Directory path error in BaseMenuTest username_0: | Question | Answer |----------------|------------------------------------------------------------- | Bundle version | dev-master 02478b4 | Symfony version| v2.8.5 | php version | PHP 5.6.21 # Error message When running the included tests i get ``` 1) Sonata\AdminBundle\Tests\Menu\Integration\TabMenuTest::testLabelTranslationNominalCase Twig_Error_Loader: The "/home/aaronm/Sites/devadmin/vendor/sonata-project/admin-bundle/Tests/Menu/Integration/../../../vendor/knplabs/knp-menu/src/Knp/Menu/Resources/views" directory does not exist. /home/aaronm/Sites/devadmin/vendor/twig/twig/lib/Twig/Loader/Filesystem.php:94 /home/aaronm/Sites/devadmin/vendor/twig/twig/lib/Twig/Loader/Filesystem.php:76 /home/aaronm/Sites/devadmin/vendor/twig/twig/lib/Twig/Loader/Filesystem.php:34 /home/aaronm/Sites/devadmin/vendor/sonata-project/admin-bundle/Tests/Menu/Integration/BaseMenuTest.php:43 2) Sonata\AdminBundle\Tests\Menu\Integration\TabMenuTest::testLabelTranslationWithParameters Twig_Error_Loader: The "/home/aaronm/Sites/devadmin/vendor/sonata-project/admin-bundle/Tests/Menu/Integration/../../../vendor/knplabs/knp-menu/src/Knp/Menu/Resources/views" directory does not exist. /home/aaronm/Sites/devadmin/vendor/twig/twig/lib/Twig/Loader/Filesystem.php:94 /home/aaronm/Sites/devadmin/vendor/twig/twig/lib/Twig/Loader/Filesystem.php:76 /home/aaronm/Sites/devadmin/vendor/twig/twig/lib/Twig/Loader/Filesystem.php:34 /home/aaronm/Sites/devadmin/vendor/sonata-project/admin-bundle/Tests/Menu/Integration/BaseMenuTest.php:43 3) Sonata\AdminBundle\Tests\Menu\Integration\TabMenuTest::testLabelTranslationDomainOverride Twig_Error_Loader: The "/home/aaronm/Sites/devadmin/vendor/sonata-project/admin-bundle/Tests/Menu/Integration/../../../vendor/knplabs/knp-menu/src/Knp/Menu/Resources/views" directory does not exist. /home/aaronm/Sites/devadmin/vendor/twig/twig/lib/Twig/Loader/Filesystem.php:94 /home/aaronm/Sites/devadmin/vendor/twig/twig/lib/Twig/Loader/Filesystem.php:76 /home/aaronm/Sites/devadmin/vendor/twig/twig/lib/Twig/Loader/Filesystem.php:34 /home/aaronm/Sites/devadmin/vendor/sonata-project/admin-bundle/Tests/Menu/Integration/BaseMenuTest.php:43 ``` # Steps to reproduce Run the included tests # Expected results Tests should pass # Actual results See error message # Suggested fix Modify BaseMenuTest->setUp() to add array_filter and correct path for my installation type, as follows: ``` public function setUp() { $twigPaths = array_filter(array( __DIR__.'/../../../vendor/knplabs/knp-menu/src/Knp/Menu/Resources/views', __DIR__.'/../../../../../knplabs/knp-menu/src/Knp/Menu/Resources/views', __DIR__.'/../../../Resources/views', ), 'is_dir'); $loader = new StubFilesystemLoader($twigPaths); $this->environment = new \Twig_Environment($loader, array('strict_variables' => true)); } ``` This is similar to the technique used to solve the same problem in BaseWidgetTest. Thank you! <issue_comment>username_1: It looks like you're trying to run the admin tests from a project. Why are you doing that? <issue_comment>username_0: Is that not intended? I got the idea looking at bin/qa_client_ci.sh in sonata-sandbox. My thinking was that this would be a handy way to verify if something broke after updating dependencies. Am I barking up the wrong tree here? Apologies if so. <issue_comment>username_1: No, I don't think it is : when you get them, dependencies have already been tested, independently from each other. 
Also, some development dependencies that are necessary for testing might be missing. The sonata sandbox does run some unit tests, but I don't know why there would be a need for that : everything we merge passes tests. <issue_comment>username_1: But maybe I'm wrong and the sandbox test broke because of that… <issue_comment>username_2: you are right @username_1 <issue_comment>username_1: Closing then<issue_closed>
<issue_start><issue_comment>Title: update brings lots of "undefined-key" and binding problems
username_0: For your information, I work under cygwin (I would rather not). I guess it's easy to fix but I don't have the time to mess with it.

Since a recent auto update, every time I start my zsh, I get:
```
"^@\" set-mark-comman" undefined-key
"^A\" beginning-of-lin" undefined-key
"^B\" backward-cha" undefined-key
"^D\" delete-char-or-lis" undefined-key
"^E\" end-of-lin" undefined-key
"^F\" forward-cha" undefined-key
"^G\" send-brea" undefined-key
"^H\" backward-delete-cha" undefined-key
"^I\" expand-or-complet" undefined-key
"^J\" accept-lin" undefined-key
"^K\" kill-lin" undefined-key
"^L\" clear-scree" undefined-key
"^M\" accept-lin" undefined-key
"^N\" down-line-or-histor" undefined-key
"^O\" accept-line-and-down-histor" undefined-key
"^P\" up-line-or-histor" undefined-key
"^Q\" push-lin" undefined-key
"^R\" history-incremental-search-backwar" undefined-key
"^S\" history-incremental-search-forwar" undefined-key
"^T\" transpose-char" undefined-key
"^U\" kill-whole-lin" undefined-key
"^V\" quoted-inser" undefined-key
"^W\" backward-kill-wor" undefined-key
"^X^B\" vi-match-bracke" undefined-key
"^X^E\" edit-command-lin" undefined-key
"^X^F\" vi-find-next-cha" undefined-key
"^X^J\" vi-joi" undefined-key
"^X^K\" kill-buffe" undefined-key
"^X^N\" infer-next-histor" undefined-key
"^X^O\" overwrite-mod" undefined-key
"^X^R\" _read_com" undefined-key
"^X^U\" und" undefined-key
"^X^V\" vi-cmd-mod" undefined-key
"^X^X\" exchange-point-and-mar" undefined-key
"^X*\" expand-wor" undefined-key
"^X=\" what-cursor-positio" undefined-key
"^X?\" _complete_debu" undefined-key
"^XC\" _correct_filenam" undefined-key
"^XG\" list-expan" undefined-key
"^Xa\" _expand_alia" undefined-key
"^Xc\" _correct_wor" undefined-key
"^Xd\" _list_expansion" undefined-key
"^Xe\" _expand_wor" undefined-key
"^Xg\" list-expan" undefined-key
"^Xh\" _complete_hel" undefined-key
"^Xm\" _most_recent_fil" undefined-key
"^Xn\" _next_tag" undefined-key
"^Xr\" history-incremental-search-backwar" undefined-key
"^Xs\" history-incremental-search-forwar" undefined-key
"^Xt\" _complete_ta" undefined-key
"^Xu\" und" undefined-key
"^X~\" _bash_list-choice" undefined-key
"^Y\" yan" undefined-key
"^[^D\" list-choice" undefined-key
"^[^G\" send-brea" undefined-key
"^[^H\" backward-kill-wor" undefined-key
"^[^I\" self-insert-unmet" undefined-key
"^[^J\" self-insert-unmet" undefined-key
"^[^L\" clear-scree" undefined-key
"^[^M\" self-insert-unmet" undefined-key
"^[^_\" copy-prev-wor" undefined-key
"^[ \" expand-histor" undefined-key
"^[!\" expand-histor" undefined-key
"^[\"\" quote-regio" undefined-key
"^[\$\" spell-wor" undefined-key
/home/maxime.beucher/.zcompdump:2043: unmatched '
```

Please help me :p And btw, thank you very much !! All of you !
<issue_comment>username_1: @username_2 do you get the same output? Have you run `rm ~/.zcompdump*`?
<issue_comment>username_2: @username_1 yes, exactly the same output without the last letter in bindings. Yes, removing the .zcompdump* files doesn't help, they are recreated broken again every login.
<issue_comment>username_1: Can you run this and post the output file?
```zsh
for d ($fpath); do [[ -f $d/compinit ]] && { echo $d/comp*; cat $d/comp* }; done > compfunctions.log
```
<issue_comment>username_2: @username_1 Here it is. Sorry for the delay.
[compfunctions.log](https://github.com/robbyrussell/oh-my-zsh/files/3106246/compfunctions.log) <issue_comment>username_1: I couldn't find anything wrong with your files, they're all the same as a healthy zsh 5.0.7 installation. Try running `zsh -xvic exit &> ~/zsh-dump.log` and posting the generated file. <issue_comment>username_2: @username_1 [zsh-dump.log](https://github.com/robbyrussell/oh-my-zsh/files/3121372/zsh-dump.log) <issue_comment>username_1: Can you do `rm ~/.zcompdump*` and rerun the debugging zsh session from https://github.com/robbyrussell/oh-my-zsh/issues/745#issuecomment-486687702? I only see it loading the broken zcompdump file. <issue_comment>username_2: @username_1 I did `rm -f ~/.zcompdump*; zsh -xvic exit &> ~/zsh-dump.log` from bash this time [zsh-dump.log](https://github.com/robbyrussell/oh-my-zsh/files/3121812/zsh-dump.log) <issue_comment>username_1: It looks like compdump already gets the wrong bindkey values and I can't see anywhere else before compdump where these wrong values are set. Can you check if on a barebones zsh session bindkey is already messed up? Run `zsh -f` and then print the bindkey values running `bindkey`. <issue_comment>username_2: @username_1 Nope, they are fine of course, otherwise I'd not to raise that issue :) ```bindkey "^@" set-mark-command "^A" beginning-of-line "^B" backward-char "^D" delete-char-or-list "^E" end-of-line "^F" forward-char "^G" send-break "^H" backward-delete-char "^I" expand-or-complete "^J" accept-line "^K" kill-line "^L" clear-screen "^M" accept-line "^N" down-line-or-history "^O" accept-line-and-down-history "^P" up-line-or-history "^Q" push-line "^R" history-incremental-search-backward "^S" history-incremental-search-forward "^T" transpose-chars "^U" kill-whole-line "^V" quoted-insert "^W" backward-kill-word "^X^B" vi-match-bracket "^X^F" vi-find-next-char "^X^J" vi-join "^X^K" kill-buffer "^X^N" infer-next-history "^X^O" overwrite-mode "^X^U" undo "^X^V" vi-cmd-mode "^X^X" exchange-point-and-mark "^X*" expand-word "^X=" what-cursor-position "^XG" list-expand "^Xg" list-expand "^Xr" history-incremental-search-backward "^Xs" history-incremental-search-forward "^Xu" undo "^Y" yank "^[^D" list-choices "^[^G" send-break "^[^H" backward-kill-word "^[^I" self-insert-unmeta "^[^J" self-insert-unmeta "^[^L" clear-screen "^[^M" self-insert-unmeta "^[^_" copy-prev-word "^[ " expand-history "^[!" expand-history "^[\"" quote-region "^[\$" spell-word "^['" quote-line "^[-" neg-argument "^[." insert-last-word "^[0" digit-argument "^[1" digit-argument "^[2" digit-argument "^[3" digit-argument "^[4" digit-argument "^[5" digit-argument "^[6" digit-argument "^[7" digit-argument "^[8" digit-argument "^[9" digit-argument "^[<" beginning-of-buffer-or-history "^[>" end-of-buffer-or-history "^[?" which-command "^[A" accept-and-hold "^[B" backward-word "^[C" capitalize-word "^[D" kill-word "^[F" forward-word "^[G" get-line "^[H" run-help "^[L" down-case-word "^[N" history-search-forward [Truncated] "^[f" forward-word "^[g" get-line "^[h" run-help "^[l" down-case-word "^[n" history-search-forward "^[p" history-search-backward "^[q" push-line "^[s" spell-word "^[t" transpose-words "^[u" up-case-word "^[w" copy-region-as-kill "^[x" execute-named-cmd "^[y" yank-pop "^[z" execute-last-named-cmd "^[|" vi-goto-column "^[^?" backward-kill-word "^_" undo " "-"~" self-insert "^?" backward-delete-char "\M-^@"-"\M-^?" self-insert <issue_comment>username_1: Can you check after running `compinit` on a clean session? 
`zsh -f`, then
```
rm ~/.zcompdump*
autoload compinit && compinit
bindkey
```
If they are still correct then it is an OMZ problem and we'd have to rule out core files one by one.
<issue_comment>username_2: @username_1 The output doesn't differ much if it is taken that way:
![image](https://user-images.githubusercontent.com/7239315/56866616-ef03b100-69e3-11e9-8f7b-8d976bcf3f95.png)
<issue_comment>username_1: Hmm, ok then it must be triggered by an OMZ setting. Let's rule out lib files one by one: disable read permissions of $ZSH/lib/* files (you'll get a permission denied error, that's expected), restart the zsh session (`exec zsh`), then check the `bindkey` output. Remember to `rm ~/.zcompdump*` before each restart of zsh. Once you know which file is the culprit, enable read permissions again and comment out lines of that file, restarting zsh as before, until you find out which command is messing up your key bindings. You should then be able to start a clean zsh session (`zsh -f`) and, running that same command (followed by `compinit`, presumably), trigger the bug again.
"^[^_\" copy-prev-wor" undefined-key "^[ \" expand-histor" undefined-key "^[!\" expand-histor" undefined-key "^[\"\" quote-regio" undefined-key "^[\$\" spell-wor" undefined-key /home/maxime.beucher/.zcompdump:2043: unmatched ' ``` Please help me :p And btw, thank you very much !! All of you ! <issue_comment>username_2: @username_1 Sorry, for the delay again. After I removed read permissions from library files, removed compdump file, logged into zsh, logged out and logged in again, the original message `/root/.zcompdump-hostname-5.0.7:1612: unmatched '` still appears. `bindkey` output is always fine though. Not sure how that output can be helpful, isn't it containing only active, thus correct values? <issue_comment>username_1: This is a supposition from the zsh trace you posted and the compdump code: <details><summary><a href="https://github.com/zsh-users/zsh/blob/zsh-5.0.7/Completion/compdump#L94-L99">compdump code</a></summary> ```zsh # Now dump the key bindings. We dump all bindings for zle widgets # whose names start with a underscore. # We need both the zle -C's and the bindkey's to recreate. # We can ignore any zle -C which rebinds a standard widget (second # argument to zle does not begin with a `_'). ... bindkey | while read -rA _d_line; do if [[ ${_d_line[2]} = (${(j.|.)~_d_bks}) ]]; then print -r "bindkey '${_d_line[1][2,-2]}' ${_d_line[2]}" fi done >> $_d_file ``` </details> <details> <summary>relevant zsh trace:</summary> ```zsh +compdump:94> bindkey +compdump:95> read -rA _d_line +compdump:96> [[ '' == () ]] +compdump:97> print -r 'bindkey '\''^@" set-mark-comman'\'' ' +compdump:95> read -rA _d_line +compdump:96> [[ '' == () ]] +compdump:97> print -r 'bindkey '\''^A" beginning-of-lin'\'' ' +compdump:95> read -rA _d_line +compdump:96> [[ '' == () ]] +compdump:97> print -r 'bindkey '\''^B" backward-cha'\'' ' +compdump:95> read -rA _d_line +compdump:96> [[ '' == () ]] +compdump:97> print -r 'bindkey '\''^D" delete-char-or-lis'\'' ' +compdump:95> read -rA _d_line +compdump:96> [[ '' == () ]] +compdump:97> print -r 'bindkey '\''^E" end-of-lin'\'' ' +compdump:95> read -rA _d_line +compdump:96> [[ '' == () ]] +compdump:97> print -r 'bindkey '\''^F" forward-cha'\'' ' +compdump:95> read -rA _d_line +compdump:96> [[ '' == () ]] +compdump:97> print -r 'bindkey '\''^G" send-brea'\'' ' +compdump:95> read -rA _d_line +compdump:96> [[ '' == () ]] +compdump:97> print -r 'bindkey '\''^H" backward-delete-cha'\'' ' +compdump:95> read -rA _d_line +compdump:96> [[ '' == () ]] +compdump:97> print -r 'bindkey '\''^I" expand-or-complet'\'' ' +compdump:95> read -rA _d_line +compdump:96> [[ '' == () ]] +compdump:97> print -r 'bindkey '\''^J" accept-lin'\'' ' +compdump:95> read -rA _d_line +compdump:96> [[ '' == () ]] +compdump:97> print -r 'bindkey '\''^K" kill-lin'\'' ' +compdump:95> read -rA _d_line +compdump:96> [[ '' == () ]] +compdump:97> print -r 'bindkey '\''^L" clear-scree'\'' ' +compdump:95> read -rA _d_line +compdump:96> [[ '' == () ]] +compdump:97> print -r 'bindkey '\''^M" accept-lin'\'' ' +compdump:95> read -rA _d_line +compdump:96> [[ '' == () ]] +compdump:97> print -r 'bindkey '\''^N" down-line-or-histor'\'' ' +compdump:95> read -rA _d_line +compdump:96> [[ '' == () ]] +compdump:97> print -r 'bindkey '\''^O" accept-line-and-down-histor'\'' ' +compdump:95> read -rA _d_line +compdump:96> [[ '' == () ]] +compdump:97> print -r 'bindkey '\''^P" up-line-or-histor'\'' ' +compdump:95> read -rA _d_line +compdump:96> [[ '' == () ]] +compdump:97> print -r 'bindkey '\''^Q" push-lin'\'' ' 
+compdump:95> read -rA _d_line
+compdump:96> [[ '' == () ]]
[Truncated]
zle -C _list_expansions .list-choices _expand_word
zle -C _most_recent_file .complete-word _most_recent_file
zle -C _next_tags .list-choices _next_tags
zle -C _read_comp .complete-word _read_comp
bindkey '^X^R' _read_comp
bindkey '^X?' _complete_debug
bindkey '^XC' _correct_filename
bindkey '^Xa' _expand_alias
bindkey '^Xc' _correct_word
bindkey '^Xd' _list_expansions
bindkey '^Xe' _expand_word
bindkey '^Xh' _complete_help
bindkey '^Xm' _most_recent_file
bindkey '^Xn' _next_tags
bindkey '^Xt' _complete_tag
bindkey '^X~' _bash_list-choices
bindkey '^[,' _history-complete-newer
bindkey '^[/' _history-complete-older
bindkey '^[~' _bash_complete-word
```

</details>
<issue_comment>username_1: Ok nevermind, after going over the zsh trace once more I can see the problem, but I'm not yet sure of the cause. Let's read the compdump code again:
```zsh
bindkey | while read -rA _d_line; do
  if [[ ${_d_line[2]} = (${(j.|.)~_d_bks}) ]]; then
    print -r "bindkey '${_d_line[1][2,-2]}' ${_d_line[2]}"
  fi
done >> $_d_file
```
and one of the iterations of the while loop:
```zsh
+compdump:95> read -rA _d_line
+compdump:96> [[ '' == () ]]
+compdump:97> print -r 'bindkey '\''^@" set-mark-comman'\'' '
```
The intention of `read -rA _d_line` is to read one line of input and parse it such that it puts every "word" of the line into the _d_line variable, which acts as an array. The line is therefore split on spaces and each item is one word. A `bindkey` line is of the form:
```
"^@" set-mark-command
```
So we should expect it to be parsed such that the key sequence (`"^@"` in this case) ends up at position 1 of the array (`$_d_line[1]`), and the bindkey widget (`set-mark-command`) ends up at position 2. For some reason that's not what we get; instead the whole line is put in position 1. Therefore:
```zsh
if [[ ${_d_line[2]} = (${(j.|.)~_d_bks}) ]]; then
```
runs like
```
+compdump:96> [[ '' == () ]]
```
because there is no element `$_d_line[2]`. Furthermore,
```zsh
print -r "bindkey '${_d_line[1][2,-2]}' ${_d_line[2]}"
```
runs like
```
+compdump:97> print -r 'bindkey '\''^@" set-mark-comman'\'' '
```
because the 1st element is used, but its first and last characters are removed (that's what `${var[2,-2]}` does). In this case, the first double quote `"` and the last letter of `set-mark-command`.

**That tells me that there is an issue with the `read` call.** To debug this, let's see what you get running
```zsh
echo '"^@" set-mark-command' | read -rA _d_line
print -l $_d_line
```
in a normal zsh session and one without rc files (`zsh -f`).
<issue_comment>username_2: @username_1
```
...
/root/.zcompdump-hostname-5.0.7:1612: unmatched '
➜ ~ rm /root/.zcompdump-hostname-5.0.7
➜ ~ which -a read
/usr/bin/which: no read in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin)
➜ ~ which -a read
/usr/bin/which: no read in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin)
➜ ~ echo '"^@" set-mark-command' | read -rA _d_line
➜ ~ print -l $_d_line
"^@" set-mark-command
➜ ~

[root@hostname.scl ~]# zsh -f
hostname# which -a read
read: shell built-in command
hostname# echo '"^@" set-mark-command' | read -rA _d_line
hostname# print -l $_d_line
"^@"
set-mark-command
hostname#
```
<issue_comment>username_1: Can you try on a normal zsh session again, but having removed the broken zcompdump first before running it?
<issue_comment>username_2: @username_1 sorry!
```
[root@hostname ~]# rm -f ~/.zcompdump*
[root@hostname ~]# zsh
➜ ~ which -a read
/usr/bin/which: no read in (/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin)
➜ ~ echo '"^@" set-mark-command' | read -rA _d_line
➜ ~ print -l $_d_line
"^@" set-mark-command
➜ ~

[root@hostname ~]# rm -f ~/.zcompdump*
[root@hostname ~]# zsh -f
hostname# which -a read
read: shell built-in command
hostname# echo '"^@" set-mark-command' | read -rA _d_line
hostname# print -l $_d_line
"^@"
set-mark-command
hostname#
```
<issue_comment>username_1: I thought it was a fluke, that's why I asked a second time. Now I see that `which` is aliased in `/etc/profile.d/which2.sh`:
```
+_src_etc_profile_d:9> i=/etc/profile.d/which2.sh
+_src_etc_profile_d:10> [ -r /etc/profile.d/which2.sh ']'
+_src_etc_profile_d:11> . /etc/profile.d/which2.sh
# Initialization script for bash and sh
# export AFS if you are in AFS environment
alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'
+/etc/profile.d/which2.sh:4> alias 'which=alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'
```
Run that normal zsh again but with this code (and deleting the zcompdump file first):
```zsh
builtin which -a read
echo '"^@" set-mark-command' | builtin read -rA _d_line
print -l $_d_line
```
<issue_comment>username_1: Don't bother, I think I have the culprit: `/etc/profile.d/five9_profile.sh` resets the `IFS` variable, which is used by `read` to know which characters separate words. Snippet of the trace:
```
case "$TERM" in
    xterm) color_prompt=yes;;
    xterm-color) color_prompt=yes;;
    xterm-256color) color_prompt=yes;;
    screen) color_prompt=yes;;
esac
+/etc/profile.d/five9_profile.sh:15> case xterm-256color (xterm)
+/etc/profile.d/five9_profile.sh:15> case xterm-256color (xterm-color)
+/etc/profile.d/five9_profile.sh:15> case xterm-256color (xterm-256color)
+/etc/profile.d/five9_profile.sh:18> color_prompt=yes
IFS="
"
+/etc/profile.d/five9_profile.sh:22> IFS='
'
# Set the default PS1 and bash options
. ${profile_path}/bashrc_default
```
If you go to that file and comment out the line that changes `IFS`, maybe it'll work again. Run this afterwards on a new shell to confirm:
```
echo '"^@" set-mark-command' | read -rA _d_line
print -l $_d_line
```
I don't know where that file comes from or why it changes the IFS variable, but the normal thing to do is to reset it back to its original value after it's done using it.<issue_closed>
<issue_comment>username_2: @username_1 oops! I know where that file comes from. Thanks a lot for your help. You are 100% correct, `unset IFS` was missing. Could you please wipe the name of that file from your previous message, like you 'redacted' the logs?
<issue_comment>username_1: Done, I think that's all? Cheers
<issue_comment>username_3: For anyone else researching this problem, I experienced a similar issue after I installed Node Version Manager (https://github.com/nvm-sh/nvm). The workaround I used was to remove the `~/.nvm/nvm.sh` loading script that was added to `.zshrc`.
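For readers hitting the same class of problem, here is a rough Python rendering of the failure mode traced in this thread; the string and the indices are purely illustrative, not oh-my-zsh or zsh code. With the space removed from `IFS`, `read -rA` leaves the whole bindkey line in the first array element, and the `${_d_line[1][2,-2]}` slice then chops off the opening quote and the final letter, producing exactly the broken `"^@\" set-mark-comman"` entries from the original report.

```python
# Illustrative only: mimics compdump's word splitting with and without
# a space in the field-separator set.
line = '"^@" set-mark-command'

fields_ok = line.split(" ")   # IFS contains a space: two fields
fields_broken = [line]        # IFS reset to newline only: one field

print(fields_ok[0][1:-1])      # '^@'  -> the key compdump expects
print(fields_ok[1])            # 'set-mark-command'
print(fields_broken[0][1:-1])  # '^@" set-mark-comman' -> the broken dump entry
```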
<issue_start><issue_comment>Title: Change signal to `on`.
username_0: Or more generally, make `options` a drop-in replacement for `process` so you can debug program-level things, plus hand off a `process`-looking thing to `prolific`, which can hook shutdown events to write out parting messages.<issue_closed>
<issue_start><issue_comment>Title: Karma-coverage does not show test coverage but function call coverage
username_0: When I run the example from http://ariya.ofilabs.com/2013/10/code-coverage-of-jasmine-tests-using-istanbul-and-karma.html the coverage is 100%. When I add the self() function into sqrt.js:
```
var My = {
  sqrt: function(x) {
    if (x < 0) throw new Error("sqrt can't work on negative number");
    return Math.exp(Math.log(x)/2);
  },
  self: function(x) {
    return x;
  }
};
My.self();
```
the report shows 100% coverage although there is no test for the self function.
<issue_comment>username_1: That sounds like an issue with istanbul, rather than with this tool.
<issue_comment>username_2: I have the same issue and I'm not able to spot the problem, which is very annoying.
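The behavior described in this thread is not specific to istanbul: line-based coverage tools count any statement that executes during the measured run, including calls made at module load time. A minimal Python equivalent (file name hypothetical) shows the same effect with coverage.py:

```python
# mymodule.py -- hypothetical file demonstrating load-time "coverage"
def self_fn(x):
    return x

# This call runs when the module is loaded/executed, so the function
# body counts as covered even though no test ever asserts on it.
self_fn(42)
```

Running `coverage run mymodule.py && coverage report` shows 100% for the file despite there being no tests at all, which mirrors what istanbul does with the top-level `My.self()` call above.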
<issue_start><issue_comment>Title: Web UI
username_0:
<issue_comment>username_0: cc @username_1
<issue_comment>username_1: Can we get a quick list of features that we would want in a web interface (roughly sorted in priority, or grouped by possible releases), and check what we already have capabilities for in the API and what would need to be added?

On a more practical note, should the UI be
- in this repo?
- powered by the API, or server-side rendered?
- provided by a plugin?

My feeling is: almost entirely through the API (single page app style) and delivered through a plugin. This could however complicate the repo as the front-end would need all the associated stuff that goes along with JS development, so maybe a separate front-end focused repo would be appropriate? Thoughts?
<issue_comment>username_1: One other question: what language/framework? I feel like JS+React is pretty uncontroversial, but it might be nice to do it in Elm. Distribution compatibility will be less of a concern than with Python as we'd probably have to compile the front-end in some form anyway.
<issue_comment>username_0: Putting the web UI in a separate repo and using the plugin mechanism sounds like an excellent idea, because:
* Logical separation is good (it restricts us from calling into it from the main repo too),
* It keeps the `jacquard` install small on machines that don't need the web UI,
* It forces us to keep the plugin mechanism good.

Going to discuss this in more detail (plus choice of FE framework) with @username_1 IRL.
<issue_comment>username_0: @username_1 and I had a brief discussion about this IRL, and concluded that we probably want to build this as a client-side web app rather than classic server-side rendering with forms. The latter approach would be simpler, but the former approach means that we commit to making sure that the web API is high-quality, and gives us more flexibility on what we want to do with the UI.
<issue_start><issue_comment>Title: fix: add href to DOCS in menu username_0: Fixes hoodiehq/hood.ie#193 <issue_comment>username_1: Thanks for this! Your code works as intended :) Looks like we have a failure on the testing side of things because Bower cannot find the right version of angular. I don't think this is a problem with your work though! I'll ask in our [Slack chat](http://hood.ie/chat) what the best route is to fix this bug! <issue_comment>username_2: @username_3 didn't we fix this? https://travis-ci.org/hoodiehq/faq/builds/78372460 <issue_comment>username_3: @username_2 I fixed it on the staging repo, will apply it to this repo. <issue_comment>username_3: https://github.com/hoodiehq/faq/commit/e8f87ed5b8a2ec7c105844bd3c96335c470a25b9 <issue_comment>username_1: Awesome, thanks for this @username_0 (& Zoe & Stephan!):tada: <issue_comment>username_2: Cool, thanks @username_3 <3 I was just wondering :)
<issue_start><issue_comment>Title: Add mock status code
username_0: for example, mock status code = 404
<issue_comment>username_1: I think I've got something workable (works for my tests): https://github.com/ebsco/superagent-mocker/tree/mockable-status-codes
<issue_comment>username_2: Super! Could you make a pull request?
<issue_comment>username_3: @username_1 :+1: to a pull request to support this
<issue_comment>username_4: Was this merged?
<issue_start><issue_comment>Title: "Selection" broken in v0.7.4 username_0: It seems that something in v0.7.4 (possibly earlier, I haven't checked) broke the ability to "select" one or more annotations by clicking on the "bucket bar" or on highlights. ## Steps to reproduce 1. Go to a page with many annotations, such as [this one](https://hypothes.is/blog/annotating-ocred-pdfs/). 2. Click on a highlight, or on a bucket bar arrow. ## Expected result The sidebar narrows focus on just the "selected" annotations. ## Actual result All annotations remain visible, even though the "Clear selection" text at the top of the sidebar appears. <issue_comment>username_1: The logic to check whether a thread should be shown was altered in 44edb50b18f5cd1a78aae32384551c74e6c19304 to remove the check of whether the annotation is in the selected set or not. Based on the commit message, that looks like a mistake to me.<issue_closed>
<issue_start><issue_comment>Title: Cookies size limitations when storing JSON objects
username_0: Hello All, I'm trying to use the library to store a cookie with an array of objects. My arrays are actually quite big (each object has 16 fields). I noticed when testing that it would only store up to 6 objects in my cookie. Is there any way to increase the size limit for a cookie? Or is what I am trying to do not possible?
<issue_comment>username_1: Please, take a look here: https://github.com/js-cookie/js-cookie/issues/37#issuecomment-217620753. I have added a new bullet point with this issue there.
<issue_comment>username_0: I see.. thanks!<issue_closed>
<issue_start><issue_comment>Title: make __VERIFIER funcs more efficient
username_0: klee loses performance when using a lot of symbolic objects. Make the value returned from __VERIFIER_nondet functions static, so that we always return the same value without creating a new symbolic object (the value is copied, so it's fine). We should do similar stuff for uninitialized memory in Prepare.cpp
<issue_comment>username_0: Ok, we have the stuff for Prepare.cpp, so we could reuse the variables from that - but how to interchange them between modules? Some specific naming?<issue_closed>
<issue_comment>username_0: This would not work...
<issue_start><issue_comment>Title: JSound recursion throws an unknown type error
username_0: Example:
```
{
  "$namespace" : "http://28.io/test",
  "$types" : [
    {
      "$name" : "recursive-type",
      "$kind" : "object",
      "$content" : {
        "Children" : {
          "$type" : {
            "$kind" : "array",
            "$content" : [ "recursive-type" ]
          }
        }
      }
    }
  ]
}
```
throws:
```
"<.../zorba/uris/core/3.1.0/io/jsound/modules/jsound.module>:305,10: error [jse:UNKNOWN_TYPE]: \"Q{http://28.io/test}recursive-type\": unknown type"
```
<issue_start><issue_comment>Title: Library fails while constructing ErrorResponse
username_0: I was trying to contact my Minio instance specifying a wrong address earlier, and it seems like error reporting is broken. `ErrorResponse.parseXml` throws an exception about the XML being malformed. Stack trace:
```
! org.xmlpull.v1.XmlPullParserException: only whitespace content allowed before start tag and not 4 (position: START_DOCUMENT seen 4... @1:1)
! at org.xmlpull.mxp1.MXParser.parseProlog(MXParser.java:1519) ~[xpp3-1.1.4c.jar:na]
! at org.xmlpull.mxp1.MXParser.nextImpl(MXParser.java:1395) ~[xpp3-1.1.4c.jar:na]
! at org.xmlpull.mxp1.MXParser.next(MXParser.java:1093) ~[xpp3-1.1.4c.jar:na]
! at com.google.api.client.xml.Xml.parseElementInternal(Xml.java:245) ~[google-http-client-xml-1.20.0.jar:1.20.0]
! at com.google.api.client.xml.Xml.parseElement(Xml.java:222) ~[google-http-client-xml-1.20.0.jar:1.20.0]
! at io.minio.messages.XmlEntity.parseXml(XmlEntity.java:62) ~[minio-1.0.1.jar:1.0.1]
! at io.minio.messages.ErrorResponse.parseXml(ErrorResponse.java:134) ~[minio-1.0.1.jar:1.0.1]
! at io.minio.messages.ErrorResponse.<init>(ErrorResponse.java:60) ~[minio-1.0.1.jar:1.0.1]
! at io.minio.MinioClient.execute(MinioClient.java:522) ~[minio-1.0.1.jar:1.0.1]
! at io.minio.MinioClient.executeGet(MinioClient.java:635) ~[minio-1.0.1.jar:1.0.1]
! at io.minio.MinioClient.getObject(MinioClient.java:836) ~[minio-1.0.1.jar:1.0.1]
! at io.minio.MinioClient.getObject(MinioClient.java:769) ~[minio-1.0.1.jar:1.0.1]
```
<issue_comment>username_1: I see `errorResponse = new ErrorResponse(new StringReader(errorXml));` fails due to wrong XML format/content from the HTTP body. Is it possible to provide what HTTP body content it got?
<issue_comment>username_0: Of course, if you can suggest how to retrieve it. `minio-java` doesn't implement any kind of logging, right?
<issue_comment>username_2: @username_0 - there is a way to enable it: `s3Client.traceOn(stream)`
<issue_comment>username_0: Thank you! I will investigate the issue ASAP and report here.
<issue_comment>username_2: Checking back on this: were you able to investigate this further?
<issue_comment>username_0: Sorry, not yet. I've been very busy with my thesis in the last few months, and completely forgot about the issue. I will try to do it in the next few days.
<issue_comment>username_2: No problem @username_0, all the best on your thesis :-)<issue_closed>
<issue_comment>username_2: Closing this since there hasn't been any update, please re-open if it happens again.
<issue_start><issue_comment>Title: Prevent single dashes from being stripped out
username_0: A recent example was a post titled "Ruby-Like `split` in Elixir" being slugified to `rubylike-split-in-elixir`. The dash between `ruby` and `like` was being erroneously stripped out. This commit fixes that by not stripping out any dashes in the first `gsub` of `#slugified_title`.
<issue_comment>username_1: Great catch! Thank you 👍
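For illustration, here is a Python sketch of the two-pass behavior this fix describes; the real implementation is Ruby inside `#slugified_title`, so this is an analogue rather than the project's code. The first pass drops disallowed characters but keeps dashes, and only the second pass turns whitespace runs into dashes.

```python
import re

def slugify(title):
    # First pass: lowercase and drop everything except word characters,
    # whitespace, and dashes (so the dash in "Ruby-Like" survives).
    cleaned = re.sub(r"[^\w\s-]", "", title.lower())
    # Second pass: collapse runs of whitespace/dashes into single dashes.
    return re.sub(r"[-\s]+", "-", cleaned).strip("-")

print(slugify("Ruby-Like `split` in Elixir"))  # ruby-like-split-in-elixir
```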
<issue_start><issue_comment>Title: Add Search Bar to CustomNavBar
username_0: Adds more functionality to the `CustomNavBar` as well as a way to search a ListView (also a few refactors & cleanups).

## Please verify the following:
- [X] Everything works on iOS/Android
- [X] `ignite-base` **ava** tests pass
- [X] `fireDrill.sh` passed

## Describe your PR

#### iOS
![ignite - search](https://cloud.githubusercontent.com/assets/10098988/20114426/ee8404e6-a5b9-11e6-8953-f32908966c9f.gif)

#### Android
![ignite - search-android](https://cloud.githubusercontent.com/assets/10098988/20114439/f63bc192-a5b9-11e6-9e5a-01e02dea6809.gif)
<issue_comment>username_1: Any chance you could rebase this? It's hard to see what's changed.
<issue_comment>username_2: LGTM 👍 Do you think we should do a generator for this option?
<issue_comment>username_2: ![yay](https://i.imgur.com/BEszBAg.gif)
<issue_start><issue_comment>Title: Split HCs into two to allow different checks to be run separately
username_0: I've separated the contents into two different checks to allow different sets of checks to be run independently. The main reason for doing this is to allow EG databases to ignore the stable ID mapping and RefSeq checks, which are generally not relevant for them. I've run these on human and they've not changed the outcome (there is still 1 failure for SeqRegionsTopLevelRefSeq, which was there in SeqRegionsTopLevel anyway).
<issue_comment>username_1: Thanks, looks all right to me.
<issue_start><issue_comment>Title: Integrated/External Console debugging isn't working properly.
username_0: ## Environment data
VS Code version: 1.7.2
Python Extension version: 0.5.5
Python Version: 3.5.2
OS and version: windows 10 build 14393

Your launch.json (if dealing with debugger issues):
```json
{
    "name": "Integrated Terminal/Console",
    "type": "python",
    "request": "launch",
    "stopOnEntry": true,
    "pythonPath": "${config.python.pythonPath}",
    "program": "${file}",
    "cwd": "null",
    "console": "integratedTerminal",
    "debugOptions": [
        "WaitOnAbnormalExit",
        "WaitOnNormalExit"
    ]
},
{
    "name": "External Terminal/Console",
    "type": "python",
    "request": "launch",
    "stopOnEntry": true,
    "pythonPath": "${config.python.pythonPath}",
    "program": "${file}",
    "cwd": "null",
    "console": "externalTerminal",
    "debugOptions": [
        "WaitOnAbnormalExit",
        "WaitOnNormalExit"
    ]
}
```
Your settings.json:
```json
{
    "editor.fontSize": 17,
    "python.pythonPath": "python3"
}
```

## Logs
Output from the `Python` output panel:
```
[when debugging with integrated console]
C:\Users\name\Documents\python\hello>cd null && cmd /C "set "PYTHONIOENCODING=UTF-8" && python3 C:\Users\name\.vscode\extensions\donjayamanne.python-0.5.5\pythonFiles\PythonTools\visualstudio_py_launcher.py null 10369 34806ad9-833a-4524-8cd6-18ca4aa74f14 WaitOnAbnormalExit,WaitOnNormalExit c:\Users\name\Documents\python\hello\hello.py "
The system cannot find the path specified.
```
Output from the `Console window` (Help->Developer Tools menu):
```
[when debugging with external console]
(I don't know the exact message in English. I translated the message into English.)
debug adapter process is terminated unexpectedly.
```

## Actual behavior
The messages described above are shown and the debugging task isn't started. When trying to debug with the integrated console twice, the debugging task seems to start (the debug control panel appears) but actually does not, and it can be stopped.

## Expected behavior
Start the debugging task.

## Steps to reproduce:
- try Integrated/External Console debugging
<issue_comment>username_1: Please change
```json
"cwd": "null",
```
to
```json
"cwd": "${workspaceRoot}",
```<issue_closed>
<issue_start><issue_comment>Title: Get rid of some naked `get_db`s
username_0: Misc cleanup I came across while looking into the domains migration. We still have a bunch of `get_db`s left in the codebase, but these were ones I was able to figure out without any problem.

It seems like there are three categories of `get_db`s:
1. I have an id and I don't know what kind of doc it corresponds to
2. `get_db().get('HARDCODED_DOC_ID')` - standalone metadata documents
3. Things that should immediately be converted to a `Document.get_db()`

The first of those will become more problematic as we continue to split up the dbs. There are already some ids that I think can refer to users, but that only check the "main" db. I don't know of a good solution to this other than to simply abstract "look up this id in all of these databases" (sketched below).

@nickpell @dannyroberts FYI
<issue_comment>username_1: I think we should basically never do 1 except for the doc in couch / doc in es admin utils, which seem fine to do the way you're describing
<issue_comment>username_1: (lgtm but didn't review too closely so will let someone else merge)
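A minimal sketch of the "look up this id in all of these databases" abstraction mentioned above, assuming couchdbkit-style document classes; the helper name and the candidate class list are hypothetical, not existing HQ code.

```python
from couchdbkit.exceptions import ResourceNotFound

def get_doc_from_any_db(doc_id, doc_classes):
    """Try each candidate Document class's db until the id resolves.

    doc_classes might be e.g. [CommCareUser, Domain, Application]
    (hypothetical ordering); each class knows its own database via
    Document.get_db(), i.e. category 3 from the list above.
    """
    for cls in doc_classes:
        try:
            return cls.get_db().get(doc_id)
        except ResourceNotFound:
            continue
    raise ResourceNotFound("%s not found in any candidate db" % doc_id)
```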
<issue_start><issue_comment>Title: CuesElement does not exist
username_0: Hi there, I'm trying to play back content generated by following along with [this](http://wiki.webmproject.org/adaptive-streaming/instructions-to-playback-adaptive-webm-using-dash) tutorial. I run into the following error:
`CuesElement does not exist. webm_segment_index_parser.js:121`
Is this an element that is missing from my mpd manifest?
```xml
<?xml version="1.0" encoding="UTF-8"?>
<MPD xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="urn:mpeg:DASH:schema:MPD:2011" xsi:schemaLocation="urn:mpeg:DASH:schema:MPD:2011" type="static" mediaPresentationDuration="PT10.033S" minBufferTime="PT1S" profiles="urn:webm:dash:profile:webm-on-demand:2012">
  <Period id="0" start="PT0S" duration="PT10.033S" >
    <AdaptationSet id="0" mimeType="video/webm" codecs="vp9" bitstreamSwitching="true" subsegmentAlignment="true" subsegmentStartsWithSAP="1">
      <Representation id="0" bandwidth="124382" width="160" height="90">
        <BaseURL>video_160x90_250k.webm</BaseURL>
        <SegmentBase indexRange="132303-132369">
          <Initialization range="0-429" />
        </SegmentBase>
      </Representation>
      <Representation id="1" bandwidth="266837" width="320" height="180">
        <BaseURL>video_320x180_500k.webm</BaseURL>
        <SegmentBase indexRange="313960-314026">
          <Initialization range="0-430" />
        </SegmentBase>
      </Representation>
      <Representation id="2" bandwidth="579451" width="640" height="360">
        <BaseURL>video_640x360_750k.webm</BaseURL>
        <SegmentBase indexRange="797366-797432">
          <Initialization range="0-431" />
        </SegmentBase>
      </Representation>
      <Representation id="3" bandwidth="767450" width="640" height="360">
        <BaseURL>video_640x360_1000k.webm</BaseURL>
        <SegmentBase indexRange="1055472-1055538">
          <Initialization range="0-431" />
        </SegmentBase>
      </Representation>
      <Representation id="4" bandwidth="1110647" width="1280" height="720">
        <BaseURL>video_1280x720_500k.webm</BaseURL>
        <SegmentBase indexRange="1544015-1544081">
          <Initialization range="0-431" />
        </SegmentBase>
      </Representation>
    </AdaptationSet>
    <AdaptationSet id="1" mimeType="audio/webm" codecs="vorbis" audioSamplingRate="44100" bitstreamSwitching="true" subsegmentAlignment="true" subsegmentStartsWithSAP="1">
      <Representation id="5" bandwidth="60219">
        <BaseURL>audio_128k.webm</BaseURL>
        <SegmentBase indexRange="86666-86731">
          <Initialization range="0-4700" />
        </SegmentBase>
      </Representation>
    </AdaptationSet>
  </Period>
</MPD>
```
<issue_comment>username_0: hmm, angel one [doesn't seem to have](https://github.com/google/shaka-player/blob/master/assets/angel_one.mpd) Cues elements either.
<issue_comment>username_1: Hi Nick, The CuesElement is part of the WebM content itself, not the manifest. It's an index that tells the library where individual segments are in the file. For each Representation element in your manifest, there is a SegmentBase element with an indexRange property. This is supposed to be the location of the Cues element within the WebM file. So it would seem that the byte ranges in your manifest don't agree with your files.
<issue_comment>username_0: @username_1, can you reproduce, with this input file and script: https://www.dropbox.com/s/vw0y6823y6ih5z3/short.mp4?dl=0
```
FILE=short.mp4
VP9_DASH_PARAMS="-tile-columns 4 -frame-parallel 1"

ffmpeg -i ${FILE} -c:v libvpx-vp9 -s 160x90 -b:v 250k -keyint_min 150 -g 150 \
  ${VP9_DASH_PARAMS} -an -f webm -dash 1 video_160x90_250k.webm
ffmpeg -i ${FILE} -c:v libvpx-vp9 -s 320x180 -b:v 500k -keyint_min 150 -g 150 ${VP9_DASH_PARAMS} -an -f webm -dash 1 video_320x180_500k.webm
ffmpeg -i ${FILE} -c:v libvpx-vp9 -s 640x360 -b:v 750k -keyint_min 150 -g 150 ${VP9_DASH_PARAMS} -an -f webm -dash 1 video_640x360_750k.webm
ffmpeg -i ${FILE} -c:v libvpx-vp9 -s 640x360 -b:v 1000k -keyint_min 150 -g 150 ${VP9_DASH_PARAMS} -an -f webm -dash 1 video_640x360_1000k.webm
ffmpeg -i ${FILE} -c:v libvpx-vp9 -s 1280x720 -b:v 1500k -keyint_min 150 -g 150 ${VP9_DASH_PARAMS} -an -f webm -dash 1 video_1280x720_500k.webm
ffmpeg -i ${FILE} -c:a libvorbis -b:a 128k -vn -f webm -dash 1 audio_128k.webm

ffmpeg \
  -f webm_dash_manifest -i video_160x90_250k.webm \
  -f webm_dash_manifest -i video_320x180_500k.webm \
  -f webm_dash_manifest -i video_640x360_750k.webm \
  -f webm_dash_manifest -i video_640x360_1000k.webm \
  -f webm_dash_manifest -i video_1280x720_500k.webm \
  -f webm_dash_manifest -i audio_128k.webm \
  -c copy -map 0 -map 1 -map 2 -map 3 -map 4 -map 5 \
  -f webm_dash_manifest \
  -adaptation_sets "id=0,streams=0,1,2,3,4 id=1,streams=5" \
  manifest.mpd
```
I'm curious now if this is a bug in ffmpeg mpd generation, webm generation, or if I'm doing something wrong?
<issue_comment>username_0: nvm! user error, missed a damn `\` for the shell script! I was doing something stupid, thanks for your time.<issue_closed>
<issue_comment>username_1: Happy to help!
<issue_comment>username_0: Interestingly, I now get this error for 2 of the 5 different resolutions.
<issue_comment>username_0: @username_1, if you have time, could you look at this again. I've worked on improving my reported steps to reproduce, in this repo: https://github.com/username_0/mpd-dash

STR:
1. `git clone https://github.com/username_0/mpd-dash.git && cd mpd-dash`
2. `./download_source.sh`
3. `./test.sh`, wait for transcodes to finish, takes about a minute on a mid-2012 MBP
4. serve shaka player from localhost, browse to it
5. change test manifest to custom
6. set custom manifest url to the served manifest.mpd file
7. click load stream
8. right click on video, check Loop (otherwise video is too short, 10s)
9. click cycle video tracks

Expected: Player cycles through the various streams.

Actual: Numerous `CuesElement does not exist.` errors; sometimes the stream doesn't start when clicking load stream, but will after subsequent clicks without a page reload. Only the 320x180 stream loads and plays.

This is testing with Opera Beta 29.0.1795.21 and ffmpeg version 2.5.2.
<issue_comment>username_0: because there are so many dependencies, I'm not sure if this is a bug in:
* how I'm using ffmpeg, starting with an mp4 -> webm
* ffmpeg's mpd generator
* ffmpeg's webm encoder
* ffmpeg's mp4 decoder
* shaka's CuesElement parser
* Opera/Blink's MSE implementation
<issue_comment>username_1: I am unable to reproduce this error. When I run your script, I get valid WebM streams with Cues elements, and the byte ranges specified in the manifest are correct. Can you please send me the exact webm and mpd files you have produced with these steps? If ffmpeg were to blame, I might fail to repro because I'm using a different version of ffmpeg.
<issue_comment>username_0: @username_1, here's the folder: https://www.dropbox.com/sh/kmk5nhwj824xy6q/AAADCZ_zpiWkxpxZ79yDyyxAa?dl=0. What version of ffmpeg are you using? I'm not building ffmpeg from source, just installing from homebrew (rather, homebrew might be building from source. I don't think they maintain mirrors to prebuilt binaries).
<issue_comment>username_1: I periodically update my build of ffmpeg from source. But ffmpeg doesn't seem to be the problem, since I am unable to reproduce with these exact files you sent me. I've also double-checked the manifest against the WebM files themselves, and the byte ranges are accurate. So now we're down to either a Shaka Player bug, or a problem with your browser or your web server. To check if it's the browser, can you try reproducing in Chrome instead of Opera?
<issue_comment>username_0: Interesting, broken in Opera Beta 29.0.1795.21 but not Chrome 43. Chrome 41 throws [an error](https://pastebin.mozilla.org/8826842) but continues playing.
<issue_comment>username_1: That "endOfStream" error you're referencing from Chrome 41 is a red herring. That's a Shaka Player bug which is unrelated. (Filed as #43.) I just tried in Opera 28 and 29 on Linux, and I'm unable to reproduce your WebM error on either of those versions, either with the files you sent me or with the ones I produced myself. I'm going to label this "working as intended". You may want to make sure your web server is serving the correct files and byte ranges as requested by the client. I don't see any evidence that the WebM parser in Shaka is to blame.<issue_closed>
<issue_comment>username_0: unbelievable, switching the static file server did the trick. I was using `python -m SimpleHTTPServer`; now I'm using [this](https://github.com/indexzero/http-server) and not seeing any issues. If you can repro (you should have python on linux), maybe it would be worthwhile to add a note to the readme that python's SimpleHTTPServer has issues?
<issue_comment>username_0: Thanks for your assistance @username_1, I appreciate it!
<issue_comment>username_1: I don't think there's a place in the docs for listing non-HTTP-compliant or otherwise broken web servers, in my opinion. That's somewhat outside the scope of the project. If you can figure out what SimpleHTTPServer doesn't support correctly, maybe they would like to hear about it in their issue tracker. My first guess would be that it either ignores or incorrectly implements support for the "Range" request header. But that's just a guess. Good luck!
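Since the thread ends on a guess about the `Range` header, here is a quick way to probe a server for it; the URL and byte range are placeholders taken from the manifest above. A compliant server answers `206 Partial Content` with a `Content-Range` header, while Python's stdlib `SimpleHTTPServer`/`http.server` handler ignores `Range` and sends the whole file with a `200`, which is exactly what breaks `indexRange`-based players.

```python
import urllib.request

# Ask for just the Cues index of one representation (byte range from the MPD).
req = urllib.request.Request(
    "http://localhost:8000/video_160x90_250k.webm",
    headers={"Range": "bytes=132303-132369"},
)
with urllib.request.urlopen(req) as resp:
    # 206 plus a Content-Range header: the server supports byte ranges.
    # 200 with no Content-Range: it does not, and DASH playback will misbehave.
    print(resp.status, resp.headers.get("Content-Range"))
```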
<issue_start><issue_comment>Title: why my xaml tip cefsharp.core can't load ?
username_0: System.IO.FileNotFoundException: Could not load file or assembly 'CefSharp.Core, Version=45.0.0.0, Culture=neutral, PublicKeyToken=40c4b6fc221f4138' or one of its dependencies. The system cannot find the specified file.
<issue_comment>username_1: @username_0 Please fill in the bug report and post a comment to let us know when you're done.
<issue_comment>username_1: @username_0 Please provide additional information.
<issue_comment>username_0: @username_1 thanks, I had resolved it.<issue_closed>
<issue_comment>username_2: How can I solve this problem? I have the same problem. plz help me. thx
<issue_start><issue_comment>Title: Can "Content" tab of reports be full text indexed? username_0: ## Use case We would like to search the content in the content tab when global search. (ie. "Content" tab can be searched in global search ) ## Current Workaround N/A ## Proposed Solution "Content" tab of reports can be full text indexed. ## Additional Information <!-- Any additional information, including logs or screenshots if you have any. --> ## If the feature request is approved, would you be willing to submit a PR? Yes
<issue_start><issue_comment>Title: `gulp-imagemin` with `gulp-cache` 'Unhandled rejection TypeError: Cannot read property 'kill' of undefined'
username_0: I'm running `gulp-imagemin` with `gulp-cache`, like this:
```javascript
// gulpfile.js
var gulp = require('gulp'),
    cache = require('gulp-cache'),
    imagemin = require('gulp-imagemin');

gulp.task('img', function() {
  gulp
    .src('./img/**')
    .pipe(cache(imagemin()))
    .pipe(gulp.dest('./site/img/'));
});
```
If I run it like this (with `gulp-cache`), I get this error:
```bash
$ gulp img
[21:26:36] Using gulpfile ./gulpfile.js
[21:26:36] Starting 'img'...
[21:26:36] Finished 'img' after 10 ms
Unhandled rejection TypeError: Cannot read property 'kill' of undefined
    at module.exports (./node_modules/gulp-imagemin/node_modules/imagemin-jpegtran/node_modules/exec-buffer/node_modules/execa/index.js:80:24)
    at ./node_modules/gulp-imagemin/node_modules/imagemin-jpegtran/node_modules/exec-buffer/index.js:31:15
```
If I remove the `cache`, though,
```javascript
// gulpfile.js
gulp.task('img', function() {
  gulp
    .src('./img/**')
    .pipe(imagemin())
    .pipe(gulp.dest('./site/img/'));
});
```
it works:
```bash
$ gulp img
[21:36:09] Using gulpfile ./gulpfile.js
[21:36:09] Starting 'img'...
[21:36:09] Finished 'img' after 8.97 ms
```
What do I have to change or add to make `gulp-imagemin` work with `gulp-cache`?
<issue_comment>username_1: I think this may be related to the fix I listed here: https://github.com/sindresorhus/gulp-imagemin/issues/173#issuecomment-230737023<issue_closed>
<issue_comment>username_2: Thanks, @username_1. Probably https://github.com/sindresorhus/gulp-imagemin/issues/173#issuecomment-230737023 is the answer.
<issue_start><issue_comment>Title: Filesystem based WorkflowStore which resubmits previously running workflows. Closes #1118
username_0:
<issue_comment>username_1: I don't understand why this is fs based, why not wait a day or two and do it with a db?
<issue_comment>username_2: :+1:
<issue_comment>username_1: Disregard my previous comment, can't delete on phone. I would say to make sure zero time is spent reviewing the fs part though
<issue_comment>username_3: LGTM for what it is 👍
<issue_start><issue_comment>Title: Add options for popup and focus
username_0: New option allows the tab to be moved to a popup window rather than a normal browser window. New option allows focus to remain on the new window. Support pop-in of a window back to a tab in the original window.
<issue_comment>username_1: Thanks for this! I've merged your changes, but have altered things slightly. I've made the popup window a separate key command, as I like having both available without changing options.
<issue_start><issue_comment>Title: Use native Rails' `routes.draw` instead of custom `draw_routes`
username_0: * Solidus manages its own route list, but Rails' RouteSet should be good enough.
<issue_comment>username_1: This looks good to me, thanks, but I wasn't really clear on why we needed this in the first place. @username_2?

For the record, I pulled it down into the sandbox, edited routes, and everything worked as expected.
<issue_comment>username_2: The original reason this was added was related to rails/rails#12367; we'll have to see if that is still an issue in extensions.
<issue_comment>username_2: @username_0 Thanks for your patience. I tried this branch on a few extensions and couldn't spot any changes or issues :shipit:. Hopefully whatever this code was working around was fixed at some point since rails 4.0. :+1: This is great. The backwards compatibility with deprecation warnings is especially appreciated.
<issue_start><issue_comment>Title: String and bool types are not supported under generic object type
username_0: When using Deserialize<T> with object as the type, valid JSON strings and bools (**true**, **false**) are not handled. They return null when they should return the appropriate JSON values.
<issue_comment>username_1: Thanks I will look into it. Can you post a sample if possible? Thanks,<issue_closed>
<issue_start><issue_comment>username_0:
<issue_comment>username_0: ### Introduction to GitHub flow

Now that you're familiar with issues, let's use this issue to track your path to your first contribution. People use different workflows to contribute to software projects, but the simplest and most effective way to contribute on GitHub is the GitHub flow.

:tv: [Video: Understanding the GitHub flow](https://www.youtube.com/watch?v=PBI2Rz-ZOxU)

<hr>
<h3 align="center">Read below for next steps</h3>
<issue_start><issue_comment>Title: Error in index_expanded_line_numbers()
username_0: Tried to use m4 and it bombs on line 792. I first tried a normal .psm file with the --m4 option; it failed. Then I tried it on load.psm4 in the .\test\asm directory; it failed too. The problem(s) are that active_file = None and source_lines = {}.
```
Traceback (most recent call last):
  File "C:\Tools\opbasm-1.3\opbasm.py", line 2530, in <module>
    main()
  File "C:\Tools\opbasm-1.3\opbasm.py", line 2381, in main
    for fname in asm.process_includes():
  File "C:\Tools\opbasm-1.3\opbasm.py", line 810, in process_includes
    pp_source_file = self.preprocess_with_m4(source_file)
  File "C:\Tools\opbasm-1.3\opbasm.py", line 687, in preprocess_with_m4
    self.index_expanded_line_numbers(m4_result, pp_source_file, source_file)
  File "C:\Tools\opbasm-1.3\opbasm.py", line 792, in index_expanded_line_numbers
    index.append((cur_line, source_lines[active_file], active_file))
KeyError: None
```
<issue_comment>username_1: I can't reproduce this issue. Can you provide an exact command line, the directory you're executing from, and the Python version that triggers the problem for you.

A crude fix would be to change line 782 to "active_file = source_file", but that introduces some subtle bugs with tracking line numbers in included files. In all my test cases m4 puts out a syncline entry as the first line of the expanded source code, so the else condition is never reached before active_file has been set. The regex must be failing for some unexpected reason.

I have added a --verbose command line option to the Github repo head, which is at version 1.3.2. If you install that and run with the verbose option on, it will print the exact m4 command line to the output. Can you manually run that command, replacing the final "-" with the path to load.psm4, and capture the output to a file. Version 1.3.2 will also terminate with an error message when active_file is None. Please copy the result from that.

You can install direct from the repo using pip without needing git installed:
```
pip install --upgrade https://github.com/username_1/opbasm/tarball/master
```
<issue_comment>username_0: It's been quite a while since I visited this, but since you were kind enough to reply I looked into it. I was using opbasm by calling the python script, a la: "c:opbasm.py -i i2c_pb.psm -t ROM_form.vhd -6 -m 2048 -n i2c_rom". Not sure if that was right, but in trying to figure out where I was, I see opbasm.exe in the python scripts directory, so I substituted that for opbasm.py and voila, it compiled without errors. That probably means it was operator error on my part. Not sure, but good enough for now.

I think I asked you already, but just in case, I'll ask again: would it be possible to support Mediatronix syntax with opbasm? I find it much nicer than the Xilinx syntax, and that's what all my current code is written in.

Thanks - Woody
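A simplified sketch of the defensive variant of the "crude fix" discussed above, not the actual opbasm code: default `active_file` to the file being preprocessed so that m4 output missing a leading syncline can never index into `source_lines` with `None`. The syncline regex follows m4's `#line NUM "FILE"` output format; the function and variable names are illustrative.

```python
import re

SYNCLINE_RE = re.compile(r'^#line (\d+) "(.*)"')  # m4 --synclines format

def index_expanded_lines(m4_lines, source_file):
    active_file = source_file        # fallback instead of None
    source_lines = {source_file: 0}  # never starts out empty
    index = []
    for cur_line, line in enumerate(m4_lines, 1):
        m = SYNCLINE_RE.match(line)
        if m:
            # Syncline: switch to the named file and resync its counter.
            active_file = m.group(2) or source_file
            source_lines[active_file] = int(m.group(1)) - 1
        else:
            # Ordinary expanded line: attribute it to the active file.
            source_lines.setdefault(active_file, 0)
            source_lines[active_file] += 1
            index.append((cur_line, source_lines[active_file], active_file))
    return index
```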
<issue_start><issue_comment>Title: Create gifs/small videos for finding handles on the websites.
username_0:
<issue_comment>username_1: Is this some sort of GIPHY integration?
<issue_comment>username_0: @username_1: It does not require any GIPHY integration. All you need to do is take screenshots of logging into a profile site (CodeChef, Codeforces, HackerEarth, HackerRank, Spoj) and, through this gif, help users locate which part exactly is the "handle". You can merge the screenshots into a gif using [this](http://gifmaker.me/)
<issue_comment>username_1: Is there some sort of wireframe or rough sketch? Even a series of actions/events will help.
<issue_comment>username_0: Every site has its own way. Usually the profile page url will contain the handle itself, so you can edit the screenshots to circle the handles in the url. One example gif could be:

For codeforces.gif -> a combination of 2 screenshots
1. Register page - http://codeforces.com/register
2. After registering, open the profile page - http://codeforces.com/profile/username_0

Now in the second image, highlight or circle or whatever around the handle (username_0). Similarly for all the sites.
<issue_comment>username_2: can I work on this issue?
<issue_comment>username_0: @username_2: Sure. Feel free to send a PR :) The above comments should be more than descriptive. Reply back in case you want any further clarifications.
<issue_comment>username_2: @username_0, upon creation of the GIFs, where do I integrate the created files in the project? (Newbie here, please bear with me if the questions sound absurd.)
<issue_comment>username_0: @username_2 not a problem at all. You need to keep the gifs along with the others in `static/images/gifs/`. Where you need to add them is on two pages:
1. Register page (in code [here](https://github.com/stopstalk/stopstalk-deployment/blob/master/controllers/default.py#L655)) (You will need to read a bit [here](http://web2py.com/books/default/chapter/29/09/access-control?search=auth#Authentication))
2. Add custom friend page (in code [here](https://github.com/stopstalk/stopstalk-deployment/blob/master/controllers/user.py#L742))

I have not decided on any fixed layouts as in where the images should be kept on the page, but I am open to suggestions such that it matches the UI of the website.
<issue_comment>username_2: Thank you @username_0, I will work on this and will turn to you if there's anything.
<issue_comment>username_2: @username_0, can I use a dummy account to guide the location of the handle in the respective sites?
<issue_comment>username_0: Well yes, but you have to blur the name/institute/.. anyway, so it wouldn't matter which account you use.
<issue_comment>username_2: @username_0, please review.<issue_closed>
<issue_start><issue_comment>Title: fixed messageActionBar and avatar layering
username_0:
---
This PR currently has no changelog labels, so will not be included in changelogs. A reviewer can add one of: `T-Deprecation`, `T-Enhancement`, `T-Defect`, `T-Task` to indicate what type of change this is, or add `Type: [enhancement/defect/task]` to the description and I'll add them for you.
<issue_comment>username_1: @username_0 are you fixing a known issue? If so, please add that information to the description at the top of this page.
<issue_comment>username_1: @username_0 please read https://github.com/matrix-org/matrix-js-sdk/blob/master/CONTRIBUTING.md and make sure you sign off your commits.
<issue_comment>username_1: Closing in favour of #8190
<issue_start><issue_comment>Title: CDAP-3445 Adding support for Google Custom Search Engines
username_0: This replaces the closed PR https://github.com/caskdata/cdap/pull/7630

Changes from using the embedded JavaScript to using a Google Custom Search Engine.

Running as a Quick Build: http://builds.cask.co/browse/CDAP-DQB237-2

Pages of interest (when the run completes):
- Main index page: http://builds.cask.co/artifact/CDAP-DQB237/shared/build-2/Docs-HTML/4.1.0-SNAPSHOT/en/index.html
- Search page: http://builds.cask.co/artifact/CDAP-DQB237/shared/build-2/Docs-HTML/4.1.0-SNAPSHOT/en/search.html
- Search page, searching for "flows": http://builds.cask.co/artifact/CDAP-DQB237/shared/build-2/Docs-HTML/4.1.0-SNAPSHOT/en/search.html?q=flows
- Search page, searching for "Apache Hadoop KMS": http://builds.cask.co/artifact/CDAP-DQB237/shared/build-2/Docs-HTML/4.1.0-SNAPSHOT/en/search.html?q=Apache+Hadoop+KMS

Currently searching using the "current" (or 4.0.1) docs...

See also caskdata/docs-site#4
<issue_comment>username_1: It seems like a good idea to add a checkbox to choose searching across releases. By default, current release only; when checked, all releases.
<issue_comment>username_0: I have created https://issues.cask.co/browse/CDAP-8304 to track that option so that this issue can be completed.
<issue_comment>username_0: Merged with permission of @bdmogal
<issue_start><issue_comment>Title: APPLICATION FAILED TO START
username_0: I cleaned and built the application. After that I ran the command `gradlew bootRun` (or `java -jar cas/build/libs/cas.war`). It throws an exception: "Caused by: java.io.FileNotFoundException: \etc\cas\thekeystore". I think it should start the first time with no SSL certificate and HTTPS security, but the CAS application already throws an exception for SSL security.<issue_closed>
<issue_comment>username_1: See Readme for more info
<issue_comment>username_0: I read the info and ran the commands step by step, but it is not working. It always throws the exception "Caused by: java.io.FileNotFoundException: \etc\cas\thekeystore". I created a keystore and pasted it under the \etc\cas\ folder.
<issue_comment>username_0: I solved this. I copied the etc folder to the C:\ directory, because CAS searches at the root directory. On Windows, the root directory is C:\. Could you change the properties file or readme for Windows configurations?
<issue_comment>username_1: I cannot. Instead, please submit a pull request with this change you have in mind. Thanks
<issue_start><issue_comment>Title: Use HTML5 <time> element for post datetime
username_0: References:
- https://developer.mozilla.org/en-US/docs/Web/HTML/Element/time
- https://www.w3.org/TR/html-markup/datatypes.html#common.data.datetime
- http://alanwsmith.com/jekyll-liquid-date-formatting-examples
<issue_comment>username_0: @barryclark ping
<issue_start><issue_comment>Title: LatentDirichletAllocation's number of iterations
username_0: When I amend `examples/applications/topics_extraction_with_nmf_lda.py` to print `lda.n_iter_` after fitting, it reports 81. That's clearly more than `max_iter=5`.
<issue_comment>username_1: I am new around here and would like to take this up.
<issue_comment>username_0: Cool, go ahead!
<issue_comment>username_1: As far as I can understand, the problem arises due to improper naming of the variable "n_iter_". "n_iter_" is expected to store the number of times LDA has been run over all of the n samples, whereas it currently stores the number of iterations of the _em_step. That counter is used to update the weight in each iteration of the _em_step. I suggest having a different variable for updating the weight in each iteration of the _em_step to avoid the confusion. (updatect is used in the original implementation by Hoffman.)

@username_0 What is your opinion on this?
<issue_comment>username_2: @username_1 @username_0 yeah. The variable is the iteration count of EM steps. So for online updates, it is the number of mini-batch updates, not the number of iterations. We should rename it.
<issue_comment>username_1: @username_2 @username_0 Renamed the n_iter variable. I have opened a pull request for the same. Please have a look.
<issue_comment>username_0: @username_1 Please post a link: # and then the number of the PR. That automatically posts a link at the PR to this issue.
<issue_comment>username_1: Opened pull request #5112. Thanks @username_0 for your suggestion.<issue_closed>
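With the rename from #5112 in place, current scikit-learn exposes both counters, which makes the original confusion easy to demonstrate. This is a toy sketch using the modern API (`n_components` rather than the older `n_topics`); exact counts depend on defaults and version.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Toy document-term counts: 2000 "documents", 100 "terms".
X = np.random.RandomState(0).randint(0, 5, size=(2000, 100))

lda = LatentDirichletAllocation(n_components=10, max_iter=5,
                                learning_method="online",
                                batch_size=128, random_state=0).fit(X)

print(lda.n_iter_)        # passes over the data, bounded by max_iter
print(lda.n_batch_iter_)  # mini-batch EM steps, roughly
                          # max_iter * ceil(2000 / 128); this EM-step
                          # counter is what the old n_iter_ reported (81)
```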
<issue_start><issue_comment>Title: Can this appear in the Web UI?
username_0: The readme says "Graphs are generated in the same directory as the scripts (not the web interface)." Is there a way to get them into the web interface?
<issue_comment>username_1: Not really with this scripted method. The correct way to do it would be to use the existing FreeNAS status console framework instead of this scripted hack. I looked into that, but it was quicker and easier to wire this than to try to get FreeNAS compiling and figure out how to do it in an integrated way.
<issue_comment>username_1: Though you could run a web server in a jail and have the graph PNGs written to a directory that it can serve. It wouldn't be part of the normal GUI, but it would be web accessible.<issue_closed>
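If you go the jail route suggested above, a minimal Python 3.7+ server is enough to expose the PNGs; the path and port here are placeholders for wherever your script writes the graphs.

```python
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve the directory the graphing script writes to (placeholder path).
handler = partial(SimpleHTTPRequestHandler, directory="/mnt/tank/graphs")
HTTPServer(("", 8080), handler).serve_forever()
```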
<issue_start><issue_comment>Title: Look up _last_executed on the underlying cursor
username_0: `_last_executed` is quite useful for DEBUG query output; it shouldn't just be empty. It's used internally here:
```python
sql = self.db.ops.last_executed_query(self.cursor, sql, params)
```
<issue_comment>username_1: Version 0.4.2 is published to PyPI! Thanks for your help! 🎉
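A minimal sketch of the delegation this issue asks for, with illustrative class names rather than the library's actual ones: anything the wrapper doesn't define itself (such as MySQLdb's `_last_executed`) falls through to the wrapped cursor via `__getattr__`.

```python
class CursorWrapper(object):
    def __init__(self, cursor):
        self.cursor = cursor

    def execute(self, sql, params=None):
        return self.cursor.execute(sql, params)

    def __getattr__(self, name):
        # __getattr__ is only consulted when normal attribute lookup
        # fails, so the wrapper's own attributes still win; everything
        # else (like _last_executed) is read from the underlying cursor.
        return getattr(self.cursor, name)
```

Django's own cursor wrappers use the same pattern, which is how `self.db.ops.last_executed_query(self.cursor, sql, params)` can pick `_last_executed` off whatever cursor it is handed.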