content (string, lengths 240 to 2.34M)
<issue_start><issue_comment>Title: go get doesn't work. username_0: It looks like a dependency of golint changed because I can't perform go get ``` $ go get github.com/golang/lint/golint # github.com/golang/lint src/github.com/golang/lint/lint.go:1480: too many arguments in call to types.Eval ``` <issue_comment>username_0: BTW if I perform go get on types first, it works without problems. go get golang.org/x/tools/go/types go get github.com/golang/lint/golint<issue_closed> <issue_comment>username_1: That's why the README says to pass `-u` to `go get`. You need to use the latest version of x/tools. <issue_comment>username_2: I did the "go get" on types and still get the same error message: ../../../github.com/golang/lint/lint.go:1480: too many arguments in call to types.Eval ../../../github.com/golang/lint/lint.go:1480: assignment count mismatch: 2 = 3 <issue_comment>username_1: Run `go get -u golang.org/x/tools/go/types` and try again. <issue_comment>username_2: Same thing go get -u golang.org/x/tools/go/types Ryans-MacBook-Pro-2:glew-cortex (master*) $ go get -u github.com/golang/lint/golint # github.com/golang/lint ../../../github.com/golang/lint/lint.go:1480: too many arguments in call to types.Eval ../../../github.com/golang/lint/lint.go:1480: assignment count mismatch: 2 = 3 <issue_comment>username_1: I don't know what else to suggest except clearing out your GOPATH and starting over. The signature of types.Eval changed ages ago; you've got an out-of-date version of something. Maybe your x/tools repo is pointing at the old code.google.com location. <issue_comment>username_2: Thanks, that worked. Not really sure how that happened. <issue_comment>username_3: I faced the same problem. I solved it by running ``go get -u golang.org/x/tools/go/types`` after removing ``$GOPATH/src/golang.org/x/tools``. The old repository seemed to use Mercurial, while the current one uses git, which is what caused ``go get -u golang.org/x/tools/go/types`` to fail.
<issue_start><issue_comment>Title: Copy paste a text to the step and it shows as already implemented username_0: ### Expected behavior If the step is not implemented yet, it shows as unimplemented and provides an option to implement it. ### Actual behavior The step is not implemented yet, but it is shown as implemented and does not provide an option to implement it. ### Steps to reproduce 1. Create a spec ``` Specification Headings ===================== Abc ----- * ``` 2. Copy the text: If your application supports Unicode, use \uFFFF or \x{FFFF} to insert a Unicode character. 3. Now paste it next to the asterisk. 4. The step is shown as already implemented. 5. It does not give the "implement step" option either. ### Gauge version ``` Gauge version: 0.4.1.nightly-2016-04-19 Plugins ------- html-report (2.1.0) java (0.3.4.nightly-2016-04-19) ``` <issue_comment>username_1: Cannot reproduce this issue. ``` IntelliJ version: 2016.1.3 Gauge IntelliJ plugin version: 0.2.1 Gauge version: 0.5.1 Plugins ------- java (0.4.1) ``` <issue_comment>username_0: The option to implement is offered when the `{` is escaped. However, even after implementing it, the step is marked as unimplemented.
<issue_start><issue_comment>Title: GraphiteExporter: monitors registered with an "id" are not properly exported username_0: The `GraphiteExporter` doesn't take the `id` assigned to the monitor into account when exporting the corresponding metric values. Example: ``` // A counter declared by annotation @Monitor(name="mycounter1") private AtomicInteger mycounter1 = new AtomicInteger(); // Another counter private Counter mycounter2; public MyConstructor() { mycounter2 = Monitors.newCounter("mycounter2"); Monitors.registerObject("InstanceId", this); } ``` The `GraphiteExporter` won't consider the *id* used when registering the monitors (_InstanceId_ in this example) - but only the monitors' *names*. So the produced metrics will be named: ``` mycounter1 mycounter2 ``` instead of ``` InstanceId.mycounter1 InstanceId.mycounter2 ``` This causes overlaps when multiple instances of the instrumented/monitored object are created. This happens for `NFHttpClient.getConnectionsInPool()`, for instance... <issue_comment>username_1: The id should get used. I'm not that familiar with spring cloud, but they were on a pretty old version. Please try the latest release.<issue_closed>
<issue_start><issue_comment>Title: suggest and reverse problems username_0: I'm having a hard time getting the API to work properly, and also understanding which components I actually need. I changed the layers URL in the demo to work with our maps. This works. I also changed the API URL to our installation, but it will only show search results. Suggest queries don't return anything ``` json {"type":"FeatureCollection","features":[],"date":1427269933035} ``` reverse actually returns an error ` TypeError: Cannot read property 'geometry' of undefined at demo.js:145 at angular.js:8598 at angular.js:12234 at k.$eval (angular.js:13436) at k.$digest (angular.js:13248) at k.$apply (angular.js:13540) at q (angular.js:8884) at u (angular.js:9099) at XMLHttpRequest.E.onreadystatechange (angular.js:9038)angular.js:10683 (anonymous function)angular.js:7858 (anonymous function)angular.js:12242 (anonymous function)angular.js:13436 k.$evalangular.js:13248 k.$digestangular.js:13540 k.$applyangular.js:8884 qangular.js:9099 uangular.js:9038 E.onreadystatechange ` Using the vagrant image is not an option for me, I need to integrate this into our infrastructure properly. Any pointers what I could be missing? <issue_comment>username_1: Hey @username_0! Is there a public facing link to your demo (with the changed URL layers parameters) that I can check out? My suspicion for why ```/suggest``` is not working is because maybe no geobias (```lat``` and ```lon``` or ```bbox```) is passed in the URL as parameters. ```/suggest``` required lat/lon for now as [documented here](https://github.com/pelias/api/wiki/API-Endpoints#suggest) ```/reverse``` is failing because either the demo is not making a successful API call with the required params (```lat```, ```lon```) or the API returns no results or results with incorrect geometry and this can happen sometimes if you pass ```lat``` for ```lon``` and vice-versa. Do let me know if you still facing problems and I'll be glad to help you :)<issue_closed> <issue_comment>username_0: hey @username_1, thank you for your comment. Actually, I just set up a test instance for you, where everything is working as expected o.O I guess I must have made a mistake in our development environment, I'll go through the process again. Thank you very much, nevertheless :)
<issue_start><issue_comment>Title: Unable to suppress the confirmation dialogs when running New-AzureRmResourceGroupDeployment with debug switch username_0: $ConfirmPreference or $DebugPreference don't work <issue_comment>username_1: Would like to use the debug output in automation <issue_comment>username_2: @username_0 @TianoMS Please add to appropriate milestone, or close if this is resolved <issue_comment>username_0: @username_1 - This should work if you set $DebugPreference="Continue". It will still log all the debug messages without prompting. Please let me know if this doesn't work for you. Closing this for now.<issue_closed> <issue_comment>username_1: $ConfirmPreference or $DebugPreference don't work <issue_comment>username_1: No, I still get prompted when adding -debug even with $DebugPreference='Continue' <issue_comment>username_0: You don't need to add -debug when you set this preference. Sorry, I didn't call that out earlier. <issue_comment>username_1: ok, that works...<issue_closed>
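For reference, a minimal sketch of the workaround described in this thread; the resource group name, template file and log path are placeholders rather than values from the issue. Setting `$DebugPreference` to `Continue` makes the cmdlet emit its debug messages without prompting, and the debug stream (stream 5) can be redirected when the output is needed in automation.

```powershell
# Assumed example values; do NOT also pass -Debug, per the thread above.
$DebugPreference = 'Continue'
New-AzureRmResourceGroupDeployment -ResourceGroupName 'my-rg' `
    -TemplateFile 'azuredeploy.json' 5>&1 | Tee-Object -FilePath 'deploy-debug.log'
```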
<issue_start><issue_comment>Title: also volume-mount /etc/sysconfig/docker username_0: for recent RHEL (or CentOS) atomic hosts with docker-recent/docker-latest split, it's neccessary to mount /etc/sysconfig/docker, because the /usr/bin/docker shell script reads this file and fails, if it does not exist. <issue_comment>username_1: LGTM <issue_comment>username_1: We'll rip all this out once 1.3 images ship which include the running `chroot /rootfs docker` <issue_comment>username_0: I hope so! <issue_comment>username_1: Thanks BTW!
<issue_start><issue_comment>Title: Cleanup the codebase, please! username_0: This code is a mess. First of all, you guys use different syntax styles. ``` if (foo) bar if (another foo) { bar; bla; } ``` or ``` if (foo) do_somthing; else { bar; } ``` This looks like a mess. Sencondly, there are functions in files, grouped with code they don't belong to. Example: there is a eval_rules() function in the colors.c file. I don't think this function does something with colors, when reading the prototype. If it does belong to the colors stuff, why is no documentation (or just so little) there? This leads me to the third point: documentation! There is no documentation in the code (or very little). Why don't you use a documentation generator like (which I recommend) doxygen? It would automatically clean up your code as you would have to write comments after a given styleguide! Other code style "issues": Well, code style is a much of personal taste. But look: ``` char foo = 42, *bar, *bla = NULL; int a = 2, b, *c, d = 12; double some = 2.23, other, dbl = 1.0; ``` looks much less nice than: ``` char foo; char *bar; char *bla; int a; int b; int *c; int d; double some; double other; double dbl; ``` I know it is much longer, but it's also very much easier to read. Function length is another point. I know C programmers love long functions. But long functions are not that readable. Think about static helper functions when function length is more than 50 lines, especially if there are several (nested?) loops in the function! Notice the 80-characters convention! Another point: Think about compile-constants for messages which get printed to the log or stdout/stderr. This makes the code more readable! --- All this stuff keeps me away from contributing! Don't get me wrong: I want to contribute, but that messy code keeps me away, as I don't get an overview that easy! I would love to contribute code-style commits! But just if I know, you would like to merge them. If not, I don't want to put that much effort into the code! Well, I think I'm not able to write commits for documentation, but I could embellish the codebase what code style concerns. <issue_comment>username_1: With mjheagle8's blessing, I'm taking over maintenance of this project. If you still care about this issue, please reopen it over at [my fork](https://github.com/username_1/tasknc). <issue_comment>username_1: @username_0 More specifically, I'd love it if you contributed style commits over at my fork.
<issue_start><issue_comment>Title: Projection Tests are off slightly username_0: e.g. ``` AssertionError: "POINT (111319.4907932735659415 110579.9652218975970754)" == "POINT (111319.4907932735659415 110579.9652218968840316)" at Object.<anonymous> (/home/username_0/Projects/node-geos/test/test.js:128:8) at Module._compile (module.js:402:26) at Object..js (module.js:408:10) at Module.load (module.js:334:31) at Function._load (module.js:293:12) at Array.<anonymous> (module.js:421:10) at EventEmitter._tickCallback (node.js:126:26) ``` ``` AssertionError: "LINESTRING (0.0000000000000000 0.0000000007081155, 111319.4907932735659415 110579.9652218975970754)" == "LINESTRING (0.0000000000000000 0.0000000007081155, 111319.4907932735659415 110579.9652218968840316)" at Object.<anonymous> (/home/username_0/Projects/node-geos/test/test.js:135:8) at Module._compile (module.js:402:26) at Object..js (module.js:408:10) at Module.load (module.js:334:31) at Function._load (module.js:293:12) at Array.<anonymous> (module.js:421:10) at EventEmitter._tickCallback (node.js:126:26) ```<issue_closed>
<issue_start><issue_comment>Title: Losing Infowindow and/or Click Event on Map Update username_0: <h1>{{item.project_name}}</h1> <p style="margin: 0;">Location: <b>{{item.town}}, {{item.country}}</b></p> <p style="margin: 0;">Tech Description: <b>{{item.tech_desc}}</b></p> </google-map-marker> </template> </template> </google-map> </template> ``` Upon initial loading of the webapp, things work really well. I can click on a marker and the infowindow shows the content. However, if I change any values in my sites array, I seem to lose the infowindow and/or the click event. I have to refresh the browser to get back to my initial condition. Also, The marker locations will update perfectly if I change lat/long and hovering shows tooltip aka. title, appropriately as well. I've also added a click event which calls a console.log to validate that it functions.. It works well until a value is changed in the {{sites}} binding, so it seem I am losing click events when the google-map updates itself? There are no scripts in this element. If I can provide more information, please let me know. Thanks in Advance, Scott <issue_comment>username_1: Can you post jsbin that repos the problem? <issue_comment>username_0: Sure can, in fact, Scott Miles responded to this issue on stackoverflow. Here is his response + jsbin with workaround. He was able to elaborate on my observations more coherently. http://stackoverflow.com/questions/35318751/generating-google-map-markers-via-iterating-over-an-array-using-data-binding-wh and his jsbin with workaround. http://jsbin.com/hobixi/edit?html Thanks for looking into this!
<issue_start><issue_comment>Title: add: format req body and res body username_0: This is my first pr. If there is anything inappropriate, please let me know. Thank you very much. <issue_comment>username_1: Thanks for this pull request @username_0, I'll do some testing and try get this merged shortly. <issue_comment>username_1: These changes have made it in via #12. Thanks for this contribution @username_0
<issue_start><issue_comment>Title: Who preformed an action username_0: Hi, I just installed Security Monkey and started to get alerts. I cant find which user account performed an action. Is there a way to tell Security Monkey to add the name of the user who performed an action. Example: Security Group was changed, i can see the diff JSON and to understand what has been changed but i can't tell which user performed the action <issue_comment>username_1: Hey @username_0 - Good question. So security_monkey asks the AWS API's what things look like, but it doesn't currently have information on who changed something. CloudTrail is your best bet for finding out who changed something. We've talked about having CloudTrail integrate with security_monkey for a while but haven't coded anything up in a releasable form.<issue_closed>
<issue_start><issue_comment>Title: duration of CronJobLog username_0: I have added a duration of CronJobLog to CronJobLogAdmin as human readable representation. It is also possible to order and filter by duration. <issue_comment>username_1: There is an issue: `'long' object has no attribute 'days'` in line https://github.com/username_0/django-cron/blob/master/django_cron/admin.py#L54 When I change this line to: `return humanize_duration(obj.end_time - obj.start_time)` everything is working OK. Could you please check if this extra parameter works properly? <issue_comment>username_0: Well, I presume it depends of database engine (I've tested it on PostgreSQL, and you?). It will be better to skip extra parameter and use your solution instead. Thank you. <issue_comment>username_1: Ha, I have tested on MySQL. Lemme check some more and I will merge.
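A rough sketch of the admin column under discussion; the import path of `humanize_duration` is assumed rather than confirmed by the thread, and the field list is illustrative only. Computing the duration in Python (instead of in SQL via an extra parameter) sidesteps the database-specific error reported above.

```python
# Sketch only: assumes django-cron exposes the humanize_duration helper
# referenced in the thread; the exact module path may differ.
from django.contrib import admin
from django_cron.helpers import humanize_duration
from django_cron.models import CronJobLog


class CronJobLogAdmin(admin.ModelAdmin):
    list_display = ('code', 'start_time', 'end_time', 'duration')

    def duration(self, obj):
        # subtract the datetimes in Python, then render them human-readably
        return humanize_duration(obj.end_time - obj.start_time)


admin.site.register(CronJobLog, CronJobLogAdmin)
```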
<issue_start><issue_comment>Title: Window goes totally black while resizing it on OS X username_0: While resizing the window, the entire window goes black. It remains black for as long as the corner of the window is held. Once released, the window redraws and returns. Needless to say this looks awful. Any ideas why this would happen and how to fix it? Using Mac OS X 10.11.5 El Capitan ``` [INFO ] [Logger ] Record log in /Applications/Kivy.app/Contents/Resources/.kivy/logs/kivy_16-06-04_5.txt [INFO ] [Kivy ] v1.9.1-dev0 [INFO ] [Python ] v2.7.10 (default, Oct 23 2015, 19:19:21) [GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)] [INFO ] [Factory ] 179 symbols loaded [INFO ] [Image ] Providers: img_tex, img_imageio, img_dds, img_gif, img_sdl2 (img_pil, img_ffpyplayer ignored) [INFO ] [OSC ] using <multiprocessing> for socket [INFO ] [Window ] Provider: sdl2 [INFO ] [GL ] OpenGL version <2.1 INTEL-10.14.66> [INFO ] [GL ] OpenGL vendor <Intel Inc.> [INFO ] [GL ] OpenGL renderer <Intel(R) HD Graphics 6000> [INFO ] [GL ] OpenGL parsed version: 2, 1 [INFO ] [GL ] Shading version <1.20> [INFO ] [GL ] Texture max size <16384> [INFO ] [GL ] Texture max units <16> [INFO ] [Window ] auto add sdl2 input provider [INFO ] [Window ] virtual keyboard not allowed, single mode, not docked [INFO ] [Text ] Provider: sdl2 [INFO ] [Base ] Start application main loop [INFO ] [GL ] NPOT texture support is available ``` <issue_comment>username_1: Duplicate of https://github.com/kivy/kivy/issues/2844, you can find more details there.<issue_closed>
<issue_start><issue_comment>Title: Changing cursor color? username_0: how to change cursor color? ![hide_underline](https://cloud.githubusercontent.com/assets/10336613/9596726/18012d8a-508e-11e5-94ff-693ffd92e2ed.png) <issue_comment>username_1: android:textCursorDrawable="@null" should result in the use of android:textColor as the cursor color. <issue_comment>username_2: It does change the color, but leaves you with a very slim cursor. Also the cursor "handle" that appears when moving the cursor still uses the old color. You probably have to copy the cursor resource file, modify its color, add it to your drawables folder and then set it in xml or code.
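A minimal sketch of the resource-copy approach suggested above; the colour and width are placeholders. Define a drawable and point `android:textCursorDrawable` at it; note that the selection handles use separate drawables (`android:textSelectHandle`, `android:textSelectHandleLeft`, `android:textSelectHandleRight`), so they have to be overridden the same way if their colour should match.

```xml
<!-- res/drawable/custom_cursor.xml -->
<shape xmlns:android="http://schemas.android.com/apk/res/android"
       android:shape="rectangle">
    <solid android:color="#FF4081" />
    <size android:width="2dp" />
</shape>
```

Then reference it on the widget with `android:textCursorDrawable="@drawable/custom_cursor"`.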
<issue_start><issue_comment>Title: Convert ORK to PSR-1 username_0: The ORK should conform to PSR-1, the [Basic Coding Standards](http://www.php-fig.org/psr/psr-1/) recommendation. This will simplify the process of onboarding new developers as well as make maintaining the ORK easier going forward. <issue_comment>username_0: This is blocked by #98. <issue_comment>username_1: I have been reviewing PSR requirements on and off for the last 2 years. I am not sold on this. I understand the impetus, but it will be a lot of work, and it largely solves a problem which may not be completely pertinent. Just for reference, I have been writing most of my new code in PSR-standards, including libraries, so I have nothing against it. It's just with limited coding resources available, I'm hoping to focus on value-add for our users. <issue_comment>username_0: Mind if I start converting pieces to PSR-1?
<issue_start><issue_comment>Title: [layertree/layerselection] not the same layer title in selection and layertree username_0: UC: open https://map.geo.admin.ch/?topic=swisstopo&lang=de&bgLayer=ch.swisstopo.pixelkarte-farbe&catalogNodes=1538&layers=ch.swisstopo.wk50-papierkarte.metadata , look at the layertree/catalog and the name of the activated layer, look at the name of the layer in the selection Result: not the same layer name in selection and catalog (yes, there is also an error in geocat, in DE there is a RM alternativtitle, data owner is informed to solve this) Expected: it is always the same layer name and language in catalog and selection <issue_comment>username_1: @username_0 you already found the geocat problem. About the difference, I really don't know what to say. The two are coupled by a trigger in the BOD, which seems to have not worked in this case (we should have had the RM version in the selection too). I cant reproduce the mishap, since the trigger reacts correctly every time I call it. To sum up: I fixed the issue but I'm unable to see the reason of it. Maybe @username_2 ? <issue_comment>username_2: it is hard to find the reason for the problem. The catalog translation comes directly from the bod (re3.view_catalog). Currently the bod entries for this layer is wrong on prod and fixed on dev (thanks @username_1 ). The layer selection translation comes from the chsdi code, translation source is public.transations. The current entries in bod_prod are correct. Since the table dataset and translations are linked with a trigger there are two possible reasons for the difference: 1. trigger does not work 1. translations table has been fixed manually without updating dataset table<issue_closed> <issue_comment>username_3: This is fixed. <issue_comment>username_0: I have found the same issue today again: **UC I: s**earch for Grundwasser in the search field, choose the layer result "NAQUA-QUANT Messstellen" and load it to the map. **UC II:** open the bafu-topic -> waters -> state of groundwaters and add the same layer **Result:** the layer has a different title in the selection than in the catalog and in the search result list. The title from the search and from the catalog is correct since it corresponds the current geocat-alternate title (the title from the selection is an previously used in geocat, but overwritten for quite a while now) **Expected:** layer has the same title everywhere in map.geo.admin.ch ![layername_issue_selection](https://cloud.githubusercontent.com/assets/4577749/19154872/803aacf2-8bdc-11e6-8e89-fbe9574a6adb.PNG) <issue_comment>username_0: UC: open https://map.geo.admin.ch/?topic=swisstopo&lang=de&bgLayer=ch.swisstopo.pixelkarte-farbe&catalogNodes=1538&layers=ch.swisstopo.wk50-papierkarte.metadata , look at the layertree/catalog and the name of the activated layer, look at the name of the layer in the selection Result: not the same layer name in selection and catalog (yes, there is also an error in geocat, in DE there is a RM alternativtitle, data owner is informed to solve this) Expected: it is always the same layer name and language in catalog and selection ![layertreeselectionissue_20160429](https://cloud.githubusercontent.com/assets/4577749/14911028/b9d7d7f4-0df2-11e6-8445-68d86dbcac2c.PNG) <issue_comment>username_3: I can't reproduce here. Caching issue? Did you try this in the morning? If yes, it might be fixed now because we had old layersConfig in cache.<issue_closed> <issue_comment>username_0: Observed that by chance yesterday afternoon. 
But now I can't reproduce it anymore. Must have been something old in the cache... <issue_comment>username_0: UC: open https://map.geo.admin.ch/?topic=swisstopo&lang=de&bgLayer=ch.swisstopo.pixelkarte-farbe&catalogNodes=1538&layers=ch.swisstopo.wk50-papierkarte.metadata , look at the layertree/catalog and the name of the activated layer, look at the name of the layer in the selection Result: not the same layer name in selection and catalog (yes, there is also an error in geocat, in DE there is a RM alternativtitle, data owner is informed to solve this) Expected: it is always the same layer name and language in catalog and selection ![layertreeselectionissue_20160429](https://cloud.githubusercontent.com/assets/4577749/14911028/b9d7d7f4-0df2-11e6-8445-68d86dbcac2c.PNG) <issue_comment>username_0: I have again a difference: English app, load the new layer ch.bazl.spitallandeplaetze, compare catalog and selection: Catalog: layername in EN. Selection: layername in FR Expected: all the same language everywhere. ![lang_diff_selection_catalog](https://cloud.githubusercontent.com/assets/4577749/19754512/7b6705e4-9c0e-11e6-9dd5-66a93777488c.PNG) <issue_comment>username_0: Another issue for this particular layer that's seriously concerning me: The layer title in the selection does not correspond to the layer title in the catalog even in FR language. In the catalog we have the correct title according to geocat.ch, in the selection not. Why that? <issue_comment>username_4: This is only in english, Translations in the layer tree come from geocat, and translations in catalog from your spreadsheet. We probably didn't have them. A make translate should do the trick. <issue_comment>username_4: And make translate it is. https://github.com/geoadmin/mf-chsdi3/pull/2298 Should add that to the process. always update translations before deploy. <issue_comment>username_0: No, layer translations always come from geocat. only category translations for the layertree come from the spreadsheet. <issue_comment>username_4: So, it's already in the guidelines, second step of deploying CHSDI. https://github.com/geoadmin/doc-guidelines/blob/master/DEPLOY.md And yes, it's possible, so let's put it this way, in the catalog they come directly from DB and in layertree they come from chsdi transaltions (po/mo). <issue_comment>username_4: By the way those translations were added last minute on Tuesday. bgdi_modified in translations 2016-10-25 14:32:28.527456 <issue_comment>username_4: So this is fixed at least in dev. I'll let @username_3 decide whether we need an emergency deploy. (which requires a deploy of a new snapshot of chsdi and redeployment of the actual snapshot of geoadmin for caching reasons).<issue_closed> <issue_comment>username_3: This has been a repeated issue and we need to solve this once and for all. The problem: 1) catalog gets layer translations from [view in bod](https://github.com/geoadmin/mf-chsdi3/blob/master/chsdi/models/bod.py#L390). it always delivers what's in bod 2) layersconfig gets layer translation from po files [via translate](https://github.com/geoadmin/mf-chsdi3/blob/master/chsdi/models/bod.py#L108), which only delivers what was translated by hand with make translate IMO, we should remove the layer translations from the po translations files and get it 'live' from the bod for the layersConfig - unless we have other occurences where we get them from the po translations. Then we can say wie remove the translations from the catalog view and also use the po file approach. @username_4 Thoughts? 
Please do an emergency deploy. I'll try to move this issue to mf-chsdi3. <issue_comment>username_3: UC: open https://map.geo.admin.ch/?topic=swisstopo&lang=de&bgLayer=ch.swisstopo.pixelkarte-farbe&catalogNodes=1538&layers=ch.swisstopo.wk50-papierkarte.metadata , look at the layertree/catalog and the name of the activated layer, look at the name of the layer in the selection Result: not the same layer name in selection and catalog (yes, there is also an error in geocat, in DE there is a RM alternativtitle, data owner is informed to solve this) Expected: it is always the same layer name and language in catalog and selection ![layertreeselectionissue_20160429](https://cloud.githubusercontent.com/assets/4577749/14911028/b9d7d7f4-0df2-11e6-8445-68d86dbcac2c.PNG) <issue_comment>username_3: Can't move issues, so I keep it here even though fix is eiter in bod or in mf-chsdi3. <issue_comment>username_4: Redeploying now. Well, the main advantage of having those in the po, is performance related. We avoid a bunch of requests to the bod with this solution. If you follow the guidelines, this should not happen. I think we have more that a couple of dependencies on that mechanism. Rather than changing that, I would add the make translate CMD, to the deploydev script, thus making sure we don't forget to update those. As a side note translations issues we had last time were coming from a bad deploy timing between chsdi and geoadmin, and not a missing translation. In general I think that our translation mechanism could be simpler as instead of adding translations manually to the empty.po file. We could simply "SELECT * FROM translations ORDER BY msg_id" <issue_comment>username_3: Hmm. We have two sources for the same thing. This is a bad thing. I doubt performance plays a role here - views with so view entries are very fast. Anyhow, I understand that there might be other services requiring these resources in the *.po. How about using the po translate mechanism for the catalog layer entries instead of getting those from the catalog view (and removing them from the view)? Regarding make translate and po files: We want to track the po files with git and creating snapshots based on git commits/tags. When we update translations during creation of the snapshot (in deploydev), then we don't have same state in git and snapshot. I'd rather have a check if translations are up-to-date before deploydev and abort it when it's not the case. Also, agree, to have empty po file does not make sense. <issue_comment>username_4: The idea is that you abort the deployment if some translations are missing. <issue_comment>username_4: It would be consistent but still wrong... <issue_comment>username_3: +1 and +1. Still, no way to use translate for the catalog easy? (Instead of duplication info in the view?) <issue_comment>username_4: Yes, we could do both. then it would make sense. <issue_comment>username_4: Though as @username_0 rightly pointed it out. Categories translations come from spreadsheet. If we are consistent with it, we should also put those in the translations. So the proposal is as follow. 1. Abort deployment scripts if translations are not updated 2. Add categories in translation table. (via DB trigger most likely as it is the case in dataset) 3. Generate empty.po from DB. Then it's a 2 step deploy process 4. Adapt views 5. Remove/adapt code in chsdi OR 1. Abort deployment scripts if translations are not updated. <issue_comment>username_4: translations and legends updated. 
https://github.com/geoadmin/mf-chsdi3/releases/tag/r_161027 <issue_comment>username_3: Another proposal: Can't we create the po files on the fly either in 1) when creating deploying snapshot 2) when creating snapshot 3) on post deploy hook 4) on application start 3 + 4 would assure that bod and po cache would always be in sync. No manual make translate anymore or check if it's done. This would mean we don't rack po files here anymore. I think that would be ok as we could track it with our bod_diff utility. <issue_comment>username_4: I think tracking it is important. When reviewing PR essentially for tooltips, extended and so on. We can control their quality. We often fix translations thanks to the PR in CHSDI. <issue_comment>username_4: Bod don't show all translations and only shows fields and layers that are different in geocat and bod <issue_comment>username_3: With bod_diff, I meant this: https://github.com/geoadmin/db/compare/tag_bod_master20160713...tag_bod_master20160803 We could extend this easily to include the translations, no? And what about our bod review process with started? Do we stop it? Start from scratch? Start using it? <issue_comment>username_4: The BOD process is ready. I can't do that alone, need to work with @username_2, document and form people. <issue_comment>username_5: Do we really need `gettext`? I mean, we only use the simpliest feature of it (no plurals, placeholder replacement, languages having many plurals form, etc.) <issue_comment>username_3: Alternatives? <issue_comment>username_3: The deploy process has been adapted to check for pending translations.<issue_closed>
<issue_start><issue_comment>Title: Improvements to generating apps from CDISC ODM docs username_0: This PR improves the apps generated from CDISC ODM docs. Easiest to review by commit. This is part of what I demoed. I've broken the Novartis work into several PRs: * Stuff that won't break the current Novartis project (#10507 and this PR) * Stuff that will break the current Novartis project (branches "metadata_from_ws", "odm_export", "export_with_ws") Hold off for now. One more commit to come. @username_1, @dannyroberts <issue_comment>username_0: OK, tests have been updated. This PR: * Adds support for CDISC ODM field validation * Fixes a bug with single-select multiple choice questions * Adds better case management to the CommCare app generated from CDISC ODM * Allows study event data to be compiled in a more generic way for export to OpenClinica/CDISC ODM-supporting clinical study management software. (This PR does not include changes to the CDSIC ODM export. That is PRed separately because it will break the export needed by KEMRI in March.) <issue_comment>username_1: Looks OK but feels like there is a fair bit of duplication in the question paths / case properties. Not sure if its worth the effort to clean up though. <issue_comment>username_0: Re-rebased off master, and updated tests for changes since this PR was opened. cc @username_2 <issue_comment>username_2: @username_0 Should I just be reviewing the last commit, or the whole PR? <issue_comment>username_0: @username_1 has reviewed most (all?) of this code already, if all this looks really daunting. The last commit isn't very interesting, @username_2; it just updates the tests for changes that happened later in this branch and in master. I'd recommend :office: but focus on [custom/openclinica/management/commands/**odm_to_app.py**][1] and [corehq/apps/app_manager/**xform_builder.py**][2] (and maybe [custom/openclinica/**README.rst**][3], to help explain what this is all about). The rest are test cases and expected results. A little context might help to make sense of this, if you're not familiar with what this code is for. This PR allows us to take an XML document, which describes forms defined in OpenClinica (a platform for managing clinical studies), and generate an app from that document, using `./manage.py odm_to_app ...`. The tool that builds the app is `XFormBuilder`. [1]: https://github.com/dimagi/commcare-hq/blob/6093b6ddb1fbe0a1a48067fc63002d64337124bc/custom/openclinica/management/commands/odm_to_app.py [2]: https://github.com/dimagi/commcare-hq/blob/6093b6ddb1fbe0a1a48067fc63002d64337124bc/corehq/apps/app_manager/xform_builder.py [3]: https://github.com/dimagi/commcare-hq/blob/6093b6ddb1fbe0a1a48067fc63002d64337124bc/custom/openclinica/README.rst
<issue_start><issue_comment>Title: Impossible to mirror the libsass binaries username_0: After having investigated the proxy issue, I found out an extra problem: Even when the proxy call is made to work, my corporate proxy does not allow downloading native executables (.so, .dll, etc..) from the internet. The binaries provided by node-sass are blocked by the proxy and prevent the proper installation of node-sass. As a workaround for this kind of issue, my organisation has an internal server dedicated to hosting trusted binaries that can be accessed without going through the proxy. That sounded like a reasonable solution to my problem until I disovered how the path to the binary is resolved: ```javascript function getBinaryUrl() { return flags.binaryUrl || package.nodeSassConfig ? package.nodeSassConfig.binaryUrl : null || process.env.SASS_BINARY_URL || ['https://github.com/sass/node-sass/releases/download//v', `package.version, '/', sass.binaryName].join(''); } ``` and ``` function getBinaryName() { var binaryName; if (flags.binaryName) { binaryName = flags.binaryName; } else if (package.nodeSassConfig && package.nodeSassConfig.binaryName) { binaryName = package.nodeSassConfig.binaryName; } else if (process.env.SASS_BINARY_NAME) { binaryName = process.env.SASS_BINARY_NAME; } else { binaryName = [process.platform, '-', process.arch, '-', process.versions.modules].join(''); } return [binaryName, 'binding.node'].join('_'); } ``` This means that if I want to override the location of the binary, I need to point to the exact location of the particular binary that works for each machine's platform/architecture/module version/node-sass version combination. This is currently pretty hard to do given the various environments that our build scripts must run on. node-sass should provide an option, say `--sass-binary-mirror-url`, that allows me to build a dumb mirror of all the binaries and that tells the node-sass installer to perform its binary resolution algorithm using this URL as base (instead of the hardcoded `https://github.com/sass/node-sass/releases/download/`) As a workaround, I guess I could copy both of these methods to my `gulpfile.js` and compute the binary URL there. However, I'm not sure how I could make node-sass pick up that url. Is it possible? <issue_comment>username_1: Maybe we could use npmconf for this? <issue_comment>username_2: The OP probably need what phantomJS does: provide the ability to mirror the CDN. We are half way through. I thought it would make no sense to provide same organisation as we have in our own setup, so I didn't provided the same naming scheme as we have in node-sass. Given this use-case, I am pretty convinced and we would augment the same name as our (or probably configurable format via package.json). All the abilities are available in `extensions.js` but they need to be tested thoroughly. Related: https://github.com/sass/node-sass/pull/743. <issue_comment>username_1: I think @username_0 needs something like this 62d991c and set `SASS_DIST_SITE` accordingly. <issue_comment>username_0: I can confirm that this sounds exactly like what I need. <issue_comment>username_3: :+1: This is a blocker issue for us at LinkedIn. <issue_comment>username_1: @username_3 Our premium support customers are always a top priority for us, can you try applying pull request #835 and set `SASS_BINARY_SITE` environment variable to the root of your mirror? <issue_comment>username_3: @username_1 Thank you for the quick turn-around! 
Hey, @winding-lines, does this look sufficient for our needs? <issue_comment>username_3: Actually, @username_4 is going to be looking into this :) <issue_comment>username_4: @username_1 will try and get an update to you about the patch by tomorrow. <issue_comment>username_1: Which timezone? :) <issue_comment>username_4: @username_1 @username_3 I had a chance to test the patch locally against our repository and it works great. <issue_comment>username_1: Thanks, I just uploaded the hopefully final version of the patch. <issue_comment>username_5: Catching up. @username_1 that patch looks reasonable for the time being. Please open PR. I'll aim to get it into 3.0.0. My only suggestion would be put the default url into the `package.json`. <issue_comment>username_1: Good idea, will update in #835. What do you think about test refactoring? <issue_comment>username_5: Apologies I missed your PR. Let's move this discussion to #835.<issue_closed>
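For later readers, a short sketch of how the mirror is meant to be used once the `SASS_BINARY_SITE` support from #835 is in place; the URL is a placeholder, and the mirror has to reproduce the same per-release directory layout and binary names as the GitHub releases page.

```sh
# Assumed mirror URL; node-sass appends the version and the platform-specific
# binary name when it resolves the download path.
export SASS_BINARY_SITE=https://artifacts.example.corp/node-sass
npm install node-sass
```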
<issue_start><issue_comment>Title: got 'warning: changing a uncontrolled input' when updated react to v15.0.0 username_0: Updated react to v15.0.0 today, and I got this warning: ``` Warning: <Component> is changing a uncontrolled input of type text to be controlled. Input elements should not switch from uncontrolled to controlled (or vice versa). Decide between using a controlled or uncontrolled input element for the lifetime of the component. More info: https://fb.me/react-controlled-components ``` And it seems relate to [the initial value of the \<input>'s `value` attribute](https://github.com/facebook/react/issues/6222)<issue_closed> <issue_comment>username_1: Duplicate of #735. Known problem, thanks. <issue_comment>username_1: React v15 support released with [`redux-form v5.0.0`](https://github.com/username_1/redux-form/releases/tag/v5.0.0). <issue_comment>username_0: cooool
<issue_start><issue_comment>Title: c++20 <coroutine> generators for boost beast username_0: I am starting work on a new c++20 project that uses boost beast websocket and http. I would like to use async with coroutines, similar to how async/await works in Python and Rust. However there are a lot of different coroutine implementations for c++, and I would like some advice on which one I should use with beast. Are there currently any plans for beast to support c++20 coroutines, and has work already started on any experimental c++20 coroutine generators for beast? I know how to write my own, but if somebody else is already working on it I would like to make use of it and perhaps make some contributions, instead of redoing what has already been done. I think it would be good to have an async generator for messages being received over the websocket, possibly also implemented as an infinite sequence that supports std::ranges? On the other hand perhaps c++20 coroutines are still too new, and I should stick with boost coroutines? I see from the http client coro_ssl example that beast is using boost coroutines instead of coroutines2. Should I stick with boost coroutines even though it has been deprecated, or should I use boost coroutines2 instead? <issue_comment>username_1: If you have C++20 and recent compiler suite, I'd be happy to recommend C++20 coroutines. The asio::awaitable support is pretty good and is well maintained. e.g.: `auto result = co_await object.async_something(asio::use_awaitable);` You can write generators, or you can use the `asio::experimental::channel` as a rendezvous point for multiple streams of events. `asio::steady_timer` also makes the basis of an efficient asynchronous semaphore, whichever coroutine system you choose. If you have older tooling, then you may want to consider boost::coroutine. <issue_comment>username_0: Thank you, yes. I am using c++20 and recent gcc. Is there any good example code for how to connect as a websocket client using asio:awaitable and beast? I looked at the example directory in the master branch and the coro example in there is I believe 3 years old and using boost::coroutine? <issue_comment>username_1: Here is something I wrote some months ago in both C++17 and C++20 https://github.com/test-scenarios/boost_beast_websocket_echo I am available most days on the CPPLang slack in the `#beast` channel. https://cppalliance.org/slack/
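To make the recommendation concrete, here is a rough, untested sketch of a Beast websocket client written as a C++20 coroutine with `asio::awaitable`; the host, port, target and message are placeholders, and error handling is left to the exceptions thrown through `asio::use_awaitable`.

```cpp
#include <boost/asio.hpp>
#include <boost/beast.hpp>
#include <iostream>

namespace asio  = boost::asio;
namespace beast = boost::beast;
using asio::use_awaitable;

asio::awaitable<void> run_client(std::string host, std::string port)
{
    auto executor = co_await asio::this_coro::executor;

    asio::ip::tcp::resolver resolver{executor};
    auto results = co_await resolver.async_resolve(host, port, use_awaitable);

    beast::websocket::stream<beast::tcp_stream> ws{executor};
    co_await beast::get_lowest_layer(ws).async_connect(results, use_awaitable);
    co_await ws.async_handshake(host, "/", use_awaitable);

    std::string text = "hello";                 // kept alive across the co_await
    co_await ws.async_write(asio::buffer(text), use_awaitable);

    beast::flat_buffer buffer;                  // receives one complete message
    co_await ws.async_read(buffer, use_awaitable);
    std::cout << beast::make_printable(buffer.data()) << "\n";

    co_await ws.async_close(beast::websocket::close_code::normal, use_awaitable);
}

int main()
{
    asio::io_context ioc;
    asio::co_spawn(ioc, run_client("example.com", "80"), asio::detached);
    ioc.run();
}
```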
<issue_start><issue_comment>Title: Remove `no-ternary` rule username_0: I believe we all understand the arguments against using ternaries such as: * Usage for overly complex logic. * Usage with nested ternaries. * It can easily be replaced with a simple `if`. However I do believe that using ternaries can promote readibility and most importantly the final decision is up to the reviewing team which is often 2 people. The strict restriction is a bit of an *overkill* in my opinion. I suggest we keep the rules `no-nested-ternary` and `no-unneeded-ternary` as it indeed does limit the usage of the ternary operator. <issue_comment>username_1: :+1: <issue_comment>username_2: :+1: <issue_comment>username_3: +1. <issue_comment>username_4: :+1: if we keep the `no-nested-ternary` and `no-unneeded-ternary` rules. <issue_comment>username_5: :+1:<issue_closed>
<issue_start><issue_comment>Title: postcss-merge-rules can be unsafe in certain conditions username_0: Merging rules with common declarations can be unsafe when the selectors have different browser support: ``` CSS a{color:blue}b:nth-child(2){color:blue} ``` becomes ``` CSS a,b:nth-child(2){color:blue} ``` While the support for those selectors is different, so if you'd merge them, IE8 won't render anything, while in the non-optimized case it would render the first rule. <issue_comment>username_1: Good call. What do you think about making this optional for those consumers who don't want IE8 support? <issue_comment>username_0: Ideally, there should be a way to detect such things for any optimization and then look at the general browser support option (like for autoprefixer) decide what to use. There could be a lot of optimizations that are possible only for a limited set of browsers, or which could possibly break something for older browsers, so it should be a single framework that could be used through all of the cssnano plugins. <issue_comment>username_1: https://github.com/username_1/postcss-merge-rules/pull/9 <issue_comment>username_2: We found another situation, where this rule is unsafe. Consider this less source: ```less .list-unstyled { list-style-type: none; margin: 0; padding: 0; } ul { .list-unstyled; margin-top: 10px; li { .list-unstyled; } } ``` This yields the following css source: ```css .list-sorting-item ul { list-style-type: none; margin: 0; padding: 0; margin-top: 10px; } .list-sorting-item ul li { list-style-type: none; margin: 0; padding: 0 } ``` After minifying it with `postcss-merge-rules` one gets: ```css .list-sorting-item ul { margin-top: 10px; } .list-sorting-item ul,.list-sorting-item ul li { list-style-type: none; margin: 0; padding: 0 } ``` Due to reordering now the `margin: 0;` again overwrites the `margin-top: 10px`.<issue_closed> <issue_comment>username_1: @username_0 If you cannot be bothered to review the functionality which I wrote for you back in October, I guess that I cannot help you with this issue.
<issue_start><issue_comment>Title: 1.1.9 broke our app username_0: Upgrading to 1.1.9 changed the item exported by this library from a directly-usable react component to an object with a `default` property. Does it require a newer version of React now (should be reflected in dependencies?) or perhaps this is an interface change that warrants a major version number change? We're on React 0.13.x, if that matters<issue_closed> <issue_comment>username_1: Oh drat, with the change to ES2015, we switched the module style from commonjs. That was unnecessary and I just [switched it back](https://github.com/username_1/react-spinkit/commit/451ac333a607345eb2ebb32fca10cb3cd7d384a3) and released 1.1.10. Let me know if it works now. Thanks for reporting the issue! <issue_comment>username_2: Looks like 1.1.10 isn't built correctly. When i install it, i still see that `react-spinkit` exports ES6-like module with constructor in `default property`. ``` $ npm install web4@1.0.0 /Users/username_2/projects/tiq_backend/web └── react-spinkit@1.1.10 Object.defineProperty(exports, "__esModule", { exports.default = Spinner; ``` Note, that i've checked-out `v1.1.10` and it builds correctly: ``` $ git checkout v1.1.10 Previous HEAD position was eedffc4... 1.1.9 HEAD is now at 9c19e73... 1.1.10 |1|x86_64| username_2@antari:/Users/username_2/projects/react-spinkit $ npm run build (...) src/index.jsx -> dist/index.js |1|x86_64| username_2@antari:/Users/username_2/projects/react-spinkit $ grep exports dist/index.js module.exports = Spinner; ``` <issue_comment>username_1: Oh boo. Some of my projects auto build on publishing and others don't... this one didn't and I forgot to build it. Just published 1.1.11. Hopefully that fixes things. Thanks for the report.
<issue_start><issue_comment>Title: Bug/safelong coverage username_0: This PR introduces near-100% test coverage for SafeLong, and fixes several issues that were discovered while adding tests. At this point the behavior of SafeLong and BigInt should be identical for all values (except that SafeLong will be faster). <issue_comment>username_0: Review by @username_1, @tixxit. <issue_comment>username_0: Yeah, you raise a good point about using `copy`. I'll move the types to be private. <issue_comment>username_1: LGTM!
<issue_start><issue_comment>Title: Where is the documentation ? username_0: When i open http://app-framework-software.intel.com/ i´m redirected to this github, but here dont have the full documentation. <issue_comment>username_1: Looks like intel dropped the project?! <issue_comment>username_2: :0 i use a lot this framework <issue_comment>username_3: V2.1: https://web.archive.org/web/20141026093051/http://app-framework-software.intel.com/api.php https://web.archive.org/web/20141030075053/http://app-framework-software.intel.com/components.php https://web.archive.org/web/20141026073852/http://app-framework-software.intel.com/documentation.php#upgrade/upgrade V3: https://web.archive.org/web/20160512235834/http://app-framework-software.intel.com/api.php https://web.archive.org/web/20160511163349/http://app-framework-software.intel.com/documentation.php#intro/welcome
<issue_start><issue_comment>Title: compiling partials as templates username_0: Outside of node.js I have always set all my templates to be partials so I can use any template anywhere. I cannot figure out a way to do this with grunt pre-compilation. Is there a way to do this? <issue_comment>username_1: If anyone is still wondering, partials exist on the instance of Handlebars you are using, so Handebars.partials.templateName is available when you use {{ > templateName }} within another template. You can make them available by assigning them like so: Handlebars.partials.templateName = MyPrecompiledTemplate However you are defining your precompiled templates. In my case its this['JST']('myfilename).
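Spelling that out as a hedged sketch: the object holding the precompiled templates depends on how they were built (`Handlebars.templates` is the handlebars CLI default, while grunt-contrib-handlebars defaults to a `JST` namespace), so adjust the object being iterated accordingly.

```js
// Register every precompiled template as a partial as well, so any template
// can pull in any other one with {{> name}}.
Object.keys(Handlebars.templates).forEach(function (name) {
  Handlebars.registerPartial(name, Handlebars.templates[name]);
});
```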
<issue_start><issue_comment>Title: Update and rename README.rdoc to README.md username_0: Add syntax highlighting (: <issue_comment>username_1: Sorry, but I want to keep the README in rdoc format. If you have any content changes you would like to make to the README, please submit them as a separate pull request. <issue_comment>username_0: Ok, but I think a good project must have a nice documentation and syntax highlighting helps a lot in the readability. <issue_comment>username_1: The README is syntax hightlighed: http://sequel.username_1.net/rdoc/files/README_rdoc.html GitHub may not respect the syntax highlighting, but that's an issue with GitHub itself. <issue_comment>username_0: Ok, but many people are used to read the README direct on GitHub. The ideal would be having syntax highlighted docs in any source (in your site, GitHub etc). <issue_comment>username_1: Well, contact support@github.com and ask them to enable syntax highlighing for rdoc files. That way you benefit all projects using rdoc files. <issue_comment>username_2: I hope it's ok that I chime in, as someone who has adopted Jeremy's way of generating documentation in one of my own projects. @username_0 First of all, you'll notice that the documents linked on the website don't point to the equivalent documents on GitHub, but instead point to RDoc-generated versions on the time of the release. This ensures that they don't contain any unreleased changes. You generate the HTML-versions of these documents using the same way that you're generating them for method comments in Ruby files. You can choose either `rdoc` syntax, or `markdown` syntax. I chose `markdown` syntax, because I'm more comfortable with it, and this allows me to also use GitHub-flavoured fenced code blocks, which don't make any difference for RDoc itself, but makes it so GitHub adds syntax highlighting. The `rdoc` syntax doesn't support fenced code blocks. I *think* that this is all-or-nothing, either you can use `rdoc` syntax everywhere, or `markdown` syntax everywhere. In that case the only way that Jeremy could have GitHub correctly syntax highlight them is to completely switch everything from `rdoc` to `markdown` syntax, which would obviously be an unimaginable amount of work. Furthermore, `markdown` syntax has some downsides: it doesn't have the same syntax for generating tables as GitHub (it's arguably [more awkward](https://github.com/username_2/shrine/blob/f93cbfe7ed24e4ccb02256f216bc29052f302673/lib/shrine/plugins/determine_mime_type.rb#L15-L38)), code examples need 4-spaces ident (while `rdoc` only 2), and I think it also doesn't support explicit links to other classes. Lastly, Jeremy probably prefers the `rdoc` syntax. Just my explanation from personal experience using the same tools as Jeremy. Since many web apps support automatic detection of language and syntax highlighting it (StackOverflow, Disqus, ...), I think it makes a lot of sense to request the same from GitHub.
<issue_start><issue_comment>Title: Meshless Identity Map, Regularization for Active Cell models username_0: - Meshless Identity Map (takes nP instead of a mesh) - Tikhonov regularization if active cells are used (don't take derivs across interfaces between active cells and not) - testing improvements: test 1D, 2D, 3D on a random tensor mesh, also test that for a constant mref, phi_m(ref) = 0 (closes #78) <issue_comment>username_0: @username_1 : what do you think of this? <issue_comment>username_1: @username_0 : I think this looks very nice, I am for it being merged in. <issue_comment>username_0: @username_2 , @sgkang : good to go?? <issue_comment>username_2: Can we get rid of the _Meshless class and just make that default behaviour in the Identity? ``` eg = IdentityMap(5) ``` instead of _meshless? <issue_comment>username_2: @username_0 I have updated this functionality. Let me know what you think, after the tests are run we can merge? <issue_comment>username_0: Nice, I think this looks good. Is there any reason that we should only take the mesh or nP (i.e., not allow you to give both?). I can't think of a case off-hand where you would need / want both, but maybe there is no harm in specifying both? <issue_comment>username_2: I was just thinking of this. It actually might need some tweaking to error if you specify both, because we don't actually support this functionality. <issue_comment>username_0: would you want both in the case of an active cell model? <issue_comment>username_0: TODO: add a test to look at the gradient when there is topography for a half-space (should be zero!) <issue_comment>username_2: This will conflict with regularization in PR #214. @username_0 is this good to merge in? <issue_comment>username_0: I think it is good to go. (still need to give some thought to taking both or only one of nP, mesh. there may be cases where we do want to give both --> for dealing with topo? but this can be handled in an issue) <issue_comment>username_0: It looks like these changes got overwritten somewhere down the line... <issue_comment>username_2: It looks as if they got to dev? https://github.com/simpeg/simpeg/blame/dev/SimPEG/Maps.py#L15 <issue_comment>username_0: ah, I was confused about what should be in regularization.py
<issue_start><issue_comment>Title: transform Object.assign for IE11 Support username_0: Object.assign is not supported in IE11, so this component will not currently work on that browser ![ie11-object-assign](https://cloud.githubusercontent.com/assets/1483361/16419989/a3c14c52-3d1d-11e6-843e-5b8c36459347.PNG) <issue_comment>username_1: 👍 this would be great
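Until the published build transpiles or avoids `Object.assign`, one consumer-side workaround is to polyfill it before the component loads; the sketch below uses the `object-assign` package, which is only one of several options and is not something this project prescribes.

```js
if (typeof Object.assign !== 'function') {
  // object-assign implements the same merge semantics as the native method
  Object.assign = require('object-assign');
}
```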
<issue_start><issue_comment>Title: Check for 200 response code. username_0: Without this check, missing sourcemaps cause the package to continue as if the 404 message is the sourcemap, leading to crashes when responses like "Not Found" cannot be parsed as source maps. <issue_comment>username_0: @evanw bump <issue_comment>username_1: Is 200 the only valid status code? <issue_comment>username_0: @username_1 I guess so; it makes things more complicated if you need to do things like follow redirects. I initially thought to set the range to `>= 200 && < 300`, or perhaps just `not >=400`, but there's an awful lot of status codes in those ranges which probably don't make sense, e.g. `204 No Content` or undefined codes like `222`. Perhaps a whitelist would work: #### Accept Obviously Ok. * 200 OK * 201 Created * 202 Accepted #### Might need to implement redirect following afaik xhr requests that get these responses will have the pointed-to resource automatically followed: * 301 Moved Permanently * 302 Found * 303 See other * 304 Not modified #### Probably just redirect as above, but never really used these Should just check spec. * 305 Use proxy * 306 Switch proxy * 307 Temporary redirect * 308 Permanent redirect #### Not sure how to handle Probably reject? * 203 Non-authoritative information * 204 No content * 205 Reset content * 206 Partial content * 207 Multi-status #### Definitely Reject * >= 400 <issue_comment>username_1: I guess only allowing 200 is reasonable for now, we'll change it later if necessary :) <issue_comment>username_2: :+1:
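The check that was settled on amounts to something like the sketch below; `request`, `parseSourceMap` and `callback` are hypothetical names standing in for whatever the package actually uses, not code from the PR.

```js
request.onload = function () {
  if (request.status === 200) {
    parseSourceMap(request.responseText);
  } else {
    callback(new Error('HTTP ' + request.status + ' while fetching source map'));
  }
};
```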
<issue_start><issue_comment>Title: [WIP] Pre pull images before running tests username_0: @username_2 <issue_comment>username_0: For openshift I have only searched in origin/test/ dir and could find only 2 unique images: gcr.io/google_containers/pause, openshift/origin Also if you have any suggestions to improve the image search method in this PR, let me know. <issue_comment>username_1: I just saw a ci failure due to gcr.io temporarily not being resolveable: https://ci.openshift.redhat.com/jenkins/job/test_pr_origin_extended/271/ In the case of the running select extended tests (in this case networking), the proposed approach of downloading all images would add unnecessary overhead where only a subset of images are required. A better solution would be a pull-through caching registry, but upstream docker appears to be dragging their feet on allowing the registry to cache from private registries like gcr.io: https://github.com/docker/distribution/issues/1431 <issue_comment>username_2: Our integrated registry supports pull-through, and I think it supports auth. Given that most if not all of the e2e images are fairly static tags (i.e. not "latest"), we could consider pre-pulling and baking the images into the AMI. <issue_comment>username_3: bump. this is still flaking regularly on us. <issue_comment>username_2: @username_4 @username_3 @username_7 @username_5 given what @username_1 wrote above, do we want to bake the e2e images into the amis? <issue_comment>username_0: @username_2 static tags will also change over time and given the number of images, wouldn't it require updating AMIs frequently? <issue_comment>username_2: @username_0 we update and rebuild the AMIs quite frequently <issue_comment>username_2: We alternatively could pull the e2e images, retag them to 1 of our registries running in AWS, and then add code in the various e2e scripts to pull from our registry and then retag them back to gcr.io (this saves us from having to bake them in to the AMIs and it saves us from having to talk to gcr.io for each test) <issue_comment>username_0: Also thinking if baking so many (third party) images into AMIs would be good from security point of view. <issue_comment>username_2: ? <issue_comment>username_0: Just thinking out loud, If there is any security issue with any of the images, wouldn't keeping track and updating AMIs be an issue? <issue_comment>username_2: These AMIs are strictly for running tests and as devenvs, and we rebuild them so often, the old ones get deleted soon enough. <issue_comment>username_4: I don't see a problem with pre-pulling a few common images. <issue_comment>username_2: The list has ~37 in it <issue_comment>username_5: @username_2 As long as individual tests associated with a particular pull aren't going to be dependent on particular versions of the images, it should be fine to cache them. <issue_comment>username_2: @username_5 yeah these are pretty much all upstream tests in Kube. You can see the list of images in this PR <issue_comment>username_6: Will we be able to use those images in extended tests? <issue_comment>username_7: Poor Rosie is working overtime labeling this as needs-rebase. @username_0 I'll close this one out for now, if you revisit this effort we can open it back up.
<issue_start><issue_comment>Title: Add Codecov Yaml username_0: Ignore `EZSwiftExtensionsTests` to calculate code coverage based on the `Source` files only. https://github.com/codecov/support/wiki/Codecov-Yaml https://codecov.io/gh/username_2/EZSwiftExtensions <img width="1130" alt="screenshot" src="https://cloud.githubusercontent.com/assets/2032500/19624897/a2105716-993b-11e6-986f-4f98ee8e24da.png"> <issue_comment>username_1: @username_0 nice catch! 😉 <issue_comment>username_2: Oh man, much worse than we thought :( <issue_comment>username_1: @username_2 that's for inspiration! We can do it 💪💪💪
<issue_start><issue_comment>Title: Ignore whitespace text nodes when parsing projects username_0: This just ignores purely whitespace text nodes which seem to be part of the parsing in .NET Core and newer versions of System.Xml Closes #270 Fixes test failures in #1004 when this change is ported to master. <issue_comment>username_0: @rainersigwald figured out the problem. The XML parser only looks at the first 4k of text to see if it's whitespace or not. Since our whitespace is 70k, it gives up and assumes it's text. https://github.com/dotnet/corefx/blob/bffef76f6af208e2042a2f27bc081ee908bb390b/src/System.Private.Xml/src/System/Xml/Core/XmlTextReaderImpl.cs#L5571
<issue_start><issue_comment>Title: Could not resolve dependencies username_0: Trying to install ```ghci-ng``` I get this error.
```
~/githubs $ ls ghci-ng/
LICENSE README.md Setup.hs ghc ghci-ng.cabal rts scripts
~/githubs $ cabal install ghci-ng/
Resolving dependencies...
cabal: Could not resolve dependencies:
next goal: ghci-ng (user goal)
rejecting: ghci-ng-7.6.3.5, 7.6.3.4, 7.6.3.3, 7.6.3.2, 7.6.3.1, 7.4.2.1 (global constraint requires ==0.0.0)
trying: ghci-ng-0.0.0
next goal: ghc (dependency of ghci-ng-0.0.0)
rejecting: ghc-7.6.3/installed-0d1... (conflict: ghci-ng => ghc>=7.8)
Dependency tree exhaustively searched.
```
<issue_comment>username_0: My bad, I had an old ```ghc```.<issue_closed>
<issue_start><issue_comment>Title: Add introspection service username_0: Requests for introspection and rules No Models yet Note1: Tests require recent (latest) fog-core > 1.36.0 (no gem yet) [1] Note2: Ironic-inspector rules creation is not easy (broken?) [2] [1] https://github.com/fog/fog-core/commit/7c26f346c19b3761ac0a0fd398649aa5a831f185 [2] https://bugs.launchpad.net/ironic-inspector/+bug/1564238 <issue_comment>username_1: @username_0 looks good in overall, please fix the tests, hound issues and the gemfile nit. Also could you add some doc under https://github.com/fog/fog-openstack/tree/master/lib/fog/openstack/docs <issue_comment>username_2: This is really promising. Thanks! Could we use `Fog::Introspection::OpenStack` rather than `Fog::Introspection[:openstack]` ? <issue_comment>username_2: Let us know when you're ready for merge @username_0 - 1,000+ lines of code will take us a while to review. <issue_comment>username_0: @username_2, thanks, I'm ready for merge. Yes it's a chunk meanwhile this code is mostly green because it adds introspection files and very little of existing code is impacted. <issue_comment>username_1: @username_0 looks good to me, follows the patterns we have here ,so :+1: Nice docs btw. :-) <issue_comment>username_2: Nice one @username_0 👍
<issue_start><issue_comment>Title: Docs: describe all RuleTester options (fixes #4810) username_0: **What issue does this pull request address?** #4810 **What changes did you make? (Give an overview)** Added descriptions for `parser`, `settings`, and `globals` to test descriptions. Also added mention of using `line` and `column` for invalid tests. **Is there anything you'd like reviewers to focus on?** Nothing in particular. <issue_comment>username_1: Per #6709, perhaps you might want to add a reference to `filename` as well? That way you could kill two issues with one PR. <issue_comment>username_0: Yeah, good call. <issue_comment>username_2: LGTM, but waiting another day for others to look <issue_comment>username_3: LGTM, waiting for others to review.
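To make the documented options concrete, here is a small, hypothetical `RuleTester` example; the `no-foo` rule is invented purely to exercise `settings`, `globals`, `filename`, `line`, and `column`, and is not part of ESLint or of this PR:

```typescript
const { RuleTester } = require('eslint');

// Toy rule used only for the demonstration: flag identifiers named "foo".
const noFooRule = {
  create(context) {
    return {
      Identifier(node) {
        if (node.name === 'foo') {
          context.report({ node, message: 'Do not name variables "foo".' });
        }
      }
    };
  }
};

const ruleTester = new RuleTester();

ruleTester.run('no-foo', noFooRule, {
  valid: [
    {
      code: 'var bar = 1;',
      filename: 'lib/bar.js',                   // per-test filename
      settings: { myPlugin: { strict: true } }, // shared settings visible to the rule
      globals: { window: false }                // read-only global for this test
    }
  ],
  invalid: [
    {
      code: 'var foo = 1;',
      // line and column pin down exactly where the report is expected
      errors: [{ message: 'Do not name variables "foo".', line: 1, column: 5 }]
    }
  ]
});
```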
<issue_start><issue_comment>Title: Can you please explain how to you the Affix Deirective please username_0: Can you please explain how to you the Affix Deirective please <issue_comment>username_1: Sorry I have been very busy lately finishing several projects for customer, so I haven't had the time to fully test this lib with latest versions of Angular2 and create proper examples for each part. That said, this is how I am currently using on one of my projects: ``` <div [scrollSpyAffix]="{topMargin: 70, bottomMargin: 70}"></div> ``` also, make sure you import ```ScrollSpyModule.forRoot()``` in your main module and declare ```ScrollSpyAffixDirective``` in the module you want to use it in. <issue_comment>username_0: Hi thanks for you quick response i implemented it i placed the ScrollSpyModule.forRoot() in the imports of my app.module.ts and i also placed the ScrollSpyAffixDirective in the app.module.ts in the declarations and **import {ScrollSpyAffixDirective} from "ng2-scrollspy/src/plugin/affix.directive";** then i place **[scrollSpyAffix]="{topMargin: 245, bottomMargin: 70}"** on one of my divs in my app.component.html but i cannot get it to work. <issue_comment>username_2: There has to be scrollSpy directive used in some parent element and then it will start to work. But it does not work same as bootstrap affix. This directive only set affixTop or affixBottom - single class affix missing. <issue_comment>username_1: Sorry been busy but I am in the middle of finished this lib now. I hope to finish it during the weekend and include documentation for each component. <issue_comment>username_1: @username_2 Also with next update I will include class **affix** that will be active if any off **affixTop** or **affixBottom** are active. Hope this solves your use case. <issue_comment>username_2: I think it would be great if does it work like bootstrap affix : http://getbootstrap.com/javascript/#affix - there is also affix-top, affix-bottom and affix. <issue_comment>username_1: Please try new version [v0.3.0](https://github.com/username_1/ng2-scrollspy/releases/tag/v0.3.0). Make sure you read docs for new declaration system. I will try to create more documentation and example during next week. <issue_comment>username_3: @username_1 thanks for your comment! I can't find the documentation you mentioned. I tried to use it this way: ``` <div scrollSpy class="col-md-4 menu blog-1"> <aside [scrollSpyAffix]="{topMargin: '70px'}"> (...) </aside> </div> ``` I expected to have a class added / removed according to the scroll. Is it the expected behavior? Thanks very much for your help! <issue_comment>username_1: Yes that should work. <issue_comment>username_3: @username_1 following your message, I investigated a bit more the problem. In fact, we need to specify a value as a number: ``` <div scrollSpy class="col-md-4 menu blog-1"> <aside [scrollSpyAffix]="{topMargin: 70}"> (...) </aside> </div> ``` Thanks for the great tool! <issue_comment>username_4: Can any one give compile example how to make it work ? i Import `import {ScrollSpyAffixDirective} from "ng2-scrollspy/dist/plugin/affix.directive";` than in `imports: ScrollSpyAffixDirective` and in template: `[scrollSpyAffix]="{topMargin: 70}"` and i get error: ScrollSpyAffixDirective is not an NgModule <issue_comment>username_3: In fact, it's not related to ng2-scrollspy but to Angular2. You can only specify modules into the `imports` property of a module. Regarding the `ScrollSpyAffixDirective` directive, you need to set it in the `declarations` one: ``` (...) 
import { ScrollSpyModule } from 'ng2-scrollspy'; import { ScrollSpyAffixDirective } from 'ng2-scrollspy/dist/plugin/affix.directive'; (...) @NgModule({ declarations: [ (...) ScrollSpyAffixDirective ], imports: [ (...) ScrollSpyModule.forRoot() ] }) export class SomeModule { } ``` <issue_comment>username_4: yes ive done this but on first time I enter page it works but when i get back i get error: `ScrollSpy: duplicate id "window". Instance will be skipped!` <issue_comment>username_1: @username_4 That is just a warning message. Basically you are using ```<div scrollSpy>``` in more then one place. Because there is no point in having more then one listener in window, it skips all but the first. <issue_comment>username_4: Well fine but scroll is not working after. I enter page X were there is scroll. I go page Y there is no scroll there all angular 2 routing component. I get back to page X scroll not working and i have this message. <issue_comment>username_1: 1. ```ScrollSpyModule.forRoot()``` must be imported in main module. 2. ```scrollSpy``` directive should be placed in main component template. 3. import ```ScrollSpyAffixModule``` in the modules you want to use ```scrollSpyAffix``` <issue_comment>username_4: app.module `import {ScrollSpyAffixDirective} from "ng2-scrollspy/dist/plugin/affix.directive"; import {ScrollSpyAffixModule} from "ng2-scrollspy/dist/plugin/affix"; declarations[..... ScrollSpyAffixDirective]...imports[....ScrollSpyModule.forRoot(),] ` component `import { ScrollSpyModule, ScrollSpyService } from 'ng2-scrollspy';` template ``` <div scrollSpy> <div [scrollSpyAffix]="{topMargin: 10}">aaaaa</div> </div> ``` I enter page 1 time ok, i refresh page ok. I go via click on different path in ap and get back same error. What I`am missing ? <issue_comment>username_4: Ok it will work when i give scrollSpy on <router-outlet ></router-outlet > dont know if this is intended but it works. <issue_comment>username_5: I have the same problem! imported `ScrollSpyModule.forRoot()` in app.module imported `ScrollSpyAffixModule` in component module ``` <div scrollSpy> <tool-bar [scrollSpyAffix]="{topMargin: 100}" [baseFontSize]="16" (fontSize)="fontSize = $event"></tool-bar> </div> ``` no effect <issue_comment>username_6: @username_1 looking back through this thread, the most troubling bit is this: "and declare ScrollSpyAffixDirective in the module you want to use it in" This will not work if you want to use this directive in more than one module of your application, which many of us will. A given directive can only be declared in one module. It seems like the module system is not set up correctly for scroll spy. There should be a module which I can import to every module where I want to use this directive and others, only that module should declare the directives. <issue_comment>username_6: @username_1 Version 0.3.8, appears to be current on NPM, does not export a ScrollSpyAffixModule. <issue_comment>username_1: It does from ```ng2-scrollspy/plugin/affix``` I am keeping core separate from plugins. <issue_comment>username_6: @username_1 Thank you, I was able to import it from 'ng2-scrollspy/dist/plugin/affix' <issue_comment>username_7: I have the following template ``` <div scrollSpy scrollSpyElement="test" style="max-height: 100px; overflow: auto;"> <div style="height: 500px;"> <br/> <br/> <br/> <br/> <br/> <p>bbb</p> <div [scrollSpyAffix]="{topMargin: 10, bottomMargin: 10}">aaaaa</div> </div> </div>` ``` scrollSpy is working well, but scrollSpyAffix isn't. 
When scrolling, I always have ``` <div _ngcontent-cyv-7="" ng-reflect-options="[object Object]" class="**affix affix-top**">aaaaa</div> ``` So my div is alway fixed. Why is it? Thanks. <issue_comment>username_8: With the help of @username_1 I've created an example implementation https://ngx-scrollspy.now.sh The source can be found here https://github.com/username_8/ngx-scrollspy-angular-cli-demo <issue_comment>username_9: Has anyone successfully gotten the affix directive working using the latest version of the package from npm - ngx-scrollspy, ver 1.2. I see lots of comments about this, but most references are old and relating to older packages. It appears the package structure has changed and the dist folder is no longer within the package. Is this a viable package anymore (for Angular 4) or should we abandon trying to get this to work? <issue_comment>username_10: @username_9 July this year isn't that old. Try this. ```diff diff --git a/src/app/pages/pages.module.ts b/src/app/pages/pages.module.ts index 400f26c..585e0bd 100644 --- a/src/app/pages/pages.module.ts +++ b/src/app/pages/pages.module.ts @@ -1,8 +1,8 @@ import { NgModule } from '@angular/core'; import { CommonModule } from '@angular/common'; -import { ScrollSpyIndexModule } from 'ngx-scrollspy/dist/plugin' -import { ScrollSpyAffixModule } from 'ngx-scrollspy/dist/plugin/affix' +import { ScrollSpyIndexModule } from 'ngx-scrollspy' +import { ScrollSpyAffixModule } from 'ngx-scrollspy' import { PagesRoutingModule } from './pages-routing.module'; ``` <issue_comment>username_10: @username_9 July this year isn't that old. Try this. ```diff diff --git a/src/app/pages/pages.module.ts b/src/app/pages/pages.module.ts index 400f26c..585e0bd 100644 --- a/src/app/pages/pages.module.ts +++ b/src/app/pages/pages.module.ts @@ -1,8 +1,8 @@ import { NgModule } from '@angular/core'; import { CommonModule } from '@angular/common'; -import { ScrollSpyIndexModule } from 'ngx-scrollspy/dist/plugin' -import { ScrollSpyAffixModule } from 'ngx-scrollspy/dist/plugin/affix' +import { ScrollSpyIndexModule } from 'ngx-scrollspy/plugin' +import { ScrollSpyAffixModule } from 'ngx-scrollspy/plugin/affix' import { PagesRoutingModule } from './pages-routing.module'; ```
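Pulling the scattered snippets in this thread together, a minimal setup that matches what commenters reported working might look like the following; the import paths are the later `ngx-scrollspy` ones mentioned above, so treat them as an assumption and adjust to the version you have installed:

```typescript
import { NgModule, Component } from '@angular/core';
import { CommonModule } from '@angular/common';
import { ScrollSpyModule } from 'ngx-scrollspy';
import { ScrollSpyAffixModule } from 'ngx-scrollspy/plugin/affix';

@Component({
  selector: 'affix-demo',
  // scrollSpy goes on the scrolling container; scrollSpyAffix toggles the
  // affix / affix-top / affix-bottom classes. Margins must be numbers, not strings.
  template: `
    <div scrollSpy>
      <aside [scrollSpyAffix]="{topMargin: 70, bottomMargin: 70}">sticky content</aside>
    </div>
  `
})
export class AffixDemoComponent {}

@NgModule({
  declarations: [AffixDemoComponent],
  imports: [
    CommonModule,
    ScrollSpyModule.forRoot(), // call forRoot() once, in the application's main module
    ScrollSpyAffixModule       // import in every feature module that uses [scrollSpyAffix]
  ]
})
export class AffixDemoModule {}
```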
<issue_start><issue_comment>Title: Problem with gulp.watch, mocha and promisify username_0: When I use gulp.watch to run tests (mocha) on each change of the files, `nodegit` initializes without error only the first time. I've created a repo to reproduce this problem (see `README` for the error stack). Please check it if you have time: https://github.com/username_0/promisify-issue <issue_comment>username_1: I ran into this issue earlier today, and found [`gulp-spawn-mocha`](https://github.com/knpwrs/gulp-spawn-mocha) to be a quick fix.
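For reference, the workaround username_1 mentions usually amounts to a gulpfile along these lines (a gulp 3-style sketch with placeholder globs, not code from the linked repo); spawning mocha in a child process gives native addons like `nodegit` a fresh process on every watch-triggered run:

```typescript
const gulp = require('gulp');
const mocha = require('gulp-spawn-mocha'); // runs mocha in a separate process

gulp.task('test', () =>
  gulp.src(['test/**/*.js'], { read: false })
    .pipe(mocha())
);

gulp.task('watch', () => {
  gulp.watch(['lib/**/*.js', 'test/**/*.js'], ['test']);
});
```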
<issue_start><issue_comment>Title: MapR Loopback NFS Does not work username_0: Due to an issue with shared memory, the NFS Loopback doesn't work at this time. Currently working with MapR. Alternatives: If using M3 (community) MapR, start NFS on a node, and have all individual nodes mount NFS there. <issue_comment>username_0: This may be fixed in #35, it needs to be tested.
<issue_start><issue_comment>Title: Cache not added to history if unsaved and logged from popup username_0: Been running the latest nightly builds for some time now and there's one annoying bug with the ones I've used this week. The history doesn't contain all the offline logs, resulting in missing found logs when exporting the field notes. I noticed this the first time July 16th and it's the same today. All caches are saved and updated on the phone prior to caching. The offline logs are saved and I'm able to open the logs from the individual caches, but they simply don't appear in the history. Today 2 of the caches are not shown, last time it was 3 or 4. Currently running 2014-07-18-NB1-0822830 and can confirm that the bug is still there. <issue_comment>username_1: I know that there are many people that suffer from this bug and as a consequence don't use the History at all. Please change the priority of this bug if possible. <issue_comment>username_2: Raising priority as we get quite some complaints where offline logs have been lost due to this reason! <issue_comment>username_2: It might even get worse: A chat with a user made me aware that caches logged offline under that condition might stay forever in the database (making it grow) without being visible to the user in any list (not saved=not in lists, not in history). Only by explicitly loading them (e.g. by GC code) would the user bring those caches back into the normal cycle. <issue_comment>username_2: Complaints keep coming in on support mail. So either users have started using this usecase or another manifestation of this issue has been introduced recently. <issue_comment>username_2: Here is a (German, translated here) description of that problem from FB. Maybe the hint that the logged date is missing is the clue for finding the root cause why it is not listed in the history: [...] normally, when you have logged a cache and open its listing, the status shows "Saved log" plus the DATE, e.g. "Saved log 11.08.2015" ("Gespeicherter Log 11.08.2015"). If you log the cache without saving it first, it only shows "Saved log" without a date, which seems to be why it is then also not listed in the history. <issue_comment>username_1: I have also found that a cache is not added to the History list when long-pressing on its name in a list and selecting Log visit. It does not add the Logged date in that case either. Cache that was logged via Cache details - with Logged date: ![image](https://cloud.githubusercontent.com/assets/1811963/9294236/27a7f48c-4448-11e5-9876-3d459bbb0bd9.png) Cache that was logged via popup menu in List view - missing Logged date: ![image](https://cloud.githubusercontent.com/assets/1811963/9294240/5f82035c-4448-11e5-9390-8b29b4e2f9ca.png) Both marked as found in List: ![image](https://cloud.githubusercontent.com/assets/1811963/9294234/f826780a-4447-11e5-8039-c5e96a34b24d.png) Only the cache that was logged via Cache details is listed in History: ![image](https://cloud.githubusercontent.com/assets/1811963/9294247/a5b9e376-4448-11e5-8f68-078881e07913.png) I hope that this helps identifying the bug and squashing it once and for all. <issue_comment>username_0: 2015-08-17-NB-c65e413 This happened again. 1 event, 1 traditional and 1 unknown. The unknown is not visible in the history, but it is saved and if I open the cache it clearly states that there's a saved log. <issue_comment>username_1: First of all, sorry for nagging...
But can someone of the developers, @username_3 @username_7 @username_2 @username_6 @kumy, tell me why this bug has not been fixed yet? It is one of five issues at the moment with both the labels High and Bug so I think it is severe enough to be prioritized. Don't you think? I think that the offline logging feature is really great in c:geo and I find it sad to hear about friends that avoids using it because they have been "burned" after loosing Found logs because of this particular bug. <issue_comment>username_3: Sure, I can explain why it is not fixed yet: nobody has submitted a patch for it. And nobody has offered a bounty either to encourage a new developer to jump in the boat and fix it. This is as simple as that. <issue_comment>username_2: As @username_3 described. The `high` flag only gives some indication of which bugs might be the first ones to tackle if someone is interested in helping with bugfixing and cannot give a guarantee of a certain fixing time. But it also shows that we understood the priority of this bug. <issue_comment>username_2: Many many complaints on support regarding this issue. It would be really helpful if this can be fixed. Meanwhile I can reproduce it quite good: If a cache is not details (i.e. only loaded via live map) and the user opens the popup on the map and selects Menu - Offline log - Found it wont be added to the History list as it is not stored in our DB This seems to be a very common online usecase (just work with live map, no stored caches and just save the found info for later on). The "easiest" way here would be IMHO to store it at least to our "cached caches list" when the user triggers some logging actions. <issue_comment>username_3: Since we only save a cache when we have information about it (*e.g.*, its description), we would not be able to save such an incomplete cache if there is no network for example, thus preventing logging completely. I have not worked with the history before (and I don't use it much myself), but maybe the unsaved cache could be put in history anyway. <issue_comment>username_2: The main issue for me here is, that at some point of time it was working (so it is a regression). Although this issue is quite olf the problem as it is today is newer (the orginal case was solved, then reoccured unreproducible, then a new scenario appeared). <issue_comment>username_4: I also encountered this bug, which is fully reproducable to me. I really would appreciate, having it resolved. If I can assist in testing, please, let me know. <issue_comment>username_2: This is really one of the long runners which I ofter get confronted with on support mails. Can anyone try to tackle this? Here is a recent support mail (extract): I haven’t heard much about this issue since opening my ticket back on 4/9/2015. I was burned by using the store logs offline feature at that time and really haven’t used it much since. This past weekend I did a power trail in Oklahoma and was thinking the feature was fixed. Haha… When I got home, I used the filter option in Cgeo to pull up all caches with offline notes. 86 caches came up. Hum, I know that is not correct for I just knew we found more. Here is what I found and hopefully this will help solve the issue. [...] When I pulled up the filter option to find caches with offline logs. The filter pulled up only those caches that I had loaded to my phone before the trip. They were offline caches, even though I was using the live map feature in Cgeo to cache by. [...] 
Those caches I found and put an offline log on, that were not loaded previously - in other words, that were only on my live map view in real time - were not found by the filter. HOWEVER, if I used the search function in Cgeo, either by name or GC code, they were found and my notes were still there. Just had to find each one manually, which was a huge pain. So again, I was burned by the offline log feature. If the programmers want Cgeo to continue to be used by geocachers, this year-old issue really needs to be resolved. <issue_comment>username_3: One obvious solution would be to disable logging and navigation from the popup. But that would probably not be the most friendly way. <issue_comment>username_0: It's been more than 2 years since I started this thread. Any progress here? Version 2016.08.27, same issue still. Downloaded two caches (one traditional, one mystery) to a list to be certain I had all data. Saved the logs as offline logs, and none of them are visible from found caches. If I go to the live map they are shown with a red smiley. <issue_comment>username_2: That is strange compared to the latest clues we had on this issue: IMHO only unsaved caches should be affected, and only if they are logged from the popup window on the live map. In your case none of these conditions apply. Are you sure...I am doing the same thing all day when caching and don't miss a cache in the history. <issue_comment>username_0: I had to read my old posts regarding this and as far as I can tell, I've had issues with caches saved back in 2014. This time, I know for a fact that the traditional was saved from the live map, and the mystery was saved by searching for the GC and saving it. I know I had all the data as it was shown on the correct position on the map as I was close enough to use the live map while driving towards it. <issue_comment>username_5: Interesting bug. Just did some short investigations. The History list searches for geocaches in table `cg_caches` with `visiteddate > 0` and `detailed = 1` (optionally filtering by cache type). If an offline log is saved without saving the cache first, only an entry in table `cg_logs_offline` is added, so the History list can't find it. If you save the cache and the offline log from the Map popup, the `visiteddate` is not set correctly. Don't know yet, why. I would like to continue on this one. <issue_comment>username_2: That would be great. It seems detailed=1 is the first problem, as unsaved caches with offline logs might be missing from the history then.
<issue_comment>username_2: Just some more thoughts: It might be a straightforward plan to trigger saving the cache as soon as an offline log is added. That will avoid any necessary case distinctions. AFAIR we do the same if personal notes are added on an unsaved cache <issue_comment>username_5: @username_2 I was thinking about that, too. <issue_comment>username_6: There is unfortunately a fundamental difference between the personal note and the popup case - in the personal note case we have the full details (even already in the database if I am not mistaken) and we just need to 'store' it (assign the standard list). In the case of the popup this might not be true, and AFAIK our code is tailored to expect only caches with full details in the database. So for the popup we have two cases IMO: 1. cache is saved/has full details: If necessary store it to the standard list and 'just' fix the ```visiteddate``` handling 2. cache is not fully detailed: I guess here we have no chance other than to revisit our data flow to enable all parts to correctly handle partially available caches... <issue_comment>username_0: Glad to see that you are getting closer. I just wanted to ask if any of you have experienced this after importing a GPX? I haven't had time to test that myself. (just in case there's any difference between importing or saving from the map or searching) <issue_comment>username_5: I made a small test with saving the cache, if not saved yet, before writing the log. I had to use the `Geocache.store()` method, which also downloads the details if necessary. `DataStore.saveCache()`, which is used in `CacheDetailActivity.ensureSaved()`, wasn't sufficient. Open issue: when the phone has no internet connection (e.g. airplane mode), we can't download the missing details and are not able to store the cache. But we still write the offline log. In this case it's still missing in the History list. Another strategy could be that the History list also looks in the offline logs table and merges them into the list (if missing). But then we would miss some data which is present in each list entry. @username_0 I use GPXs (Pocket Queries) most of the time and can store offline logs, which show up in History. <issue_comment>username_6: @username_5 Downloading the details in case of offline logging is IMO not an option. It just takes too much time when it works and has a high chance of not working at all. Of course this is true whenever you download a cache 'in the field', but then it is an explicit action/choice made by the user, not something that just happens. And writing an offline log for a cache which is just in memory doesn't sound like an awfully good idea either... <issue_comment>username_5: @username_6 yes, it needs some time. Maybe too long to be acceptable. I've done it async at the moment, but this has other drawbacks. What else can we do? Forbid offline logs if the cache has not been saved before? Not user friendly either. Redesign the offline logs table to also contain some basic information about the cache (e.g. title, type, ...) to be able to show it in the list activity. Other ideas? <issue_comment>username_5: Why do we only save caches where `detailed = true`? We have this flag in the DB and can react on it. I've made some tests and allowed saving the cache without detailed information.
Then the save is fast (doesn't need to download anything) and it shows up in the History. The CacheDetails can still be opened and details loaded afterwards with `refresh`. Still sometimes the `visiteddate` is not set correctly. I can't find out why, yet. I suspect some race conditions. I'll continue with tests. <issue_comment>username_6: The history behind this is, that initially all caches were saved in the database always, even the live-map caches. This led to performance issues, and I assume one of the ways to avoid accidental save of a cache to the database was, to explicitly require ```detailed = true```. It is good to hear that it does not lead to immediate problems, if this restriction is lifted, but we should be careful, that this does not lead to flooding the database with partially downloaded caches. Perhaps we can require 'personal information' (like an offline log) in addition?<issue_closed>
<issue_start><issue_comment>Title: Is it possible to run Resque with different version of Ruby than the Rails app requires? username_0: We have a legacy Rails app that currently requires Ruby 1.8.7 (we use 1.8.7 REE in production) and Rails 2.3.14 and is using delayed_job as its background processor. We want to upgrade the background processing part to use Resque instead of delayed_job but noticed the current requirement is that minimum version of Ruby is 1.9.3. Is it possible for us to install 1.9.3 for Resque's purpose but when our application is run by Resque, it could still be using 1.8.7 as it does now? Or is there a way to allow Resque to run with 1.8.7? Even an older version? Thoughts? The reason we want to do this is because it is a much larger effort (and would required a bigger approval process) to start upgrading the rest of the application to use a new Ruby version. <issue_comment>username_1: Since the activity on this project is a bit low lately, I'll throw in my 2 cents. This could be possible depending on two things: 1. if your jobs and their dependencies can independently run under a newer ruby version. 2. If Resque enqueue works under 1.8.7 (you likely still need to queue from your core application) You would have to create a new Ruby 1.9.3+ project that loads only resque, your jobs and their dependencies. You could then run your resque workers from this new project. Then the critical test is to see if the client side `enqueue` works without issue from your existing Ruby 1.8.7 app. So the bottom line is that it really depends on what your jobs are doing and if they are compatible with the newer ruby version and if you can get `enqueue` to work with your existing app.<issue_closed> <issue_comment>username_2: Thanks @username_1 ! That's accurate. Also, since Resque 2 is gone, Resque 1 still supports 1.8.7.
<issue_start><issue_comment>Title: Special characters in filenames username_0: There is a similar open issue here but it was announced that it's fixed. My issue is when looking at a commit, name for file "sündmused.php" isn't displayed at all. I mean it takes a while to work out from the changes what file it was that changed. I tried both the last release and latest code from master. <issue_comment>username_0: Basically it's an issue with the git show command called in src/Git/Respository.php function getCommit. The output on my server is a mess. The reason no filename at all is shown is that the filename in the output has quotations around it, where ASCII file names do not. There is a regex there parsing the output that doesn't account for this. When I fix that I see the name but it's still a mess, with "ü" being replaced with "\303\274". After some digging I found git allows to disable that odd feature with `git config --global core.quotepath off`. It was an effort to make this execute under www-data but worth it. After that everything works fine.
<issue_start><issue_comment>Title: Segmentation Fault in 0.17.1 codegen username_0: System: Arch Linux Architecture: x86_64 LLVM: 3.7.1-1 from repositories LDC: 0.17.1-1 from repositories Compiling the attached code with "ldc *.d dast/*.d translate/*.d" causes a segmentation fault. The code fails to link as is with DMD, but the full project (from which this was taken via DustMite) compiles. Here's the stacktrace: #0 0x1b0cc40 llvm::sys::PrintStackTrace(llvm::raw_ostream&) (/usr/bin/ldc+0x1b0cc40) #1 0x1b0ba21 (/usr/bin/ldc+0x1b0ba21) #2 0x7f6658ed1e80 __restore_rt (/usr/lib/libpthread.so.0+0x10e80) #3 0x79fb42 DtoCallFunction(Loc&, Type*, DValue*, Array<Expression*>*, llvm::Value*) (/usr/bin/ldc+0x79fb42) #4 0x838439 (/usr/bin/ldc+0x838439) #5 0x836b50 toElemDtor(Expression*) (/usr/bin/ldc+0x836b50) #6 0x841e6f (/usr/bin/ldc+0x841e6f) #7 0x841f14 (/usr/bin/ldc+0x841f14) #8 0x847064 Statement_toIR(Statement*, IRState*) (/usr/bin/ldc+0x847064) #9 0x7c214c DtoDefineFunction(FuncDeclaration*) (/usr/bin/ldc+0x7c214c) #10 0x7a8c43 (/usr/bin/ldc+0x7a8c43) #11 0x7a99ce Declaration_codegen(Dsymbol*) (/usr/bin/ldc+0x7a99ce) #12 0x795608 DtoDeclarationExp(Dsymbol*) (/usr/bin/ldc+0x795608) #13 0x83867a (/usr/bin/ldc+0x83867a) #14 0x836b50 toElemDtor(Expression*) (/usr/bin/ldc+0x836b50) #15 0x841e6f (/usr/bin/ldc+0x841e6f) #16 0x841f14 (/usr/bin/ldc+0x841f14) #17 0x841f14 (/usr/bin/ldc+0x841f14) #18 0x847064 Statement_toIR(Statement*, IRState*) (/usr/bin/ldc+0x847064) #19 0x7c214c DtoDefineFunction(FuncDeclaration*) (/usr/bin/ldc+0x7c214c) #20 0x7a99ce Declaration_codegen(Dsymbol*) (/usr/bin/ldc+0x7a99ce) #21 0x7b5b5d codegenModule(IRState*, Module*, bool) (/usr/bin/ldc+0x7b5b5d) #22 0x6861f0 ldc::CodeGenerator::emit(Module*) (/usr/bin/ldc+0x6861f0) #23 0x63cc0b main (/usr/bin/ldc+0x63cc0b) #24 0x7f6657c1c710 __libc_start_main (/usr/lib/libc.so.6+0x20710) #25 0x67fd09 _start (/usr/bin/ldc+0x67fd09) [testcase.tar.gz](https://github.com/ldc-developers/ldc/files/201623/testcase.tar.gz) <issue_comment>username_1: Did you test compilation with 1.0.0? (or 1.1.0-alpha)
<issue_start><issue_comment>Title: [Bug] Nth order promotion username_0: The promotion is applied to cart, but after completing the checkout the promotion is removed. Looks like the cart is recalculated after summary page (which shouldn't happen). ``` Given I have promotion on first order When I complete the checkout Then the promotion is not applied ```<issue_closed>
<issue_start><issue_comment>Title: Remove extraneous dev dependencies username_0: 'karma-sinon-chai' supersedes both 'karma-chai' and 'karma-sinon' <issue_comment>username_1: does this work? sinon-chai was failing on me...i guess it does since the tests are green <issue_comment>username_0: At first I implemented it too in `R-B` https://github.com/react-bootstrap/react-bootstrap/pull/751, and then I removed it https://github.com/react-bootstrap/react-bootstrap/pull/771 :smile: A quote from https://github.com/react-bootstrap/react-bootstrap/pull/771: With babel we can do it like this now: Instead of `let props = Object.assign({}, this.props);` use `let props = {...this.props};` which is transpiled into `var props = _extends({}, this.props);` What do you think? -- I will remove the `object.assign()` part from this PR without hesitation - if you still want that, of course :cherries: <issue_comment>username_1: Ya, most times the spread operator is good enough and built into babel. The reason I keep it around is for situations when I want to do `Object.assign(someObj, anotherObj)` where I need to mutate `someObj` instead of creating a new one, which you can't do with the spread syntax <issue_comment>username_0: Ah.. didn't know about this. I removed the second commit about `object.assign`.
<issue_start><issue_comment>Title: HEAD-request hangs when served with body username_0: Handling a HEAD-request with response-body hangs the connection. An example: ```scala val service = HttpService { // Works as expected case HEAD -> Root / "works" -> Ok() // Keeps connection open case HEAD -> Root / "hangs" -> Ok("with content") } ``` I would argue that ideally both handlers should yield the same response. <issue_comment>username_1: Which server backend? <issue_comment>username_0: `Http4sServlet` with `BlockingServletIo` (ver. 0.9.0) <issue_comment>username_1: Are you sending the HEAD request with `curl -X HEAD`? The first command does as I expect. The second command hangs. ``` % curl --head http://localhost:8080/http4s/science/hangs HTTP/1.1 200 OK Date: Tue, 25 Aug 2015 03:34:44 GMT Content-Type: text/plain;charset=utf-8 Content-Length: 12 Server: Jetty(9.3.2.v20150730) % curl -X HEAD http://localhost:8080/http4s/science/hangs ^C ``` Blaze is a little different: it sends the body if the client reads it despite sending a HEAD request. ``` % curl --head http://localhost:8080/http4s/science/hangs HTTP/1.1 200 OK Content-Type: text/plain; charset=UTF-8 Content-Length: 12 Date: Tue, 25 Aug 2015 03:40:18 GMT % curl -X HEAD http://localhost:8080/http4s/science/hangs with content ``` I think this is a [curl bug](http://sourceforge.net/p/curl/bugs/694/), whose resolution I disagree with. On both backends, a properly behaved HTTP client gets the correct result. <issue_comment>username_0: Yup, noticed the issue with curl. Tested with jQuery `$.ajax({type: 'HEAD', ...})` and it works fine. Of course, one could argue that it would be better if the server closed the connection by itself before the body. <issue_comment>username_1: We don't get control over the connection in Jetty or Tomcat. We do have this control in Blaze. @username_2, is there a reason we shouldn't? I can't think why not, but the fact that something as mature as Jetty doesn't makes me fear that I'm missing something. <issue_comment>username_2: Its simple enough to do in blaze though I seem to take the stance of 'if you sent a body in a response to a HEAD request, your wish is my command'. As an alternative, this could be addressed with a middleware very easily for all backends. <issue_comment>username_3: I would like to see the middleware (or blaze) work as follows: Any body to a response to a HEAD request is discarded (so that HEAD requests can be served using the same service that does e.g. GET). The Writer (and hence headers) are chosen normally, i.e. if the corresponding GET response would have had Content-Length or Transfer-Encoding, so will the HEAD response. Likewise (and I'll make an other issue about this): if a service gives a Content-Length, but then doesn't write enought bytes, the connection should be closed and no subsequent request should be read from it. <issue_comment>username_3: I.e. This should work so that GET gets the body and HEAD silently discards it. ``` val service = HttpService { case GET -> Root / "hello" -> go // serves headers, entity-headers and entity-body case HEAD -> Root / "hello" -> go // serves headers, entity-headers, but no entity-body } def go = Ok("World") ``` <issue_comment>username_1: Fixed by #487<issue_closed>
<issue_start><issue_comment>Title: Architecture diagram? username_0: As a developer I would like a diagram of the project architecture (including planned components) so that I can quickly and visually comprehend the project design Is there any diagram, even a simple drawing, depicting the initial design plans for this project? <issue_comment>username_1: What kind of diagram do you need? Which things do you need to be reflected? On its base clyde is simply a app that read a configuration and connects a set of middlewares in the specified order. It is up to you to decide which middlewares (filters) execute for each request. <issue_comment>username_0: Perhaps a basic diagram listing all the different filters, base compobent(s), and other relevant components such as routing, storage adapters, etc. <issue_comment>username_1: Yes, I need to improve much more the documentation. I will do once finished the configuration module. As a summary the filters (middlewares) contains all the functionalities: cache, rate limiting, authentication, etc, Clyde core is only the engine that runs them in the right order.
<issue_start><issue_comment>Title: Let's decode the file read right away username_0: Instead of decoding all of the byte objects in salt's test suite files Refs the change made in #65 where SpooledTemporaryFiles are read with `w+b` by default. Instead of tracking down all of the places byte strings need to be decoded in Salt proper, let's just do it sooner in salttesting. An example of `decode()` being used in salt's instegration suite is in https://github.com/saltstack/salt/pull/34990. Many more decodes will be needed throughout that file and this file if we wait to decode. (And that change will need to be reverted in https://github.com/saltstack/salt/pull/34990 if this is merged.) <issue_comment>username_1: I think this works only for ASCII contents....
<issue_start><issue_comment>Title: (QENG-1852) install_puppet unable to install windows msi (regression) username_0: - busted curl command on windows - busted msiexec command for installation - i cry myself to sleep every night <issue_comment>username_1: LGTM. <issue_comment>username_0: Updated based upon review comments. <issue_comment>username_0: Failure caused by test timeout, not releated to patch.
<issue_start><issue_comment>Title: Added Euphoria Class username_0: Enables users to increase on the limited script/native implementation of Natural Motions Euphoria animation engine <issue_comment>username_0: To use Euphoria use something like this Game.Player.Character.ArmsWindMill.DisableOnImpact = true; Game.Player.Character.ArmsWindMill.Start(5000);//5 seconds Game.Player.BodyBalance.Start();//Makes the ped actively try to not fall over when swinging arms around
<issue_start><issue_comment>Title: Remove "Features" option from header. username_0: The features page is already accessible by clicking the bevy icon in the header. This makes the "Features" menu option redundant. Removing it also frees up some space in the header. <issue_comment>username_1: bors r+ <issue_comment>username_1: (this was blessed by Cart on Discord already)
<issue_start><issue_comment>Title: Comment methods for schema builder username_0: Hi. I started working on issue #8148. I added 4 methods to Migration: * addCommentOnColumn($table, $column, $comment) * dropCommentFromColumn($table, $column) * addCommentOnTable($table, $comment) * dropCommentFromTable($table) and one method to ColumnSchemaBuilder: * comment($comment) So far I added support for * mysql (all methods) * postgresql (Migration methods, and an empty column string, but if we create the method in the migrate controller we can add comments on each column) * sqlite (NotSupportedException for all) I hope for your code review... <issue_comment>username_1: it could be useful to be able to set comments on an array of columns via a method addCommentOnColumns.
```
public function addCommentOnColumns($table, $comments)
{
    if (ArrayHelper::isAssociative($comments)) {
        foreach ($comments as $column => $comment) {
            $this->addCommentOnColumn($table, $column, $comment);
        }
    }
}
```
We had discussed this method in #8148 <issue_comment>username_1: @username_0 are you going to implement this feature for cubrid and sqlServer? <issue_comment>username_0: @username_1, about adding comments to many columns: I think it is not necessary. We don't have alterColumns or addIndexes methods, so why should we have an addComments method? And of course, I will implement it for all supported databases. But I can't test it, because I don't have a computer with Windows. And now I want to know whether my approach is right. <issue_comment>username_0: I implemented cubrid, mssql and oracle support. But I can't run tests. I'm sure it has problems (especially mssql) and I need help. It also needs code review. <issue_comment>username_1: What is the status of this PR? It was completed almost 3 months ago. It would be nice to at least set the milestone <issue_comment>username_2: @username_1 it's unclear whether some DBs are supported. <issue_comment>username_1: It's already implemented for: * mysql * mssql * postgresql * sqlite * cubrid * oracle Do we need something else? Let's decide what needs to be done. Maybe I will try to complete this PR if @username_0 is too busy <issue_comment>username_2: @username_1 it was never tested for all supported DBs. <issue_comment>username_0: @username_1 that would be good. I don't have a computer or server with Windows, which makes it a big problem to run the tests. <issue_comment>username_3: I added a checklist to the first post, so let's check this PR in all DBMS. Just check and comment which DBMS works. I will try to test MSSQL <issue_comment>username_3: Works well in MySQL, PgSQL. SQLite does not support comments, so it's ok too. I failed to run MSSQL, so if anybody can do it - I will appreciate it. Thank you. <issue_comment>username_4: @username_3 which tests do I need to run, and in which environments, to get this PR approved? Does this require more travis testing? Or should I run the tests locally and provide screenshots? <issue_comment>username_3: @username_2 as we decided in the neighboring PR, we can merge this PR and test/fix MSSQL and OCI later. Huh? <issue_comment>username_2: OK. <issue_comment>username_3: Merged with adjustments. Thank you! <issue_comment>username_5: Is there any example of adding a comment?
<issue_start><issue_comment>Title: Get value of widget over api username_0: Hi A newbie here. I have found out how I could post values to different widgets over the REST api. I would like to know how I can get values from a widget. E.g. 'current' value from a number widget. Please could you advise - thanks in advance.<issue_closed> <issue_comment>username_0: I have raised this on Stack Overflow. Closing this one.
<issue_start><issue_comment>Title: Server table username_0: What is the purpose of the **server** table? I am unclear reviewing the code. - jss <issue_comment>username_1: It stores server-wide settings. <issue_comment>username_0: Does that mean that there will be only 1 row in this table? <issue_comment>username_1: Correct. There should only one row.<issue_closed>
<issue_start><issue_comment>Title: Fixes username_0: The first line change there is a fail on windows I think.. at least for me it was producing something like `C:/foo/bar/baz/../C:/foo/bar/baz` which is two absolute paths put together and after a realpath() you get false and it tries to mirror nothing into something which fails. The second line I don't really get it if it's a change depending on the bootstrap version or what, but for me that path is just invalid there is one bootstrap too many. Please check though.. <issue_comment>username_1: @username_2 I'm not sure about the symlink command, but the sass file I believe is still correct for bootstrap < 3.2, should be using mopabootstrapbundle-3.2.scss for 3.2+ which has that path fix already <issue_comment>username_2: @username_0 should we add a command line switch to still support scss bootstrap < 3.2 which uses the other path ? <issue_comment>username_0: I am not sure what the deal is and tbh I don't really have time to look at fixing that in details. <issue_comment>username_2: the deal is that the path's changes in scss bottstrap, from time to time, so if you use a older version, it will after merging again not work there :( i was just thinking about an --scss-version=3.0 or something, which will then use the old path, without it we use the new one ... or something like that. <issue_comment>username_2: if you got a better ideo please throw it to me :D otherwise it will find time this weekend to merge this and add a switch somehow <issue_comment>username_0: Well I guess the switch would be nicer than no switch for users stuck on 3.0/3.1/.. <issue_comment>username_2: finally merged, could you please test this. the 3.0 - 3.2 problematic was of course only there i just found a custom absolute path funktion in there that doesnt really make sense for me since realpath should manage this ... anyways on my linux machine this works also with symlinks etc and i still dont have a win machine to play around :/
<issue_start><issue_comment>Title: Feature: Add Typescript declarations username_0: Hi, it would be great to have typescript declarations because it would make it easier to use with Angular2 (Typescript) projects. https://www.typescriptlang.org/docs/handbook/writing-declaration-files.html Thanks and BR, Michael <issue_comment>username_1: I'm happy to accept a pull request for this, even one that's incomplete!<issue_closed> <issue_comment>username_1: The TypeScript declarations should live in the [DefinitelyTyped repo](https://github.com/DefinitelyTyped/DefinitelyTyped), so I'm going to close this issue. <issue_comment>username_2: Added TypeScript type definitions to DefinitelyTyped under `@types/humanize-duration`.
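Before the `@types/humanize-duration` package existed, a hand-written ambient declaration was the usual stopgap; a simplified sketch (covering only the basic call and a few common options, not the full published typings) could look like this:

```typescript
// humanize-duration.d.ts — simplified sketch, not the complete API surface.
declare module 'humanize-duration' {
  interface HumanizeOptions {
    language?: string;
    delimiter?: string;
    spacer?: string;
    largest?: number;
    round?: boolean;
    units?: string[];
  }

  interface Humanizer {
    (milliseconds: number, options?: HumanizeOptions): string;
    humanizer(defaults?: HumanizeOptions): Humanizer;
  }

  const humanizeDuration: Humanizer;
  export = humanizeDuration;
}
```

With that file visible to the compiler, `import humanizeDuration = require('humanize-duration'); humanizeDuration(3000);` type-checks and is inferred as a plain `string`.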
<issue_start><issue_comment>Title: Unify mouse and keyboard item focus/highlight username_0: Currently the user can have two separate highlighted rows in the dropdown: one for the mouse cursor and one for the keyboard. These two should be one and the same, and the mouse cursor should move the keyboard focus to the highlighted item. When the mouse cursor leaves the dropdown, the keyboard focus should be cleared, unless there is a previous value/selection in the input. If there is a previous value/selection, the focus/highlight should return to that item. <issue_comment>username_1: Things to consider when estimating: - Currently, changing focused item will trigger it to be scrolled into view (we most likely don't that to happen when using mouse) - Mouse event listeners need to be added to all item elements - Not sure how smooth the behaviour is when cursor sits on top of the dropdown while navigating with keyboard if they both change the focused item. (mouse cursor might even prevent you from navigating with the keyboard) I'm slightly worried about the ROI of the used effort and added complexity - we could have a short discussion in the grooming on how common the targeted use case of using both mouse and keyboard for navigation is. As food for thought, I'd like to propose a visual UX improvement instead, e.g. using border highlighting instead of background highlight for the items focused with a keyboard. <issue_comment>username_0: Thanks, good points! And sure, this should be discussed before jumping to implementation. As a reference, seems to work fine for the current combo box: http://demo.vaadin.com/sampler/#ui/data-input/multiple-value/combo-box I think mouse hover isn’t triggered if an element moves under the mouse without the mouse actually moving. But the scrolling should be prevented when you hover over the items, that’s true. Why do we need mouse event listeners on all items? <issue_comment>username_1: Well.. don't we need to listen to mouse over on the item elements in order to trigger the necessary changes and figure out which item the mouse is actually hovering on? Or is there some trick to use the CSS `:hover` ? <issue_comment>username_0: I mean that we only need a mousemove listener on the dropdown element, not all of the individual item elements in the dropdown. Nothing to do with CSS, just minimizing the amount of event listeners needed. <issue_comment>username_1: Well.. thinking that way it might be difficult to dig up the index of the individual items - we would probably need to expose the item indexes in the DOM then. <issue_comment>username_0: Hmm, right, because of iron-list. Perhaps there’s a method in iron-list that maps elements to indexes? <issue_comment>username_1: It seems to have something called `modelForElement(el)` which might work - at least combined with some private API from `<iron-list>` <issue_comment>username_0: 1d prototyping research. <issue_comment>username_0: I think we need to rethink this a little now that we have the value pre-filling implemented (when you navigate with arrow keys the value is pre-filled in the input). Should the same thing happen when the user hover overs items? Probably yes, if the keyboard focus follows the mouse focus. Trying to play that in my head, it seems like it might work, though I’m fearing users will find it a little odd. <issue_comment>username_0: Another option for this issue is to visually separate the mouse focus from the keyboard focus (#142). 
<issue_comment>username_2: The current FW8 ComboBox doesn’t actually pre-fill the value when focusing items with the mouse. I think we should follow that design: https://demo.vaadin.com/sampler/#ui/data-input/multiple-value/combo-box
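A rough sketch of the single-listener idea discussed above — one `mousemove` handler on the dropdown instead of per-item listeners, mapping the hovered element back to an index with iron-list's `modelForElement()`. The element selectors and `setFocusedIndex()` are assumed names for illustration, not vaadin-combo-box API:

```typescript
// Sketch: map the hovered element back to its item index via iron-list.
const overlay = document.querySelector('#overlay') as HTMLElement;
const scroller = document.querySelector('#scroller') as any; // the <iron-list> inside the dropdown

let lastIndex = -1;

function setFocusedIndex(index: number): void {
  // Here the real component would move the keyboard focus
  // without scrolling the hovered item into view.
  console.log('focus item', index);
}

overlay.addEventListener('mousemove', (event: MouseEvent) => {
  // Walk up from the hovered node to the row element stamped by iron-list.
  let el = event.target as HTMLElement | null;
  while (el && el.parentElement !== scroller) {
    el = el.parentElement;
  }
  if (!el) {
    return;
  }
  const model = scroller.modelForElement(el); // iron-list helper mentioned in the thread
  if (model && model.index !== lastIndex) {
    lastIndex = model.index;
    setFocusedIndex(model.index);
  }
});
```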
<issue_start><issue_comment>Title: screen.scaleFactor always returns 1 username_0: * Electron version: 1.2.8 * Operating system: Windows I have a hidpi screen on my laptop and a normal main monitor that has some scaling applied through windows' display properties. In both cases the following code returns 1 even though I would expect something larger: require('electron').screen.getPrimaryDisplay().scaleFactor <issue_comment>username_1: `getPrimaryDisplay()` probably didn't return the display you expected, it returns the display that the system considers primary, which may not be your laptop's screen. You should use APIs like `getDisplayNearestPoint()` to get the scale factor for current display.<issue_closed> <issue_comment>username_0: I've tried to use the APIs to find the current screen and it still returns a scaleFactor of 1 despite windows being set to scale to 125%. Also `screen.getAllDisplays()` returns just a single display regardless of whether I'm using my laptop screen or my external monitor (my laptop screen is disabled in that case so that is to be expected). Is it possible that scaleFactor only supports integer values? <issue_comment>username_1: Thanks for the information, after testing I found that `screen.scaleFactor` returns 1 when DPI is 125%, however DPI 150% can work. <issue_comment>username_1: * Electron version: 1.2.8 * Operating system: Windows I have a hidpi screen on my laptop and a normal main monitor that has some scaling applied through windows' display properties. In both cases the following code returns 1 even though I would expect something larger: require('electron').screen.getPrimaryDisplay().scaleFactor <issue_comment>username_2: Chromium has this issue from two years ago, but it has been fixed a few days ago. See [https://bugs.chromium.org/p/chromium/issues/detail?id=410696](https://bugs.chromium.org/p/chromium/issues/detail?id=410696)<issue_closed> <issue_comment>username_4: Just a heads up that this is going to have a large impact Because folks at 125% will get properly scaled content, everything will appear to be much larger for them. If those people are upset and don't like this, it can be worked around by passing two additional flags to the base::CommandLine class (you should be able to append via the command line): `--high-dpi-support=1 --force-device-scale-factor=1` <issue_comment>username_5: @username_4 I've been looking everywhere for this information, and I finally found it here. Thank you very much for that. I got my electron app to ignore the desktop display scaling with this in the main process: ``` app.commandLine.appendSwitch('high-dpi-support', 1) app.commandLine.appendSwitch('force-device-scale-factor', 1) ``` <issue_comment>username_6: Thanks @username_5 for posting that solution. My application started scaling after I've npm installed electron (with global install there was no scaling). After adding those lines, the body of the application looks correct, but the top menu bar (on Windows) is scaled down. Any solution for this? <issue_comment>username_7: Should these flags be added to the official documentation, since they work? <issue_comment>username_8: @username_7 There are hundreds of potential chromium CLI switches. We can't (and won't) document all of them especially as we can't guarantee there continued existence <issue_comment>username_7: Ah, I follow. Wasn't aware it was that complicated. I'm still learning. Appreciate the response.
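To complement the command-line switches above, the per-display lookup suggested by username_1 uses documented Electron APIs and can be done like this in the main process (note the `screen` module must only be used after the app's `ready` event):

```typescript
import { app } from 'electron';

app.on('ready', () => {
  // Do not touch the screen module before 'ready'.
  const { screen } = require('electron');

  // The display under the mouse cursor is usually the one the window is on.
  const display = screen.getDisplayNearestPoint(screen.getCursorScreenPoint());
  console.log('scaleFactor:', display.scaleFactor);

  // Or inspect every attached display.
  for (const d of screen.getAllDisplays()) {
    console.log(d.id, d.bounds, d.scaleFactor);
  }
});
```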
<issue_start><issue_comment>Title: Added Whitley Bay High School username_0: www.whitleybayhighschool.org high school and 6th form where students/teachers are allocated @whitleybayhighschool.org emails. <issue_comment>username_1: Is this the case or am I mistaken? If I am please reopen this pull request, and state **why** your institution should be included in swot. <issue_comment>username_0: Hi there @username_1, The school has a 6th form, which in the UK is a method of further education (A levels). You can see this by looking on the navbar on the left of the page, and looking at the 3 sixth form URL's found at the bottom under Info. One of which is a video (https://www.youtube.com/watch?v=5jr0gfxo1X4) providing further details about the sixth form provided at the school. <issue_comment>username_1: I'm perfectly famii <issue_comment>username_0: Hey @username_1 - Apologies, I did not mean that in a rude way, was merely just trying to explain *(See, I have no idea what you do/do not know)*! Anyhow, do you mind just explaining for me the reason why it's not suitable for inclusion? Totally fine if by your rules it's not allowed - just curious to know. For me, it's just to save me having to send out verification documents every time I try to apply for a student offer. Thanks. <issue_comment>username_1: No worries, in the UK we only accept institutions that offer the following: * BTEC Level 5/ HND * Bachelor's degree * Masters degree * Doctorate/ PHD <issue_comment>username_0: Thanks @username_1, Clears things up :) Appreciate you looking into this anyway!
<issue_start><issue_comment>Title: Auto-Generation Implementation: username_0: **Auto-Generation Types:**
--------------------------------------------

Planned:

Blocks:
- [ ] Ore
- [ ] Storage

Items:
- [ ] Nugget
- [ ] Ingot
- [ ] Dust
- [ ] Gear
- [ ] Plate<issue_closed>
<issue_start><issue_comment>Title: Cmd+D doesn't select multilines as expected username_0: - VSCode Version: Version 1.1.1 (1.1.1) def9e32467ad6e4f48787d38caf190acbfee5880 - OS Version: OS X 10.11.4 Steps to Reproduce: 1. Open a buffer with the following content: ``` func noname() {} panic("lol") func noname() {} panic("lol") func noname() {} panic("lol") func noname() {} panic("lol") ``` 2. Select `"func noname() {}\npanic(\"lol\")"` and press `Cmd+D` to try to select the multiple instances of the string `"func noname() {}\npanic(\"lol\")"`. 3. It doesn't work. 4. Try the same thing in Atom or Sublime Text; it works. A gif that demonstrates the difference in behavior: ![derp](https://cloud.githubusercontent.com/assets/1189716/15412867/49c595ac-1df9-11e6-8d81-b6f94e1efdd7.gif) Thanks! <issue_comment>username_1: Blocked by #313 <issue_comment>username_2: This should be unblocked now? <issue_comment>username_1: This works since quite a while<issue_closed>
<issue_start><issue_comment>Title: Split >5GB Files automatically username_0: Developer's Certificate of Origin 1.1 By making a contribution to this project, I certify that: (a) The contribution was created in whole or in part by me and I have the right to submit it under the Apache License 2.0; or (b) The contribution is based upon previous work that, to the best of my knowledge, is covered under an appropriate open source license and I have the right under that license to submit that work with modifications, whether created in whole or in part by me, under the same open source license (unless I am permitted to submit under a different license), as indicated in the file; or (c) The contribution was provided directly to me by some other person who certified (a), (b) or (c) and I have not modified it. (d) I understand and agree that this project and the contribution are public and that a record of the contribution (including all personal information I submit with it, including my sign-off) is maintained indefinitely and may be redistributed consistent with this project or the open source license(s) involved. <issue_comment>username_0: We cannot guarantee that a single partition will always be under Swift's 5GB size limit. When a single partition reaches the 5GB size limit, this patch will partition the file itself by opening a new stream and write the rest of the contents in another object named "FileName/part-xxxx-split-xxxx-attempt..." @username_1 Can you review this? <issue_comment>username_1: @username_0 Sure, i will check it. Thanks. Interesting idea to split files this way. <issue_comment>username_2: @username_0 I believe hard coding the limit is not the correct idea. The default limit on Swift is indeed 5GB, but that might not be the case. The Swift admin might have chosen to use a lower limit for some particular reason, and your code will fail. The correct way, in my opinion would be to perform (at the creation of Stocator instance) a query to get the Swift cluster capabilities (http://developer.openstack.org/api-ref-objectstorage-v1.html#infoDiscoverability) and thus knowing the maximum size allowed per object. @username_1 What do you think? <issue_comment>username_1: @username_2 @username_0 I agree, but i think we should make it configurable via configuration. I think more important is to understand wether we should use approach as was suggested by @username_0 or use standard approach of SLO with manifest. I have to admit, that my first impression that @username_0 used approach which looks much better than using manifest for SLO or DLO <issue_comment>username_2: @username_1 @username_0 I agree with you, but I would go a little bit further. The logical flow that I propose is the following: 1) Stocator set a default value of 5GB 2) Stocator queries the Object maximum file allowed by the Swift that will be used 2.1) If the query provide some results, override the default value 3) Stocator looks for the configurable parameter 3.1) If it exist, check if it is not greater (if so, error message), and override the previous value In this way the possibility of errors due to miss configuration is almost absent. What do you think? <issue_comment>username_0: @username_2 @username_1 I agree with your points. I already added a commit to make the object size configurable. I just added a commit to check if the value from the configuration is valid. <issue_comment>username_1: @username_0 I think it's very good patch :) much better then my original idea by using SLO and manifest. 
I haven't had time to test it yet. Did you by chance test it with a real object that writes a single part of more than 5 GB? <issue_comment>username_0: @username_1 Yes, @jasoncl and I have tested writing with a single file of 6GB and 10GB with a max object size of 5GB. We've also tested writing 100MB files with 10MB max sizes. In all cases we tried reading from the split files, and the reads work without any loss of data. <issue_comment>username_0: @username_1 Have you had time to test this out yet? I think I'm done with the changes I want to add. Let me know if you think there needs to be more done to get this merged. Thanks. <issue_comment>username_1: @username_0 It looks good. I just need to perform a couple of additional tests and haven't had time so far. Will try to do it during the next week. <issue_comment>username_1: @username_0 It looks good, I just looked at the code and left one comment. I will later also try to run the code. Can you please add a unit test or a functional test for it? It's not mandatory, but it would be good to have. <issue_comment>username_1: @username_0 Can you please rebase this branch? It seems it has conflicts with the master branch <issue_comment>username_0: I rebased and added your suggestion. I'll work on the test and push when it's done. <issue_comment>username_1: @username_0 I think it's very good, but we need to resolve the resiliency issues that this patch adds. As an example, assume SF311.csv/part-00000-attempt_201604171048_0000_m_000000_0 is written. If the task fails, there will be new additional attempts, like SF311.csv/part-00000-attempt_201604171048_0000_m_000000_1 SF311.csv/part-00000-attempt_201604171048_0000_m_000000_2 SF311.csv/part-00000-attempt_201604171048_0000_m_000000_3 etc. Assume that the job completed successfully. The list() method will pick up the correct part-0000, based on the size (the largest size is the winner). For example it may choose SF311.csv/part-00000-attempt_201604171048_0000_m_000000_2 and ignore attempts 0 and 3.
The resolution uses the fact that it's the same object name "part-0000" Adding this patch will affect the way list works, since you modify part-ID to part-ID-split For example, if task will fail after split-0008 SF311.csv/part-00000-attempt_201604171048_0000_m_000000_0 SF311.csv/part-00000-split-00001-attempt_201604171048_0000_m_000000_0 SF311.csv/part-00000-split-00002-attempt_201604171048_0000_m_000000_0 SF311.csv/part-00000-split-00003-attempt_201604171048_0000_m_000000_0 SF311.csv/part-00000-split-00004-attempt_201604171048_0000_m_000000_0 SF311.csv/part-00000-split-00005-attempt_201604171048_0000_m_000000_0 SF311.csv/part-00000-split-00006-attempt_201604171048_0000_m_000000_0 SF311.csv/part-00000-split-00007-attempt_201604171048_0000_m_000000_0 SF311.csv/part-00000-split-00008-attempt_201604171048_0000_m_000000_0 and there will be replacement task "1" SF311.csv/part-00000-attempt_201604171048_0000_m_000000_1 SF311.csv/part-00000-split-00001-attempt_201604171048_0000_m_000000_1 SF311.csv/part-00000-split-00002-attempt_201604171048_0000_m_000000_1 SF311.csv/part-00000-split-00003-attempt_201604171048_0000_m_000000_1 SF311.csv/part-00000-split-00004-attempt_201604171048_0000_m_000000_1 SF311.csv/part-00000-split-00005-attempt_201604171048_0000_m_000000_1 SF311.csv/part-00000-split-00006-attempt_201604171048_0000_m_000000_1 SF311.csv/part-00000-split-00007-attempt_201604171048_0000_m_000000_1 SF311.csv/part-00000-split-00008-attempt_201604171048_0000_m_000000_1 then the current code will fail to identify correct attempt, since part-0000 is modified to part-0000-split-number. I think to resolve this we just need to modify list method, so it will pick up "part-NUMBER" and not "part-NUMBER-SPLIT" <issue_comment>username_0: @username_1 Are you referring to the String returned by nameWithoutTaskID()? I added some debug messages and it includes the "split-xxxxx" without adding any extra code. <issue_comment>username_1: @username_0 i think this is exact issue here - the name should not contain "split".. <issue_comment>username_1: @username_0 The algorithm in list() method should be adapted as i wrote in my previous remark. Otherwise it will not work. I can try to adapt it. <issue_comment>username_0: @username_1 Is this so that if any of the split uploads fails, we should start over and look for a part-00000 with a different a attempt number? I assumed that since the listing is alphabetical this wouldn't be a problem. For example if we had 2 attempts, A & B it would be listed: part-0000-attemptA part-0000-attemptB part-0000-split-0001-attemptA part-0000-split-0001-attemptB Then if "part-0000-split-0001-attemptA" fails, we will catch it when it is compared to "part-0000-split-0001-attemptB" for collisions. <issue_comment>username_0: @username_1 I don't think there's an issue with the list because when it checks for collisions the split number is included. Also since we go through the list alphabetically, when we check for collisions it compares objects of the same part and split number when there is a failed attempt. <issue_comment>username_1: @username_0 Spit logic is internal, and the Spark's task is not aware of it. Here is an example Task1 writes data and it split internally to part1-attempt-1-split-1, part1-attempt-1-split-2, part1-attempt-1-split-3 Consider a replacement task , that will generate part1-attempt-2-split-1, part1-attempt-2-split-2, part1-attempt-2-split-3 the list algorithm will not work in this case. 
<issue_comment>username_0: @username_1 The naming scheme is part-#-split-#-attempt#. In the list logic, everything after the last '-' is stripped, so when the objects are compared we compare part-#-split#. Do we want to make sure that we grab parts and splits from the same attempt? Or do we just care about the part and split number? <issue_comment>username_3: Hello @username_0 : when is this fix planned to get merged? <issue_comment>username_0: Hi @username_3 The code in master has changed a bit since I opened this PR. I'll have this updated by tomorrow. <issue_comment>username_1: @username_3 do you have a use case where a single Spark task writes more than 4GB of data?
<issue_start><issue_comment>Title: AndroidVideoCache can not respond to seek event username_0: When I use AndroidVideoCache in my project and I seek with the seekbar, the MediaPlayer stops playing. But if I do not add AndroidVideoCache, when I seek with the seekbar the MediaPlayer keeps playing and the seekCompleteListener fires. What should I do to solve this problem? <issue_comment>username_1: Right. It is expected behaviour. VideoCache downloads the video sequentially, and even if you seek to a position that VideoCache hasn't reached yet, MediaPlayer waits for VideoCache to reach that position. This behaviour keeps the implementation simple. I can suggest you 3 ways: 1. fork the project, extend it and make a pull request 2. just show the downloading progress with the help of [CacheListener](https://github.com/username_1/AndroidVideoCache/blob/master/library/src/main/java/com/username_1/videocache/CacheListener.java). It doesn't solve the problem, but it shows the user that everything is OK and the player is just waiting for the download. 3. Do not use the library at all.<issue_closed> <issue_comment>username_0: What if, in my project, the video playing url comes from p2p, while the video download url comes from another source? How do I resolve that? <issue_comment>username_1: ProxyCache supports only direct urls with the `url://` and `file://` schemes
<issue_start><issue_comment>Title: basics of a mask overlay username_0: Based on the Overlays plugin, I've distilled the creation/displaying of an mask overlay on an ImageViewCanvas to the [following gist](https://gist.github.com/username_0/25bf6287c2c724cb9cc7). Just wondering if this is the basics, or is there something more basic that can be done. Note: I'm in need of doing this not in a reference viewer, but using only the ImageView-based widgets, hence why distilling out the mask viewing. <issue_comment>username_1: Just brainstorming here: If you can somehow convert your mask into, say, a Ginga polygon object, then it already has built-in options like "fill" and "alpha", and you can overlay it by using something like `canvas.add(...)` without having to build RGB image. <issue_comment>username_2: @username_0, I think you've nailed the basics here. Does it work for you? I don't have access to some of the classes referenced in the gist so I cannot test it, but it looks right to me. <issue_comment>username_2: @username_1, I have found that rendering many small polygons with each one having a large number of vertices is slower than rendering a pixel overlay. So there is a tradeoff there. I think with a large number of irregularly shaped areas a traditional pixel mask might be more efficient. <issue_comment>username_0: Yes, it works quite well actually. Just wanted to make sure there wasn't some concept/code I missed, such as exactly what @username_1 suggested. @username_1 Yes, which is why I wanted to get the basic mask down, so the option of either or would be available. As, expected, while developing masks, one will have shapes. But to save the region, which will ultimately be a composite of many shapes, the shapes will get render out, then later read back in, as pixel masks anyways. So, w00t1, not too far off the boat... <issue_comment>username_1: With what you are doing, @username_0 , you might need an extra option to specify mask color, to distinguish the different types of mask. But yeah, if polygon is not the thing for you, this should be okay. Good luck!<issue_closed> <issue_comment>username_0: @username_1 Oh, most definitely, and opacity at least. Thanks to all! <issue_comment>username_1: It just so happens this `masktorgb()` function also comes in handy for me to overlay a mask for #162. Any chance a more general version can be include in Ginga? If so, I am willing to open a PR for it; Just need to know where to put the code. <issue_comment>username_2: There are a lot of possibilities, but how about `.../ginga/util/dp.py`? It's kind of a grab bag of module functions for dealing with images or direct numpy arrays.
<issue_start><issue_comment>Title: Riot starts own router module username_0: Hello @gabrielmoreira! Thanks for your contribution to the Riot community. We're making [the new repo](https://github.com/riot/router) by extracting the router feature from Riot.js. To accelerate development, we have [a plan to break Riot into several submodules](https://github.com/riot/riot/issues/1063): observable, compiler, etc. So, can I ask you a favor? 1. Could you please free the npm namespace `riot-router`? 2. Could you join as a collaborator if you like? :smile: Thank you. <issue_comment>username_0: ping? :eyes:<issue_closed> <issue_comment>username_0: Sorry for bothering you. We decided to go with `riot-route`. Thank you!
<issue_start><issue_comment>Title: java.net.ConnectException: failed to connect to /serverHostName (port 8080) after 10000ms: connect failed: EMFILE (Too many open files) username_0: Hello, I have an Android app where I am basically running an Async task to ping my server for an Access Token every 3 minutes. But after about an hour the app freezes and I see this exception in logcat. The app eventually crashes. ` 10-25 16:38:47.527 19845-19845/com.test.app E/SharedPreferencesImpl﹕ Couldn't create directory for SharedPreferences file /data/data/com.test.app/shared_prefs/MY_SHARED_PREF.xml 10-25 16:38:47.536 19845-19845/com.test.app E/SharedPreferencesImpl﹕ Couldn't create directory for SharedPreferences file /data/data/com.test.app/shared_prefs/MY_SHARED_PREF.xml 10-25 16:38:47.545 19845-23000/com.test.app W/SpotifySDK﹕ Player::deliverAudio called with 0 frames 10-25 16:38:47.545 19845-23000/com.test.app I/SpotifySDK﹕ Got notification: Pause 10-25 16:38:47.547 19845-23000/com.test.app I/SpotifySDK﹕ Got notification: Track ended 10-25 16:38:47.552 19845-23000/com.test.app I/SpotifySDK﹕ Got notification: Track changed 10-25 16:38:47.553 19845-19845/com.test.app E/SharedPreferencesImpl﹕ Couldn't create directory for SharedPreferences file /data/data/com.test.app/shared_prefs/MY_SHARED_PREF.xml 10-25 16:38:47.560 19845-19913/com.test.app W/System.err﹕ java.net.ConnectException: failed to connect to /51.21.21.111 (port 8080) after 10000ms: connect failed: EMFILE (Too many open files) 10-25 16:38:47.572 19845-19913/com.test.app W/System.err﹕ at libcore.io.IoBridge.connect(IoBridge.java:124) 10-25 16:38:47.572 19845-19913/com.test.app W/System.err﹕ at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:183) 10-25 16:38:47.572 19845-19913/com.test.app W/System.err﹕ at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:456) 10-25 16:38:47.572 19845-19913/com.test.app W/System.err﹕ at java.net.Socket.connect(Socket.java:882) 10-25 16:38:47.573 19845-19913/com.test.app W/System.err﹕ at com.squareup.okhttp.internal.Platform$Android.connectSocket(Platform.java:190) 10-25 16:38:47.573 19845-19913/com.test.app W/System.err﹕ at com.squareup.okhttp.Connection.connectSocket(Connection.java:196) 10-25 16:38:47.573 19845-19913/com.test.app W/System.err﹕ at com.squareup.okhttp.Connection.connect(Connection.java:172) 10-25 16:38:47.573 19845-19913/com.test.app W/System.err﹕ at com.squareup.okhttp.Connection.connectAndSetOwner(Connection.java:367) 10-25 16:38:47.573 19845-19913/com.test.app W/System.err﹕ at com.squareup.okhttp.OkHttpClient$1.connectAndSetOwner(OkHttpClient.java:128) 10-25 16:38:47.573 19845-19913/com.test.app W/System.err﹕ at com.squareup.okhttp.internal.http.HttpEngine.connect(HttpEngine.java:328) 10-25 16:38:47.573 19845-19913/com.test.app W/System.err﹕ at com.squareup.okhttp.internal.http.HttpEngine.sendRequest(HttpEngine.java:245) 10-25 16:38:47.574 19845-19913/com.test.app W/System.err﹕ at com.squareup.okhttp.Call.getResponse(Call.java:267) 10-25 16:38:47.574 19845-19913/com.test.app W/System.err﹕ at com.squareup.okhttp.Call$ApplicationInterceptorChain.proceed(Call.java:224) 10-25 16:38:47.574 19845-19913/com.test.app W/System.err﹕ at com.squareup.okhttp.Call.getResponseWithInterceptorChain(Call.java:195) 10-25 16:38:47.574 19845-19913/com.test.app W/System.err﹕ at com.squareup.okhttp.Call.execute(Call.java:79) 10-25 16:38:47.575 19845-19913/com.test.app W/System.err﹕ at com.test.app.async.AccessTokenUpdateTask.doInBackground(AccessTokenUpdateTask.java:57) 10-25 16:38:47.575 
19845-19913/com.test.app W/System.err﹕ at com.test.app.async.AccessTokenUpdateTask.doInBackground(AccessTokenUpdateTask.java:21) 10-25 16:38:47.575 19845-19913/com.test.app W/System.err﹕ at android.os.AsyncTask$2.call(AsyncTask.java:292) 10-25 16:38:47.575 19845-19913/com.test.app W/System.err﹕ at java.util.concurrent.FutureTask.run(FutureTask.java:237) 10-25 16:38:47.575 19845-19913/com.test.app W/System.err﹕ at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:231) 10-25 16:38:47.575 19845-19913/com.test.app W/System.err﹕ at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1112) 10-25 16:38:47.576 19845-19913/com.test.app W/System.err﹕ at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:587) 10-25 16:38:47.576 19845-19913/com.test.app W/System.err﹕ at java.lang.Thread.run(Thread.java:818) 10-25 16:38:47.576 19845-19913/com.test.app W/System.err﹕ Caused by: android.system.ErrnoException: connect failed: EMFILE (Too many open files) 10-25 16:38:47.576 19845-19913/com.test.app W/System.err﹕ at libcore.io.Posix.connect(Native Method) 10-25 16:38:47.577 19845-19913/com.test.app W/System.err﹕ at libcore.io.BlockGuardOs.connect(BlockGuardOs.java:111) 10-25 16:38:47.577 19845-19913/com.test.app W/System.err﹕ at libcore.io.IoBridge.connectErrno(IoBridge.java:154) 10-25 16:38:47.577 19845-19913/com.test.app W/System.err﹕ at libcore.io.IoBridge.connect(IoBridge.java:122) 10-25 16:38:47.577 19845-19913/com.test.app W/System.err﹕ ... 22 more 10-25 16:38:47.578 19845-19845/com.test.app W/art﹕ Large object allocation failed: ashmem_create_region failed for 'large object space allocation': Too many open files 10-25 16:38:47.603 19845-19845/com.test.app I/art﹕ Alloc sticky concurrent mark sweep GC freed 75611(1974KB) AllocSpace objects, 15(384KB) LOS objects, 13% free, 48MB/55MB, paused 1.317ms total 23.837ms 10-25 16:38:47.605 19845-19845/com.test.app W/art﹕ Large object allocation failed: ashmem_create_region failed for 'large object space allocation': Too many open files 10-25 16:38:47.626 19845-19845/com.test.app I/art﹕ Alloc partial concurrent mark sweep GC freed 9352(329KB) AllocSpace objects, 6(1169KB) LOS objects, 25% free, 46MB/62MB, paused 965us total 20.962ms 10-25 16:38:47.628 19845-19845/com.test.app W/art﹕ Large object allocation failed: ashmem_create_region failed for 'large object space allocation': Too many open files 10-25 16:38:47.650 19845-19845/com.test.app I/art﹕ Alloc concurrent mark sweep GC freed 399(28KB) AllocSpace objects, 0(0B) LOS objects, 25% free, 46MB/62MB, paused 922us total 21.750ms 10-25 16:45:19.014 19845-19845/com.test.app E/Surface﹕ dequeueBuffer failed (Unknown error 2147483646) 10-25 16:45:19.022 19845-19845/com.test.app E/ViewRootImpl﹕ Could not lock surface java.lang.IllegalArgumentException at android.view.Surface.nativeLockCanvas(Native Method) ` Here is my analysis of this log: I understand why the app UI froze, it was due to the "dequeueBuffer failed" and the "lock surface" exception. But the line before it tells me that the app is keeping too many files open from each 3 min ping I make to my server. 
I am using Square's okhttp library for making the http request and here is my implementation of the http call inside my Async task ``` OkHttpClient client = new OkHttpClient(); RequestBody body = RequestBody.create("application/json; charset=utf-8", jsonString); Request request = new Request.Builder() .url(url) .post(body) .build(); Response response = null; try { response = client.newCall(request).execute(); if(response.isSuccessful()) { String responseBody = response.body().string(); return responseBody; } } catch (Exception e) { e.printStackTrace(); } ``` Any idea what I am doing wrong here? <issue_comment>username_1: Don't create a new OkHttpClient for each request. Create it once and reuse. <issue_comment>username_0: I did that and I am getting the same exception. I removed that comment from this issue as I think it was misleading. <issue_comment>username_1: Well the error is simple: something is opening too many files or sockets. Tracking it down is just a matter of seeing what's using these resources without freeing them. <issue_comment>username_0: @username_1 I am sorry but it was an issue with another SDK I was using and not with okhttp. I am sorry. Closing this issue now.<issue_closed>
<issue_start><issue_comment>Title: More Bots and Organization of Bot List username_0: * Alphabetized bot list to make merging multiple pull-requests easier in the future (and improves lookup of supported values) * added a lot of missing bots Note: `make build` is broken: https://github.com/biggora/express-useragent/issues/70 <issue_comment>username_0: Note: would be even better to abandon maintaining another bot list and use the `isbot` npm package.
<issue_start><issue_comment>Title: On finish hook for resolves username_0: In my GraphQL schema, one of my resolvers gets a connection from the database pool and passes that connection down to its child fields. Is there a way to register a callback during resolution to allow me to release the connection back to the pool? <issue_comment>username_0: Is there something for this or plans to develop it? If someone could point me in the right direction I could submit a PR. <issue_comment>username_1: @username_0 Is there a good reason this needs to be done on a per-resolver basis? Couldn't you just do it on a per-query basis? I.e. get the connection and stick it on the context before you start executing the query and release the connection when the whole query is done. I think that would also be more maintainable than passing it down through the resolve functions. For the GraphQL data stack I'm working on (Apollo), we decided it was best if resolve functions didn't know anything about how the data fetching happens behind the scenes. <issue_comment>username_0: The key thing here for my case is composability. Ideally the GraphQL schema I'm developing could be another GraphQL schema's field. This makes context-level actions difficult and potentially means multiple copies which need different connections. A context-level cleanup hook would be fine. <issue_comment>username_2: Is this something you could do via Promise composition? Conceptually e.g.:
```js
getConnection().then(connection =>
  performQuery(connection).then(result => {
    releaseConnection(connection);
    return result;
  })
);
```
<issue_comment>username_2: I think this is pretty sound advice <issue_comment>username_2: This is conceptually very similar to how GraphQL operates at Facebook <issue_comment>username_0: The tricky thing is if you want a field to control session information such as the authed user. Such a field would need to get a new client connection and pass that connection down to child resolvers. The project I'm working on would like to see authentication done fully through GraphQL and currently the GraphQL JS API doesn't seem to have a good story for this. <issue_comment>username_2: In general we recommend against this kind of schema design because it produces the kind of issues you're running into now. We've also not really encountered reasons for having field-specific context like this. <issue_comment>username_2: Also, GraphQL intentionally does not have a story for authentication because it's strongly recommended to perform authentication before executing the GraphQL request. The `context` variable provided to resolvers is primarily intended to hold this kind of authentication information.<issue_closed> <issue_comment>username_0: Ok, thanks for explaining 👍
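A small sketch of the per-query pattern username_1 describes above: the connection is acquired before execution, exposed on the context, and released once the whole query settles. The `pool.acquire`/`pool.release` calls are placeholders for whatever database client is actually in use; only the `graphql()` call itself is the real graphql-js API.

```js
const { graphql } = require('graphql');

// Hypothetical pool API; swap in the real client's acquire/release calls.
async function runQuery(schema, pool, query, variables) {
  const connection = await pool.acquire();
  try {
    // Resolvers read `context.connection` instead of opening their own.
    return await graphql(schema, query, null, { connection }, variables);
  } finally {
    pool.release(connection); // released once, after the whole query is done
  }
}
```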
<issue_start><issue_comment>Title: Any way to mark a record as unread? username_0: Hi, I want to mark a marked record as unread, I thinks it should be easy to add `mark_as_unread!` method. But after dig into the code, I found that unable to mark a record as unread if I have called `SomeClass.mark_as_read! :all, for: user`. So do you have any suggestion to implement the `mark_as_unread!` method? <issue_comment>username_1: Seems like the best way to go about it for items that are shared between users is to use a join table as the model for determining whether or not something is read. For example, you might have the following (p.s. none of this is tested so it may not work): ```ruby class User acts_as_reader has_many :participations has_many :conversations, through: :participations def unread_conversations Conversation.find(Participation.where(user: self).unread_by(self).pluck(:conversation_id)) end end class Participation acts_as_readable on: :updated_at belongs_to :user belongs_to :conversation end class Conversation has_many :participations has_many :users, through: :participations def mark_unread_for(user) participations.where(user: user).first.touch end end ``` <issue_comment>username_2: For me it works this way: `post.read_mark(current_user).destroy!` <issue_comment>username_3: https://www.postgresql.org/docs/9.5/static/queries-order.html) If we delete that ReadMark, a record is unread for that user. Isn't that correct? I might be wrong because I don't exactly know how the Garbage Collector class of this gem works completely.<issue_closed> <issue_comment>username_4: @username_3, @username_2 The garbage collector [`.cleanup_read_marks!`](https://github.com/username_4/unread/blob/master/lib/unread/garbage_collector.rb) searches for the oldest unread record (if there is any). Then all read_marks before its timestamp are deleted and the global timestamp is moved. The cleanup is essential for this gem, because it keeps the number of read_marks as low as possible. I recommended to run it once a day (via a CRON job or similar). Because old read_marks are deleted this way, we cannot mark a record as unread after a cleanup was done. Note: There are situations where it does work. Example: If you have an old unread record, the garbage collector cannot remove newer read_marks, so you can delete them manually to mark records as unread. But in a real world app, it's unlikely that a user keeps an old record unread for a long time. So: Without big changes in the algorithm it's not possible to mark a record as unread.
<issue_start><issue_comment>Title: Native optimizer dumps user-unfriendly error message on input. username_0: Running `optimizer emcc-3-meminit.js asm eliminate simplifyExpressions registerize asmLastOpts last` on the file http://clb.demon.fi/emcc/dump/emcc-3-meminit.js spams the following error message: bad parseExpression state: ========== global.SIMD === "undefined") { // SIMD module. .... thousands of times the same error .... bad parseExpression state: ========== global.SIMD === "undefined") { // SIMD module. bad parseExpression state: ========== global.SIMD === "undefined") { // SIMD module. Segmentation fault: 11 It would be nice to get a compact error message, along with file:line:column info of where the error occurs. <issue_comment>username_1: The native optimizer isn't meant to be run on a file like that - it should just be run on the individual asm.js functions. cashew cannot parse general JS. `js_optimizer.py` should split out the asm.js functions and only pass those to the optimizer. <issue_comment>username_0: Yeah, I realize it is not, but I think it would still be good to be able to handle malformed input. This was just something I noticed while debugging #3672. <issue_comment>username_1: Hmm, I believe the thousands of repetitions are because your build has assertions turned off, and we do `assert(0)` where we should put an `abort()`. I fixed that now. <issue_comment>username_0: Ah right, that makes sense. Thanks for the fix!<issue_closed>
<issue_start><issue_comment>Title: status: don't retrieve exitstatus for running pods username_0: Only call getExitStatuses on pods that are exited. Fixes #856. <issue_comment>username_1: lgtm <issue_comment>username_2: huh, how come this didn't come up earlier? @username_3 shouldn't this be initialized with the pod? <issue_comment>username_3: @username_2 something regressed here... the status directory is seeded in stage1: https://github.com/coreos/rkt/blob/master/stage1/rootfs/aggregate/install.d/99misc#L19 We query the exit status of a running pod because some of the processes may be exited with the pod still running, depending on restart policies, or at least that was supposed to be possible. <issue_comment>username_2: @iaguis PTAL? <issue_comment>username_2: Closing in favour of #883
<issue_start><issue_comment>Title: Prohibit installation via Python2 username_0: Presently, the app has been [tested with Python3.4](http://django-mentor-connect.readthedocs.org/en/latest/installation.html#installation). If a user tries to install via Python2, he gets no immediate warning. It'd be nice to warn users if installation is being performed via Python2. This can be implemented with a simple check in `manage.py` maybe?
```python
import sys

if sys.version_info < (3, 4):
    print("Hey! We support Python 3.4 or later.")
    sys.exit(13)
```
<issue_start><issue_comment>Title: More tests for Questionpool username_0: This change is [Reviewable](https://reviewable.io/reviews/username_1/guenther/52) <issue_comment>username_1: Reviewed 1 of 1 files at r1. Review status: all files reviewed at latest revision, all discussions resolved. --- *Comments from the [review on Reviewable.io](https://reviewable.io:443/reviews/username_1/guenther/52)*
<issue_start><issue_comment>Title: conda blpapi install fail username_0: Hello, I was trying to install the Bloomberg API library for Python using conda install -c https://conda.anaconda.org/mbonix blpapi and received the traceback below. Does anyone know what the issue is? I am running Anaconda Python 3.5.1. Thanks! Traceback (most recent call last): File "C:\..\..\Anaconda\Scripts\conda-script.py", line 4, in <module> File "C:\..\..\Anaconda\lib\site-packages\conda\cli\main.py", line 173, in main File "C:\..\..\Anaconda\lib\site-packages\conda\cli\main.py", line 180, in args_func File "C:\..\..\Anaconda\lib\site-packages\conda\cli\main_install.py", line 45, in execute File "C:\..\..\Anaconda\lib\site-packages\conda\cli\install.py", line 423, in install File "C:\..\..\Anaconda\lib\site-packages\conda\plan.py", line 538, in execute_actions File "C:\..\..\Anaconda\lib\site-packages\conda\instructions.py", line 148, in execute_instructions File "C:\..\..\Anaconda\lib\site-packages\conda\instructions.py", line 91, in LINK_CMD File "C:\..\..\Anaconda\lib\site-packages\conda\instructions.py", line 87, in link File "C:\..\..\Anaconda\lib\site-packages\conda\install.py", line 616, in link File "C:\..\..\Anaconda\lib\os.py", line 241, in makedirs PermissionError: [WinError 5] Access is denied: 'C:\\..\\..\\Anaconda\\Doc'<issue_closed> <issue_comment>username_1: Definitely a problem with the original package provided by mbonix on anaconda.org.
<issue_start><issue_comment>Title: Connector authentication issue with the Tableau Online Sync Client username_0: The connector does not function when the Elasticsearch data source is configured with authentication in the Tableau Online Sync Client. The sync client logs indicate a 'password not found' error. To fix this, a change is needed to pass the username/password credentials for the datasource as normal Tableau connection data. This puts these credentials in cleartext in log files and wherever the connector metadata is stored (Tableau Online, users with Tableau Desktop).<issue_closed>
<issue_start><issue_comment>Title: this is probably something dumb on my end but.... username_0: my entire app (from docs): ```python #!/usr/bin/env python from flask import Flask, session from flask.ext.session import Session from flask import current_app import pprint app = Flask(__name__) # Check Configuration section for more details #SESSION_TYPE = 'redis' app.config.from_object('cvms_flask.config') Session(app) @app.route('/set/') def set(): pprint.pprint(current_app.config) session['key'] = 'value' return 'ok' @app.route('/get/') def get(): return session.get('key', 'not set') if __name__ == '__main__': app.run() ``` exception: ```bash Traceback (most recent call last): File "/Users/magregor/.virtualenvs/cvms/lib/python3.5/site-packages/flask/app.py", line 1836, in __call__ return self.wsgi_app(environ, start_response) File "/Users/magregor/.virtualenvs/cvms/lib/python3.5/site-packages/flask/app.py", line 1820, in wsgi_app response = self.make_response(self.handle_exception(e)) File "/Users/magregor/.virtualenvs/cvms/lib/python3.5/site-packages/flask/app.py", line 1403, in handle_exception reraise(exc_type, exc_value, tb) File "/Users/magregor/.virtualenvs/cvms/lib/python3.5/site-packages/flask/_compat.py", line 33, in reraise raise value File "/Users/magregor/.virtualenvs/cvms/lib/python3.5/site-packages/flask/app.py", line 1817, in wsgi_app response = self.full_dispatch_request() File "/Users/magregor/.virtualenvs/cvms/lib/python3.5/site-packages/flask/app.py", line 1477, in full_dispatch_request rv = self.handle_user_exception(e) File "/Users/magregor/.virtualenvs/cvms/lib/python3.5/site-packages/flask/app.py", line 1381, in handle_user_exception reraise(exc_type, exc_value, tb) File "/Users/magregor/.virtualenvs/cvms/lib/python3.5/site-packages/flask/_compat.py", line 33, in reraise raise value File "/Users/magregor/.virtualenvs/cvms/lib/python3.5/site-packages/flask/app.py", line 1475, in full_dispatch_request rv = self.dispatch_request() File "/Users/magregor/.virtualenvs/cvms/lib/python3.5/site-packages/flask/app.py", line 1461, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/Users/magregor/src/cisco/cvms/tests/testsession.py", line 16, in set session['key'] = 'value' File "/Users/magregor/.virtualenvs/cvms/lib/python3.5/site-packages/werkzeug/local.py", line 341, in __setitem__ self._get_current_object()[key] = value File "/Users/magregor/.virtualenvs/cvms/lib/python3.5/site-packages/flask/sessions.py", line 126, in _fail raise RuntimeError('the session is unavailable because no secret ' RuntimeError: the session is unavailable because no secret key was set. Set the secret_key on the application to something unique and secret. ``` however dumping current_app.config (right before attempting to use the session): ``` { #snip 'RULES_LOGLEVEL': 'DEBUG', 'SECRET_KEY': '505<hash-snipped/>', 'SEND_FILE_MAX_AGE_DEFAULT': 43200, 'SERVER_NAME': None, #snip } ``` <issue_comment>username_0: well it's not in the docs and the exception is misleading at best but apparently thereis no default session handler and that caused the exception. When I selected filesystem and commented out all of the config options set to their Null defaults everything worked.<issue_closed>
<issue_start><issue_comment>Title: cron_job table data type and table index username_0: The ID inside the cron_manager table is unsigned but the foreign key in cron_job is not, so joins will not be able to use an index. I also added an index containing the name and cron_manager_id for further queries. <issue_comment>username_1: Thanks for improving Cron!
<issue_start><issue_comment>Title: Error: Invalid arguments supplied for memnstr() after change controller name at event 'dispatch:beforeDispatch' username_0: Example code: <pre> /* * Auto camelize controller name */ $eventsManager->attach( 'dispatch:beforeDispatch', function ($event, $dispatcher) { $controller = \Phalcon\Text::camelize($dispatcher->getControllerName()); $dispatcher->setControllerName($controller); } ); </pre> If to change event 'dispatch:beforeDispatch' to 'dispatch:beforeDispatchLoop' the error disappears. I catch exception at function 'phalcon_memnstr_str' in dispatch.c line 695. Phalcon 1.3.2 <bountysource-plugin> --- Want to back this issue? **[Post a bounty on it!](https://www.bountysource.com/issues/4367081-error-invalid-arguments-supplied-for-memnstr-after-change-controller-name-at-event-dispatch-beforedispatch?utm_campaign=plugin&utm_content=tracker%2F50707&utm_medium=issues&utm_source=github)** We accept bounties via [Bountysource](https://www.bountysource.com/?utm_campaign=plugin&utm_content=tracker%2F50707&utm_medium=issues&utm_source=github). </bountysource-plugin><issue_closed>
<issue_start><issue_comment>Title: feature request - support for Service Bus for Windows username_0: "Service Bus for Windows Server is a set of installable components that provides the messaging capabilities of Windows Azure Service Bus on Windows. Service Bus for Windows Server enables you to build, test, and run loosely-coupled, message-driven applications in self-managed environments and on developer computers. Service Bus queues offer reliable message storage and retrieval with a choice of protocols and APIs. Building on the same foundation as queues, Service Bus topics provide rich publish/subscribe capabilities that allow multiple, concurrent subscribers to independently retrieve filtered or unfiltered views of the published message stream." http://msdn.microsoft.com/en-us/library/dn282144.aspx NOTE: There may be minor incompatibilities between the DLL versions used by Windows and Asure - see: https://code.msdn.microsoft.com/windowsapps/service-bus-explorer-f2abca5a<issue_closed>
<issue_start><issue_comment>Title: RC6 Forms: Empty form array enables itself username_0: **I'm submitting a ...** (check one with "x")
```
[x] bug report => search github for a similar issue or PR before submitting
[ ] feature request
[ ] support request => Please do not submit support request here, instead see https://github.com/angular/angular/blob/master/CONTRIBUTING.md#question
```
**Current behavior** An empty form array enables itself after an `updateValueAndValidity` **Expected/desired behavior** The form array should stay disabled. **Reproduction of the problem** [If the current behavior is a bug or you can illustrate your feature request better with an example](http://plnkr.co/edit/yIu2axqSzXWVJPhF6Hw6) I think that the form array is re-enabled by this [line](https://github.com/angular/angular/blob/70b0ab457bf7942b74a3f2b659684d0328e9d2dc/modules/%40angular/forms/src/model.ts#L914) * **Angular version:** 2.0.0-rc.6 * **Browser:** [all] * **Language:** [all] <issue_comment>username_1: Thanks for reporting! On it.<issue_closed> <issue_comment>username_0: Thanks @username_1 for fixing it
<issue_start><issue_comment>Title: Auth Request to local URL causing timeout issue. username_0: This might also be related to https://github.com/username_1/laravel-echo-server/issues/8 My setup in `server.js` during development is to use a `.dev` url on Homestead. ![image](https://cloud.githubusercontent.com/assets/2513663/18047072/d2ec29da-6dd2-11e6-87cf-67126828a5a4.png) I was having no issues using normal channels and then I started to play with private channels. I was having very strange results where I could only listen to a broadcasted event in my private channel about 30% of the time. So I knew it was working, and that it was authorising ok, only about 70% of the time I was testing I couldn't receive the event.... After a lot of debugging I finally found out that the `serverRequest` sent by `laravel-echo-server` to my `authHost` url was taking exactly 2mins to get a response. ![image](https://cloud.githubusercontent.com/assets/2513663/18047597/9cb631dc-6dd5-11e6-916a-85be8ea64cf2.png) I added some `console.log` here and this is what I got when I ran the server: ```sh ✘ vagrant@homestead  ~/Code/new.nn.com  node server.js development EchoServer: "Server running at http://new.wn.dev:6001" authentication {"url":"http://new.wn.dev/broadcasting/auth","form":{"channel_name":"private-App.User.1"},"headers":{"X-CSRF-TOKEN":"ktHclK1jxdnerkkV49df1r0LyydcOuHHjcoxXafH"},"rejectUnauthorized":false} About to send Request at 1472463808 Response received at 1472463928 ``` Because I wasn't always waiting the full 2 mins before firing an event, it appeared to me that the channel wasn't working when in fact I just hadn't authorised yet. If I waited the 2 mins then fired my events, I got them every single time. 2 mins seemed like a very precise time, so i did some more testing. If I changed it to a normal URL (say google.com), the request was sent instantly and I got a response (albeit an incorrect one), but as soon as I tried setting it back to my development URL it took exactly 2 mins again. That makes me believe that there's some issue with the dnslookup for sending requests. It seems like it was trying to lookup my `.dev` URL on the internet, and not use my local dns `/etc/hosts` to route directly back to the same server. It appears that it's set for a timeout of 120 secs before it gives up on looking online and reverts back. Again I know very little about `Node.js`, but from what i can gather, it seems like you are using the `Request` library, and when I went looking in there it seems like the `Request` library using the core `Node.js` library to deal with dns lookups etc. SO. Do you have any idea how to fix this? Is there something in your library we can add as a request option? Is the a bug in the `Request` library, is it a `Node.js` issue? That's as far as I could debug too and now I need some help if you are able!? Thank you. <issue_comment>username_1: @username_0 Does this help? https://github.com/username_1/laravel-echo-server/issues/14 <issue_comment>username_0: No. This is still plaguing me and I cannot find a reason. I can get receive ALL broadcasts sent to the page, as long as I open the page and wait *exactly* 2 mins for something to timeout. It's crazy, I don't even know where to debug. I ran a test. I open my webpage and have 3 listeners set up. 1 for public broadcast (no auth needed) 1 for private broadcast (auth needed) 1 for presence channel (auth needed). 
I can see that as soon as I open my page, an ajax/socket.io request is instantly sent to my authHost (`http://new.wn.dev`) with all the correct data. However it never gets to the laravel app....it seems to timeout exactly 2 mins later. In another window I fire off 3 events, 1 public, 1 private and 1 presence channel. The public one gets received (console log) instantly. The other 2 never arrive. As soon as 2 mins is up and I reload page 2 to send the 3 events, all 3 get logged immediately! Any thoughts gladly received! <issue_comment>username_0: I've never spent so much time on a bug before - coming up on 3 weeks now. But I finally have an answer to this. And of course it's incredibly simple and complex all at the same time. After **hours and hours of testing** code and debugging variables, I finally noticed that one of my sites was resolving to 127.0.53.53 and not 127.0.0.1 as would be the norm on vagrant. This stood out like a sore thumb and I went googling. I finally came across a few threads and it all came together. My websites all use `.dev` as the domain TLD and I've been using that for quite a while without ever having an issue before. However, it appears google is in the process of being the owner of the `.dev` domain and in doing so a name collision occurs when you try and use it for your own projects in certain circumstances and software versions. Here's more info : https://www.icann.org/resources/pages/name-collision-2013-12-06-en#127.0.53.53 So what was happening was that node was trying to resolve my domain of `new.wn.dev` and was being directed to 127.0.53.53 instead of 127.0.0.1 For whatever reason with software versions installed and Ubuntu versions etc, this then caused a timeout for exactly 120 secs before everything started to work. (May also be related to this http://www.sekuda.com/overriding_the_default_linux_kernel_20_second_tcp_socket_connect_timeout) # The Fix So simple. Stop using ANY TLD that is in the process of being setup/bought (eg `.dev`) and use one that has been designed specifically not to be used in the future. eg `.test` More suggestions here: https://iyware.com/dont-use-dev-for-development/ As soon as I changed my developement domains to `.test` everything just clicked into place. Public, private and presence channels all work now instantly without any timeouts etc. And finally I can sleep without having this at the back of my mind! For the record, here's what my `laravel-echo-server.json` file looks like. ```json { "appKey": "k7mo168sssssssssssssssssssssss6qtfh", "authEndpoint": "/broadcasting/auth", "authHost": "http://new.wn.test", "database": "redis", "databaseConfig": { "redis": {}, "sqlite": { "databasePath": "/database/laravel-echo-server.sqlite" } }, "devMode": false, "host": "localhost", "port": "6001", "protocol": "http", "referrers": [], "sslCertPath": "", "sslKeyPath": "", "verifyAuthPath": true, "verifyAuthServer": false } ``` I hope this helps someone.<issue_closed>
<issue_start><issue_comment>Title: [feature request] Update Electron to 1.4.6 username_0: <issue_comment>username_1: You can read about our electron upgrade procedure in the following document https://github.com/atom/design-decisions/blob/master/electron-update-procedure.md We have a PR open for upgrading electron already.<issue_closed>
<issue_start><issue_comment>Title: PhantomJS release v2.1.1 username_0: The download page has 2.1.1 http://phantomjs.org/download.html Please could you bundle this up for `phantomjs-maven-plugin` and update here when it's ready? (I'd submit a PR, but I don't know how to for this.) <issue_comment>username_1: +1 <issue_comment>username_2: +1 <issue_comment>username_3: It already works. Try it: <plugin> <groupId>com.github.username_4</groupId> <artifactId>phantomjs-maven-plugin</artifactId> <version>0.7</version> <executions> <execution> <goals> <goal>install</goal> </goals> </execution> </executions> <configuration> <version>2.1.1</version> </configuration> </plugin> <issue_comment>username_2: My bad, works for me now. Our local Nexus had problems. <issue_comment>username_4: Sorry I'm just looking at this now, but as others have noted `2.1.1` should be in maven central as of 1/28/2016: http://search.maven.org/#artifactdetails%7Ccom.github.username_4%7Cphantomjs%7C2.1.1%7CN%2FA<issue_closed>
<issue_start><issue_comment>Title: lib: add a wrapper to release a buffer which comes from out_lib_flush username_0: I added a new wrapper function `flb_lib_free` to release a buffer which comes from `out_lib_flush`. Now, `out_lib` sends raw msgpack to the user callback, and the user should release the msgpack with `free(3)` after using it. In the future, the data format which `out_lib` sends may change. The way to release the buffer may also change (e.g. it may become necessary to call mk_list_del and so on). That would also affect user programs which use fluent-bit as a library. So, the new release function prevents users from having to rewrite their programs even if the data format is changed. <issue_comment>username_1: I think the right approach to this is to _always_ let the user take care of the memory allocation. Exposing a new API function for a specific plugin internal is not desired. If a future out_lib supports JSON, the rows should have a new allocation per call; there is nothing wrong with it since the main goal is to protect Fluent Bit internals (e.g. think of a kernel buffer copy for userspace). <issue_comment>username_0: Well, right now `out_lib` returns only 1 record per call. If out_lib returned several records per call (to reduce the overhead of function calls), it might improve performance. <issue_comment>username_1: Well, should we move it to in_lib/in_lib.c? You are totally right. So the thing is: - Library API (flb_lib) is the only one that exposes something to handle internals. - in_lib requires flb_lib_push() to ingest data -> we are doing an exception. - how to handle the out_lib buffer in the callback context? What I am thinking now is that out_lib should do: 1. iterate records 2. allocate one buffer and copy one record 3. copy the record into the buffer 4. invoke the user callback 5. next iteration, copy the new record into the same buffer (realloc(2) if required) 6. continue... so in the end, the one that takes care of releasing the buffer is out_lib itself; the user receives a reference and a size, and it is up to them what to do with it. What do you think? <issue_comment>username_0: Here, the buffer means a buffer for the user, right? If so, I think it is not so good, because fluent-bit can't determine whether the user has finished using that buffer. (e.g. the user may store some records, and then fluent-bit would break the stored buffer) <issue_comment>username_0: Discarding this PR. Cleaning up.
<issue_start><issue_comment>Title: Project tails to build on OSX 10.9.5 running a recent MacPorts username_0: 1) There are some CMake warnings: ``` $ cmake .. -- Build type: Debug -- Configuring done CMake Warning (dev) in CMakeLists.txt: Policy CMP0043 is not set: Ignore COMPILE_DEFINITIONS_<Config> properties. Run "cmake --help-policy CMP0043" for policy details. Use the cmake_policy command to set the policy and suppress this warning. This warning is for project developers. Use -Wno-dev to suppress it. . . (repetition of above CMake warning...) . -- Generating done -- Build files have been written to: /Users/marko/WC/GIT/krakenapi/build ``` 2) And `make` fails: ``` $ make [ 13%] Built target kapi [ 18%] Building CXX object CMakeFiles/libjson.dir/libjson/_internal/Source/JSONChildren.cpp.o In file included from /Users/marko/WC/GIT/krakenapi/libjson/_internal/Source/JSONChildren.cpp:1: In file included from /Users/marko/WC/GIT/krakenapi/libjson/_internal/Source/JSONChildren.h:4: In file included from /Users/marko/WC/GIT/krakenapi/libjson/_internal/Source/JSONMemory.h:6: In file included from /Users/marko/WC/GIT/krakenapi/libjson/_internal/Source/JSONDebug.h:4: In file included from /Users/marko/WC/GIT/krakenapi/libjson/_internal/Source/JSONDefs.h:12: /Users/marko/WC/GIT/krakenapi/libjson/_internal/Source/JSONDefs/GNU_C.h:1:9: warning: 'JSON_GNU_C_HEADER' is used as a header guard here, followed by #define of a different macro [-Wheader-guard] #ifndef JSON_GNU_C_HEADER ^~~~~~~~~~~~~~~~~ /Users/marko/WC/GIT/krakenapi/libjson/_internal/Source/JSONDefs/GNU_C.h:2:9: note: 'JSON_GUN_C_HEADER' is defined here; did you mean 'JSON_GNU_C_HEADER'? #define JSON_GUN_C_HEADER ^~~~~~~~~~~~~~~~~ JSON_GNU_C_HEADER /Users/marko/WC/GIT/krakenapi/libjson/_internal/Source/JSONChildren.cpp:75:29: error: comparison between pointer and integer ('JSONNode *' and 'int') JSON_ASSERT(*runner != JSON_TEXT('\0'), JSON_TEXT("a null pointer within the children")); ~~~~~~~ ^ ~~~~ /Users/marko/WC/GIT/krakenapi/libjson/_internal/Source/JSONDebug.h:27:58: note: expanded from macro 'JSON_ASSERT' #define JSON_ASSERT(bo, msg) JSONDebug::_JSON_ASSERT(bo, msg) ^ 1 warning and 1 error generated. make[2]: *** [CMakeFiles/libjson.dir/libjson/_internal/Source/JSONChildren.cpp.o] Error 1 make[1]: *** [CMakeFiles/libjson.dir/all] Error 2 make: *** [all] Error 2 ``` <issue_comment>username_1: Hi, I've updated CMakeFile.txt and a #define inside libjson. Could you retry? <issue_comment>username_0: Warnings gone, error persists. <issue_comment>username_1: On Fedora I can compile the program, no errors. I need more information, can you paste your last output with error messages? Thank you. 
<issue_comment>username_0: Tried this again on OS X 10.10.5 with Apple's compilers resulting in many warnings like these: ``` $ make Scanning dependencies of target kapi [ 4%] Building CXX object CMakeFiles/kapi.dir/kraken/kclient.cpp.o In file included from /Users/kieran/WC/krakenapi/kraken/kclient.cpp:14: In file included from /Users/kieran/WC/krakenapi/kraken/kclient.hpp:9: In file included from /Users/kieran/WC/krakenapi/kraken/ktrade.hpp:6: In file included from /Users/kieran/WC/krakenapi/kraken/../libjson/libjson.h:177: In file included from /Users/kieran/WC/krakenapi/kraken/../libjson/_internal/Source/JSONNode.h:5: In file included from /Users/kieran/WC/krakenapi/kraken/../libjson/_internal/Source/internalJSONNode.h:5: /Users/kieran/WC/krakenapi/kraken/../libjson/_internal/Source/JSONChildren.h:66:17: warning: 'this' pointer cannot be null in well-defined C++ code; comparison may be assumed to always evaluate to true [-Wtautological-undefined-compare] JSON_ASSERT(this != 0, JSON_TEXT("Children is null push_back")); ^~~~ ~ /Users/kieran/WC/krakenapi/kraken/../libjson/_internal/Source/JSONDebug.h:27:58: note: expanded from macro 'JSON_ASSERT' #define JSON_ASSERT(bo, msg) JSONDebug::_JSON_ASSERT(bo, msg) ^ ``` and later followed by said error: ``` /Users/kieran/WC/krakenapi/libjson/_internal/Source/JSONDebug.h:27:58: note: expanded from macro 'JSON_ASSERT' #define JSON_ASSERT(bo, msg) JSONDebug::_JSON_ASSERT(bo, msg) ^ /Users/kieran/WC/krakenapi/libjson/_internal/Source/JSONChildren.cpp:75:29: error: comparison between pointer and integer ('JSONNode *' and 'int') JSON_ASSERT(*runner != JSON_TEXT('\0'), JSON_TEXT("a null pointer within the children")); ~~~~~~~ ^ ~~~~ /Users/kieran/WC/krakenapi/libjson/_internal/Source/JSONDebug.h:27:58: note: expanded from macro 'JSON_ASSERT' #define JSON_ASSERT(bo, msg) JSONDebug::_JSON_ASSERT(bo, msg) ^ /Users/kieran/WC/krakenapi/libjson/_internal/Source/JSONChildren.cpp:81:17: warning: 'this' pointer cannot be null in well-defined C++ code; comparison may be assumed to always evaluate to true [-Wtautological-undefined-compare] JSON_ASSERT(this != 0, JSON_TEXT("Children is null doerase")); ^~~~ ~ /Users/kieran/WC/krakenapi/libjson/_internal/Source/JSONDebug.h:27:58: note: expanded from macro 'JSON_ASSERT' #define JSON_ASSERT(bo, msg) JSONDebug::_JSON_ASSERT(bo, msg) ^ 20 warnings and 1 error generated. make[2]: *** [CMakeFiles/libjson.dir/libjson/_internal/Source/JSONChildren.cpp.o] Error 1 make[1]: *** [CMakeFiles/libjson.dir/all] Error 2 make: *** [all] Error 2 $ clang --version Apple LLVM version 7.0.2 (clang-700.1.81) Target: x86_64-apple-darwin14.5.0 Thread model: posix ``` Don't know which other (log) files or additional info would be interesting for you... 
<issue_comment>username_0: This makes it build at least: ``` $ git diff diff --git a/libjson/_internal/Source/JSONChildren.cpp b/libjson/_internal/Source/JSONChildren.cpp index a5492d9..f0e9883 100644 --- a/libjson/_internal/Source/JSONChildren.cpp +++ b/libjson/_internal/Source/JSONChildren.cpp @@ -72,7 +72,7 @@ void jsonChildren::inc(json_index_t amount) json_nothrow { void jsonChildren::deleteAll(void) json_nothrow { JSON_ASSERT(this != 0, JSON_TEXT("Children is null deleteAll")); json_foreach(this, runner){ - JSON_ASSERT(*runner != JSON_TEXT('\0'), JSON_TEXT("a null pointer within the children")); + JSON_ASSERT(*runner != 0, JSON_TEXT("a null pointer within the children")); JSONNode::deleteJSONNode(*runner); //this is why I can't do forward declaration } } ``` <issue_comment>username_1: I think the problem is the compiler. I'm using GNU whereas you're using CLang. I've updated CMakeList.txt and the JSON library to suppress warnings and remove the error. <issue_comment>username_0: Yeah, I know. Please see my comments to your latest commit 6cd5cbf32523a0255c509ed387f87b8394ee8350. <issue_comment>username_1: I've update CMakeLists.txt to add flat_namespace to Apple linker. <issue_comment>username_0: This is what I get on my system if I take your code: ``` clang: warning: argument unused during compilation: '-flat_namespace' clang: warning: argument unused during compilation: '-undefined suppress' warning: unknown warning option '-Wno-tautological-undefined-compare'; did you mean '-Wno-tautological-compare'? [-Wunknown-warning-option] ``` The warning I had already pointed out as a comment in your code. <issue_comment>username_0: If I revert back to the use of `-Wno-tautological-compare` the corresponding warning vanishes. I forgot to mention that `ranlib`'s warnings are still there: ``` [ 77%] Building CXX object CMakeFiles/libjson.dir/libjson/_internal/Source/libjson.cpp.o clang: warning: argument unused during compilation: '-flat_namespace' clang: warning: argument unused during compilation: '-undefined suppress' [ 81%] Linking CXX static library libjsond.a /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libjsond.a(JSONAllocator.cpp.o) has no symbols /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ranlib: file: libjsond.a(JSONMemory.cpp.o) has no symbols ... ```<issue_closed>
<issue_start><issue_comment>Title: 🛑 Politiche Agricole is down username_0: In [`cc0cd37`](https://github.com/username_0/infosecnews-help-gov-it/commit/cc0cd37d42e804e411c2b3f64fcc92d65b1ef83b ), Politiche Agricole (https://www.politicheagricole.it/) was **down**: - HTTP code: 0 - Response time: 0 ms <issue_comment>username_0: **Resolved:** Politiche Agricole is back up in [`8fcebaf`](https://github.com/username_0/infosecnews-help-gov-it/commit/8fcebafc373c4582a39ea282598f0f4c029acaff ).<issue_closed>
<issue_start><issue_comment>Title: Removed logic breaking returning of JSON content username_0: There is no documentation for a send function in node's HTTP, so I assume it's supposed to be an Express response. Anyway, I have never seen a call to `.send` with a response object passed as an argument in any documentation. On the other hand, it breaks passing objects (intended as JSON responses) to the `.send` method if they have a field called `body`. <issue_comment>username_1: Hey Alan, where did we leave off with this PR? <issue_comment>username_2: I didn't get a chance to look over the pull request. I won't have time to until this weekend. <issue_comment>username_3: @username_1 / @username_2, @username_0 is correct. If the argument passed to `res.send()` is an object, Express does not check the object for status codes or use only the `body` key when present. This is definitely a deviation from Express' implementation. When Express' `res.send()` encounters an object it actually passes it directly to `res.json()` (which in turn stringifies it and passes it back to `res.send()`). @username_0's PR sets `_data` to the passed object, instead of passing it to `.json()`, which is also inconsistent with Express' implementation. While we definitely need to address @username_0's concerns, this PR should not be merged as-is. <issue_comment>username_3: This issue should be taken care of once we roll out release `2.0` (see our [Roadmap to Release 2.0](https://github.com/username_1/node-mocks-http/issues/54), issue #54).
<issue_start><issue_comment>Title: simpleCart php database username_0: I am trying to get simpleCart to send variables to a database. I have tried almost everything but it comes with an error PHP Notice: Undefined index: itemCount in E:\inetpub\gadekoekkenet_umbraco\php\order.php on line 34 if I use print_r ($_POST); I get an empty array so it looks like it doesn't get the data from the cart. This is my php-file so I hope someone can help me in the right direction. <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <title></title> <meta name="description" content=" " /> <meta name="author" content=" " /> <meta name="HandheldFriendly" content="true" /> <meta name="MobileOptimized" content="320" /> <meta name="viewport" content="width=device-width" /> <link rel="stylesheet" href="../css/style.css"> <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script> <script src="../scripts/simpleCart.js"></script> </head> <body> <h1>Hello, World!</h1> <p>Go ahead and add your content!</p> jgjgj <?php print_r(PDO::getAvailableDrivers()); ?> <?php include('connect-db.php'); ?> <form name="input" action="" method="post"> <div class="simpleCart_items"></div> KundeNavn: <input type="text" name="KundeNavn"><br> KundeAdresse: <input type="text" name="KundeAdresse"><br> KundePostnummer: <input type="text" name="KundePostnummer" value="">&ensp;KundeBy:<input type="text" name="KundeBy"><br><br> <?php $content = $_POST; for($i=1; $i < $content['itemCount'] + 1; $i++) { $name = 'item_name_'.$i; $quantity = 'item_quantity_'.$i; $price = 'item_price_'.$i; $total = $content[$quantity]*$content[$price]; } ?> <input type="submit" value="Submit" name="Submit" onclick="simpleCart.checkout()"> </form> <?php if(isset($_POST['Submit'])){ $KundeNavn = $_POST['KundeNavn']; $KundeAdresse = $_POST['KundeAdresse']; $KundePostnummer = $_POST['KundePostnummer']; $KundeBy = $_POST['KundeBy']; $query = "INSERT INTO Ordre ([KundeNavn], [KundeAdresse], [KundePostnummer], [KundeBy], [MenuItem], [MenuQuantity], [MenuPrice], [MenuTotal]) VALUES('$KundeNavn', '$KundeAdresse', '$KundePostnummer', '$KundeBy', '$name', '$price', '$quantity', '$total' )"; $result = sqlsrv_query($conn, $query)or die('Error querying MSSQL database'); } ?> <div class=""> <?php if( $conn === false ) { die( print_r( sqlsrv_errors(), true)); } $sql = "SELECT * FROM Ordre"; $stmt = sqlsrv_query( $conn, $sql ); if( $stmt === false) { die( print_r( sqlsrv_errors(), true) ); } while( $row = sqlsrv_fetch_array( $stmt, SQLSRV_FETCH_ASSOC) ) { [Truncated] // simpleCart.bind( 'beforeCheckout' , function( data ){ // data.first_name = document.getElementById("first_name").value; // data.last_name = document.getElementById("last_name").value; // data.email = document.getElementById("email").value; // data.phone = document.getElementById("phone").value; // data.comments = document.getElementById("comments").value; // }); // example of modifying the price of the item based on a 'size' attribute simpleCart.bind( 'beforeAdd' , function( item ){ if( item.get( 'size' ) == '1' ){ item.price( 48 ); } else if( item.get( 'size' ) == '2' ){ item.price( 55 ); } }); </script> </html>
<issue_start><issue_comment>Title: Problems during unzipping on xls file parsing username_0: I have a normal xls file i want to parse. So, I'm doing this: ```ruby RubyXL::Parser.parse "path/to/file.xls", skip_filename_check: true ``` And I get the following error : Zip::ZipError: Zip end of central directory signature not found from /usr/local/lib/ruby/gems/1.9.1/gems/rubyzip-0.9.9/lib/zip/zip_central_directory.rb:97:in `get_e_o_c_d' from /usr/local/lib/ruby/gems/1.9.1/gems/rubyzip-0.9.9/lib/zip/zip_central_directory.rb:55:in `read_e_o_c_d' from /usr/local/lib/ruby/gems/1.9.1/gems/rubyzip-0.9.9/lib/zip/zip_central_directory.rb:85:in `read_from_str from /usr/local/lib/ruby/gems/1.9.1/gems/rubyzip-0.9.9/lib/zip/zip_file.rb:67:in `block in initialize' from /usr/local/lib/ruby/gems/1.9.1/gems/rubyzip-0.9.9/lib/zip/zip_file.rb:66:in `open' from /usr/local/lib/ruby/gems/1.9.1/gems/rubyzip-0.9.9/lib/zip/zip_file.rb:66:in `initialize' from /usr/local/lib/ruby/gems/1.9.1/gems/rubyzip-0.9.9/lib/zip/zip_file.rb:87:in `new' from /usr/local/lib/ruby/gems/1.9.1/gems/rubyzip-0.9.9/lib/zip/zip_file.rb:87:in `open' from /usr/local/lib/ruby/gems/1.9.1/gems/rubyXL-1.2.10/lib/rubyXL/zip.rb:10:in `unzip' from /usr/local/lib/ruby/gems/1.9.1/gems/rubyXL-1.2.10/lib/rubyXL/parser.rb:326:in `decompress' from /usr/local/lib/ruby/gems/1.9.1/gems/rubyXL-1.2.10/lib/rubyXL/parser.rb:47:in `parse' from (irb):4 from /usr/local/lib/ruby/gems/1.9.1/gems/railties-3.2.11/lib/rails/comma from /usr/local/lib/ruby/gems/1.9.1/gems/railties-3.2.11/lib/rails/comma Do you have an idea what it might be? The file doesn't seem corrupt, at least I can open it in microsoft excel. <issue_comment>username_1: I am using this library to open .xlsx file and got the same error <issue_comment>username_2: @username_1 : I'm 99.9% sure your file is in `xls` format but with changed extension. If you want me to analyze it, upload it somewhere and give me the link. You want to use https://github.com/zdavatz/spreadsheet for `.xls` files. <issue_comment>username_3: Im having same prob, is there any mail where I can send you the file? <issue_comment>username_2: @username_3: Here's an easy way for you to test on your own: 1. Change the extension of your `.xlsx` file to `.zip` 2. Try opening it in Windows like a `zip` file. 3. If it doesn't open, then it is `.xls` file, and not `.xlsx` file. <issue_comment>username_3: Ok, thanks, im getting the error OLE2 signature is invalid, try everything I found researching but nothing make it work. <issue_comment>username_2: @username_3: In that case, it is NOT an Excel 2003+ file, it's an Excel 97 file, and this gem won't help you. You need the other gem: https://github.com/zdavatz/spreadsheet <issue_comment>username_3: Thanks, Ive just read this, but I started using spreadhseet and that one is the one who faile with ole2. Thanks for everything <issue_comment>username_2: @username_3: OK, you can upload your file somewhere (for example, https://mega.co.nz/) and give me the link so I can look at it. It's strange, it should be either one or the other. <issue_comment>username_3: @username_2 https://mega.nz/#!bwAX1CqK <issue_comment>username_2: @username_3 you made that file private. I can't download it. <issue_comment>username_3: @username_2 its public, try with this one https://mega.nz/#!bwAX1CqK!5A1o4Ldl03nmZzJb1nX1P9YnQr0LylpULwqIoGMzJq8 key : !5A1o4Ldl03nmZzJb1nX1P9YnQr0LylpULwqIoGMzJq8 <issue_comment>username_2: @username_3: this is NOT an Excel file at all. It's an HTML file. Just open it with Notepad. 
<issue_comment>username_3: @username_2 sorry, man... <issue_comment>username_4: This happens when the xlsx file is currently open in another application (Excel...)
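The manual check described above (rename to `.zip` and see whether it opens) can be automated by looking at the file's magic bytes. Below is a minimal sketch, written in Python rather than Ruby and not part of rubyXL, that distinguishes a real `.xlsx` (a ZIP container), a legacy `.xls` (an OLE2 compound document), and an HTML export saved with a spreadsheet extension; the helper name is hypothetical.

```python
# Hypothetical helper (not part of rubyXL): sniff the real format of a
# spreadsheet file by its magic bytes instead of trusting the extension.
def sniff_spreadsheet(path):
    with open(path, "rb") as f:
        head = f.read(512)

    if head.startswith(b"PK\x03\x04"):
        # ZIP archive -> a genuine .xlsx (Office Open XML) container
        return "xlsx"
    if head.startswith(b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1"):
        # OLE2 compound document -> legacy .xls (Excel 97-2003)
        return "xls"
    if head.lstrip().lower().startswith((b"<!doctype", b"<html", b"<table")):
        # Plain HTML that was saved with an .xls/.xlsx extension
        return "html"
    return "unknown"

if __name__ == "__main__":
    import sys
    print(sniff_spreadsheet(sys.argv[1]))
```

A file reported as `xls` or `html` by a check like this will fail in rubyXL no matter what, since the gem only reads the ZIP-based `.xlsx` format.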
<issue_start><issue_comment>Title: add: Show how many lines an author contributes username_0: This tip comes from http://stackoverflow.com/questions/1265040/how-to-count-total-lines-changed-by-a-specific-author-in-a-git-repository In addition, ```git shortlog -sne``` shows how many commits each author has contributed. <issue_comment>username_1: Thanks for the PR, but it needs a rebase. ![reba-mcentire-mygifsets-irh1ntI6CuZws](https://camo.githubusercontent.com/e0d66835e5a3c9a6c36b1e0c255115e00fb3aa08/68747470733a2f2f6d65646961332e67697068792e636f6d2f6d656469612f697268316e74493643755a77732f67697068792e676966) <issue_comment>username_0: Rebase done.
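For readers who do not want to follow the Stack Overflow link: the idea behind the tip is to sum the `--numstat` output of `git log` for a single author. The Python sketch below is my own illustration of that counting, not the exact snippet added in this PR, and the author name is a placeholder.

```python
# Sketch: total lines added/removed by one author, summed from `git log --numstat`.
import subprocess

def lines_by_author(author, repo="."):
    out = subprocess.check_output(
        ["git", "-C", repo, "log", "--author", author,
         "--pretty=tformat:", "--numstat"],
        text=True,
    )
    added = removed = 0
    for line in out.splitlines():
        parts = line.split("\t")
        if len(parts) != 3:
            continue
        a, d = parts[0], parts[1]
        if a.isdigit():          # binary files show "-" instead of a number
            added += int(a)
        if d.isdigit():
            removed += int(d)
    return added, removed

print(lines_by_author("Some Author"))
```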
<issue_start><issue_comment>Title: Dialogs in dark theme username_0: I don't know if this is a bug, but if you change the theme to Dark and open a dialog, it will be shown in the light theme. Is this by design? <issue_comment>username_1: Yes, this is actually by design. Last time I checked there were no recommendations for dark dialogs in Google's specs, and certainly most apps on my Android phone have light dialogs, even if they are using a dark theme. Plus, I think lighter dialogs provide better consistency and looks, even in a dark theme. However, you should be able to override it by bringing the dark resource dictionary into your content.<issue_closed>
<issue_start><issue_comment>Title: Fixed publishing of VS15 symbols for CI builds username_0: Fixes NuGet/Home#3185 - Added a parameter pointing to the right version of NuGet.Core for pdb publishing - Added pdb to the command line package nuspec - Minor script fixes //cc @drewgil @rrelyea @username_1 @emgarten <issue_comment>username_1: If E2E passes, :shipit:
<issue_start><issue_comment>Title: Dimension wildcard for <Line> username_0: In
```
<ComponentType name="Line">
  <Parameter name="scale" dimension="*"/>
  <Parameter name="timeScale" dimension="*"/>
  <Path name="quantity"/>
  <Text name="color"/>
  <Simulation>
    <Record quantity="quantity" timeScale="timeScale" scale="scale" color="color"/>
  </Simulation>
</ComponentType>
```
we should say that `scale` takes the dimension of `<Path name='quantity'>`, whereas `timeScale` should point to the "magic" universal time (magic in the sense that it has "special" behaviour in e.g. TimeDerivatives). Ideally, we should be able to use any other variable on the x-axis, so the same reasoning as above applies. And I guess the scaling factors should be made dimensionless, with all dimensionality delegated to the time series being plotted.
<issue_start><issue_comment>Title: Producing sourcemaps with Compass and Autoprefixer raises a runtime error username_0: I added this to my `config.rb` file.
```
require 'autoprefixer-rails'

on_stylesheet_saved do |file|
  css = File.read(file)
  map = file + '.map'

  if File.exists? map
    result = AutoprefixerRails.process(css,
      from: file,
      to: file,
      map: { prev: map, inline: false })
    File.open(file, 'w') { |io| io << result.css }
    File.open(map, 'w') { |io| io << result.map }
  else
    File.open(file, 'w') { |io| io << AutoprefixerRails.process(css) }
  end
end
```
This returns the error:
```
ExecJS::RuntimeError on line ["47"] of /Users/me/.rvm/gems/ruby-2.1.0@global/gems/execjs-2.2.1/lib/execjs/ruby_racer_runtime.rb: SyntaxError: Unexpected token /
```
If I declare the following without source map detection, then everything outputs fine:
```
require 'autoprefixer-rails'

on_stylesheet_saved do |file|
  css = File.read(file)
  File.open(file, 'w') do |io|
    io << AutoprefixerRails.process(css)
  end
end
```
<issue_comment>username_1: Yep, I know about this issue, but I have no idea how to fix it. It happens somewhere in ExecJS. The problem is on [line 47](https://github.com/username_1/autoprefixer-rails/blob/master/vendor/autoprefixer.js#L47).

If you know Ruby, I suggest you change the ExecJS runtime (to Node.js, for example).

But the best option is to move to Gulp, because Gulp will open up an entire world of other awesome plugins (like other PostCSS plugins, or the imagemin tool).<issue_closed>
<issue_start><issue_comment>Title: Accordion body incorrect height calculation username_0: Accordion body height is calculated incorrectly when it has hidden content. Please check [plnkr](http://plnkr.co/edit/ZhPtRFB11zq3EXsClsmF?p=preview) Initially it has a height that exceeds the visible content. After toggling, the body is shown correctly, but when I toggle the hidden content it appears cut off. <issue_comment>username_1: It is probably more of a collapse issue, similar to #4561 (it will probably be the same issue). <issue_comment>username_1: When the PR gets merged, you will need to update to 1.4.5 because a related bug is fixed in that version.<issue_closed>
<issue_start><issue_comment>Title: ScrollBehavior refactor username_0: * When using an OverscrollIndicator, have the corresponding ScrollBehavior actually clamp at min/max offset. #5226 * When the ScrollBehavior has clamped, it should fire an "overscroll" notification saying by how much more it would have scrolled if it hadn't in fact clamped. * OverscrollIndicator should use that to grow itself / darken itself. #5274 #5275 * OverscrollIndicator should drive all its animations of its own animation controller, not the scroll offset. #5279 * OverscrollIndicator should paint based on the scroll offset, though. #5276 This doesn't help #5277. <issue_comment>username_0: See also #2164, #844.<issue_closed> <issue_comment>username_0: Much of this bug has been done, the rest is being tracked in individual bugs.
<issue_start><issue_comment>Title: old vs new hg support should be tested with funcargs username_0: also needs skipping for old hg ---------------------------------------- - Bitbucket: https://bitbucket.org/pypa/setuptools_scm/issue/8 - Originally reported by: Ronny Pfannschmidt - Originally created at: 2010-05-27T10:21:14.066<issue_closed>
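A rough sketch of what "tested with funcargs" could look like today, assuming pytest is the runner: a fixture (the modern form of a funcarg) that reports the installed Mercurial version so old-hg cases can be skipped. The fixture, the version parsing, and the 1.7 cut-off are my own assumptions for illustration, not code from setuptools_scm.

```python
# Hypothetical pytest sketch: expose the installed hg version as a fixture
# and skip tests that require a newer Mercurial.
import re
import subprocess
import pytest

@pytest.fixture
def hg_version():
    """Installed Mercurial version as a (major, minor) tuple, or skip."""
    try:
        out = subprocess.check_output(["hg", "--version"], text=True)
    except OSError:
        pytest.skip("mercurial is not installed")
    m = re.search(r"version\s+([0-9][0-9.]*)", out)
    if not m:
        pytest.skip("could not determine the hg version")
    return tuple(int(p) for p in m.group(1).split(".")[:2])

def test_new_style_hg(hg_version):
    # The 1.7 boundary is an assumed placeholder, not the real cut-off.
    if hg_version < (1, 7):
        pytest.skip("old hg: new-style behaviour does not apply")
    assert hg_version >= (1, 7)
```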
<issue_start><issue_comment>Title: Support for a Comment Character in JLine username_0: We were recently trying to figure out how to support comment characters in JLine. While we are able to parse content after it has been read from ConsoleReader, this doesn't let us pre-empt event expansion. This means lines like `#This is a comment!@ ` will throw an error in the parser prior to the line being given to the client application. `java.lang.IllegalArgumentException: !@: event not found ` It would be useful if JLine could support comments at the interpreter level. Text appearing after an unquoted `#` could be preemptively filtered out of the readLine method. <issue_comment>username_1: JLine 3 has a pluggable parser which can support that.<issue_closed>
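The behaviour requested here, dropping everything after an unquoted `#` before the line reaches event expansion, is easy to prototype outside JLine. The sketch below is in Python purely to illustrate the quoting rule; an actual solution would live in Java, for example inside a custom parser for JLine 3 as the maintainer suggests.

```python
# Illustration only: strip an unquoted '#' comment from an input line,
# honouring single quotes, double quotes, and backslash escapes.
def strip_comment(line):
    quote = None          # currently open quote character, if any
    escaped = False
    for i, ch in enumerate(line):
        if escaped:
            escaped = False
            continue
        if ch == "\\":
            escaped = True
            continue
        if quote:
            if ch == quote:
                quote = None
        elif ch in ("'", '"'):
            quote = ch
        elif ch == "#":
            return line[:i].rstrip()
    return line

assert strip_comment("#This is a comment!@ ") == ""
assert strip_comment("echo 'not # a comment' # but this is") == "echo 'not # a comment'"
```

With a filter like this applied before expansion, the `#This is a comment!@` example from the report would reach the parser as an empty line instead of triggering the `event not found` error.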
<issue_start><issue_comment>Title: svc issue username_0: Hi, I am trying to use svc on a cohort of samples. When I run it I get the following error message... does this mean I need to update something or is there some other fix? Thank you Traceback (most recent call last): File "/share/PI/euan/apps/bcbio/anaconda/lib/python2.7/site-packages/bcbio/distributed/ipythontasks.py", line 50, in _setup\ _logging yield config File "/share/PI/euan/apps/bcbio/anaconda/lib/python2.7/site-packages/bcbio/distributed/ipythontasks.py", line 362, in detec\ t_sv return ipython.zip_args(apply(structural.detect_sv, *args)) File "/share/PI/euan/apps/bcbio/anaconda/lib/python2.7/site-packages/bcbio/structural/__init__.py", line 144, in detect_sv for svdata in caller_fn(items): File "/share/PI/euan/apps/bcbio/anaconda/lib/python2.7/site-packages/bcbio/structural/cnvkit.py", line 29, in run return _cnvkit_by_type(items, background) File "/share/PI/euan/apps/bcbio/anaconda/lib/python2.7/site-packages/bcbio/structural/cnvkit.py", line 43, in _cnvkit_by_ty\ pe return _run_cnvkit_population(items, background) File "/share/PI/euan/apps/bcbio/anaconda/lib/python2.7/site-packages/bcbio/structural/cnvkit.py", line 113, in _run_cnvkit_\ population out.extend(_associate_cnvkit_out(ckouts, [cur_input])) File "/share/PI/euan/apps/bcbio/anaconda/lib/python2.7/site-packages/bcbio/structural/cnvkit.py", line 57, in _associate_cn\ vkit_out ckout = _add_variantcalls_to_output(ckout, data, is_somatic) File "/share/PI/euan/apps/bcbio/anaconda/lib/python2.7/site-packages/bcbio/structural/cnvkit.py", line 404, in _add_variant\ calls_to_output effects_vcf, _ = effects.add_to_vcf(calls["vcf"], data, "snpeff") File "/share/PI/euan/apps/bcbio/anaconda/lib/python2.7/site-packages/bcbio/variation/effects.py", line 30, in add_to_vcf ann_vrn_file, stats_file = snpeff_effects(in_file, data) File "/share/PI/euan/apps/bcbio/anaconda/lib/python2.7/site-packages/bcbio/variation/effects.py", line 216, in snpeff_effec\ ts return _run_snpeff(vcf_in, "vcf", data) File "/share/PI/euan/apps/bcbio/anaconda/lib/python2.7/site-packages/bcbio/variation/effects.py", line 304, in _run_snpeff do.run(cmd.format(**locals()), "snpEff effects", data) File "/share/PI/euan/apps/bcbio/anaconda/lib/python2.7/site-packages/bcbio/provenance/do.py", line 21, in run _do_run(cmd, checks, log_stdout) File "/share/PI/euan/apps/bcbio/anaconda/lib/python2.7/site-packages/bcbio/provenance/do.py", line 95, in _do_run raise subprocess.CalledProcessError(exitcode, error_msg) CalledProcessError: Command 'set -o pipefail; export PATH=/share/PI/euan/apps/bcbio/anaconda/bin:$PATH && /share/PI/euan/app\ s/bcbio/anaconda/bin/snpEff -Xms4g -Xmx10g -Djava.io.tmpdir=/local-scratch/lsturm/9838413/bcbiotx/79ba2811-1ba3-4039-9513-a54\ 6d7ac9b5b/tmpHX7S0A/tmp eff -dataDir /share/PI/euan/apps/bcbio/genomes/Hsapiens/GRCh37/snpeff -noHgvs -noLog -i vcf -o vcf -s\ /scratch/PI/euan/projects/elite/bcbio/projects/project_n268_svc_v1/work/structural/01_02/cnvkit/raw/01_02-dedup-call-effects\ -stats.html GRCh37.75 /scratch/PI/euan/projects/elite/bcbio/projects/project_n268_svc_v1/work/structural/01_02/cnvkit/raw/01_\ 02-dedup-call.vcf > /local-scratch/lsturm/9838413/bcbiotx/79ba2811-1ba3-4039-9513-a546d7ac9b5b/tmpHX7S0A/01_02-dedup-call-ef\ fects.vcf java.lang.RuntimeException: Database file '/share/PI/euan/apps/bcbio/genomes/Hsapiens/GRCh37/snpeff/GRCh37.75/snpEffectPredic\ tor.bin' is not compatible with this program version: Database version : '4.2' Program version : '4.3' Try installing the 
appropriate database. at org.snpeff.serializer.MarkerSerializer.load(MarkerSerializer.java:158) at org.snpeff.snpEffect.SnpEffectPredictor.load(SnpEffectPredictor.java:66) at org.snpeff.snpEffect.Config.loadSnpEffectPredictor(Config.java:555) at org.snpeff.SnpEff.loadDb(SnpEff.java:360) at org.snpeff.snpEffect.commandLine.SnpEffCmdEff.run(SnpEffCmdEff.java:964) at org.snpeff.snpEffect.commandLine.SnpEffCmdEff.run(SnpEffCmdEff.java:947) at org.snpeff.SnpEff.run(SnpEff.java:1009) at org.snpeff.SnpEff.main(SnpEff.java:155) ' returned non-zero exit status 255 [2016-09-16T20:38Z] sh-7-30.local: snpEff effects : 01_02 [2016-09-16T20:38Z] sh-7-30.local: java.lang.RuntimeException: Database file '/share/PI/euan/apps/bcbio/genomes/Hsapiens/GRCh\ 37/snpeff/GRCh37.75/snpEffectPredictor.bin' is not compatible with this program version: [2016-09-16T20:38Z] sh-7-30.local: Database version : '4.2' [2016-09-16T20:38Z] sh-7-30.local: Program version : '4.3' [2016-09-16T20:38Z] sh-7-30.local: Try installing the appropriate database. [2016-09-16T20:38Z] sh-7-30.local: at org.snpeff.serializer.MarkerSerializer.load(MarkerSerializer.java:158) [2016-09-16T20:38Z] sh-7-30.local: at org.snpeff.snpEffect.SnpEffectPredictor.load(SnpEffectPredictor.java:66) [2016-09-16T20:38Z] sh-7-30.local: at org.snpeff.snpEffect.Config.loadSnpEffectPredictor(Config.java:555) [2016-09-16T20:38Z] sh-7-30.local: at org.snpeff.SnpEff.loadDb(SnpEff.java:360) [2016-09-16T20:38Z] sh-7-30.local: at org.snpeff.snpEffect.commandLine.SnpEffCmdEff.run(SnpEffCmdEff.java:964) [2016-09-16T20:38Z] sh-7-30.local: at org.snpeff.snpEffect.commandLine.SnpEffCmdEff.run(SnpEffCmdEff.java:947) [2016-09-16T20:38Z] sh-7-30.local: at org.snpeff.SnpEff.run(SnpEff.java:1009) [2016-09-16T20:38Z] sh-7-30.local: at org.snpeff.SnpEff.main(SnpEff.java:155) [2016-09-16T20:38Z] sh-7-30.local: Uncaught exception occurred Traceback (most recent call last): File "/share/PI/euan/apps/bcbio/anaconda/lib/python2.7/site-packages/bcbio/provenance/do.py", line 21, in run _do_run(cmd, checks, log_stdout) File "/share/PI/euan/apps/bcbio/anaconda/lib/python2.7/site-packages/bcbio/provenance/do.py", line 95, in _do_run raise subprocess.CalledProcessError(exitcode, error_msg) CalledProcessError: Command 'set -o pipefail; export PATH=/share/PI/euan/apps/bcbio/anaconda/bin:$PATH && /share/PI/euan/app\ s/bcbio/anaconda/bin/snpEff -Xms4g -Xmx10g -Djava.io.tmpdir=/local-scratch/lsturm/9838408/bcbiotx/3d0ab058-bceb-4b2c-8f2d-81d\ dc2622868/tmpAUqOOG/tmp eff -dataDir /share/PI/euan/apps/bcbio/genomes/Hsapiens/GRCh37/snpeff -noHgvs -noLog -i vcf -o vcf -s\ /scratch/PI/euan/projects/elite/bcbio/projects/project_n268_svc_v1/work/structural/01_02/cnvkit/raw/01_02-dedup-call-effects\ -stats.html GRCh37.75 /scratch/PI/euan/projects/elite/bcbio/projects/project_n268_svc_v1/work/structural/01_02/cnvkit/raw/01_\ 02-dedup-call.vcf > /local-scratch/lsturm/9838408/bcbiotx/3d0ab058-bceb-4b2c-8f2d-81ddc2622868/tmpAUqOOG/01_02-dedup-call-ef\ fects.vcf java.lang.RuntimeException: Database file '/share/PI/euan/apps/bcbio/genomes/Hsapiens/GRCh37/snpeff/GRCh37.75/snpEffectPredic\ tor.bin' is not compatible with this program version: Database version : '4.2' Program version : '4.3' Try installing the appropriate database. 
at org.snpeff.serializer.MarkerSerializer.load(MarkerSerializer.java:158) at org.snpeff.snpEffect.SnpEffectPredictor.load(SnpEffectPredictor.java:66) at org.snpeff.snpEffect.Config.loadSnpEffectPredictor(Config.java:555) at org.snpeff.SnpEff.loadDb(SnpEff.java:360) at org.snpeff.snpEffect.commandLine.SnpEffCmdEff.run(SnpEffCmdEff.java:964) at org.snpeff.snpEffect.commandLine.SnpEffCmdEff.run(SnpEffCmdEff.java:947) at org.snpeff.SnpEff.run(SnpEff.java:1009) at org.snpeff.SnpEff.main(SnpEff.java:155) ' returned non-zero exit status 255 <issue_comment>username_1: Thanks for the report, it looks like you updated snpEff to version 4.3 but not the databases. You can do this with: ``` bcbio_nextgen.py upgrade --data ``` Hope this helps, feel free to re-open this issue if you run into other problems.<issue_closed> <issue_comment>username_0: Hi Brad, I successfully updated using bcbio_nextgen.py upgrade --data. However, after rerunning the test, I still get the same error. The update seemed to update a bunch of gemini_data, but I don't believe it updated bcbio/genomes/Hsapiens/GRCh37/snpeff/GRCh37.75/snpEffectPredictor.bin. How can I update the snpEff database? Thank you, Luke <issue_comment>username_1: Luke; Sorry about the issue, I'm not positive why it wouldn't upgrade snpEff as well. If you look at `/path/to/bcbio/manifest/python-packages.yaml` and search for `snpeff`, what version is listed? It should have been updated with 4.3 which is where the installers pulls from to know it needs an update. If this is an older installation I pushed a fix to avoid pulling old versions from `brew` installs which might also help. Hope one of these two ideas helps resolve the problem. <issue_comment>username_0: Hi, I checked the python-packages.yaml and weirdly, there is no entry for snpeff listed at all. What should I do? Luke <issue_comment>username_1: Luke; That's strange it should get updated and reported there on any successful run of an install/upgrade, if you run: ``` bcbio_nextgen.py upgrade -u development --tools ``` does it finish cleanly and update the manifest file? Hopefully that fixes it and you can then run the data update command afterwards. <issue_comment>username_0: I wasn't able to do that update do to a previous issue. I somehow got into a state where conda wasn't in the right place. Specifically bcbio/anaconda/bin/conda doesn't exist. I tried to reinstall conda, but that path still doesn't exist. Any thoughts on how to fix that? <issue_comment>username_1: Luke; You'll probably need to remove the broken anaconda directory (`rm -rf /path/to/bcbio/anaconda`) and re-run the installer to recreate conda and the installed packages: http://bcbio-nextgen.readthedocs.io/en/latest/contents/installation.html#automated You should use the same tool and data directories as previously to avoid re-downloading the data. Hopefully this will get your installation working cleanly again. <issue_comment>username_0: Thank you! That worked!
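Since the troubleshooting above hinges on what version is recorded in `manifest/python-packages.yaml`, a small check script can make that lookup explicit. This is a hedged sketch only: it assumes the manifest is a YAML list of entries with `name` and `version` keys, which may not match the exact bcbio layout, and it is not an official bcbio tool.

```python
# Hedged sketch: report which snpeff version (if any) the bcbio manifest records.
# The manifest layout (a list of {name, version} entries) is an assumption here.
import sys
import yaml   # requires PyYAML

def report_snpeff(manifest_path):
    with open(manifest_path) as fh:
        entries = yaml.safe_load(fh) or []
    hits = [e for e in entries
            if isinstance(e, dict) and "snpeff" in str(e.get("name", "")).lower()]
    if not hits:
        print("no snpeff entry found -- the tool manifest looks out of date")
    for e in hits:
        print("snpeff recorded as version", e.get("version"))

if __name__ == "__main__":
    report_snpeff(sys.argv[1])   # e.g. /path/to/bcbio/manifest/python-packages.yaml
```

A missing entry, as in this thread, points at an interrupted or failed `--tools` upgrade rather than at the data installation.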
<issue_start><issue_comment>Title: No scientific notation? username_0: ``` VSC_SCRATCH_PHANPY: used 0 B (0%) quota 950 GiB (1e+03 GiB hard limit) (age of data is 79 minutes) ``` Let's avoid scientific notation for quota? <issue_comment>username_1: Good point. Somewhere it needs to become an int again.<issue_closed>
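The `1e+03 GiB` output is the typical symptom of formatting a quota value through a general-purpose float format, and "it needs to become an int again" is exactly the fix. A minimal Python illustration (not the actual vsc-filesystems code; the hard-limit value is a hypothetical figure about 10% above the 950 GiB quota):

```python
# Illustration only: how a "general" float format flips quota values into
# scientific notation, and two ways to keep them readable.
hard_limit_gib = 1045.0

print("%.3g GiB" % hard_limit_gib)            # 3 significant digits -> something like '1.04e+03 GiB'
print("%d GiB" % hard_limit_gib)              # make it an int again -> '1045 GiB'
print("{:,.0f} GiB".format(hard_limit_gib))   # or keep the float but pin the format -> '1,045 GiB'
```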