<issue_start><issue_comment>Title: Stops for font properties username_0: Stops for font properties including sizes, font fill/outline color as in https://github.com/tangrams/tangram/pull/214. Will revert the scene once reviewed. <issue_comment>username_1: Reviewing <issue_comment>username_1: Looks good, just rebase onto master and revert the scene file changes then feel free to merge!
<issue_start><issue_comment>Title: Not working with DialogActivity username_0: I have set windowSoftInputMode to adjustResize; my Android OS version is 5.0 and my device is a Nexus 5. I get IsOpen true or false in a normal activity, but if I use Theme.Dialog for the Activity (i.e. a DialogActivity) then it's not working in my case <issue_comment>username_1: @username_0 Fix: use "View view = activity.getWindow().getDecorView()" instead of "View view = activity.getCurrentFocus()" in the method UIUtil.hideKeyboard(Activity activity)<issue_closed>
<issue_start><issue_comment>Title: CP-1252 Release route.dart 0.8.1 username_0: JIRA: https://jira.webfilings.com/browse/CP-1252 JIRA and PR's included in this release: CP-1257 - Fix page title setting for all routing cases - https://github.com/Workiva/route.dart/pull/9 Diff Between Last Tag and Proposed Release: https://github.com/Workiva/route.dart/compare/0.8.0...CP-1252_release_0.8.1 <issue_comment>username_1: +1 <issue_comment>username_2: +1
<issue_start><issue_comment>Title: ensure that a service restart waits until mysql is up username_0: The old mysql init scripts waited until mysql was up and running before they finished. With ubuntu 14.04 and upstart, (re)starting mysql will no longer wait for mysql to finish starting up. So in some cases, puppet will try to run mysql queries against a mysql server which is still starting up. This patch prevents that by waiting until the configured mysql socket is available whenever the mysql service gets updated. <issue_comment>username_0: I have integrated your changes. Is there anything else I should do? <issue_comment>username_1: @username_0 nope, I think that's all good so far… @username_2 / @tphoney any comments on this? <issue_comment>username_2: @username_0 basically ok. The question I have is whether the socket is always there, or whether there would be a way of configuring the mysql server so that this would always run (and do the wrong thing), (and whether we should care about such a setup)? <issue_comment>username_0: I see no technical reason why it shouldn't be possible to run a mysql server without a file system socket. But I'm not sure if anyone is running such a setup with puppet. To clear the socket value, the `mysql::server::options` hash has to be cleared by setting it explicitly. I would suggest surrounding the exec with an if statement: ``` if $mysqlsocket != undef { exec { 'wait_for_mysql_socket_to_open': ... } } ``` That way, if someone has disabled the socket configuration, they aren't placed in a worse situation than before. And if they happen to get into the same situation, we can look at whether we want to add a fallback that checks for network sockets instead. Please let me know if my proposal is acceptable and I will update the pull request with the aforementioned if statement around the exec. <issue_comment>username_1: my main issue: when mysql crashes and doesn't clean up the socket, does that mean we won't re/start it? 
<issue_comment>username_0: The exec only checks whether a file exists and is a socket. If that's the case, nothing happens. So if mysql crashes and doesn't clean up the socket, the exec will do nothing. Whether the puppet service configuration detects beforehand that mysql has crashed and is not running is another story, and not part of this change. Or am I mistaken? Also, the command to run must be /bin/false, not /bin/true, because otherwise, if the socket isn't there, /bin/true will tell the exec that everything is ok and no retry will occur. <issue_comment>username_2: Using /bin/false as the command will create an error in the report - even if the socket was created successfully. What about using the same `test -S` check as in the `unless`? It reminds me of the health checks in app orchestration (https://docs.puppetlabs.com/pe/latest/app_orchestration_availability_tests.html) and the validators in https://github.com/puppet-community/puppet-healthcheck Especially the latter might be a great place to add proper socket validation, available for all to use. <issue_comment>username_1: @username_0 ping? any comments? <issue_comment>username_0: sorry, because of the "Christmas madness" I didn't have time to think it through. The connection validator seems like the more general solution. But at the moment I think it would add a lot of overhead, because the only way to use it that I can think of is to add a dependency to every possible mysql usage, like this: Class['mysql_connection_validator'] -> Mysql_user <| |> Class['mysql_connection_validator'] -> Mysql_grant <| |> Class['mysql_connection_validator'] -> Mysql_database <| |> Class['mysql_connection_validator'] -> Mysql_plugin <| |> And I'm not sure, but my first consideration was that this would increase catalog generation time significantly, which is something I want to avoid if possible. But as I said before, I didn't have time to test it. 
To the point regarding the usage of /bin/false: test -S can also be used for the command, though with puppet 4.3 /bin/false would work as well. Is there an issue with older puppet versions that makes test -S preferable? <issue_comment>username_0: I have updated the patch to only use 'test -S'. I also looked into mysql connection validation, and for me there are a lot of open questions, such as: should username and password also be validated, like in postgresql's validate_db_connection.pp? Should the validation be done via an exec in puppet code, in plain ruby code, or using the ruby mysql library? And so on. For the foreseeable future I won't have the time needed to work on this.
<issue_start><issue_comment>Title: use without Mosek solver - example testing username_0: andrewcz@andrewcz-PORTEGE-Z30t-B ~/qcqp/examples $ python maxcut.py /home/andrewcz/miniconda3/lib/python3.5/site-packages/cvxpy/atoms/quad_form.py:81: UserWarning: Forming a nonconvex expression quad_form(x, indefinite). warnings.warn("Forming a nonconvex expression quad_form(x, indefinite).") Traceback (most recent call last): File "maxcut.py", line 25, in <module> qcqp.suggest(SDR, solver=cvx.MOSEK) File "/home/andrewcz/miniconda3/lib/python3.5/site-packages/qcqp-0.7-py3.5.egg/qcqp/qcqp.py", line 346, in suggest self.sdr_sol, self.sdr_bound = solve_sdr(self.qcqp_form, *args, **kwargs) File "/home/andrewcz/miniconda3/lib/python3.5/site-packages/qcqp-0.7-py3.5.egg/qcqp/qcqp.py", line 89, in solve_sdr rel_prob.solve(*args, **kwargs) File "/home/andrewcz/miniconda3/lib/python3.5/site-packages/cvxpy/problems/problem.py", line 209, in solve return self._solve(*args, **kwargs) File "/home/andrewcz/miniconda3/lib/python3.5/site-packages/cvxpy/problems/problem.py", line 331, in _solve kwargs) File "/home/andrewcz/miniconda3/lib/python3.5/site-packages/cvxpy/problems/solvers/mosek_intf.py", line 239, in solve task.optimize() File "/home/andrewcz/.local/lib/python3.5/site-packages/mosek/__init__.py", line 127, in accept return fun(*[ t(a) for (t,a) in zip(argtlst,args) ]) File "/home/andrewcz/.local/lib/python3.5/site-packages/mosek/__init__.py", line 7430, in optimize raise Error(rescode(res),Env.getcodedesc(rescode(res))[1]) mosek.Error: (1008) A license cannot be located. Hi apologies is there any way to use package with out the mosek solver. Many thanks, Andrew<issue_closed> <issue_comment>username_1: Hi Andrew, Thanks for pointing this out. The MOSEK solver was specified for testing purposes, but QCQP can be executed without it, by simply calling the ``suggest()`` method without the ``solver`` argument: ``qcqp.suggest(SDR)``. 
We have updated our documentation and sample codes accordingly. Please clone the latest version of the repository and try again. <issue_comment>username_0: Thank you! Amazing package 😃. Best, Andrew
<issue_start><issue_comment>Title: Custom filters should be working on v-html. username_0: In 2.x, you cannot apply filters to v-bind attributes. The v-html attribute should be able to use filters because it is only used to display formatted text. This should work: `<span v-html="myParam | line_break"></span>` where, in this case, line_break detects \n and replaces it with <br /> <issue_comment>username_1: I agree. Maybe we could improve it for text transformation purposes. <issue_comment>username_2: This would become a special case, and the rules would become more complicated. I fear it would slowly bring the feature creep back to "all directives should support filters". In this case, a dedicated sanitization method works just fine. I don't think it's worth opening a special case in the simple rules.<issue_closed>
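For the text-transformation use case described in this thread, the usual workaround is to do the transformation before binding, via a plain function (or a method/computed property in a component) rather than a filter on v-html. A minimal sketch, where `lineBreak` is a hypothetical stand-in for the issue's `line_break` filter:

```javascript
// Sketch of the workaround: transform the text in ordinary code
// instead of relying on a filter in the v-html expression.
// lineBreak is a hypothetical name, mirroring the issue's line_break.
function lineBreak(text) {
  // replace \n with <br /> before handing the string to v-html
  return String(text).replace(/\n/g, '<br />');
}
```

In a component this could be exposed as a computed property, e.g. `computed: { formatted() { return lineBreak(this.myParam); } }`, and bound with `v-html="formatted"`.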
<issue_start><issue_comment>Title: Travis CI build failing on node.js v0.8 with npm failure username_0: The Travis CI build is failing for node.js v0.8. Here is the error output: ``` npm ERR! peerinvalid The package npmconf does not satisfy its siblings' peerDependencies requirements! npm ERR! peerinvalid Peer npm-registry-client@3.2.2 wants npmconf@^2.1.0 npm ERR! System Linux 2.6.32-042stab090.5 npm ERR! command "/home/travis/.nvm/v0.8.28/bin/node" "/home/travis/.nvm/v0.8.28/bin/npm" "install" "-g" "npm" npm ERR! cwd /home/travis/build/username_0/cucumber-js npm ERR! node -v v0.8.28 npm ERR! npm -v 1.2.30 npm ERR! code EPEERINVALID npm ERR! npm ERR! Additional logging details can be found in: npm ERR! /home/travis/build/username_0/cucumber-js/npm-debug.log npm ERR! not ok code 0 The command "npm install -g npm" failed and exited with 1 during . Your build has been stopped. ``` The issue seems to be related to commit e71f549eff565ada14dd8bc4c3898aac4e6e3672 and also this npm issue https://github.com/npm/npm/issues/6246
<issue_start><issue_comment>Title: Update showcase.md username_0: Add table and shields for the plugins list ![2016-11-30 14 41 48](https://cloud.githubusercontent.com/assets/557190/20751114/29745026-b70b-11e6-8839-4700c2ad57e8.png) <issue_comment>username_1: I like this. @username_0 can you re-issue this PR to the `auk` branch? We're hoping to get the new docs out in the next week or so. <issue_comment>username_0: @username_1 yes, I can do it. Is it here? https://github.com/feathersjs/feathers-docs/tree/auk/ecosystem <issue_comment>username_1: @username_0 yup! That same table structure in each section will look great. Just make sure to keep the Feathers icon around to denote core maintained modules. 🍻 <issue_comment>username_0: @username_1 cool! I'll do it tonight! <issue_comment>username_2: I really like this but it has unfortunately gone a little stale. I'm going to close this PR but it would be great to see this on the [new Ecosystem page](https://docs.feathersjs.com/ecosystem/readme.html).
<issue_start><issue_comment>Title: Would be nice if copy supported copying in-memory folders username_0: I have the following in memory but not on disk: ``` /path/to/file1.txt /path/to/file2.txt ``` I would like to perform: ```javascript this.copy('/path/to', '/new/path/to'); ``` So that I end up with: ``` /path/to/file1.txt /path/to/file2.txt /new/path/to/file1.txt /new/path/to/file2.txt ``` Currently this fails with: ``` Trying to copy from a source that does not exist: /path/to ``` Interestingly, if ```/path/to``` exists in the physical filesystem, I think this works (it has been a couple of days, so this needs double-checking). @username_2 suggested the following workaround, which does work perfectly even when the files are only in memory: ```javascript this.copy('/path/to/**', '/new/path/to'); ``` <issue_comment>username_0: ```javascript this.copy('/path/to/**', '/new/path/to'); ``` is not failing, but I'm uncertain it's performing the desired copy operation... <issue_comment>username_1: @username_0 can you please describe the behavior of `this.copy('/path/to/**', '/new/path/to');`?<issue_closed> <issue_comment>username_2: @username_0 mem-fs only handles files. Folder handling is tricky, as a folder is not really a file and it's hard to distinguish between a file without an extension and a proper folder. FWIW, `git` itself doesn't handle empty folders at all. The usual workaround is to add a marker file inside these empty directories (usually called `.gitkeep`). I think I'll close this issue, as there's the `.gitkeep` workaround, and because even if you could copy empty directories with mem-fs-editor, you wouldn't be able to keep any empty directory in a git project - which means you'd probably be blocked from copying empty directories by lower-level system tools.
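The "mem-fs only handles files" point explains why the glob form works while the bare path does not: in a file-only store, a "directory" is just a shared path prefix. This illustrative model is not the real mem-fs API, only a sketch of the idea:

```javascript
// Illustrative model (NOT the real mem-fs API): the in-memory store
// holds only file paths, so copying "a folder" really means copying
// every file whose path shares that prefix - which is what the
// '/path/to/**' glob workaround does.
function copyByPrefix(store, from, to) {
  const out = Object.assign({}, store);
  for (const path of Object.keys(store)) {
    if (path.startsWith(from + '/')) {
      // keep the original file and add a copy under the new prefix
      out[to + path.slice(from.length)] = store[path];
    }
  }
  return out;
}
```

A bare `from` that matches no file prefix simply copies nothing, which mirrors the "source does not exist" situation in the issue.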
<issue_start><issue_comment>Title: 4.6.0 is out! username_0: This PR includes updated download links for 4.6.0 and API docs. <issue_comment>username_1: The release notes and install instructions links aren't live yet. I am assuming they will be once the branch is created. One minor correction: the "Configure Package Repository" link is pointing to the [4.4 docs](http://docs.cloudstack.apache.org/projects/cloudstack-installation/en/4.4/installation.html#configure-package-repository). I think we should change this to 4.6 ![screen shot 2015-11-17 at 10 48 15 am](https://cloud.githubusercontent.com/assets/186833/11203391/8a5652f8-8d19-11e5-8349-3ef66b2ca235.png) Other than that, :+1: <issue_comment>username_2: A bit difficult to review since the files are long. +1 to merge and update if we see any errors on the live site.
<issue_start><issue_comment>Title: New default theme username_0: As part of the next reveal.js version (3.0) I will be replacing the default theme and transition. The new default will most likely use a black background with white text and a flat sliding transition as opposed to the current 3D rotation.<issue_closed> <issue_comment>username_1: how can I reuse the 3D rotation?
<issue_start><issue_comment>Title: Spec updated and this project no longer works: getVRDevices -> getVRDisplays username_0: This project doesn't seem to be clone-and-go anymore since a spec change. I'm using the chromium build downloadable from https://webvr.info/ The current spec uses "getVRDisplays" https://w3c.github.io/webvr/#interface-navigator The current code uses "getVRDevices" https://github.com/hawksley/webVR-boilerplate/blob/master/js/vr/VREffect.js#L43 It may just be that this needs to update the snapshots taken; I'm unsure!
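One hedged way to keep a boilerplate working across a rename like this is a small feature-detection shim that prefers the new name and falls back to the old one. This is only a sketch of the idea, not necessarily how the project resolved it:

```javascript
// Sketch: prefer the current spec name, fall back to the pre-rename
// API. Neither method is guaranteed to exist outside WebVR builds,
// so an empty result is returned when no API is available.
function getVRDisplaysCompat(nav) {
  if (typeof nav.getVRDisplays === 'function') {
    return nav.getVRDisplays();   // current spec name
  }
  if (typeof nav.getVRDevices === 'function') {
    return nav.getVRDevices();    // old, pre-rename name
  }
  return Promise.resolve([]);     // no WebVR support at all
}
```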
<issue_start><issue_comment>Title: Add attribute "data-is-moderator" to the <body> element. username_0: The above would be similar to the "data-can-approve-posts" attribute. A use case for the above attribute is shown on the [Danbooru forums](https://danbooru.donmai.us/forum_topics/14077). Basically, certain <table> elements have additional columns depending on if the user is Moderator+ or not (all of the *_versions tables). Adding the above attribute would allow CSS code to address the discrepancies between the two.<issue_closed>
<issue_start><issue_comment>Title: Droppable hoverClass turns off when hovering an element nested inside of it username_0: I believe this is an incorrect behavior, it can be tested on http://ma.username_1.st/touch-dnd/. Drop one of the draggable boxes in the droppable box, then grab another one and carry it over the little box inside the droppable - the hover effect disappears. Since dropping the draggable there works, I believe the `hoverClass` should not go away when hovering a nested element. <issue_comment>username_1: Thanks, should be fixed<issue_closed>
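The usual cause of this class of bug is that "leave" events fire when the pointer moves onto a descendant of the droppable. A hedged, library-agnostic sketch of the fix idea (a minimal node model, not touch-dnd's actual code):

```javascript
// Minimal model: keep the hover class as long as the element under
// the pointer is the droppable itself or one of its descendants.
// Nodes are modeled as plain objects with a `parent` link.
function shouldKeepHover(droppable, hoveredEl) {
  for (let el = hoveredEl; el; el = el.parent) {
    if (el === droppable) return true; // still inside the droppable
  }
  return false; // genuinely left the droppable
}
```

In real DOM code the same check is typically `droppable.contains(event.target)` before removing the hover class.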
<issue_start><issue_comment>Title: Alive? username_0: Is Joxa still being developed, or is it basically a dead project? If so, what is the reason it was abandoned? <issue_comment>username_1: It has been dormant for a while. I think there are many other things which keep @username_2 busy, so that he could not put a hand on this project for now. <issue_comment>username_2: @username_0 @username_1 has the right of it. I just don't have much time to dedicate to it anymore. <issue_comment>username_0: Totally understand the time thing. It's too bad though. I was looking through the code and it looks like a really nice project. Perhaps if you could find a bit of time to explain the internals in detail, others could pick up development? <issue_comment>username_1: @username_0 About the internals, maybe you could take a look at https://vimeo.com/49116180 <issue_comment>username_0: @username_1 Thanks. That helped some, definitely. But it would be helpful too if the whole build process was explained. <issue_comment>username_1: @username_0 Joxa is written to be self-bootstrapped, so the whole building process is three steps: 1. The ast files are compiled by the Erlang compiler into the Joxa compiler beam files 1. The Joxa compiler beam files are escriptized into a standalone bin 1. The Joxa compiler compiles Joxa source files to ast or beam files It's not easy to understand the whole picture, since it is complex indeed. You could refer to this Makefile https://github.com/username_1/joxa/blob/clearer_build/Makefile and take a look at the make targets specifically. It tries to make the building process clearer. <issue_comment>username_0: Are the AST files hand-coded? That must have been a hell of a lot of work! Yeah, the makefile process is a challenge to follow (for me at least). I'll keep at it. Thanks. <issue_comment>username_0: On second thought that can't be right. You've bootstrapped it... Joxa compiles Joxa into those ast files? 
<issue_comment>username_2: Guys, sorry about my slow response. I have been traveling the last few days. @username_0, on the ast: they are not hand-coded. Joxa does compile Joxa into those ast files. Then we compile them as part of the bootstrapping process. It's actually annoyingly complex at the moment. I have thought for a long time that I shouldn't have bootstrapped the language, i.e. we should have left the core language in Erlang and run Joxa on top of it. It would have made contributing to and supporting the language much easier. One of the paths I think about going forward is to reset the repo to pre-bootstrap days and start fresh from there. I think it would revive the language a bit. <issue_comment>username_0: OTOH, Joxa in Joxa is *pretty damn cool*. But I hear you.<issue_closed>
<issue_start><issue_comment>Title: Bootstrap*.js files not included when no script used on page username_0: Already fixed in e8662f9154ce8583bb52533762060e16b510e8d3, but for posterity: requiring 'wunderbar/bootstrap' and 'wunderbar/jquery' in a script, to be able to use Bootstrap's native javascript features (Tooltip, Collapse, etc.), doesn't actually include the JS files if the script doesn't have any _script elements in it. Use case: I had a simple script creating collapsible sections of content using: ```html <a role="button" data-toggle="collapse" data-parent="#accordion"... etc. ``` but the _html content emitted didn't use any _script elements of its own: I only needed the native bootstrap features. When run, the output didn't include any JS includes, not even the expected bootstrap and jquery .js, meaning the Collapse feature didn't work. Thanks to Sam for fixing this so quickly.<issue_closed> <issue_comment>username_1: Fixed: https://github.com/username_1/wunderbar/commit/e8662f9154ce8583bb52533762060e16b510e8d3
<issue_start><issue_comment>Title: which: no skype in (/usr/sbin:/usr/bin:/sbin:/bin) username_0: Skype is not launched here. Also, issue with usermod (adding skype user to group 91, which does not exists in container machine). This is a laptop with Arch and plain docker installed. % docker version Client: Version: 1.8.3 API version: 1.20 Go version: go1.5.1 Git commit: f4bf5c7-dirty Built: Wed Oct 14 11:17:02 UTC 2015 OS/Arch: linux/amd64 Server: Version: 1.8.3 API version: 1.20 Go version: go1.5.1 Git commit: f4bf5c7-dirty Built: Wed Oct 14 11:17:02 UTC 2015 OS/Arch: linux/amd64 % docker info Containers: 1 Images: 12 Storage Driver: btrfs Build Version: Btrfs v4.2.1 Library Version: 101 Execution Driver: native-0.2 Logging Driver: json-file Kernel Version: 4.2.3-1-ARCH Operating System: Arch Linux (containerized) CPUs: 4 Total Memory: 7.696 GiB Name: suns ID: AZ47:NNAJ:YSV3:QE2R:CNHP:BSIQ:WNA4:BWXB:YL3X:3BTJ:WJBG:RPC2 (I have removed -d and /dev/null redirect from "docker run" command) % skype which: no skype in (/usr/sbin:/usr/bin:/sbin:/bin) Cleaning up stopped skype instances... cat: /etc/timezone: No such file or directory Starting skype... usermod: group '91' does not exist <issue_comment>username_1: it appears no group with GID 91 exists in the container, so the usermod fails. To fix this we will need to create a new group and assign it the GID 91 before doing usermod. Will make this changes soon,<issue_closed> <issue_comment>username_0: Fix did not work very well: % skype which: no skype in (/usr/sbin:/usr/bin:/sbin:/bin) Cleaning up stopped skype instances... cat: /etc/timezone: No such file or directory Starting skype... groupadd: group 'video' already exists And skype is still not started (container exits). Do you think it's due to wrong /dev/video0 permissions? Because skype not launching was my main problem. <issue_comment>username_1: hmm.. its because a group with the name `video` already exists. 
will change the name <issue_comment>username_1: please try after a while once the build is ready. let me know<issue_closed> <issue_comment>username_0: Ok, works properly now.
<issue_start><issue_comment>Title: Hello? Your npm version looks older than the github version username_0: It's about different subdomains that have the same interface, like `http://aa.com/inter` and `http://bb.com/inter`, but different responses. I tried the npm version, and the responses were the same; I pulled the github version, and the responses were different. So, if I'm guessing right, please update the npm version quickly<issue_closed>
<issue_start><issue_comment>Title: [1.1] Doors child/parent problem username_0: I found a curious glitch; you need at least two doors: 1. Set door `1` to parent 2. Set door `2` to child 3. After that, set door `2` to parent 4. Set door `1` to child 5. Buy one of the two doors 6. Garry's Mod immediately freezes I tried to do this with double doors: I want the second door to be bought automatically when a player buys the first one. <issue_comment>username_1: Door 1 belongs to door 2. You made door 2 belong to door 1. When you buy a door, it sets the owner for all child doors. So door 1 sets its child, then sets door 2's child; since door 2 is also the parent of door 1, it sets the child of door 1, which sets the child of door 2. So you made an infinite loop, but why did you do this in the first place? <issue_comment>username_0: Yeah it's an infinite loop; on the server's console you have: ![infinite loop serverside](https://cloud.githubusercontent.com/assets/12173101/7428968/5b567504-eff9-11e4-9916-5426d38c858b.png) I did that for double doors: I want that when a player buys door 1, door 2 is automatically bought. For the moment the player needs to buy both of them. ![double door screenshot](http://images.akamai.steamusercontent.com/ugc/716413856111739632/93F9BF59D47DDF4374C08F613C6740D137A328D5/) <issue_comment>username_1: You should just be able to purchase the child door and the parent door will be purchased. <issue_comment>username_2: You only have to set one parent door, and if you buy the child door it will buy the parent door too.<issue_closed>
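The infinite loop in this thread can be sketched language-agnostically. This is a hypothetical model of the ownership propagation, not DarkRP's actual (Lua) code; the `visited` set is the standard guard that would break the cycle:

```javascript
// Hypothetical model of door-ownership propagation. Without the
// `visited` set, two doors that are each other's parent would recurse
// forever - exactly the freeze described in the issue.
function setOwner(door, owner, visited) {
  visited = visited || new Set();
  if (visited.has(door)) return; // break parent/child cycles
  visited.add(door);
  door.owner = owner;
  for (const child of door.children) {
    setOwner(child, owner, visited);
  }
}
```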
<issue_start><issue_comment>Title: Reactivate regular sloan images ASAP username_0: Galaxy Zoo has less than 400 active subjects as of late 22 Jun 2015. I think that's going to be done in less than a day, so I'm worried about the site being empty. Can we temporarily reactivate the frequency of `sloan` images so that GZ doesn't run out? I think this was originally set by https://github.com/zooniverse/Galaxy-Zoo/commit/1e775e0b2d5bf593e429a81d622a418706140165 and @parrish.<issue_closed> <issue_comment>username_2: Should I update the Android app to do this too, or is this just temporary? <issue_comment>username_0: Temporary, but hopefully on the timescale of a week or so. I looked at the app and it seemed to be correctly generating the SDSS color images; do you need to update anything? <issue_comment>username_2: The commit to the web client makes it use mostly the sloan survey, and sometimes the sloan_singleband survey, instead of just the sloan_singleband survey: https://github.com/zooniverse/Galaxy-Zoo/commit/ea8bfd726d3eff1f5fa4d960a7b6727c001d5e90 If the color images are only in the sloan survey and not in the sloan_singleband survey, I don't see how the Android app could be showing any color images, as the Android app uses only the sloan_singleband survey at the moment. <issue_comment>username_0: Not sure of what might be going on in the backend, but it's definitely delivering color images on my phone. ![screenshot_2015-06-23-11-36-00](https://cloud.githubusercontent.com/assets/2285931/8304329/cbfac414-199c-11e5-9e9e-17c0803caccd.png) <issue_comment>username_2: Maybe you've just got cached images from way back when the Android app uses the sloan survey. To thoroughly test it you could uninstall and reinstall it, or just delete its data and cache via Apps settings. <issue_comment>username_0: Cleared the cache and now it's back to B/W images, so that explains why. We'll update you as soon as we have the parameters for the new sets of images, @username_2.
<issue_start><issue_comment>Title: Selenium Chrome Mobile Emulator testing with specific phone resoultion username_0: ## Actual Behavior - Getting error like `unknown error: unrecognized chrome option: mobileEmulation` ## Steps to reproduce - please compile code given in expected result. <issue_comment>username_1: for chromedriver issues please log reproducible test cases with the chromedriver project: https://bugs.chromium.org/p/chromedriver/issues/list For questions / user help, please post on the selenium-users mailing list: https://groups.google.com/forum/#!forum/selenium-users<issue_closed>
<issue_start><issue_comment>Title: Add provinces as an alias for subdivisions username_0: There already exists an alias for `.subdivisions` as `.states`. I think there should be `.provinces` as well! I added a test for this, and also fixed the existing test for `.states` (I think... it used to test against itself). <issue_comment>username_1: I would actually like to remove the states alias, as it simply clutters the API. Everything is a subdivision; that should be all we need. <issue_comment>username_0: I like the aliases, but I understand your reasoning!
<issue_start><issue_comment>Title: Compose Override Should Allow Overrides of Multi-Value Options username_0: It appears that the functionality for extending and overriding compose files is currently the same for multi-value options... this makes it impossible to remove multi-value declarations via the override mechanism. I propose that the extend functionality remain the same and concatenate multi-value declarations, but that override replace these options instead of concatenating them. This would give people the ability both to extend multi-value option declarations and to override them if they so desire. https://docs.docker.com/compose/extends/#/adding-and-overriding-configuration<issue_closed> <issue_comment>username_1: Closing in favor of #5102
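One reading of the proposal above can be sketched with two hypothetical merge helpers (this is an illustration of the semantics being requested, not Docker Compose's actual merge code):

```javascript
// Sketch of the proposed distinction: "extends" keeps concatenating
// multi-value options, while an override file would replace the list
// outright - which is what makes removing values possible.
function extendOption(base, extra) {
  return (base || []).concat(extra || []);
}
function overrideOption(base, replacement) {
  // an explicitly supplied value (even an empty list) wins;
  // an absent value keeps the base list
  return replacement === undefined ? (base || []) : replacement;
}
```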
<issue_start><issue_comment>Title: Adding ordered and unordered lists username_0: Hi, Could you please help me understand how to add ordered and unordered lists after reading the documentation I am unsure how this can be added. Many thanks <issue_comment>username_1: Please refer to the docs here: https://github.com/yabwe/medium-editor#all-buttons<issue_closed>
<issue_start><issue_comment>Title: I Owe Data frames a better type spec username_0: `iodata()`... I OWE DATA! Get it? GET IT?! please clap. Also fixes ``` 4.3. Header Compression and Decompression ✓ Sends invalid header block fragment ✓ Sends Dynamic Table Size Update (RFC 7541, 6.3) × Encodes Dynamic Table Size Update (RFC 7541, 6.3) after common header fields - The endpoint MUST terminate the connection with a connection error of type COMPRESSION_ERROR. Expected: GOAWAY frame (ErrorCode: COMPRESSION_ERROR) Connection close Actual: DATA frame (Length: 16, Flags: 1) ``` found with h2spec. thanks for showing me that @tsloughter see username_0/hpack#5 for more info <issue_comment>username_1: don't 100% get what the header changes are for, but code :+1:
<issue_start><issue_comment>Title: deleting job without expiryKey? username_0: Currently I have a job running every hour, and new jobs get created at times, but there's no expiryKey until they run for the first time. Is there a way to delete a job without an expiryKey? <issue_comment>username_1: Per the docs, you can use `queue.remove({ unique: 'myJob' })` where `myJob` is the job name. <issue_comment>username_2: @username_1 have you actually gotten that to work? I'm trying to remove jobs by `unique` right now and they don't get removed. I took a look at the spec and it appears there is no test for that functionality either. <issue_comment>username_3: @username_2 we had the very same issue! The solution was to query redis directly for the job id, and then use `.remove(jobId)` :( ``` kue.client.hget('[myPrefix]:unique:jobs', myUniqueId, (err, id) => { kue.remove(Number(id), (err, result) => console.log(err, result)); }); ```
<issue_start><issue_comment>Title: Generate protobuf marshallers for new apps group username_0: Much more targeted fix for #24579, mergeable now Fixes #24579 <issue_comment>username_0: Since there's something wrong with update-generated-protobuf on Jenkins, going to just merge the generated proto for the group. <issue_comment>username_0: Self merging due to broken queue <issue_comment>username_1: LGTM - thanks @username_0 <issue_comment>username_1: All tests passed. Self-merging to unblock the build. <issue_comment>username_1: Actually it wasn't "self-merging", because it wasn't my PR. It was just a "manual merge" :) <issue_comment>username_2: @username_0 thanks! I did run update-all.sh before re-uploading my petset PR (and travis ran verify-all); maybe something needs updating? <issue_comment>username_2: oh, https://github.com/kubernetes/kubernetes/issues/24579#issuecomment-212744159 makes sense
<issue_start><issue_comment>Title: V3.0.0 username_0: This is the PR that will ultimately become v3.0.0. Comments and feedback welcomed. <issue_comment>username_1: Anything I can help with to get this merged? <issue_comment>username_0: I have moved some of tv3.0.0 issues to v3.1.0, so there are only 3 left: https://github.com/OptimalBits/bull/milestone/4 I have started to work on naming the workers (including listing workers, etc). The other two are more of "defensive" issues, since as far as I know they do not resolve any known bug. <issue_comment>username_1: Awesome! Happy to review PRs. Excited for 3.0! <issue_comment>username_1: once you merge https://github.com/OptimalBits/bull/pull/537 into this - how about merging this into master and pushing an RC version? I'll start testing it in one of our prod environments.
<issue_start><issue_comment>Title: Can't find pipe username_0: **I'm submitting a ...** (check one with "x") ``` [ ] bug report => check the FAQ and search github for a similar issue or PR before submitting [ X ] support request => check the FAQ and search github for a similar issue before submitting [ ] feature request ``` **Current behavior** Sorry, let me try this again with a better question. I have a component with the following: ``` import { Component, Input, OnInit } from '@angular/core'; import { TranslateService, TranslatePipe } from 'ng2-translate/ng2-translate'; @Component({ selector: 'home', template: ` <h2>{{'HOME.TITLE' | translate}}</h2> ` }) export class HomeComponent { } ``` The console gives me this message: ``` The pipe 'translate' could not be found (" <h2>[ERROR ->]{{'HOME.TITLE' | translate}}</h2> "): HomeComponent@1:4 ; Zone: <root> ; Task: Promise.then ; Value: Error: Template parse errors:(…) Error: Template parse errors: The pipe 'translate' could not be found (" <h2>[ERROR ->]{{'HOME.TITLE' | translate}}</h2> "): HomeComponent@1:4 ``` When I try to add a pipes parameter to the Component construction, tsc gives me: ``` app/home/home.component.ts(9,5): error TS2345: Argument of type '{ selector: string; template: string; pipes: typeof TranslatePipe[]; }' is not assignable to parameter of type 'ComponentMetadataType'. Object literal may only specify known properties, and 'pipes' does not exist in type 'ComponentMetadataType'. ``` In trying to track it down, I couldn't find a \@Pipe call, and incorrectly diagnosed the problem. What is the correct diagnosis of the problem? **Expected/desired behavior** Pipe should be found. 
**Reproduction of the problem** If the current behavior is a bug or you can illustrate your feature request better with an example, please provide the steps to reproduce and if possible a minimal demo of the problem via https://plnkr.co or similar (you can use this template as a starting point: http://plnkr.co/edit/tpl:btpW3l0jr5beJVjohy1Q). **What is the expected behavior?** **What is the motivation / use case for changing the behavior?** **Please tell us about your environment:** * **ng2-translate version:** 2.4.4 * **Angular version:** 2.0.0-rc.6 * **Browser:** [Chrome 53.0.2785.101] * **Language:** [TypeScript 1.8.10]<issue_closed> <issue_comment>username_0: Answered in another issue. <issue_comment>username_1: Please mention the issue in which it's solved.
<issue_start><issue_comment>Title: Opentrack crashing repeatedly username_0: I'm trying to get opentrack working with ED Headtracker and ED again, but it keep crashing out after working for a few minutes. I've got it extracted into the same directory as ED, all permissions are checked for everyone and I'm on windows 10 64. No idea what's causing it to crash, it just does. <issue_comment>username_0: Nevermind, I fixed it. Just need to make sure freetrack 2 is using the 64 client.dll <issue_comment>username_1: What .dll was it using before? <issue_comment>username_0: It was just using the NPclient.dll originally. Also the crashing came back, so I fixed it from not working with ED in game, but now I just need to keep opentrack from dying during a firefight. I can provide any info you need later on today, but I leave for work soon and can't access my computer until after. <issue_comment>username_1: Looks like there's one more bug in new joystick support. Are you using rc21p2 though? <issue_comment>username_0: I am. I was using 2.2 installed to an old hdd, but I upgraded to the latest 2.3 and put it on my ssd in the frontier program x86 parent folder. <issue_comment>username_1: Hey, here's a new build. if #275 and #274 confirm it's working it's gonna be patchlevel 3. Build: https://db.tt/UthSld0u <issue_comment>username_0: Just got home. I'll download, calibrate my ed HT, and report back. <issue_comment>username_0: So, still getting crashes. If I use NPClient.dll it's pretty much as soon as I start ED (doesn't crash if I have it going and am not playing anything). If I use NPClient64.dll, I get like 2-5 minutes, and then it crashes. <issue_comment>username_1: Does this happen on different trackers? Can you use an irrelevant tracker just to see if it crashes? <issue_comment>username_0: It does. 
I tried vjoy and it crashed <issue_comment>username_1: Try this: - reboot - start opentrack before other tracking software - start tracking in opentrack - close it, start everything as usual Will you get the crash then? <issue_comment>username_1: @username_0 rc21p3 got released and few other users report that crashes are gone. Try this one instead. <issue_comment>username_0: @username_1 still getting crashes. Tried the above and the new version, no dice. I couldn't even make it into Elite Dangerous before it quit. <issue_comment>username_1: Sadly I need debug info. - Install mingw-w64 from the online installer - Open a Windows console prompt. Change directory to where opentrack is installed. - Run "gdb.exe" from the mingw-w64 installation. You can open the folder graphically, then drag the gdb.exe icon to the Windows console prompt - on gdb prompt, type "file opentrack.exe". It should run opentrack. - wait for it to crash - type "thread apply all bt" - send this output <issue_comment>username_1: You need a build with debug info in it. Here's one - https://db.tt/r5Ve6vIr <issue_comment>username_0: Uh, it's not running opentrack after I run it. I've tried it several times, with no luck. Picture attached to show my steps ![screenshot_1](https://cloud.githubusercontent.com/assets/16128561/11611241/bcda0d1a-9b91-11e5-9a91-e99738f1e028.jpg) <issue_comment>username_1: Sorry. After the "file" command is successful, type "run" by itself as the next command. opentrack will then start. <issue_comment>username_0: There we go. Sorry, didn't know how to load it into a text file. 
![screenshot_1](https://cloud.githubusercontent.com/assets/16128561/11611301/59115bc8-9b94-11e5-8f61-15a35fe34532.jpg) ![screenshot_2](https://cloud.githubusercontent.com/assets/16128561/11611300/59113c7e-9b94-11e5-8bfc-3043ad987080.jpg) ![screenshot_3](https://cloud.githubusercontent.com/assets/16128561/11611303/59151e52-9b94-11e5-9ff2-441fd2527a21.jpg) ![screenshot_4](https://cloud.githubusercontent.com/assets/16128561/11611302/59136d96-9b94-11e5-8cd9-470bca0bf708.jpg) <issue_comment>username_1: This one looks strange: `c:/program files (x86)/gigabyte oc_guru ii/GvKm.dll`. It's using DirectInput while injected into the opentrack process. It's obviously calling for trouble. It's recreating the same scenario we had earlier in p1, but this time it's out of our control since it's a third-party DLL. It can also explain why you're getting nonsense when polling keyboard in thread 7. It dies when getting a list of pressed keys. Try to disable this .dll using "sysinternals autoruns" software. <issue_comment>username_0: It's just my 980ti's oc software, so I guess I can try turning it off, even though it's overclocking my graphic card. <issue_comment>username_0: I closed out OC Guru II, so far no hang ups <issue_comment>username_1: I have a theory why it's broken like that. Hold on. <issue_comment>username_1: @username_0 Try this build with uncommitted changes - https://db.tt/DnVZN0KX <issue_comment>username_0: testing <issue_comment>username_1: @username_0 no need. There's no debug info in that binary. I think I know the solution[1] to this and jerky joystick input. [1] register Direct Input handle with a fake invisible window Should take me a few mins. <issue_comment>username_0: take your time man. Also, do you have a donate page? I don't think I've seen a dev be this attentive to issues. <issue_comment>username_1: @username_0 the rc21 release broke joystick support in general. 
We're getting 1.5k unique downloads a week so I'd like to avoid further alienating users with these few bugs. I hope things won't be as hectic as the last few days. We don't take donations due to the "NPClient" murky legal territory. It's uncertain as it is. :( Good news is I can't get it to crash anymore, with joystick tracker and joystick keystrokes. Have a try - https://db.tt/fvbAGatr <issue_comment>username_1: And keep the overclocking software running, opentrack definitely needs to run despite that. <issue_comment>username_0: That's a shame. You guys really need something for all the work you do. So far no crashes. <issue_comment>username_0: Nevermind, still crashing with oc guru. I uninstalled it (didn't like it anyway) and started using msi afterburner. So far it's working. <issue_comment>username_1: @username_0 do me a favor and confirm lack of freezes/crashes after few days of opentrack usage. Thanks! <issue_comment>username_1: @username_0 any more issues? Please reopen if something bad happened.<issue_closed> <issue_comment>username_2: Hello, may i re-open this thread just to say that i'm using EDTracker Pro, then OpenTrack to use with DCS or Assetto Corsa and even in Arma 3 but i have Oc'GURU and it took me time to found out that opentrack crashes comes from that software. Sometimes i don't run it, but for long time play, it's safer for my GPU. I'm using 2.3-rc21p8, i have W7 Pro 64bits and GTX 970 Gigabyte. Is there any way this could be solved ? I'd like to keep this fine software cuz it's simple and efficient and i'm not into overclocking just need that software to manage the fan % depending on GPU Temp. Thanks anyway for all the hard work. <issue_comment>username_1: @username_2 the software might inject a .dll into the opentrack process. The "RadeonPro" software caused crashes all over due to a buggy injected .dll. This may be similar. Look for OSD or keyboard functionality and disable it in the software, it might help. <issue_comment>username_2: Thanks for the FAST reply !! How can i "catch" this .dll you talking about ? Sorry, i have no idea about that... And whatever way opentrack and Oc'Guru are started, it crashes after a few second. Sorry for my incompetence. <issue_comment>username_1: Go to OC'Guru options and disable OSD and keyboard shortcut functionality. <issue_comment>username_2: I see what you mean. I just look at it, its disabled... So i try OpenTrack with OSD enable is case... still crashes. Don't want to bug you over, thanks for trying to help this late at night and so fast. If you ever have an idea, i'll be all yours... Bonne nuit ;) <issue_comment>username_1: http://forums.steampowered.com/forums/showthread.php?t=2075101 Replace it with MSI Afterburner or similar.<issue_closed> <issue_comment>username_2: Hello back, i think you're right, i'm gonna have to use something else. Sorry for the inconvenience, and thanks again for your help trying to find out. Keep up !!!
<issue_start><issue_comment>Title: LayerRegistry clean up username_0: Remove any remaining unnecessary methods from layer registry. Basic responsibilities of the module should be: - holds state for layers - convenient filtering methods - add/remove - reorder<issue_closed>
<issue_start><issue_comment>Title: Subscribe function parameter implicitly has any type username_0: Using TypeScript I'm constantly getting type errors for implicit any types on typed Observables. Example: ```typescript // Register a function returning a typed observable. getRandomNumberObservable() : Observable<number> { return Observable.of(4); // chosen by fair dice roll. // guaranteed to be random. } // Grab the observable and subscribe to it getRandomNumberObservable().subscribe((number) => console.log('My random number: %d', number) ); ``` This code will be greeted by `TS7006:Parameter 'number' implicitly has 'any' type.`. The typings for `.subscribe()`: ```typescript /** * Registers handlers for handling emitted values, error and completions from the observable, and * executes the observable's subscriber function, which will take action to set up the underlying data stream * @method subscribe * @param {PartialObserver|Function} observerOrNext (optional) either an observer defining all functions to be called, * or the first of three possible handlers, which is the handler for each value emitted from the observable. * @param {Function} error (optional) a handler for a terminal event resulting from an error. If no error handler is provided, * the error will be thrown as unhandled * @param {Function} complete (optional) a handler for a terminal event resulting from successful completion. 
* @return {ISubscription} a subscription reference to the registered handlers
 */
subscribe(): Subscription;
subscribe(observer: PartialObserver<T>): Subscription;
subscribe(next?: (value: T) => void, error?: (error: any) => void, complete?: () => void): Subscription;
```
Is there a way to update the typing for this function such that TypeScript can properly resolve the type from the context?<issue_closed> <issue_comment>username_1: this isn't due to rx typings - your compiler config enabled `noImplicitAny` and it doesn't allow auto-inference when you specify a param without an explicit type. with this option enabled, you can observe the same behavior in other cases too, not just rx. <issue_comment>username_0: That is the issue. It shouldn't infer `any`. If you compare to `.map()` there the type would be correctly inferred as `Observable<T>`:
```typescript
// map.d.ts
export declare function map<T, R>(this: Observable<T>, project: (value: T, index: number) => R, thisArg?: any): Observable<R>;
```
The problem with `subscribe` is that it loses this type. Shouldn't it be `subscribe<T>(observer: PartialObserver<T>): Subscription<T>;` or similar? <issue_comment>username_0: I don't understand. I create an object `Observable<T>` and it has functions `map<T>` and `subscribe`. What other type is overruled at that point? You'd expect this to work with `do((x) => {})` and `.subscribe({ next: (x) => {}})` too, for example, since you're working specifically with the type you're observing. <issue_comment>username_1: You can simply try your suggestion to see how it works.
[Playground link](https://www.typescriptlang.org/play/#src=class%20Observable%3CT%3E%20%7B%0D%0A%20%20%20%20subscribe%3CT%3E(next%3A%20(value%3A%20T)%20%3D%3E%20void)%20%7B%0D%0A%20%20%20%20%20%20%20%20%0D%0A%20%20%20%20%7D%0D%0A%7D%0D%0A%0D%0Aclass%20ObservableActual%3CT%3E%20%7B%0D%0A%20%20%20%20subscribe%3CR%3E(next%3A%20(value%3A%20R)%20%3D%3E%20void)%20%7B%0D%0A%20%20%20%20%20%20%20%20%0D%0A%20%20%20%20%7D%0D%0A%7D%0D%0A%0D%0Aclass%20CustomType%20%7B%0D%0A%20%20%20%20id%3A%20string%0D%0A%7D%0D%0A%0D%0Aconst%20v1%20%3D%20new%20Observable%3CCustomType%3E()%0D%0Av1.subscribe((a)%20%3D%3E%20a.id)%3B%20%2F%2Fyou%20see%20compiler%20error%2C%20T%20is%20inferred%20to%20%7B%7D%0D%0A%0D%0Aconst%20v2%20%3D%20new%20ObservableActual%3CCustomType%3E()%0D%0Av2.subscribe((a)%20%3D%3E%20a.id)%3B%20%2F%2Fthis%20is%20actual%20shape%20of%20generic%20you%20suggested%20to%20have%20subscribe%20have%20generic%0D%0A%0D%0A%2F%2Fto%20make%20type%20works%20with%20your%20suggestion%2C%20you%20have%20to%20specify%20type%20for%20each%20subscribe%20calls%20%0D%0Av1.subscribe%3CCustomType%3E((a)%20%3D%3E%20a.id)%3B%0D%0Av2.subscribe%3CCustomType%3E((a)%20%3D%3E%20a.id)%3B) ```js class Observable<T> { subscribe<T>(next: (value: T) => void) { } } class ObservableActual<T> { subscribe<R>(next: (value: R) => void) { } } class CustomType { id: string } const v1 = new Observable<CustomType>() v1.subscribe((a) => a.id); //you see compiler error, T is inferred to {} const v2 = new ObservableActual<CustomType>() v2.subscribe((a) => a.id); //this is actual shape of generic you suggested to have subscribe have generic //to make type works with your suggestion, you have to specify type for each subscribe calls v1.subscribe<CustomType>((a) => a.id); v2.subscribe<CustomType>((a) => a.id); ``` This is really about behavior of typescript, not about RxJS. <issue_comment>username_2: What version of the compiler are you using? 
With the current playground, I'm not able to get any errors to popup, with similar code. [see here](http://www.typescriptlang.org/play/#src=class%20Observable%3CT%3E%20%7B%0D%0A%20%20%20%20public%20static%20_of%3CR%3E(value%3A%20R)%3A%20Observable%3CR%3E%20%7B%20return%20undefined%3B%20%7D%0D%0A%0D%0A%20%20%20%20public%20subscribe(next%3A%20(value%3A%20T)%20%3D%3E%20void)%20%7B%0D%0A%20%20%20%20%20%20%20%20%0D%0A%20%20%20%20%7D%0D%0A%7D%0D%0A%0D%0A%2F%2F%20Register%20a%20function%20returning%20a%20typed%20observable.%0D%0Afunction%20getRandomNumberObservable()%20%3A%20Observable%3Cnumber%3E%20%7B%0D%0A%20%20return%20Observable._of(4)%3B%20%2F%2F%20chosen%20by%20fair%20dice%20roll.%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%20%2F%2F%20guaranteed%20to%20be%20random.%0D%0A%7D%0D%0A%0D%0A%2F%2F%20Grab%20the%20observable%20and%20subscribe%20to%20it%0D%0AgetRandomNumberObservable().subscribe((number)%20%3D%3E%0D%0A%20%20console.log('My%20random%20number%3A%20%25d'%2C%20number)%0D%0A)%3B%0D%0A)
<issue_start><issue_comment>Title: Glorious End cast, game hanged username_0: I attacked with 4 creatures. My Gust Walker exert trigger went on the stack (after choosing to exert). Opponent cast Glorious End, but no effect seemed to occur. The end the turn text was in the in-game log, but Glorious End and the exert trigger still showed on the stack. My timer continues to count down. Attempted to rollback, but it did not do anything except switch to the opponent's timer counting down instead (still in attacks, etc). <issue_comment>username_1: This looks like a nightmare to work out....with the extra combat steps, etc. <issue_comment>username_2: Is this still occurring with the commit referenced in #3221 ? commit 80beebc <issue_comment>username_3: The fix only concerns activated abilities. Never found a problem with spells ending the turn.<issue_closed> <issue_comment>username_3: Should be fixed with 80beebccf58400ac1ee0118d76e4cddaa2fec422 (1.4.23V3).
<issue_start><issue_comment>Title: Enable GPORCA to generate better plans for non-correlated exists subquery in the WHERE clause username_0: Consider the following exists subquery, `(select * from bar)`. GPORCA generates an elaborate count based implementation of this subquery. If bar is a fact table, the count is going to be expensive. ``` vraghavan=# explain select * from foo where foo.a = foo.b and exists (select * from bar); QUERY PLAN ------------------------------------------------------------------------------------------------------------------ Gather Motion 3:1 (slice3; segments: 3) (cost=0.00..1368262.79 rows=400324 width=8) -> Nested Loop (cost=0.00..1368250.86 rows=133442 width=8) Join Filter: true -> Table Scan on foo (cost=0.00..461.91 rows=133442 width=8) Filter: a = b -> Materialize (cost=0.00..438.57 rows=1 width=1) -> Broadcast Motion 1:3 (slice2) (cost=0.00..438.57 rows=3 width=1) -> Result (cost=0.00..438.57 rows=1 width=1) Filter: (count((count()))) > 0::bigint -> Aggregate (cost=0.00..438.57 rows=1 width=8) -> Gather Motion 3:1 (slice1; segments: 3) (cost=0.00..438.57 rows=1 width=8) -> Aggregate (cost=0.00..438.57 rows=1 width=8) -> Table Scan on bar (cost=0.00..437.95 rows=332395 width=1) Optimizer status: PQO version 2.35.1 (14 rows) ``` Planner on the other hand uses LIMIT as shown in the INIT plan. 
```
vraghavan=# explain select * from foo where foo.a = foo.b and exists (select * from bar);
QUERY PLAN
------------------------------------------------------------------------------------------------
Gather Motion 3:1 (slice2; segments: 3) (cost=0.03..13611.14 rows=1001 width=8)
-> Result (cost=0.03..13611.14 rows=334 width=8)
One-Time Filter: $0
InitPlan (slice3)
-> Limit (cost=0.00..0.03 rows=1 width=0)
-> Gather Motion 3:1 (slice1; segments: 3) (cost=0.00..0.03 rows=1 width=0)
-> Limit (cost=0.00..0.01 rows=1 width=0)
-> Seq Scan on bar (cost=0.00..11072.84 rows=332395 width=0)
-> Seq Scan on foo (cost=0.00..13611.11 rows=334 width=8)
Filter: a = b
Settings: optimizer=off
Optimizer status: legacy query optimizer
(12 rows)
```
While GPORCA does not support init-plan, we can nevertheless generate a better plan by using LIMIT instead of count. After this PR, GPORCA will generate the following plan with a LIMIT clause.
```
vraghavan=# explain select * from foo where foo.a = foo.b and exists (select * from bar);
QUERY PLAN
------------------------------------------------------------------------------------------------------------
Gather Motion 3:1 (slice3; segments: 3) (cost=0.00..1368262.73 rows=400324 width=8)
-> Nested Loop EXISTS Join (cost=0.00..1368250.80 rows=133442 width=8)
Join Filter: true
-> Table Scan on foo (cost=0.00..461.91 rows=133442 width=8)
Filter: a = b
-> Materialize (cost=0.00..438.57 rows=1 width=1)
-> Broadcast Motion 1:3 (slice2) (cost=0.00..438.57 rows=3 width=1)
-> Limit (cost=0.00..438.57 rows=1 width=1)
-> Gather Motion 3:1 (slice1; segments: 3) (cost=0.00..438.57 rows=1 width=1)
-> Limit (cost=0.00..438.57 rows=1 width=1)
-> Table Scan on bar (cost=0.00..437.95 rows=332395 width=1)
Optimizer status: PQO version 2.35.1
(12 rows)
```
In https://github.com/greenplum-db/gporca/pull/193, I tried doing it in the pre-processing step.
But the feedback was right that such an approach violated Orca's design principle that pre-processing does not introduce plan alternatives. In this attempt, I have added it as part of the subquery handler. While it does not reduce the search space like https://github.com/greenplum-db/gporca/pull/193, it does however generate the better plan. <issue_comment>username_1: For the future, I would suggest not to mix comment reformatting commits into actual PR. We should create a separate PR for comment reformatting. It makes it hard to review.
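The intuition behind the rewrite can be illustrated outside the optimizer: a non-correlated EXISTS only needs to observe one row, so a LIMIT-style implementation can stop scanning immediately, while a count-based implementation reads the whole table. A small JavaScript sketch of that difference (illustrative only, not GPORCA code):

```javascript
// Simulate a table scan that records how many rows were actually read.
function makeScan(rows) {
  let rowsRead = 0;
  function* scan() {
    for (const row of rows) {
      rowsRead += 1;
      yield row;
    }
  }
  return { scan, rowsRead: () => rowsRead };
}

// COUNT-based EXISTS: count(*) > 0 forces a full scan of bar.
function existsViaCount(source) {
  let count = 0;
  for (const _ of source.scan()) count += 1;
  return count > 0;
}

// LIMIT-based EXISTS: stop as soon as one row is produced (LIMIT 1).
function existsViaLimit(source) {
  for (const _ of source.scan()) return true;
  return false;
}

// With bar's row count from the plans above:
const bar = Array.from({ length: 332395 }, (_, i) => i);

const countSide = makeScan(bar);
console.log(existsViaCount(countSide), countSide.rowsRead()); // true 332395

const limitSide = makeScan(bar);
console.log(existsViaLimit(limitSide), limitSide.rowsRead()); // true 1
```

Both forms return the same boolean, but the LIMIT form touches one row where the count form touches all 332,395 — which is exactly why the LIMIT plan is cheaper on a fact table.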
<issue_start><issue_comment>Title: Changed the XHTML question username_0: is believed to be taken from [here](http://www.skilledup.com/articles/html-html5-interview-questions-answers/) (along with several others). While there's nothing wrong with it, I believe it has the potential to extract more about a candidate's skills and knowledge if he's asked to compare HTML and XHTML instead. Just my $0.02. Would be happy if you guys can correct me. <issue_comment>username_1: :+1: I like this change. **Quick note though:** The website/blog you reference was posted in *August, 2013*, doing a quick `git blame` will show you that the question `What are the limitations when serving XHTML pages?` has been in this document well before *May, 2013*. If anything, that site seems to have taken questions from us. Not the other way around. This document has existed since *February, 2012* and is often ripped off without citation (so I'm not surprised). I definitely understand topical overlap but the phrasing is the exact same #JustSayin <issue_comment>username_0: Oops, sorry. I didn't realize that this repo has been around for this long. Whoa. <issue_comment>username_1: @username_0 Here's my original blog post from *February 9th, 2012* http://username_1.me/development/front-end-job-interview-questions/ The questions looked a lot different back then... <issue_comment>username_1: Oh wow... yea that question originates from this original document... so you can put a date going back to then <issue_comment>username_0: Haha I love those "no one can" bonus points :smile:
<issue_start><issue_comment>Title: Fix sequencer recovery with fastSMRLoader username_0: This patch moves important data for checkpoint entries from the payload into the metadata map. This is needed because when the corfu server restarts, it may not be able to deserialize the payload. <issue_comment>username_1: I have added the verification step. I still need to add exception handling in the necromancer to avoid recovery in case of irrecoverable exceptions. <issue_comment>username_2: Results automatically generated by _CorfuDB Benchmark Framework_ to assess the performance of this pull request for commit **c5a4f56**.
*** 0.0% transaction FAILURE rate for NonConflictingTx+Scan workload, 1 threads, Disk mode
*** 0.0% transaction FAILURE rate for NonConflictingTx+Scan workload, 5 threads, Disk mode
*** 0.0% transaction FAILURE rate for NonConflictingTx+Scan workload, 10 threads, Disk mode
*** 0.0% transaction FAILURE rate for NonConflictingTx+Iterator workload, 1 threads, Disk mode
*** 0.0% transaction FAILURE rate for NonConflictingTx+Iterator workload, 5 threads, Disk mode
*** 0.0% transaction FAILURE rate for NonConflictingTx+Iterator workload, 10 threads, Disk mode
*** 0.0% transaction FAILURE rate for NonConflictingTx workload, 1 threads, Disk mode
*** 0.0% transaction FAILURE rate for NonConflictingTx workload, 5 threads, Disk mode
*** 0.0% transaction FAILURE rate for NonConflictingTx workload, 10 threads, Disk mode
An interactive dashboard with Pull Request Performance Metrics for ALL cluster types and numbers of threads in run, is available at: [Pull Request #803 Graphs ](https://snapshot.raintank.io/dashboard/snapshot/8ZpmqoE4Hy2M73hEqsSDn9wMbIJLAWfr)
<issue_start><issue_comment>Title: Add missing associated type Item to Iterator username_0: <issue_comment>username_1: Thanks for the pull request, and welcome! The Rust team is excited to review your changes, and you should hear from @GuillaumeGomez (or someone else) soon. If any changes to this PR are deemed necessary, please add them as extra commits. This ensures that the reviewer can see what has changed since they last reviewed the code. Due to the way GitHub handles out-of-date commits, this should also make it reasonably obvious what issues have or haven't been addressed. Large or tricky changes may require several passes of review and changes. Please see [the contribution instructions](https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md) for more information. <issue_comment>username_2: @username_3 r+ rollup <issue_comment>username_3: :pushpin: Commit 1a00c8f has been approved by `username_2` <!-- @username_3 r=username_2 1a00c8fe0f2c5c20509a7337a5776386cb00f09b -->
<issue_start><issue_comment>Title: feature: load next src when current src return 404 username_0:
```
src: [
  { media: 'audio/mp4', path: 'http://0.0.0.0:1109/m/spec/media/sound.m4a' },
  { media: 'audio/mp3', path: 'http://0.0.0.0:1109/m/spec/media/sound.mp3' }
]
```
It would be nice if boombox could load the next provided src if the current src returns 404
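Until the library supports this, a fallback can be implemented outside it by trying each src in order and moving on when a load fails (e.g. on a 404). A rough sketch of the idea, assuming a promise-returning loader — the `loadAudio` parameter is hypothetical, not part of the boombox API:

```javascript
// Try each source in order; resolve with the first one that loads,
// and reject only if every source fails.
async function loadFirstPlayable(srcList, loadAudio) {
  const errors = [];
  for (const src of srcList) {
    try {
      return await loadAudio(src.path); // e.g. rejects on HTTP 404
    } catch (err) {
      errors.push(`${src.path}: ${err.message}`);
    }
  }
  throw new Error(`no playable source: ${errors.join('; ')}`);
}

// Example with a fake loader where the .m4a source returns 404:
const srcs = [
  { media: 'audio/mp4', path: 'http://0.0.0.0:1109/m/spec/media/sound.m4a' },
  { media: 'audio/mp3', path: 'http://0.0.0.0:1109/m/spec/media/sound.mp3' },
];

const fakeLoader = (path) =>
  path.endsWith('.m4a')
    ? Promise.reject(new Error('404 Not Found'))
    : Promise.resolve(path);

loadFirstPlayable(srcs, fakeLoader).then((path) => console.log('loaded', path));
// → loaded http://0.0.0.0:1109/m/spec/media/sound.mp3
```

The same loop shape works whether "load" means an XHR probe or attaching the src to an `<audio>` element and waiting for its error event.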
<issue_start><issue_comment>Title: RTC Interrupts stops after 5-10 seconds username_0: Hi All,
For our hardware I have changed the MCU to a STM32L151xE type instead of the STM32L151xB. Now when I upload the firmware without any changes it runs fine. But when I change the build define of the MCU type to STM32L151xE (I have added the header file stm32l151xe.h version V2.1.2, same version as the stm32l151xb.h) the RTC interrupt for handling all the timer events stops firing after 5-10 seconds of running.
I'm using the latest master commit version 4.3.0
Any ideas of why this is happening?
Thanks, Michael <issue_comment>username_0: I have found the issue. The RtcStartWakeUpAlarm should also set the SubSecond values for certain MCU types. I suggest RtcStartWakeUpAlarm is changed to:
```c
static void RtcStartWakeUpAlarm( uint32_t timeoutValue )
{
    RtcCalendar_t now;
    RtcCalendar_t alarmTimer;
    RTC_AlarmTypeDef alarmStructure;

    HAL_RTC_DeactivateAlarm( &RtcHandle, RTC_ALARM_A );
    HAL_RTCEx_DeactivateWakeUpTimer( &RtcHandle );

    // Load the RTC calendar
    now = RtcGetCalendar( );

    // Save the calendar into RtcCalendarContext to be able to calculate the elapsed time
    RtcCalendarContext = now;

    // timeoutValue is in ms
    alarmTimer = RtcComputeTimerTimeToAlarmTick( timeoutValue, now );

    alarmStructure.Alarm = RTC_ALARM_A;
    alarmStructure.AlarmDateWeekDaySel = RTC_ALARMDATEWEEKDAYSEL_DATE;
    alarmStructure.AlarmMask = RTC_ALARMMASK_NONE;
    alarmStructure.AlarmTime.TimeFormat = RTC_HOURFORMAT12_AM;
#if defined(STM32L100xBA) || defined (STM32L151xBA) || defined (STM32L152xBA) || defined(STM32L100xC) || defined (STM32L151xC) || defined (STM32L152xC) || defined (STM32L162xC) || defined(STM32L151xCA) || defined (STM32L151xD) || defined (STM32L152xCA) || defined (STM32L152xD) || defined (STM32L162xCA) || defined (STM32L162xD) || defined(STM32L151xE) || defined(STM32L151xDX) || defined (STM32L152xE) || defined (STM32L152xDX) || defined (STM32L162xE) || defined (STM32L162xDX)
    // Only MCUs with the RTC SubSecond feature have these alarm fields
    alarmStructure.AlarmSubSecondMask = RTC_ALARMSUBSECONDMASK_ALL;
    alarmStructure.AlarmTime.SubSeconds = 0;
#endif
    alarmStructure.AlarmTime.Seconds = alarmTimer.CalendarTime.Seconds;
    alarmStructure.AlarmTime.Minutes = alarmTimer.CalendarTime.Minutes;
    alarmStructure.AlarmTime.Hours = alarmTimer.CalendarTime.Hours;
    alarmStructure.AlarmDateWeekDay = alarmTimer.CalendarDate.Date;

    if( HAL_RTC_SetAlarm_IT( &RtcHandle, &alarmStructure, RTC_FORMAT_BIN ) != HAL_OK )
    {
        assert_param( FAIL );
    }
}
```
<issue_closed> <issue_comment>username_0: Why did you close this? I think you should really add this because it is part of the HAL. <issue_comment>username_2: Your fix also worked for us on STM32L151CCT6. Without it we observe the same problem - RTC interrupts stop working after a random period of time. <issue_comment>username_1: Your fix for the SubSeconds looks valid even though it may also be necessary to fully handle the SubSeconds field in order to not lose time precision. For your information the boards directory files are meant for a specific end-device hardware thus a specific MCU. None of the currently used MCUs has the SubSecond, that's why your fix isn't included. If I'm not wrong the latest version of the used MCU have the SubSecond thus, what we could do is to add the specific conditional pre-processing for this new MCU version. We are currently developing a new end-device hardware that includes an MCU with the SubSecond field which when ready will provide an example on how to fully handle the SubSecond field. <issue_comment>username_0: Thanks for your elaboration! Could this "not handling the SubSecond correctly" issue be linked to this other issue I have? [RX2 Issue](https://github.com/Lora-net/LoRaMac-node/issues/131) <issue_comment>username_1: It may be a reason. But, as you said your timing measurements look right. In order to debug what is going on I would advise you to 1. Verify the second window parameters. (Datarate and Frequency) 2.
If the previous are correct you may try to open the windows earlier in time and for longer time. 3. If you get the downlinks working then it may mean that the RTC timings aren't precise enough. Hope it helps you find a solution for your issue. <issue_comment>username_0: Is it possible to get some development version of this SubSecond example? Or do you have any indication on when this will be released? <issue_comment>username_3: HI, @username_1 ,we encounter the same problem. we use STM32L072/STM32L052. we found RTC interrupt stop after several times. and the "subseconde issues " does not work for us... Is that the sdk RtcStartWakeUpAlarm can not work on STM32L0xx MCU ? <issue_comment>username_4: STMicroelectronics recently created a fork of LoRaMac-node with support of more STM32 families. Maybe this can help. http://www.st.com/content/st_com/en/products/embedded-software/mcus-embedded-software/stm32-embedded-software/stm32cube-expansion-software/i-cube-lrwan.html <issue_comment>username_3: the STMicroelectronics's code works.. Rtc_subseond needs to be set in Stm32L0xx MCU thank u... username_3@foxmail.com <issue_comment>username_1: Currently we are working on a new platform that is based on the STM32L051C8 which has the sub <issue_comment>username_1: We close this issue as we just pushed the MoteII platform support which is based on the STM32L051C8 MCU.<issue_closed>
<issue_start><issue_comment>Title: improve error text
username_0: <img width="577" alt="screen shot 2016-06-05 at 10 47 24 am" src="https://cloud.githubusercontent.com/assets/963093/15806141/f0744ab0-2b0a-11e6-904d-aaf4f7e31f50.png">

"Save current notebook as a file?" -> "Download a copy of the current notebook?"<issue_closed>
<issue_start><issue_comment>Title: Compile Issue With New Packager For iOS
username_0: Every time I try to package for iOS using the new faster packager for Adobe AIR with this ANE, it never works. I always get a duplicate symbol error:

```
[java] duplicate symbol _traits:so.cuo.platform.admob::AdmobFunNames$ in:
[java]     AOTBuildOutput-0000000000_17_1.o
[java]     so.cuo.platform.admob-0_0_1.o
[java] duplicate symbol _traits:so.cuo.platform.admob::AdmobFunNames in:
[java]     AOTBuildOutput-0000000000_17_1.o
[java]     so.cuo.platform.admob-0_0_1.o
[java] Compilation failed while executing : ld64
[java] ld: 2 duplicate symbols for architecture armv7
```
<issue_comment>username_1: Don't use the faster packager, and download the latest version.
<issue_comment>username_0: We have to start using the faster packager; it is the default compiler going forward. Why don't you recompile the ANE correctly or release the source so someone else can re-compile it? Otherwise this becomes useless.
<issue_comment>username_1: Download the latest version and try again. We have used it for months.
<issue_comment>username_0: I've tested the latest ANE and it keeps throwing the duplicate symbol error. Are you using the new packager with the -useLegacyAOT true flag? Are you compiling for iOS? This only happens for iOS. Please fix. It's just a simple re-compile.<issue_closed>
<issue_comment>username_1: upgraded
<issue_start><issue_comment>Title: Tables have no spacing
username_0: Hi, I'm trying to create some tables, but there's no border and no spacing on the cells. Am I missing something? Thanks.

```
First Header    | Second Header
--------------- | -------------
Content Cell    | Content Cell
Content Cell    | Content Cell
```
<issue_comment>username_1: by default tables seem to have no style. we could adjust the markdown parser to add bootstrap classes by default.
<issue_comment>username_2: +1
<issue_comment>username_1: anyone willing to send a pull request? As far as I see this could be added by extending the method [`renderTable()`](https://github.com/username_1/markdown/blob/e2a490ceec590bf5bfd1b43bd424fb9dceceb7c5/block/TableTrait.php#L86) in https://github.com/yiisoft/yii2-apidoc/blob/master/helpers/ApiMarkdown.php with something like `str_replace('<table>', '<table class="table table-bordered">', parent::renderTable(...))`<issue_closed>
<issue_comment>username_1: fixed.
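The fix sketched in the thread above is PHP (`str_replace` inside an overridden `renderTable()`). The same post-processing idea, shown here as a hypothetical Python sketch for illustration — the function name `render_table_with_classes` is invented, not part of the actual project:

```python
def render_table_with_classes(rendered_html: str) -> str:
    """Post-process a parser's table output to add Bootstrap classes.

    Mirrors the suggested PHP fix: a plain string replacement applied to
    whatever the base table renderer returned.
    """
    return rendered_html.replace(
        "<table>", '<table class="table table-bordered">'
    )

# The base renderer's output would be a plain, unstyled table:
plain = "<table><tr><th>First Header</th></tr></table>"
styled = render_table_with_classes(plain)
```

Since only the opening `<table>` tag is rewritten, the closing tag and the cell markup pass through untouched.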
<issue_start><issue_comment>Title: gochimp support for handlebars
username_0: Currently, gochimp does not support handlebars templates with Mandrill. The interface assumes merge vars are string/string, when in fact the API documentation stipulates they can be any JSON value. There is also no parameter to tell Mandrill which rendering engine to use on a per-message basis:

https://mandrillapp.com/api/docs/messages.JSON.html#method=send
http://blog.mandrill.com/handlebars-for-templates-and-dynamic-content.html
<issue_comment>username_1: Hi @mattbaird, what do you think of this pull request? The MergeLanguage support seems good to me. And we need the support. Thanks @username_0.
<issue_start><issue_comment>Title: Ruins in Black Market
username_0: Ruins are supply cards, which means they should appear in the Black Market. But they aren't in the CardDb, so they don't. Ruins need to be added to the CardDb.<issue_closed>
<issue_comment>username_0: I was in error here - it seems that Ruins are not Kingdom Cards, but are in the Supply. Since the Black Market grabs all Kingdom Cards, not all Supply cards, they cannot be in the market.
<issue_start><issue_comment>Title: Restore WebProxy in CoreCLR
username_0: WebProxy is ifdef'ed out for CoreCLR builds; we need it back by implementing IWebProxy ourselves.
<issue_comment>username_0: We need to look at getting rid of WebProxy on CoreCLR, and potentially all over the place. CC // @username_1
<issue_comment>username_1: Yes - `WebProxy` was a convenience class that was part of the .NET Framework managed HTTP stack. .NET Core uses a different HTTP implementation and the `WebProxy` type is no longer part of this. Consider using the following instead:

1. Create a custom type that implements the `IWebProxy` interface. All .NET Core HTTP APIs related to proxies accept any type that implements `IWebProxy`.
2. Use the `WinHttpHandler.ProxyUsePolicy` property to select the appropriate default for your scenario. If you want to use the same proxy that the user has configured in Internet Explorer settings, use `WinHttpHandler.ProxyUsePolicy = WindowsProxyUsePolicy.UseWinInetProxy`.
3. To use a custom proxy, set `WinHttpHandler.ProxyUsePolicy = WindowsProxyUsePolicy.UseCustomProxy` and `WinHttpHandler.Proxy` to an instance of your custom proxy type.
4. If you are writing a service and want to set a specific proxy for all services on the system, you can set it for the WinHTTP stack via a script (https://technet.microsoft.com/en-us/library/cc731131(v=ws.10).aspx) and then set `WinHttpHandler.ProxyUsePolicy = WindowsProxyUsePolicy.UseWinHttpProxy`. (Note: This is different from setting the Internet Explorer proxy.)
5. For getting or setting proxy credentials, use `WinHttpHandler.DefaultProxyCredentials`, or for custom proxies, set the credentials property on an instance of your custom proxy type.

@DavidSh might have more recommendations/suggestions.

Thanks, Sid
<issue_comment>username_2: There are simpler ways to replace `WebProxy`. You don't need to start changing all of your code to rip out calls to `WebProxy`. You should first just create your own `System.Net.WebProxy` class (i.e. a shim) and implement the `IWebProxy` interface on it. How exactly you implement this class depends on how you are using `WebProxy`. I could assist more if I knew exactly how you are using the `WebProxy` class. What parameters are you passing the constructor? What properties are you setting? Why use this class at all if you can simply use the "default system" proxy? Are you using `HttpWebRequest` or `HttpClient`? If you can describe in detail how/why you are using this class, I will be able to help more.
<issue_comment>username_3: @username_2 NuGet uses `HttpClient`
<issue_comment>username_4: @username_2 NuGet uses WebProxy instances to populate [ProxyCache](https://github.com/NuGet/NuGet.Client/blob/dev/src/NuGet.Core/NuGet.Configuration/Proxy/ProxyCache.cs). First, it attempts to read proxy settings from a config file or from an environment variable. Proxy settings can be set as a triplet of host, user name, and password values. An optional bypass list can be added too. Next, it validates whether a system proxy is set by testing the combination of values `WebRequest.GetSystemWebProxy()` and `WebRequest.DefaultWebProxy`. The IWebProxy objects thus acquired are consumed by initializing the `HttpClientHandler.Proxy` attribute of our custom [CredentialPromptWebRequestHandler](https://github.com/NuGet/NuGet.Client/blob/dev/src/NuGet.Core/NuGet.Protocol.Core.v3/HttpSource/HttpHandlerResourceV3Provider.cs#L86) => WebRequestHandler => HttpClientHandler.<issue_closed>
<issue_start><issue_comment>Title: Support inline comments
username_0: Danger-rb's methods support the following parameters:

```rb
warn (message: String, sticky=false: Boolean, file=nil: String, line=nil: String)
```

These methods allow for adding a warning/error/message inline on a specific file. What would be involved in porting that functionality to Danger-js? The current type definition of `warn` in Danger-js is:

```ts
warn(message: MarkdownString): void
```

Could we just add more parameters? Would we accept an options object as a second parameter?

```ts
// perhaps this
warn(message: MarkdownString, file=undefined: String, line=undefined: String)

// or
warn(message: MarkdownString, options: { file: String, line: String })
```

Also, where would `sticky: Boolean` come into play - would that be a part of this work, or should that be a part of a separate issue?
<issue_comment>username_1: Both of those would work (and be backwards compatible) I think. I'm in favour of the options hash. Gives more room for further improvement in the future without making a lot of args on those functions.

Generally the usage of `file: line:` has been inside danger-rb plugins, where elegance of the code is less important (you want lots of 2-3 LOC rules ideally).

Implementation: I see two options here. You can start a review, and make the comments inside there (see [this discussion](https://github.com/danger/danger/issues/684) on what that could look like) or just outright use the commit API.
Either way, there needs to be:

* A "main violations" [abstraction](https://github.com/danger/danger/blob/master/lib/danger/request_sources/github.rb#L120) that posts the normal GitHub comment (what we have right now)
* A "file violation" abstraction that handles `file: line:` -> [PR Diff](https://github.com/danger/danger/blob/master/lib/danger/request_sources/github.rb#L316-L363) [positioning system](https://github.com/danger/danger/blob/master/lib/danger/request_sources/github.rb#L216-L250)

This is by far the most gnarly code in Danger-rb, so I advise taking your time on it and trying to find better abstractions than we have in ruby.
<issue_comment>username_0: Regarding https://github.com/danger/danger/issues/684, I'm thinking adding a GitHub review feature could be separate from just adding a `{ file, line }` options hash to the warn/message/fail/markdown functions. I can see value in making one-off comments using Danger as well as integrating into GitHub's review feature. If Danger-rb supports both (or is in the process of doing so), Danger-js can support the same API for parity between tools (we can leave `sticky` out of Danger-js, though 😃).

Thanks for posting links and explaining the PR Diff positioning system - they will be useful for understanding how to parse the diff. A coworker went through a similar experience [trying to parse GitHub diffs](https://github.com/stkent/githubdiffparser) for an [Android static analysis tool](https://github.com/btkelly/gnag) he and another coworker have been working on, so I think those could also be some helpful references.

I'm thinking the diff parser could be added to this repo, and if generic enough, it could be pulled out into an npm package for reuse - I didn't see anything when searching npmjs.com that did quite what we need here.
<issue_comment>username_1: We do already have a diff parser in Danger JS - [here](https://github.com/danger/danger-js/blob/master/source/platforms/GitHub.ts#L50) - but what I mean WRT the diff parser is: you need to take an absolute file path + line, e.g. `{ file: "CHANGELOG.md", line: 6 }`, and come back with something that _could_ be found inside the existing diff object that we have, e.g. `{ commit: 918b1dc, file: "CHANGELOG.md", line: 9 }`, or if it wasn't found, then it needs to go inside the main comment.

The API docs will [give you a sense](https://developer.github.com/v3/repos/comments/#create-a-commit-comment) of what I mean.

Other than that, all sounds fine to me 👍
- else do the details summary thing (https://github.com/dear-github/dear-github/issues/166#issuecomment-236342209) inside the `githubResultsTemplate` - if the inline object, implement a review call and inline feedback - add a setting (env variables/ api) to fallback to use the commit api if the user do not want a review Does it seems a good plan for you ? What are the difficult points / points to carefully design the abstraction ? <issue_comment>username_1: That feels about right, yeah 👍 Here's the main chunk of the [work in danger ruby](https://github.com/danger/danger/blob/0354f6dd943234420639fcd1c1a13a083f723f66/lib/danger/request_sources/github/github.rb#L289-L406). I've slowly grown to the idea that I want to see something like https://github.com/danger/danger-js/issues/196 which will end up requiring the "did this danger instance create this table" but that's further down the line. Most of the work is in deciding where a comment should go, and making sure every comment is up-to-date when danger does a second run <issue_comment>username_0: If it's possible to weave GitHub reviews and inline comments together through some API configuration option, that sounds awesome. I guessed that inline commenting would be the simplest first step, but I'm not familiar with GitHub's API on either of these things, so I'm not sure what would be doable in one PR. For reference, here is the Ruby PR where the GitHub reviews feature was introduced in Danger - https://github.com/danger/danger/pull/702. <issue_comment>username_2: @username_0 @username_1: until September 6, I have a bit of vacation and two talks for meetups/training to prepare so it will be pretty busy. I would definitively have more time after that and your links is so valuable to me. I have now a clearer idea about the work required. <issue_comment>username_3: Any progress on this? 
<issue_comment>username_4: @username_3 https://github.com/danger/danger-js/pull/529 <issue_comment>username_2: Wow.<issue_closed> <issue_comment>username_1: ah yeah! This can be closed 👍
<issue_start><issue_comment>Title: Initial release considerations
username_0: For an initial release I'm thinking just running a command to output IL for the current class would suffice. A long-term goal would be to have a side-by-side pane (similar to VS Code's markdown editor) that highlighted IL based on what you've selected in the code window.<issue_closed>
<issue_start><issue_comment>Title: First Chance Exception when Serializing
username_0: I have an object in the Product Meta Data value instead of a string, and when posting the Product I get "A first chance exception of type 'System.Runtime.Serialization.InvalidDataContractException'". I replaced your SerializeJSon<T>(T t) function contents with a Newtonsoft JSON.NET call: `return JsonConvert.SerializeObject(t);`

Now it works. Don't know if this is the best solution, but it works :)
<issue_comment>username_1: I'm thinking of using JSON.NET in a future version; however, as the WooCommerce API doesn't always return JSON per its specification, there might still be some workarounds to do. Have you set the DataContract attribute for your object, and the DataMember attribute for its properties? If all are set, it should be fine.
<issue_comment>username_0: Do you see any problems using JSON.NET for the serialization portion as it is now? I've tried with products, customers and categories (adding, removing and updating) and it ran through everything okay... I did set DataContract for classes and DataMember attributes for all properties.
<issue_comment>username_1: I will have a look and see if JSON.NET works fine.
<issue_comment>username_1: @username_0 You can use JSON.NET now, please see the wiki pages.<issue_closed>
<issue_start><issue_comment>Title: [HttpKernel] Added use of instantiateController method provided in (#12022)
username_0:

| Q             | A
| ------------- | ---
| Bug fix?      | no
| New feature?  | yes
| BC breaks?    | no
| Deprecations? | no
| Tests pass?   | yes
| Fixed tickets |
| License       | MIT
| Doc PR        |

<issue_comment>username_1: Thank you @username_0.
<issue_start><issue_comment>Title: GRIP as a ROS Node
username_0: I'm opening this issue to keep track of all of the links/information I've needed to get this to work. The ROS documentation is spotty and very scattered. Hopefully this will help keep track of how I got this to work.

Virtual Box with ROS already installed: http://nootrix.com/software/ros-indigo-virtual-machine/
Adding ROS as a dependency: http://wiki.ros.org/rosjava/Tutorials/indigo/No%20Ros%20Installation
<issue_comment>username_1: Please move this to a project if you feel like we need to keep this information around
<issue_comment>username_0: I should probably move it into docs in the repository.
<issue_start><issue_comment>Title: Ability to sort parameters by name or type
username_0: For APIs with a lot of parameters, it would be useful to let the user sort the parameters either by name or by type (headers together, for example).
<issue_comment>username_1: i really don't know that we want this. Parameters are ordered by the server in an array intentionally. If we allow sorting, it should be an option, not the default.
<issue_comment>username_2: If it's a "nice to have" let's move it to Future. I guess it should be like when you click on a column header it sorts based on that column.
<issue_comment>username_1: i think this belongs as part of a redesign. I'm going to close it as an issue.<issue_closed>
<issue_start><issue_comment>Title: phoenix.new creates faulty page when --no-brunch is used
username_0: Not critical, but using the `--no-brunch` switch with `phoenix.new` results in a javascript error:

```
ReferenceError: require is not defined
<script>require("web/static/js/app")</script>
```

I'm using Phoenix 0.13.1.

Steps to reproduce:

1. Create a project with `mix phoenix.new testapp --no-brunch`
2. Start the server and open localhost:4000 in a browser
3. You will see the javascript error in the browser console

The reason is that without brunch, phoenix.js is not merged into app.js and thus `require` is not available. app.js is an empty file.

Proposed solution: When not using brunch, the layout template (application.html.eex) should explicitly load phoenix.js and provide a stub app.js. The question is what this stub should contain: maybe register an empty module named `app`?
<issue_comment>username_1: I'll take a stab at this one - not generating the require line should prevent this issue, but still let you use app.js from `priv`<issue_closed>
<issue_comment>username_0: Thanks for looking into this! Not generating the require will suppress the javascript error. But then phoenix.js is still not available without brunch. What about including the line

```html
<script src="<%= static_path(@conn, "/js/phoenix.js") %>"></script>
```

in the layout template? Then `require` is also available.
<issue_start><issue_comment>Title: [rb] implement Remote::Driver as a class not a module
username_0: I don't love this implementation, but it lets me subclass the way I need to, as described in #4232. @username_1 is there a reason not to go this route other than it is a little uglier?
<issue_comment>username_1: I remember trying to do something like this, but for some reason I chose a different path, though off the top of my head I don't see why this is worse. If you can make specs pass, let's merge it. If not, I'll try doing that later this week.
<issue_comment>username_0: merged b7007615205970df5ca4e651f1573a46b3535ebf
<issue_start><issue_comment>Title: Make /ai command
username_0: Credit goes to AeonLucid for sniping me to it
<issue_comment>username_0: We have tested it and we have proof it works ![](https://image.prntscr.com/image/27b3f06e75774dd2be632eeefbe92b0b.png)
<issue_comment>username_1: You need to escape HTML
<issue_comment>username_0: @Lord-Haji
<issue_comment>username_2: I honestly don't see why this should be a command. This is only going to get spammed.
<issue_comment>username_3: Agree with bumba
<issue_comment>username_4: Personally I think it'd be fun to have.
<issue_comment>username_0: We're gonna work on this and reopen later
<issue_comment>username_5: What about broadband usage on this?
<issue_comment>username_5: What's to stop someone from doing this command as fast as they can? There should totally be some kind of throttle for it that's stricter than the chatting throttle.
<issue_comment>username_0: @username_5 I suggest we make this a chat-commands.js thing and have it use the command cooldown that was made for broadcasting.
<issue_comment>username_1: If this is a concern, it should be a throttle per IP - it's too easy to send, say, 500 bots spamming this at ~93 times a minute x 500 bots
<issue_comment>username_4: A global cooldown makes more sense, tbh.
<issue_comment>username_1: I'm not sure if this (https://github.com/username_4/Pokemon-Showdown/blob/master/chat-plugins/tcgtabletop.js) is the only place, but if a filter is used, maybe it should account for the other commands that make HTTP requests as well
<issue_comment>username_0: Very true @username_1, thanks for pointing that out. I didn't even consider that as a possibility.
<issue_start><issue_comment>Title: Fixes unexpected 404 error
username_0: This modification fixes a bug I just encountered with a very basic configuration of this role. Here are the properties of my system:

- Role version: `v0.4.1`
- Nginx version: `nginx/1.12.0`
- Ubuntu 16.04

Here are my variables:

```
nginx_sites:
  api-staging:
    domains: "{{ domains }}"
    default_server: True
    upstreams:
      - name: 'api'
        servers: ['unix:///home/some/path.sock']
```

Which generates, as expected:

```
location / {
    try_files $uri $uri.html $uri/ @api =404;
}

location @api {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_redirect off;
    proxy_pass http://api;
}
```

However, calling a working endpoint gives me a 404 error. After a few tries, I found two ways of modifying this configuration to fix the error:

```
location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_redirect off;
    proxy_pass http://api;
}
```

and

```
location / {
    try_files $uri $uri.html $uri/ @api;
}

location @api {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_redirect off;
    proxy_pass http://api;
}
```

This pull request implements the second solution. After some research, I found in [the documentation](https://nginx.org/en/docs/http/ngx_http_core_module.html#try_files) that "The last parameter can also point to a named location, as shown in examples below. Starting from version 0.7.51, the last parameter can also be a code". I'm no Nginx expert, but from what I understand, only the last parameter can be a named location, so this modification seems necessary.
I've tried to imagine what regressions this change could introduce, but so far can't see any:

- Users using a `custom_root_location_try_files` are not affected
- Users with no upstream are not affected
- The last parameter of a try_files MUST be defined, but in our case it necessarily is, because of the structure of the template, so this shouldn't be an issue
- In case of unavailability of the upstream, we get a 502 error. I'm not sure how the current configuration is supposed to behave in such a case, but the 502 error sounds like what we want in such a situation.

Does this sound like a sensible change?
<issue_comment>username_1: Hi, Thanks for the PR, and you are correct. I reviewed the docs and yep, you can't have both an upstream and a code defined like that. The code approach works ok for static sites though, and in the case of an upstream, it will just use the `error_pages` generated by this role.

Have you tested your code change? The current implementation is:

`... $uri/{{ (' @' + item.upstreams[0].name) if (item.upstreams) else '' }} =404;`

Your patch is:

`... $uri/{{ (' @' + item.upstreams[0].name) if (item.upstreams) else '=404' }};`

I'm a little concerned that your patch might result in `$uri/=404` without a space. I didn't test it, so I might be wrong. I'm not sure offhand if Jinja will add a space, but I'm thinking no. Perhaps your patch should be adjusted to:

`try_files $uri $uri.html $uri/ {{ ('@' + item.upstreams[0].name) if (item.upstreams) else '=404' }};`

This way there's always a space in both conditions of the if statement.
<issue_comment>username_0: You're right, I had not tested this new version without an upstream, but I guess it wouldn't work. Anyway, I made the modification that you suggested and tested both with and without an upstream; everything seems to work fine (either when the static file is present, or when it is not).
<issue_comment>username_1: Thanks. I'll test and merge it tonight.
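The spacing concern discussed above can be checked without nginx by emulating the template conditional in plain Python — a hypothetical stand-in for the role's Jinja2 template (the real role renders this through Ansible, and `render_try_files` is an invented name):

```python
def render_try_files(upstream_name=None):
    """Emulate the adjusted template line: the space before the final
    parameter lives outside the conditional, so both the '@upstream'
    and the '=404' branches are preceded by exactly one space."""
    last = "@" + upstream_name if upstream_name else "=404"
    return f"try_files $uri $uri.html $uri/ {last};"

with_upstream = render_try_files("api")      # the named-location branch
without_upstream = render_try_files()        # the status-code branch
```

Both branches render with a single space before the last parameter, which is exactly what the adjusted template in the review comment guarantees.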
<issue_start><issue_comment>Title: Fix for outdated addon error username_0: If you are getting this error: "you are using an outdated addon. to view filepup streams please use salts or another up to date Kodi addon." Go into sprectro settings and Disable movienight in sources and it will work. Thanks to Matt on Facebook <issue_comment>username_1: Yes, thx Important bugs to fix in this week : watch1080p - cloudflare Pubfilm trakt.tv api v2 Important bugs to fix in next week : realdebrid api v3 <issue_comment>username_1: Fixed in 2016.02.20.2<issue_closed>
<issue_start><issue_comment>Title: Pokemon names username_0: Guys, there is a question on kaggle https://www.kaggle.com/forums/f/1614/predict-em-all/t/24308/pokemon-names: I could see that there are pokemon id's in the dataset. Is there a way to map these ids to the pokemon names? Can you help?<issue_closed> <issue_comment>username_1: done
<issue_start><issue_comment>Title: Select list Refresh blows-up username_0: If a service is added to Windows, then the Select tab Refresh button is clicked, CSM blows-up with an array error. Workaround: Just restart CSM. The current list of services is read when the program starts so the added service will be included. The problem is likely an array resize issue after services have been read the first time.<issue_closed> <issue_comment>username_0: **# Workaround** Go to File, Restart after new services have been installed. A restart will cause CSM to re-read the available services so they are available to be selected. <issue_comment>username_0: If a service is added to Windows, then the Select tab Refresh button is clicked, CSM blows-up with an array error. Workaround: Just restart CSM. The current list of services is read when the program starts so the added service will be included. The problem is likely an array resize issue after services have been read the first time. <issue_comment>username_0: Done, build 173.<issue_closed>
<issue_start><issue_comment>Title: Ganeti expects to get FQDNs from socket.gethostname() username_0: Originally reported of Google Code with ID 8. ``` There are a fair number of comparisons of FQDNs and socket.gethostname() in Ganeti. This is incorrect, since that is meant to return the local hostname, not the FQDN. You should use socket.getfqdn() instead, which is the equivalent of 'hostname -f'. ``` Originally added on 2007-09-27 14:32:01 +0000 UTC. Attachments: [ganeti-fqdn.patch](https://storage.googleapis.com/google-code-attachments/ganeti/issue-8/comment-0/ganeti-fqdn.patch) <issue_comment>username_0: ``` Hi there, This was already fixed in svn revision 173, in a different way (as it was commited before your report). thanks, iustin ``` Originally added on 2007-10-02 09:47:16 +0000 UTC. Changed State: Fixed <issue_comment>username_0: ``` -- Empty comment -- ``` Originally added on 2012-10-22 08:57:07 +0000 UTC. Changed State: Released<issue_closed>
<issue_start><issue_comment>Title: WIP: Adding support for file uploads username_0: :construction: Support for file uploads :construction: Will eventually fix: https://github.com/openjournals/brief-ideas/issues/66 <issue_comment>username_0: The Zenodo worker needs extending to add support for posting files before this can be merged.
<issue_start><issue_comment>Title: Icon Request: fa-molecule username_0: an icon to ilustrate molecules like this http://previews.123rf.com/images/molekuul/molekuul1402/molekuul140200031/25767675-Propylparaben-paraben-preservative-molecule-flat-icon-style-Oxygen-and-carbon-atoms-shown-as-circles-Stock-Vector.jpg <issue_comment>username_1: Duplicate of #5190. Please +1 that.<issue_closed> <issue_comment>username_0: thanks
<issue_start><issue_comment>Title: Docs for training entity out-dated?
username_0: Hi, I'm pretty new to spacy, actually trying out some of the example code in the docs. It seems like the example at the following link is out-dated: https://spacy.io/docs/usage/training#train-entity

Am I correct in saying that the call to entity.update should take a GoldParse object? Or am I just doing something wrong?
<issue_comment>username_1: I think you're probably right, thanks.
<issue_comment>username_2: The new version 1.8.0 comes with bug fixes to the NER training procedure and a new `save_to_directory()` method. We've also updated the docs with more information on training and NER training in particular:

* Workflow: [Training the Named Entity Recognizer](https://spacy.io/docs/usage/training-ner)
* Workflow: [Saving and loading models](https://spacy.io/docs/usage/saving-loading)
* Example: [Training an additional entity type](https://github.com/explosion/spaCy/blob/master/examples/training/train_new_entity_type.py)
* [Command line interface](https://spacy.io/docs/usage/cli#model) for initialising, training and packaging models

I hope this helps!<issue_closed>
<issue_start><issue_comment>Title: dotnet new prints useless (?) content generation time number
username_0: From @ellismg on https://github.com/dotnet/cli/issues/5570

This feels like a fit and finish thing, but it was weird to me and I didn't see an existing issue, so I opened one for discussion.

## Steps to reproduce

```
matell@matell-ub:~/git/hello$ ~/bin/dotnet new classlib
Content generation time: 131.0473 ms
The template "Class library" created successfully.
```

## Expected behavior

I would expect that there was no output (or perhaps just the "I generated this library successfully" message).

## Actual behavior

A message telling me how long it took to generate the template, which I don't care about.

## Environment data

`dotnet --info` output:

```
.NET Command Line Tools (1.0.0-rc4-004756)

Product Information:
 Version:            1.0.0-rc4-004756
 Commit SHA-1 hash:  ce59e6b19f

Runtime Environment:
 OS Name:     ubuntu
 OS Version:  14.04
 OS Platform: Linux
 RID:         ubuntu.14.04-x64
 Base Path:   /home/matell/bin/sdk/1.0.0-rc4-004756
```
<issue_comment>username_0: This is fixed by #245<issue_closed>
<issue_start><issue_comment>Title: Only place show/hide all output buttons on notebooks with output blocks
username_0: Right now this is done manually, in build_html_site.py. But this could be automated, either in build_html_site.py or in a template, by checking whether the template has any output blocks.

From build_html.py:

```
# This is manually determining which notebooks get show/hide all output buttons.
# Might be better to scrape the files, look for output cells, and only place
# show/hide all buttons on notebooks with output cells. Can this be done
# in the template?
```
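The scraping idea in the comment above could look roughly like this. A hypothetical sketch — the helper name and the choice to read the raw `.ipynb` JSON directly are assumptions, not the project's actual code:

```python
import json

def notebook_has_output(nb_source: str) -> bool:
    """Return True if any code cell in a .ipynb document carries output.

    Jupyter notebooks are JSON: code cells have an "outputs" list, which
    is empty when the notebook was saved without executed output. A page
    would get show/hide-all buttons only when this returns True.
    """
    nb = json.loads(nb_source)
    return any(
        cell.get("outputs")
        for cell in nb.get("cells", [])
        if cell.get("cell_type") == "code"
    )

# Minimal inline notebook documents instead of real files:
with_output = json.dumps({"cells": [
    {"cell_type": "code",
     "outputs": [{"output_type": "stream", "text": "hi"}]},
]})
without_output = json.dumps({"cells": [
    {"cell_type": "markdown"},
    {"cell_type": "code", "outputs": []},
]})
```

The build script could call this once per notebook file and pass the result to the template, replacing the manual list.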
<issue_start><issue_comment>Title: Unknown custom metadata item with latest Roslyn release username_0: When reading the PDB file generated by Roslyn following exception generated ``` Exception: Microsoft.Cci.Pdb.PdbDebugException: Unknown custom metadata item kind: 6 at Microsoft.Cci.Pdb.PdbFunction.ReadCustomMetadata(BitAccess bits) in c:\Data\Temp\cecil-light\symbols\pdb\Microsoft.Cci.Pdb\PdbFunction.cs:line 327 at Microsoft.Cci.Pdb.PdbFunction..ctor(String module, ManProcSym proc, BitAccess bits) in c:\Data\Temp\cecil-light\symbols\pdb\Microsoft.Cci.Pdb\PdbFunction.cs:line 251 at Microsoft.Cci.Pdb.PdbFunction.LoadManagedFunctions(String module, BitAccess bits, UInt32 limit, Boolean readStrings) in c:\Data\Temp\cecil-light\symbols\pdb\Microsoft.Cci.Pdb\PdbFunction.cs:line 125 at Microsoft.Cci.Pdb.PdbFile.LoadFuncsFromDbiModule(BitAccess bits, DbiModuleInfo info, IntHashTable names, ArrayList funcList, Boolean readStrings, MsfDirectory dir, Dictionary`2 nameIndex, PdbReader reader) in c:\Data\Temp\cecil-light\symbols\pdb\Microsoft.Cci.Pdb\PdbFile.cs:line 298 at Microsoft.Cci.Pdb.PdbFile.LoadFunctions(Stream read, BitAccess bits, Boolean readAllStrings, Int32& age, Guid& guid) in c:\Data\Temp\cecil-light\symbols\pdb\Microsoft.Cci.Pdb\PdbFile.cs:line 396 at Mono.Cecil.Pdb.PdbReader.PopulateFunctions() in c:\Data\Temp\cecil-light\symbols\pdb\Mono.Cecil.Pdb\PdbReader.cs:line 91 at Mono.Cecil.Pdb.PdbReader.ProcessDebugHeader(ImageDebugDirectory directory, Byte[] header) in c:\Data\Temp\cecil-light\symbols\pdb\Mono.Cecil.Pdb\PdbReader.cs:line 75 at Mono.Cecil.ModuleDefinition.ProcessDebugHeader() in c:\Data\Temp\cecil-light\Mono.Cecil\ModuleDefinition.cs:line 763 at Mono.Cecil.ModuleDefinition.ReadSymbols(ISymbolReader reader) in c:\Data\Temp\cecil-light\Mono.Cecil\ModuleDefinition.cs:line 824 at Mono.Cecil.ModuleReader.ReadSymbols(ModuleDefinition module, ReaderParameters parameters) in c:\Data\Temp\cecil-light\Mono.Cecil\AssemblyReader.cs:line 105 at 
Mono.Cecil.ModuleReader.CreateModuleFrom(Image image, ReaderParameters parameters) in c:\Data\Temp\cecil-light\Mono.Cecil\AssemblyReader.cs:line 83 at Mono.Cecil.ModuleDefinition.ReadModule(Stream stream, ReaderParameters parameters) in c:\Data\Temp\cecil-light\Mono.Cecil\ModuleDefinition.cs:line 855 at Mono.Cecil.ModuleDefinition.ReadModule(String fileName, ReaderParameters parameters) in c:\Data\Temp\cecil-light\Mono.Cecil\ModuleDefinition.cs:line 839 at Cosmos.IL2CPU.AppAssembler.GetCecilMethodDefinitionForSymbolReading(MethodBase methodBase) in c:\Users\<user>\Documents\GitHub\Cosmos-1\source\Cosmos.IL2CPU\AppAssembler.cs:line 1575 at Cosmos.IL2CPU.AppAssembler.MethodBegin(MethodInfo aMethod) in c:\Users\<user>\Documents\GitHub\Cosmos-1\source\Cosmos.IL2CPU\AppAssembler.cs:line 226 at Cosmos.IL2CPU.AppAssembler.ProcessMethod(MethodInfo aMethod, List`1 aOpCodes) in c:\Users\kant\Documents\GitHub\Cosmos-1\source\Cosmos.IL2CPU\AppAssembler.cs:line 462 at Cosmos.IL2CPU.ILScanner.Assemble() in c:\Users\<user>\Documents\GitHub\Cosmos-1\source\Cosmos.IL2CPU\ILScanner.cs:line 929 at Cosmos.IL2CPU.ILScanner.Execute(MethodBase aStartMethod) in c:\Users\kant\Documents\GitHub\Cosmos-1\source\Cosmos.IL2CPU\ILScanner.cs:line 251 at Cosmos.IL2CPU.CompilerEngine.Execute() in c:\Users\<user>\Documents\GitHub\Cosmos-1\source\Cosmos.IL2CPU\CompilerEngine.cs:line 238 ``` <issue_comment>username_1: what version of cecil? can you upload the offending binary and pdb somewhere? <issue_comment>username_0: Version 0.9.4 and 0.9.5 both produce same results. I working on create minimum viable library for testing. <issue_comment>username_1: the current released version is 0.9.6.1 can you try that http://www.nuget.org/packages/Mono.Cecil/0.9.6.1<issue_closed> <issue_comment>username_0: I just update to 0.9.6.1 and this was fixed. Thanks. <issue_comment>username_1: no worries :)
<issue_start><issue_comment>Title: Can potentially remove the wrong caller for a read/write lock username_0: For read/write locks there can be multiple readers. This can cause different readers to lock at the same time, which means that the order of the callers with regard to readers can be incorrect. The count within the stats struct should remain correct during add/remove of callers for that type of call; this only affects the callers GList. One potential fix is to declare a session object before starting a session. The session object passed in to the start/end session can then hold information about that session, so that we remove the correct caller. The session object could hold a pointer to the GList element for its associated caller object. If no session is passed in, it behaves the same way as before.<issue_closed>
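The session-object fix can be sketched as follows — in Python for brevity (the real code would use a GLib GList in C), and all names here are hypothetical:

```python
class _Node:
    """Stands in for a GList element; identity, not equality, matters."""
    def __init__(self, caller):
        self.caller = caller

class Session:
    """Handle returned by start_session; remembers the exact node it added."""
    def __init__(self, node):
        self.node = node

class CallerStats:
    def __init__(self):
        self.callers = []  # stands in for the callers GList
        self.count = 0     # the stats count, correct with or without the fix

    def start_session(self, caller):
        node = _Node(caller)
        self.callers.append(node)
        self.count += 1
        return Session(node)

    def end_session(self, session):
        # Remove the exact node recorded in the session (identity match),
        # not "the first caller that looks equal" -- so overlapping readers
        # with the same caller always release their own entry.
        self.callers.remove(session.node)
        self.count -= 1
```

Because each session remembers its own list node, two concurrent readers from the same call site can end in any order without removing each other's entry.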
<issue_start><issue_comment>Title: When will 2.2 official version be available? username_0: @username_1 please tell me <issue_comment>username_1: @username_0 This weekend I'm hoping to refactor the database engine. I plan to release a beta version in early April, and the full version mid-April <issue_comment>username_0: @username_1 and will you need to add translation strings? Tell me so I can update my translation when 2.2 comes out. <issue_comment>username_1: @username_0 First beta <issue_comment>username_0: ok <issue_comment>username_0: Closing this issue because it's solved<issue_closed>
<issue_start><issue_comment>Title: With proto3 syntax, default values aren't being assigned username_0: protobuf.js version: 6.7.0 First of all, this is a very impressive, fast library and we're looking forward to doing a lot with it, particularly with GeoJSON. Thanks! Using proto3 syntax (which we are doing for C#/Swift use), PBFs that are encoded without default enum values are not being assigned the default value upon decoding in protobuf.js. e.g instead of: `{"precision":3,"geometry":{"type":"POINT","coords":["-112002","46587"]}}` We get: `{"precision":3,"geometry":{"coords":["-112002","46587"]}}` This is the message being used: ```proto message Geometry { Type type = 1; repeated uint32 lengths = 2 [packed = true]; // coordinate structure in lengths repeated sint64 coords = 3 [packed = true]; // delta-encoded integer values repeated Geometry geometries = 4; repeated Value values = 13; repeated uint32 custom_properties = 15 [packed = true]; enum Type { POINT = 0; MULTIPOINT = 1; LINESTRING = 2; MULTILINESTRING = 3; POLYGON = 4; MULTIPOLYGON = 5; GEOMETRYCOLLECTION = 6; } } ``` It would be great to have a settable useDefaults attribute, perhaps when the decode method is called, e.g. `var decoded = gbData.decode(new Uint8Array(xmlhttp.response), true);` Or, if the syntax is detected as proto3 just go ahead and set the default value since that's what Google's spec calls for and it's what other languages and protoc-generated code are doing. <issue_comment>username_1: When the field is not set on the wire, its default value is still on the prototype of the created message instance: ```js console.log(decoded.hasOwnProperty("type"), typeof decoded.type, decoded.type); // false, "number", 0 ``` <issue_comment>username_0: Using: `var decodedObj = gbData.toObject(decoded, { defaults: true, enums: String });` The default values are now populated from the enum, whether passing a simple GeoJSON geometry or a varied FeatureCollection. 
This leaves us with a completely usable object. Thanks!<issue_closed>
<issue_start><issue_comment>Title: Cache latest sensor measurement data username_0: - The latest sensor measurement data is stored on the RhtSensorFacade. When a fragment or a listener is registered, it is automatically notified of the latest sensor values. Fragments will immediately have the latest sensor data and will not have to wait until the Humigadget device sends a new notification. <issue_comment>username_0: Did an amend: 84a787e...f406930 <issue_comment>username_0: Did an amend: f406930...270e0e9 <issue_comment>username_0: Did an amend: 270e0e9...3462cd5 <issue_comment>username_1: merged as be51e21e52d6427aa977a0a0bed69f40b445f07e
<issue_start><issue_comment>Title: Error in processing text username_0: Sometimes the lib produces empty strings instead of the actual strings. I can get the right values by changing these lines

```
parser.on('text', function (txt) {
  if (addvalue) {
    cell.val = (cell.val ? cell.val : '') + txt;
  }
});
```

to these

```
parser.on('text', function (txt) {
  console.log(txt);
  //if (addvalue) {
  cell.val = (cell.val ? cell.val : '') + txt;
  //}
});
```

You can look at the file in the attachment [CompetitiveReport_A.xlsx](https://github.com/username_1/xlsx-extract/files/925302/CompetitiveReport_A.xlsx) <issue_comment>username_1: Thank you for reporting this bug! This was caused by xlsx-extract not implementing the full cell specification. The demo file you provided showed that strings of [cell type "inlineStr"](http://officeopenxml.com/SScontentOverview.php) have been ignored. Excel seems never to save files in that way, but other xlsx libraries do. Please note: the fix you used has unintended side effects, so don't use that, just update your version. It is fixed in version 0.0.16 on npm.<issue_closed>
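For reference, these are the two cell encodings involved — an illustrative dispatch only (xlsx-extract's real parser is SAX-based and looks different):

```javascript
// Two ways a cell value appears in sheet XML (see the officeopenxml link above):
//   shared string: <c r="A1" t="s"><v>0</v></c>                   -> index into sharedStrings
//   inline string: <c r="A1" t="inlineStr"><is><t>hi</t></is></c> -> text is inside the cell
function cellText(cell, sharedStrings) {
  if (cell.t === 's') return sharedStrings[Number(cell.v)];
  if (cell.t === 'inlineStr') return cell.is; // text collected from <is><t>
  return cell.v; // numbers, booleans, formula results
}
```

A reader that only handles `t="s"` and plain values silently drops every `inlineStr` cell, which is exactly the symptom reported above.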
<issue_start><issue_comment>Title: Support for different resource types username_0: There are more types of resource in a computing infrastructure landscape than just instances (compute). The current focus on managing instances has led to the development of top-level concepts like groups, instances, and flavors. While these primitives and their operations (rolling update, scale, destroy) are quite general, we need to see how they can be applied to other resource types and, if they don't fit, define new abstractions and models where they make sense. The resource types that are important and need to be addressed include

+ networks
+ volumes
+ security groups / firewalls
+ load balancers

As state convergence of more resource types is addressed, it will also solve the *bootstrapping* problem. Bootstrapping is defined as the process to set up the environment enough such that InfraKit can then take over and ensure the convergence of infrastructure state to user specification. Carried to the extreme, the bootstrapping process can be reduced to the creation of a single node, the seed, from which everything else will be created, as InfraKit drives the entire infrastructure towards the user's specification.<issue_closed> <issue_comment>username_0: In terms of implementation, I see a few immediate tasks:

+ We need to define a schema for specifying a collection of resources. This specification will also need to support dependencies amongst resources to be created. This will be composed into a larger specification document with the existing specifications of groups.
+ Specify a new plugin type / API for creating different resources. Do we want a single plugin that can handle different types of resources (e.g. subnets, security groups) or one for each type? My sense is the former is simpler and better. We can call it the *Resource plugin*.
+ Implement an engine that can perform dependency analysis and create a plan of actions such as creating resources via calls to the resource plugin.

Thoughts? @username_1 @fermayo <issue_comment>username_1: Are we ready for a pull request, or should we settle the schema and SPI first? <issue_comment>username_0: I think so. Let's start with a PR and we can work on that iteratively? Curious to see about the package layout, etc.
<issue_start><issue_comment>Title: fixed MapUtils test to not rely on HashMap's internal ordering username_0: MapUtils seems it should work for any instance of Map but this test is assuming a certain ordering from the HashMap; HashMap's spec does not guarantee any order. Changed to LinkedHashMap, which guarantees the order; the pretty printing should work for any kind of map. <issue_comment>username_1: thx
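The distinction the fix relies on, in miniature (a standalone demo, not the MapUtils test itself): `HashMap` makes no iteration-order guarantee, while `LinkedHashMap` is specified to iterate in insertion order.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

public class MapOrderDemo {
    public static void main(String[] args) {
        Map<String, Integer> hash = new HashMap<>();         // order unspecified
        Map<String, Integer> linked = new LinkedHashMap<>(); // insertion order
        for (String key : new String[] {"z", "m", "a"}) {
            hash.put(key, 1);
            linked.put(key, 1);
        }
        // A test asserting on hash.keySet() order may pass on one JDK and
        // fail on another; linked.keySet() is stable by contract.
        System.out.println(linked.keySet()); // prints "[z, m, a]"
    }
}
```

Since the pretty printer should work for any `Map`, asserting against a `LinkedHashMap` tests the formatting without depending on `HashMap` internals.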
<issue_start><issue_comment>Title: When a fragment 404s and another one comes back afterwards, compoxure should not log an error username_0: You get this error via this: https://github.com/tes/compoxure/blob/1ac79e000d088239091b46ba5571738c2ddaf159/src/middleware/htmlparser.js#L164

Get request for Page X:
- Parse, get 4 fragments (A, B, C, D)
- Make requests for fragments (A, B, C, D)
- Fragment A 404's, so the response ends early with a 404 (https://github.com/tes/compoxure/blob/1ac79e000d088239091b46ba5571738c2ddaf159/src/middleware/htmlparser.js#L105)
- Responses from B, C, D come back, so the response "completes" - triggering this error.

In that case we should just ignore the other fragments for that request <issue_comment>username_0: So I believe that fixed it for fragments that 404, but it is still happening with other errors (we see it when another fragment returns 401). I wonder if we should just not log this error at all? <issue_comment>username_1: Probably easiest, it isn't really an error the more you think about it.<issue_closed> <issue_comment>username_0: Ok done: https://github.com/tes/compoxure/commit/e8d28719c86c9f0b5f846a1e7a42e7b82499b615
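The behaviour discussed in this thread — silently ignoring fragments that complete after the response has already ended — can be modelled with a per-request flag. A sketch with illustrative names, not compoxure's actual internals:

```javascript
// Minimal model of the race between an early error and late fragments.
function createResponse() {
  return { ended: false, status: null, body: [] };
}

// Fragment A 404s: end the response early, exactly once.
function endWithError(res, status) {
  if (res.ended) return;
  res.ended = true;
  res.status = status;
}

// Fragments B, C, D complete later: a late completion against an
// already-ended response is ignored instead of logged as an error.
function onFragmentDone(res, html) {
  if (res.ended) return false;
  res.body.push(html);
  return true;
}
```

The flag makes the "response already completed" case an expected code path rather than an error to report.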
<issue_start><issue_comment>Title: Bugfix: Get() via index fails on string username_0: **Description**: This PR fixes a bug where this test fails due to quotes being stripped prematurely from the string, resulting in `UnknownValueTypeError`.

```
desc: "get string from array",
json: `{"a":[{"b":1},"foo", 3],"c":{"c":[1,2]}}`,
path: []string{"a", "[1]"},
isFound: true,
data: "foo",
```

This is due to `searchKeys()` calling `ArrayEach()` to look up the index match, which calls `Get()`, which strips quotes from strings, resulting in `valueFound` pointing to string contents without quotes, which then looks like an unknown value type. There may be a better suggested fix than the one I did (like just updating the offset directly in the slice), open to suggestions. <issue_comment>username_0: @buger is there a more idiomatic way to fix this? <issue_comment>username_1: Also ran into this; would love to see a merge. Also, is there any reason why we don't break out of that loop once we find the index match? Sure seems like we could, no? <issue_comment>username_0: @username_1 afaik there isn't a way to break out of `ArrayEach()` since no return value is provided from the callback to signal that to the main method. <issue_comment>username_1: @username_0 Ya, you are right, wonder if maybe we should adopt something like `filepath.Walk` whereby the func can return something to stop execution. Seems like it could be a bit of a savings for large arrays. <issue_comment>username_0: @username_1 yeah there are several places I could use something like that in code where I'm using this! With `ObjectEach`, since the callback returns an error, I guess you could use a custom error to signal that, similar to `filepath.Walk`. Maybe we could add something like that here too.
<issue_start><issue_comment>Title: Remove remove_indices username_0: # Which issue does this PR close?

Closes #1482.

# Rationale for this change

# What changes are included in this PR?

# Are there any user-facing changes?

<issue_comment>username_1: Thank you @username_0 and @tustvold
<issue_start><issue_comment>Title: Is it possible to show it on readonly mode? username_0: It seems it's not possible, but in case I'm missing something I thought I'd ask :) Is there any way to disallow editing? I'd like to use the same control to provide the feedback and for showing it on a list. Great work on this, thanks for sharing! <issue_comment>username_1: +1 for this feature and +1 for sharing indeed. It's not hard to subclass it, and return from handle touch earlier, ``` - (void)_handleTouch:(UITouch *)touch ``` then it's readonly. <issue_comment>username_2: Hi! Sorry for the holy-crap-this-is-so-late reply! Like @username_1 said, setting `enabled` to `NO` should do the trick for this. Doing the same for `userInteractionEnabled` could be considered an alternative, but personally I'd rather use `enabled`. Anyway, thanks for using the control and taking the time to create the issue. :+1:<issue_closed>
<issue_start><issue_comment>Title: Explain serialization conversions in docs username_0: E.g. `undefined` and `Infinity` will be converted to `null` when stored, due to `JSON.stringify`. It would probably be meaningful to have all little quirks like this documented somewhere in one place. See #109 <issue_comment>username_1: Hey :) Some of those caveats are explained [on MDN here](https://developer.mozilla.org/en/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify#Description) Is that what we're after?
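For the record, these are the conversions `JSON.stringify` actually performs — worth spelling out in such a doc, since object properties and array elements behave differently:

```javascript
const obj = { a: undefined, b: Infinity, c: NaN, d: () => {} };
// As object properties: undefined and functions are dropped entirely,
// while Infinity and NaN become null.
console.log(JSON.stringify(obj)); // {"b":null,"c":null}
// Inside arrays: undefined must keep its slot, so it becomes null too.
console.log(JSON.stringify([undefined, Infinity, NaN])); // [null,null,null]
```

So a stored object silently loses its `undefined` keys, while a stored array keeps them as `null` — two distinct quirks that both deserve a line in the docs.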
<issue_start><issue_comment>Title: Fix the today's date check username_0: This extends #159 by leaving the function type safe, fixing the function as it was mutating the source `Date` object, and adding a check/error if the date cannot be extracted from the document. <issue_comment>username_1: Tested, and looks OK to me. Assigning now to @plehegar as this depends on w3c/echidna#100, according to @username_0. <issue_comment>username_0: Actually, it's the opposite. This ticket must be merged (and version `1.0.1` must be published) so that @plehegar can work on https://github.com/w3c/echidna/issues/100. I am doing it now so that it doesn't impact @plehegar's progress.
<issue_start><issue_comment>Title: Java : Find and Highlight is not working. Index out of bounds username_0: While trying to run find and highlight, an index out of bounds exception is thrown. I executed the following example: https://github.com/aspose-words/Aspose.Words-for-Java/blob/master/Examples/src/main/java/com/aspose/words/examples/programming_documents/find_replace/FindAndHighlightText.java <issue_comment>username_1: Hi there, Thanks for your inquiry. We have fixed this issue. Please download the code again.<issue_closed>
<issue_start><issue_comment>Title: jQuery .load Problem username_0: Hey, first of all... THX for your addon it work rly nice ;) but i´ve got a rly bad Problem... i load my Content with jQuery (.load function) and when i do that i load this Code: <html> <head> <!--<script src="js/bootstrap/bootstrap-select.js"></script>--> <script src="js/bootstrap/bootstrap-toggle.js"></script> <script language="JavaScript" src="js/webinterface/admin_user.js"></script> </head> <body> <!-- angle-double-left --> <button id="admin_back_bttn" class="menu-button-back fa fa-arrow-circle-left"></button> <div id="admin_page_user" style="margin-bottom:10px;margin-top:10px;"> <div class="container" style="background-color:rgba(248, 218, 180, 0.6);"> <!-- Haupteinstellungen --> <div class="row"> <div class="col-md-12 page-header"> <h1><?php echo $_POST['name'] . " bearbeiten"; ?><small style="margin-left:20px;color:rgba(0,0,0,0.4);font-weight:570;"> <?php echo "Alle Clienteinstellungen bearbeiten"; ?></small></h1> </div> </div> <!-- Globale Rechte --> <div class="row" style="margin-top:20px;margin-bottom:20px;"> <div class="col-md-2"></div> <div class="col-md-8"> <div class="row"> <div class="col-md-6" style="padding-top:10px;"> <?php echo $language['hp_user_blocked']; ?>: </div> <div class="col-md-6" style="text-align:center;"> <input id="Test123" type="checkbox" checked data-toggle="toggle" data-off="<?php echo $language['no']; ?>" data-on="<?php echo $language['yes']; ?>" data-onstyle="success" data-offstyle="danger"> </div> </div> <div class="row" style="margin-top:20px;"> <div class="col-md-6"> <?php echo $language['hp_newest_version']; ?>: </div> <div class="col-md-6" style="text-align:center;"> <font style="color:green;font-weight:bold;">1.00-ALPHA</font> </div> </div> <div class="row" style="margin-top:20px;"> <div class="col-md-6 input-label"> <?php echo "System " . 
$language['language']; ?>:
</div>
<div class="col-md-6">
<select class="selectpicker" data-width="100%" disabled>
<option value="de"><?php echo $language['german']; ?></option>
</select>
</div>
</div>
</div>
<div class="col-md-2"></div>
</div>

If this code loads the first time, everything is fine ;) but if I .load it again (without refreshing the webpage) and click the toggle, the event fires 2 times, and if I do this again then 3 times, and so on... Is there something I can do so that the event is only bound once?<issue_closed>
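What happens here is that every `.load()` re-runs the fragment's script and binds the toggle's handler again without removing the old one, so each click fires once per load. The usual fix is to unbind before rebinding (in jQuery, `$(el).off('change').on('change', fn)`), or to attach one delegated handler outside the loaded fragment. A dependency-free model of the problem and the fix (all names illustrative):

```javascript
// Tiny stand-in for an element's handler list.
function makeElement() {
  return { handlers: [] };
}
function on(el, fn) { el.handlers.push(fn); }
function off(el) { el.handlers = []; }
function fire(el) { el.handlers.forEach(fn => fn()); }

// Unbind before rebinding, so repeated page loads never stack handlers.
function bindOnce(el, fn) {
  off(el);
  on(el, fn);
}
```

With `bindOnce` the script can run on every `.load()` and the toggle still fires exactly once per click.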
<issue_start><issue_comment>Title: Investigate Panic username_0: panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x11220] goroutine 42 [running]: panic(0x3c21f0, 0x1080c010) /usr/local/go/src/runtime/panic.go:500 +0x33c main.getIP(0x108675d8, 0xa, 0x0, 0x0, 0x0, 0x0, 0x0) /home/simon/code/src/github.com/username_0/gobuildslave/commandapi.go:34 +0x220 main.updateState(0x1093f5a0) /home/simon/code/src/github.com/username_0/gobuildslave/commandapi.go:40 +0xdc main.(*Server).List(0x10812230, 0x754121d0, 0x10948b20, 0x5a9288, 0x0, 0x0, 0x0) /home/simon/code/src/github.com/username_0/gobuildslave/commandapi.go:63 +0xac github.com/username_0/gobuildslave/proto._GoBuildSlave_List_Handler(0x3fa978, 0x10812230, 0x754121d0, 0x10948b20, 0x109f1860, 0x0, 0x0, 0x0, 0x0, 0x0) /home/simon/code/src/github.com/username_0/gobuildslave/proto/slave.pb.go:296 +0xf8 google.golang.org/grpc.(*Server).processUnaryRPC(0x109880b0, 0x578e38, 0x10976380, 0x10978870, 0x108107a0, 0x596994, 0x10948b80, 0x0, 0x0) /home/simon/code/src/google.golang.org/grpc/server.go:772 +0x1100 google.golang.org/grpc.(*Server).handleStream(0x109880b0, 0x578e38, 0x10976380, 0x10978870, 0x10948b80) /home/simon/code/src/google.golang.org/grpc/server.go:971 +0xde4 google.golang.org/grpc.(*Server).serveStreams.func1.1(0x10919350, 0x109880b0, 0x578e38, 0x10976380, 0x10978870) /home/simon/code/src/google.golang.org/grpc/server.go:539 +0x9c created by google.golang.org/grpc.(*Server).serveStreams.func1 /home/simon/code/src/google.golang.org/grpc/server.go:540 +0x98<issue_closed>
<issue_start><issue_comment>Title: [Task]: ci: Add workflow to deploy a new version of Helm charts, whenever there's a change in Helm chart templates username_0: ### Is there an existing issue for this? - [X] I have searched the existing issues ### SubTasks _No response_ <issue_comment>username_0: We should deploy the chart from the release branch on a separate domain like `release.helm.appsmith.com` or something like that. The location could be a separate S3 bucket, or a separate folder in the currently used bucket.
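A hedged sketch of what such a workflow could look like — the workflow name, chart path, bucket, and action versions below are all placeholders, not Appsmith's actual values:

```yaml
name: release-helm-chart
on:
  push:
    branches: [release]
    paths:
      - "deploy/helm/**"   # only run when chart templates/values change
jobs:
  package-and-publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Package chart
        run: helm package deploy/helm -d .out
      - name: Rebuild index and sync to the release location
        run: |
          helm repo index .out --url https://release.helm.appsmith.com
          aws s3 sync .out s3://appsmith-helm-release/   # separate bucket or folder
```

Publishing to a separate folder of the existing bucket instead of a new bucket would only change the `s3 sync` destination; the `paths` filter is what keeps the workflow from firing on unrelated commits.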
<issue_start><issue_comment>Title: Uncaught Error: spawn /Users/Mitch/Library/Application Support/com.github.atom.ShipIt/update.7v9wl5y/Atom.app/Contents/Frameworks/Atom Helper.app/Contents/MacOS/Atom Helper ENOENT username_0: Uncaught Error: spawn /Users/Mitch/Library/Application Support/com.github.atom.ShipIt/update.7v9wl5y/Atom.app/Contents/Frameworks/Atom Helper.app/Contents/MacOS/Atom Helper ENOENT **Atom Version**: 0.154.0 **System**: Mac OS X 10.10.1 **Thrown From**: Atom Core ### Steps To Reproduce 1. ... 2. ... ### Stack Trace ``` At events.js:85 Error: spawn /Users/Mitch/Library/Application Support/com.github.atom.ShipIt/update.7v9wl5y/Atom.app/Contents/Frameworks/Atom Helper.app/Contents/MacOS/Atom Helper ENOENT at exports._errnoException (util.js:742:11) at Process.ChildProcess._handle.onexit (child_process.js:1051:32) at child_process.js:1142:20 at process._tickCallback (node.js:378:11) ``` <issue_comment>username_1: Fixed by https://github.com/atom/notifications/pull/62<issue_closed>
<issue_start><issue_comment>Title: T411 domain changed from t411.in to t411.ch username_0: Hi :) first pull request, hope I'm doing it right. t411 changed the domain from t411.in to t411.ch, so I updated the API references to the new domain. Sorry for the separate commits; I edited the files online and haven't found how to combine them into one commit. <issue_comment>username_0: I forgot to mention that the pull request is currently running on my system: 2016-3-1 22:58 VERBOSE details t411-series Produced 100 entries (using t411 plugin)
<issue_start><issue_comment>Title: Maker for matlab linter mlint username_0: By default mlint doesn't output the filename when linting a single file. The workaround is to paste the current buffer name at the beginning of each line of the linter output. This is achieved via

```
'mapexpr': "neomake_bufname.':'.v:val"
```

in the maker configuration, as suggested by @username_1. <issue_comment>username_1: Thanks! Is it guaranteed to have no multiline output? Otherwise another approach would be needed. The Golang makers also used this before, but actually produced output across multiple lines. <issue_comment>username_0: Unfortunately there's no official documentation for mlint, but I've checked on my repo and mlint never produced multiline output. But just in case, I've found an alternative solution which is used by the [MatlabFilesEdition](https://github.com/vim-scripts/MatlabFilesEdition) plugin. Apparently mlint starts to output filenames if more than one file is linted. Here's the solution from the plugin above:

```
" mlint doesn't provide filename information except if multiple
" filenames are given
" With the following command :
" mlint <filename> <filename without extension>
" mlint produces an output like that :
" ========== <filename> ==========
" L x (C y): ID : Message
" L x (C y): ID : Message
" ..
" ..
" ========== <filename without extension> ==========
" L 0 (C 0): MDOTM :Filename 'filename' must end in .m or .M
"
" The filename can then be parsed
CompilerSet makeprg=mlint\ -id\ %\ %<
CompilerSet errorformat=
\%-P==========\ %f\ ==========,
\%-G%>==========\ %s\ ==========,
\%-G%>L\ %l\ (C\ %c):\ MDOTM%m,
\L\ %l\ (C\ %c):\ %m,
\L\ %l\ (C\ %c-%*[0-9]):\ %m,
\%-Q
```

<issue_comment>username_1: Not sure if the hack is better, let's go with the `mapexpr` for now. Otherwise we should also have something like defaults, or a postprocessor instead. <issue_comment>username_0: I agree, the `mapexpr` solution is cleaner. <issue_comment>username_1: You should also add:

```vim
function! neomake#makers#ft#mlint#EnabledMakers() abort
  return ['mlint']
endfunction
```

Ok, you just did it.. :) <issue_comment>username_0: Finally :) <issue_comment>username_1: Thanks a lot. Slightly amended in 6182a88. <issue_comment>username_1: Please add it to the wiki's list of makers. <issue_comment>username_0: Just did. Cheers!
<issue_start><issue_comment>Title: add tests for compiler processors username_0: Currently only half of the processors from here: https://github.com/juttle/juttle/tree/master/lib/compiler/flowgraph are being unit tested; the rest only end up with coverage because they're used by every juttle program that executes. Let's add the individual spec files so we get complete coverage and actually test that they're doing what is expected of them. A few examples are already at: https://github.com/juttle/juttle/tree/master/test/compiler/flowgraph
<issue_start><issue_comment>Title: Visual fixes username_0: We have to do some small fixes on the specification layout to bring it 100% to spec. Side by side comparison: ![screen shot 17](https://cloud.githubusercontent.com/assets/8484061/23491787/50ef026c-feb5-11e6-8e8d-cd7b2cdf332c.png) ![screen shot 18](https://cloud.githubusercontent.com/assets/8484061/23491790/52614b0a-feb5-11e6-8a75-2835d0ecee1a.png) Section title font color must be darker; Check spacing between elements; Fix development specification styles; <issue_comment>username_0: Section title isn't the correct color `tc-gray-70` ![screen shot 6](https://cloud.githubusercontent.com/assets/8484061/23634077/5a89054a-027d-11e7-8271-fafd77271a16.png)<issue_closed> <issue_comment>username_0: fixed <issue_comment>username_1: @username_0 please double-check if needed or close.
<issue_start><issue_comment>Title: Keymaps do not register for linux -- Needs 'atom-workspace' username_0: Hello, I figured this issue would be too small to warrant a pull request. The default keybindings (at least for linux) requires `atom-workspace` in the selector string in order to function properly. In other words... **This..** ```cson '.platform-linux': 'ctrl-t': 'fuzzy-finder:toggle-file-finder' 'ctrl-p': 'fuzzy-finder:toggle-file-finder' 'ctrl-b': 'fuzzy-finder:toggle-buffer-finder' 'ctrl-B': 'fuzzy-finder:toggle-git-status-finder' ``` **Should be this...** ```cson '.platform-linux atom-workspace': 'ctrl-t': 'fuzzy-finder:toggle-file-finder' 'ctrl-p': 'fuzzy-finder:toggle-file-finder' 'ctrl-b': 'fuzzy-finder:toggle-buffer-finder' 'ctrl-B': 'fuzzy-finder:toggle-git-status-finder' ``` <issue_comment>username_1: defaults works fine for me on arch linux <issue_comment>username_2: @username_0 Can you still reproduce this on Atom 1.7.4 in safe mode (`atom --safe`)?<issue_closed> <issue_comment>username_0: Can't reproduce this anymore. Must have been fixed in some previous version (or, more realistically, some weird glitch when I first posted this issue).
<issue_start><issue_comment>Title: Update Json.net version to the latest (10.0.1) username_0: We need the [latest](https://www.nuget.org/packages/Newtonsoft.Json/10.0.1) Json.net package which has the fix for the issue: https://github.com/aspnet/Mvc/issues/5413 <issue_comment>username_1: @username_0 this is all done now, right?<issue_closed> <issue_comment>username_0: Oh yes.
<issue_start><issue_comment>Title: Support Cats' Xor as an Endpoint result username_0: As [discussed](https://gitter.im/finagle/finch?at=5679ab54653b30761d75be97) on Gitter. It would be useful to support endpoints that return `Xor[A, B]`, representing a failure and a success case. <issue_comment>username_1: We've made the same thing with scalaz `\/` in glopart. I could port some code here. But it would be nice to know how to work with the left branch in the end, when we fold it into `Response`. I mean, should we just apply an encoder to an error? And what about HTTP status codes? <issue_comment>username_0: @username_1 that's a great question actually. I probably need to think this over, but for now I imagine this to be something like this:

```scala
case class Error(message: String)

val a: Endpoint[Xor[Error, Int]] = get("div" / int / int) { (a: Int, b: Int) =>
  if (b == 0) Output.payload(Xor.left(Error("Can't divide by 0")), Status.BadRequest)
  else Ok(a / b)
}
```

So basically this should work fine (no need to change `io.finch.internal.*`) as long as we have `EncodeResponse` defined for `Xor[A: EncodeResponse, B: EncodeResponse]`. <issue_comment>username_1: Still need a folder for `Xor[Response, Response] => Response`, but yea, I got it. <issue_comment>username_1: Well, it would be nice to pick the `Content-Type` from the instance of `EncodeResponse` which is applied to `A` or `B`, depending on the `Xor` content. But with the current API it seems impossible:

```
new EncodeResponse[Xor[A, B]] {
  override def apply(rep: Xor[A, B]): Buf = ???
  override def contentType: String = ???
}
```

<issue_comment>username_0: Fixed in #518.<issue_closed>
<issue_start><issue_comment>Title: Remove title() method from MicroFragment username_0: As mentioned in the discussion, there is probably no need for that on the `MicroFragment` interface; many implementations currently just throw an exception.

---

Maybe screens that would use that generally, like wizards, could use `TitledMicroFragment`s if needed.
<issue_start><issue_comment>Title: [Scheduler]org.zstack.scheduler.APIUpdateSchedulerTriggerMsg username_0: ### Description
Update a scheduler trigger.

### API
`org.zstack.scheduler.APIUpdateSchedulerTriggerMsg`

### Example
Administrators can use UpdateSchedulerTrigger to update a scheduler trigger. For example:
```
UpdateSchedulerTrigger uuid=d1652f4da61c45f9b9ce68b69f3d0ae4 name=new description=new
```
Result:
```
{
  "inventory": {
    "createDate": "Jun 19, 2017 3:18:56 PM",
    "description": "new",
    "jobsUuid": [],
    "lastOpDate": "Jun 19, 2017 3:18:56 PM",
    "name": "new",
    "repeatCount": 1,
    "schedulerType": "simple",
    "startTime": "Jun 19, 2017 3:18:56 PM",
    "stopTime": "Jun 19, 2017 3:18:56 PM",
    "uuid": "d1652f4da61c45f9b9ce68b69f3d0ae4"
  },
  "success": true
}
```
### Parameters
| Name | Description | Optional | Since version |
| ---- | --- | --- | --- |
| uuid | | | |
| name | | yes | |
| description | | yes | |

### Remarks

### Feature_link
https://github.com/zstackio/functional-spec/issues/29
<issue_start><issue_comment>Title: Unable to use raw Blather::Stream username_0: # Blather::Stream can be used to build your own handler system if Blather's
# doesn't suit your needs.

https://github.com/adhearsion/blather/blob/develop/lib/blather/stream.rb#L5

The example in the code does not work; even when started inside an EM block I get:
```
NameError: undefined local variable or method `start' for #<Blather::Stream:0x007fdcb2694010>
from /Users/username_0/.rbenv/versions/2.2.0/lib/ruby/gems/2.2.0/gems/blather-1.1.4/lib/blather/stream.rb:160:in `connection_completed'
```
I came to the raw Stream because I need to initialize a new session in a custom way, sending some custom stanzas. <issue_comment>username_1: Please provide the code you used. <issue_comment>username_1: Additionally, try using `Blather::Stream::Client`. <issue_comment>username_0:
```
# @example Create a new stream and handle it with our own class
#     class MyClient
#       attr :jid
#
#       def post_init(stream, jid = nil)
#         @stream = stream
#         self.jid = jid
#         p "Stream Started"
#       end
#
#       # Pretty print the stream
#       def receive_data(stanza)
#         pp stanza
#       end
#
#       def unbind
#         p "Stream Ended"
#       end
#
#       def write(what)
#         @stream.write what
#       end
#     end
#
#     client = Blather::Stream.start MyClient.new, "jid@domain/res", "pass"
#     client.write "[pure xml over the wire]"
```
I can't use it, since it tries to register when I don't need it. Also I need to send some specific stanzas before the actual login. <issue_comment>username_1: Actually, [it doesn't](https://github.com/adhearsion/blather/blob/develop/lib/blather/stream/client.rb). You're thinking of `Blather::Client`. Might you perhaps contribute a fix to https://github.com/adhearsion/blather/issues/151 to resolve that problem, though? We're always in need of contributors. <issue_comment>username_0: Actually, it does.
```
class HandlingClient
  attr_accessor :jid

  def post_init
    # stream started
    puts 'init'
  end

  def unbind
    # stream closed
    puts 'unbind'
  end

  def receive_data(stanza_or_error)
    # handle incoming stanza or error
    puts stanza_or_error
  end
end

handler = HandlingClient.new

EM.run {
  Blather::Stream::Client.start handler, jid, password
}
```
And I got:
```
D, [2016-01-06T23:11:57.696894 #33273] DEBUG -- : USING JID: user_name@host_name
D, [2016-01-06T23:11:57.736473 #33273] DEBUG -- : SENDING: (/stream/client.rb:12:in `start') <stream:stream to='host_name' xmlns='jabber:client' xmlns:stream='http://etherx.jabber.org/streams' version='1.0' xml:lang='en'>
D, [2016-01-06T23:11:57.777742 #33273] DEBUG -- : RECEIVING (stream) <stream:stream xmlns:stream="http://etherx.jabber.org/streams" id="1558387797" from="host_name" version="1.0" lang="en"/>
D, [2016-01-06T23:11:57.778394 #33273] DEBUG -- : RECEIVING (features) <stream:features xmlns:stream="http://etherx.jabber.org/streams"> <mechanisms xmlns="urn:ietf:params:xml:ns:xmpp-sasl"> <mechanism>PLAIN</mechanism> <mechanism>DIGEST-MD5</mechanism> <mechanism>SCRAM-SHA-1</mechanism> </mechanisms> <register xmlns="http://jabber.org/features/iq-register"/> <sm xmlns="urn:xmpp:sm:3"/> </stream:features>
D, [2016-01-06T23:11:57.778996 #33273] DEBUG -- : SENDING: (/stream/features/sasl.rb:102:in `authenticate') <auth xmlns="urn:ietf:params:xml:ns:xmpp-sasl" mechanism="DIGEST-MD5"/>
D, [2016-01-06T23:11:57.818931 #33273] DEBUG -- : RECEIVING (challenge) <challenge xmlns="urn:ietf:params:xml:ns:xmpp-sasl">bm9uY2U9IjMxMDQ3NTIwMCIscW9wPSJhdXRoIixjaGFyc2V0PXV0Zi04LGFsZ29yaXRobT1tZDUtc2Vzcw==</challenge>
D, [2016-01-06T23:11:57.819953 #33273] DEBUG -- : SENDING: (/stream/features/sasl.rb:168:in `respond') <response
xmlns="urn:ietf:params:xml:ns:xmpp-sasl">bm9uY2U9IjMxMDQ3NTIwMCIsY2hhcnNldD11dGYtOCx1c2VybmFtZT0iKzM4MDYzNDg5NDgzNSIscmVhbG09InZlZ2EuYWNjZWxpb3IuY29tIixjbm9uY2U9ImUyOGE0ZmM3MGJkYzI0MTQ5MWU5OTg3M2U5ZmU1MjY0IixuYz0wMDAwMDAwMSxxb3A9YXV0aCxkaWdlc3QtdXJpPSJ4bXBwL3ZlZ2EuYWNjZWxpb3IuY29tIixyZXNwb25zZT1iYzE1NjA1M2UwM2ZmZjc5M2IzMjAwYmYzNDcxOWNlNw==</response> D, [2016-01-06T23:11:57.861320 #33273] DEBUG -- : RECEIVING (challenge) <challenge xmlns="urn:ietf:params:xml:ns:xmpp-sasl">cnNwYXV0aD03YWMxZmMxZTdjYjg1MjFiMzgzOGI3NWRkOGVjNTk0OQ==</challenge> D, [2016-01-06T23:11:57.861820 #33273] DEBUG -- : SENDING: (/stream/features/sasl.rb:168:in `respond') <response xmlns="urn:ietf:params:xml:ns:xmpp-sasl"/> D, [2016-01-06T23:11:57.902055 #33273] DEBUG -- : RECEIVING (success) <success xmlns="urn:ietf:params:xml:ns:xmpp-sasl"/> D, [2016-01-06T23:11:57.902875 #33273] DEBUG -- : SENDING: (/stream/client.rb:12:in `start') <stream:stream to='host_name' xmlns='jabber:client' xmlns:stream='http://etherx.jabber.org/streams' version='1.0' xml:lang='en'> D, [2016-01-06T23:11:57.943719 #33273] DEBUG -- : RECEIVING (stream) <stream:stream xmlns:stream="http://etherx.jabber.org/streams" id="2058319209" from="host_name" version="1.0" lang="en"/> D, [2016-01-06T23:11:57.944176 #33273] DEBUG -- : RECEIVING (features) <stream:features xmlns:stream="http://etherx.jabber.org/streams"> <bind xmlns="urn:ietf:params:xml:ns:xmpp-bind"/> <session xmlns="urn:ietf:params:xml:ns:xmpp-session"/> <register xmlns="http://jabber.org/features/iq-register"/> <sm xmlns="urn:xmpp:sm:3"/> </stream:features> D, [2016-01-06T23:11:57.944975 #33273] DEBUG -- : SENDING: (/stream/features/resource.rb:39:in `bind') <iq type="set" id="blather0001"> <bind xmlns="urn:ietf:params:xml:ns:xmpp-bind"/> </iq> D, [2016-01-06T23:11:57.985178 #33273] DEBUG -- : RECEIVING (iq) <iq id="blather0001" type="result"> <bind xmlns="urn:ietf:params:xml:ns:xmpp-bind"> <jid>user_name@host_name/11660279821452114717961106</jid> </bind> </iq> D, 
[2016-01-06T23:11:57.985481 #33273] DEBUG -- : USING JID: user_name@host_name/11660279821452114717961106
D, [2016-01-06T23:11:57.985944 #33273] DEBUG -- : SENDING: (/stream/features/session.rb:40:in `session') <iq type="set" id="blather0003" to="host_name"> <session xmlns="urn:ietf:params:xml:ns:xmpp-session"/> </iq>
D, [2016-01-06T23:11:58.026644 #33273] DEBUG -- : RECEIVING (iq) <iq type="result" from="host_name" id="blather0003"> <session xmlns="urn:ietf:params:xml:ns:xmpp-session"/> </iq>
D, [2016-01-06T23:11:58.028329 #33273] DEBUG -- : SENDING: (/stream/features/register.rb:22:in `receive_data') <iq type="set" id="blather0005"> <query xmlns="jabber:iq:register"> <username>user_name</username> <password>15b9680c-cee0-5c35-bc99-dc80e711be38</password> </query> </iq>
D, [2016-01-06T23:11:58.069108 #33273] DEBUG -- : RECEIVING (iq) <iq from="user_name@host_name" to="user_name@host_name/11660279821452114717961106" type="error" lang="en" id="blather0005"> <query xmlns="jabber:iq:register"> <username>user_name</username> <password>15b9680c-cee0-5c35-bc99-dc80e711be38</password> </query> <error code="503" type="cancel"> <service-unavailable xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/> </error> </iq>
D, [2016-01-06T23:11:58.069647 #33273] DEBUG -- : SENDING: (/stream.rb:248:in `stop') </stream:stream>
D, [2016-01-06T23:11:58.110034 #33273] DEBUG -- : RECEIVING (end) <stream:end xmlns:stream="http://etherx.jabber.org/streams"/>
user_name15b9680c-cee0-5c35-bc99-dc80e711be38
```
The same use-case works perfectly fine with `xmpp4r`. <issue_comment>username_1: You are indeed correct - we do not have a Stream which does not respond to features before establishing a session. Is it indeed permitted to send stanzas before session establishment? Could you tell us more about your use-case? <issue_comment>username_0: This is a mobile messenger like WhatsApp or Viber. I need to send a few iq stanzas for the authorization process.
It should be similar to WhatsApp:

1) `iq:authorization` with a phone number to request an SMS with an OTP;
2) `iq:confirm_authorization` with the received OTP to get a password.

BTW, my debug log is not about sending custom stanzas, but about the next step: the actual login with the received password. <issue_comment>username_1: Unfortunately you didn't fully answer my questions. The best I can do is say you'll need to understand how Stream's state machine works and hook into that. <issue_comment>username_0: Sorry, I missed the last part. Regarding `xmpp4r`: with it I can split the connection and auth phases like so:
```
require 'xmpp4r/client'
require 'xmpp4r/roster'
include Jabber

# just to see what's going on
Jabber.debug = true

cli = Client.new jid
cli.connect

if password.empty?
  # send some custom stanzas to get a password
end

cli.auth(password)
```
And here is the log for the `connect` method; it just sends the initial stanza and parses the responses:
```
D, [2016-01-07T15:31:23.663583 #16684] DEBUG -- : RESOLVING: _xmpp-client._tcp.example.com (SRV)
D, [2016-01-07T15:31:23.798183 #16684] DEBUG -- : CONNECTING: example.com:5222
D, [2016-01-07T15:31:23.840103 #16684] DEBUG -- : SENDING: <stream:stream xmlns:stream='http://etherx.jabber.org/streams' xmlns='jabber:client' to='example.com' xml:lang='en' version='1.0' >
D, [2016-01-07T15:31:23.881185 #16684] DEBUG -- : RECEIVED: <stream:stream from='example.com' id='3940058200' xml:lang='en' xmlns:stream='http://etherx.jabber.org/streams' version='1.0' xmlns='jabber:client'/>
D, [2016-01-07T15:31:23.883814 #16684] DEBUG -- : RECEIVED: <stream:features> <mechanisms xmlns='urn:ietf:params:xml:ns:xmpp-sasl'> <mechanism>PLAIN</mechanism><mechanism>DIGEST-MD5</mechanism> <mechanism>SCRAM-SHA-1</mechanism></mechanisms> <register xmlns='http://jabber.org/features/iq-register'/><sm xmlns='urn:xmpp:sm:3'/> </stream:features>
D, [2016-01-07T15:31:23.884107 #16684] DEBUG -- : FEATURES: waiting...
D, [2016-01-07T15:31:23.885272 #16684] DEBUG -- : FEATURES: received
D, [2016-01-07T15:31:23.885957 #16684] DEBUG -- : PROCESSING: <stream:features xmlns='jabber:client'> <mechanisms xmlns='urn:ietf:params:xml:ns:xmpp-sasl'> <mechanism>PLAIN</mechanism><mechanism>DIGEST-MD5</mechanism> <mechanism>SCRAM-SHA-1</mechanism></mechanisms> <register xmlns='http://jabber.org/features/iq-register'/> <sm xmlns='urn:xmpp:sm:3'/></stream:features> (REXML::Element)
D, [2016-01-07T15:31:23.886045 #16684] DEBUG -- : FEATURES: waiting finished
D, [2016-01-07T15:31:23.886105 #16684] DEBUG -- : TRYING stanzacbs...
D, [2016-01-07T15:31:23.886237 #16684] DEBUG -- : TRYING message/iq/presence/cbs...
```
<issue_start><issue_comment>Title: Running make.py reports an error username_0: ```shell
lentin@instance-1:~$ git clone https://github.com/username_1/whitepac
Cloning into 'whitepac'...
remote: Counting objects: 51, done.
remote: Total 51 (delta 0), reused 0 (delta 0), pack-reused 51
Unpacking objects: 100% (51/51), done.
Checking connectivity... done.
lentin@instance-1:~$ cd whitepac/
lentin@instance-1:~/whitepac$ python3 make.py
  File "make.py", line 69
    results[f"{prefix}/{cidr}"] = 1
                              ^
SyntaxError: invalid syntax
lentin@instance-1:~/whitepac$
```
<issue_comment>username_1: Upgrade Python to v3.6+<issue_closed>
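The `SyntaxError` above comes from the f-string literal, which only parses on Python 3.6 and later. As a minimal sketch (the `prefix`/`cidr` values here are hypothetical, chosen only to illustrate the syntax), the same assignment can be written with `str.format()` when upgrading is not an option:

```python
# f-strings (PEP 498) need Python 3.6+; str.format() works on older versions too.
prefix = "1.2.3.0"  # hypothetical values for illustration
cidr = 24

results = {}
# Python 3.6+ only:
#   results[f"{prefix}/{cidr}"] = 1
# Equivalent, backwards-compatible spelling:
results["{}/{}".format(prefix, cidr)] = 1

print(results)  # {'1.2.3.0/24': 1}
```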
<issue_start><issue_comment>Title: ImportWarning on init username_0: I'm using multidict 3.1.0 and Python 3.6.1. I get the following warning when my application starts: /usr/lib/python3.6/importlib/_bootstrap.py:205: ImportWarning: can't resolve package from __spec__ or __package__, falling back on __name__ and __path__ Running the same application with `PYTHONWARNINGS=error`, I can track the issue down to multidict: ... File "usr/lib/python3.6/site-packages/aiohttp/__init__.py", line 5, in <module> File "usr/lib/python3.6/site-packages/aiohttp/hdrs.py", line 2, in <module> File "usr/lib/python3.6/site-packages/multidict/__init__.py", line 24, in <module> File "multidict/_multidict.pyx", line 9, in init multidict._multidict (multidict/_multidict.c:16318) ImportWarning: can't resolve package from __spec__ or __package__, falling back on __name__ and __path__ Seems similar to #79. Apparently, #81 didn't fix the issue completely. <issue_comment>username_1: I have no idea what to do. Ultimately it should be fixed by Cython. <issue_comment>username_0: Oh well, I don't have any clue how to fix this either. Should I close this? <issue_comment>username_1: I guess so.<issue_closed> <issue_comment>username_2: Looks like Cython has fixed this; multidict just needs a rebuild: https://github.com/aio-libs/multidict/issues/304
<issue_start><issue_comment>Title: MMU page table username_0: Hi, I was wondering whether there is a page table implementation in the secure kernel. Here is my question in detail: Normally in Linux, for the translation of a virtual address to a physical address, the system uses the TLB. On a TLB miss, the physical page is found via the page table: from the table index of the virtual address and the table base address, the first-level descriptor is obtained; the first-level descriptor is then the input to the second-level page table, from which the physical page is obtained and filled into the TLB. If the page table walk misses, the physical page is brought in from external memory (RAM) and the page table is filled with the entry and its attributes. Does this work the same way in OP-TEE OS? If so, could you point me to the code that creates the page table for a trusted app and updates the page table? I went through the documentation of OP-TEE OS. It mentions a "Pager", which I think is different from the page table I described. Am I correct? Thank you! Ben <issue_comment>username_1: The page table is described in the architecture document. A page table is normal memory, no different from other memory. The TLB is something like a cache; I think you may have mixed up the TLB with the page table. For MMU operations, you can refer to `core/arch/arm/mm/core_mmu_lpae.c`. <issue_comment>username_2: We are closing this issue/question, why?
- We believe the question has been answered.
- Quite often, when we have answered the questions, the issuer forgets to close the issue, which results in many open issues for us and makes it hard to keep track of the ones actually waiting for an answer.
- In some cases we ask follow-up questions, but do not hear back from the one creating the issue.

If you however feel that you have additional questions or still think this is an issue, then please feel free to re-open the issue. When closing issues we will only show this entire message once. // OP-TEE admins<issue_closed>
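The two-level walk described in the question (virtual address, then L1 index into the first-level table, then L2 index into the second-level table, then page offset) can be sketched in Python. The bit split below follows the classic ARM short-descriptor small-page layout (1 MB L1 granularity, 4 KB pages), and the table contents are invented purely for illustration; this is not OP-TEE's actual implementation, which lives in `core/arch/arm/mm/core_mmu_lpae.c`:

```python
L1_SHIFT = 20          # bits [31:20]: index into the first-level table
L2_SHIFT = 12          # bits [19:12]: index into the second-level table
L2_MASK = 0xFF
OFFSET_MASK = 0xFFF    # bits [11:0]: offset within a 4 KB page

# Hypothetical tables: an L1 entry points at an L2 table, and an L2
# entry holds a physical page-frame base address.
l2_table = {0x34: 0x80400000}
l1_table = {0x001: l2_table}

def translate(vaddr):
    l1_index = vaddr >> L1_SHIFT
    l2_index = (vaddr >> L2_SHIFT) & L2_MASK
    offset = vaddr & OFFSET_MASK
    l2 = l1_table[l1_index]     # walk step 1: first-level descriptor
    frame = l2[l2_index]        # walk step 2: second-level descriptor
    return frame | offset       # physical address

print(hex(translate(0x00134ABC)))  # 0x80400abc
```

A hardware TLB simply caches the result of this walk so the two memory lookups can be skipped on subsequent accesses to the same page.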
<issue_start><issue_comment>Title: defaultValue behaves differently for US based numbers (+1201.....) username_0: When I set the defaultValue to something like "+12015555555" or "+12025555555", the onPhoneNumberChanged handler is called twice, with the last call being: false "(202) 555-5555" Object {} "" When the default value does not start with the +, it recognises the number: true "1 202-555-5555" Object {name: "United States", iso2: "us", dialCode: "1", priority: 0, areaCodes: null} "+12025555555" This behaviour is reversed for all other countries I've tried (gb, ca, au), i.e. when the number is preceded by a +, onPhoneNumberChanged returns the correctly detected country code and number. It seems that defaultCountry does not affect this behaviour; I've run the same tests with it set to the correct country, a different country, an empty string, and not declared at all, and it has no effect. My workaround at the moment: if the country code I'm storing is 'us', I regex-remove the + from the number.<issue_closed> <issue_comment>username_1: Fixed in v2.0.7.
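The workaround described above can be sketched as a small normalization step. This is shown in Python purely for illustration (the actual component is JavaScript), and the function and parameter names are hypothetical:

```python
import re

def normalize_number(number, iso2):
    # Workaround sketch: strip a leading '+' only when the stored
    # country code is 'us', where the '+'-prefixed form misparses.
    if iso2 == "us":
        return re.sub(r"^\+", "", number)
    return number

print(normalize_number("+12025555555", "us"))   # 12025555555
print(normalize_number("+442075555555", "gb"))  # unchanged
```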
<issue_start><issue_comment>Title: Ability to disable Apache and PHP-FPM restart username_0: Hi @w00fz, here's the pull request. Just tell me if there are any changes to be made. Thanks! <issue_comment>username_0: Hi, I will redo this pull request since my other account was reflected as the commit author. My bad. Closing this first.
<issue_start><issue_comment>Title: [bluebird_v3.x.x] Promisable type option to Promise/input param on some methods username_0: Fixes #1082 Changes mirror the current API at http://bluebirdjs.com/docs/api-reference.html I'm no pro with this lib, so I'll reach out to the issue's OP and have him check that my changes are accurate. <issue_comment>username_0: Updated the specs to match the changes; this should now be good to merge.
<issue_start><issue_comment>Title: Grant/revoke permissions API username_0: It would be great to have an API which allows granting and revoking permissions on new Android M+ devices. <issue_comment>username_1: This is easily solved by just installing the app with the `-g` flag to grant all permissions. No need for Test Butler to do this.<issue_closed>