<issue_start><issue_comment>Title: COMPRESS_PRECOMPILERS break css url() username_0: Since the precompiled files are served from another location, relative links in css files do not work anymore. The problem can be solved by setting the output dir to an empty string:

COMPRESS_OUTPUT_DIR = ''

However, it would be great to have processed urls in the css file that link to the right locations of images.
<issue_comment>username_1: @woeye Having just stumbled across this problem myself, I propose to incorporate this workaround into django-libsass: https://github.com/torchbox/django-libsass/issues/8 +1 for a general solution to this within django-compressor, though!
<issue_comment>username_2: With django-compressor 1.5: the fix for #467 (db731cf01685001cf36188c4a70c44d3eb4dcc94) changed the CssAbsoluteFilter, which breaks the 'PatchedSCSSCompiler' workaround. A quick fix for that is adding the filename argument to the filter call:

```python
from compressor.filters.css_default import CssAbsoluteFilter
from django_libsass import SassCompiler


class PatchedSCSSCompiler(SassCompiler):
    def input(self, **kwargs):
        content = super(PatchedSCSSCompiler, self).input(**kwargs)
        kwargs.setdefault('filename', self.filename)
        return CssAbsoluteFilter(content).input(**kwargs)
```
<issue_comment>username_3: Thanks @username_2, I was just trying to track this issue down. Thanks @woeye for the original patch... I now have this working in local dev mode and in a Heroku deployment with Heroku-side collectstatic and compression.
<issue_comment>username_4: +1 for a permanent solution. In my case, I've had the problem with django-libsass, and solved it by using @username_2's solution. Thank you!<issue_closed>
<issue_comment>username_6: shoutout to @username_0 :) i thought it was really funny and random to see you here. (johannes linke speaking)
<issue_comment>username_0: Haha, awesome to meet you here @username_6, and thanks for working on this issue. I don't quite remember what I was working on back then, but it looks like a relevant issue judging from the number of people in this thread.
<issue_start><issue_comment>Title: Top 100 Articles from Punjabi Wikipedia username_0: I'd like to volunteer to translate the templates required for displaying the Top 100 articles from Punjabi Wikipedia. <pa.wikipedia.org>
<issue_comment>username_1: Great, thank you! You can make a copy of [this file](https://github.com/hatnote/top/blob/master/top/templates/strings/en_strings.yaml) to submit a translation into Punjabi.
<issue_comment>username_0: Hello, I have submitted the file in a Pull Request.
<issue_comment>username_1: Looks good. The Punjabi charts are now up here: http://top.hatnote.com/pa/<issue_closed>
<issue_start><issue_comment>Title: (Re)introduce progress bar username_0: A progress bar was originally implemented with ezcConsoleProgressbar, but this was removed by commit 31d5d775f8bedb6a189ed8289047c3d87b04e7e4 This commit re-introduces a progress bar using Symfony\Component\Console\Helper\ProgressHelper <issue_comment>username_1: I lost interest in this project a long time ago, mostly due to the [limitations](https://github.com/username_1/phpdcd#limitations) of its implementation. Today I [added](https://github.com/username_1/phpdcd/commit/69885271da311d3e849706306ae038f775d1ee99) a note that this project is no longer maintained to its documentation and closed all open tickets. Feel free to continue to work on this in a fork.
<issue_start><issue_comment>Title: cy.request doesn't allow me to go to the next UI test cases username_0: I created two files:

API test --> does the login, gets a token, and saves it to a fixture file.
Spec test --> does a REAL login using username and password; once that test case passes, the next test first gets the response by using cy.request with the token saved in step 1, and then searches for the data in the search bar based on the response keys.

API Test File:
``` ```
Once the above test is executed, it redirects back to the first test and does not go on to the further tests.
<issue_comment>username_1: Unfortunately we have to close this issue as there is not enough information to reproduce the problem. This does not mean that your issue is not happening - it just means that we do not have a path to move forward. Can you provide a video of this behavior? I'm having a hard time understanding where the unexpected behavior is. Or can you provide the full test code to reproduce the issue on our end? Please comment in this issue with a reproducible example and we will consider reopening the issue.<issue_closed>
<issue_start><issue_comment>Title: Re-order segments? username_0: Is it possible to re-order the segments? <issue_comment>username_1: Not at the moment, but an interesting idea. <issue_comment>username_0: OK, thanks.<issue_closed> <issue_comment>username_2: You can reorder the segments manually in the theme itself, of course. <issue_comment>username_1: I created this issue to implement your idea without hacking the internals: https://github.com/username_1/bullet-train-oh-my-zsh-theme/issues/115
<issue_start><issue_comment>Title: owncast crashes after incoming stream username_0: ``` $ go run *.go INFO[0000] Owncast v0.0.0/localdev (unknown) INFO[0000] Resetting file directories to a clean slate. ----- Stream offline! Showing offline state! Enabling passthrough video /usr/local/bin/ffmpeg -hide_banner -i doc/logo.png -i webroot/thumbnail.png -filter_complex "[0:v]scale=2640:2360[bg];[bg][1:v]overlay=200:250:enable='between(t,0,3)'" -f hls -hls_time 4 -hls_playlist_type event -master_pl_name stream.m3u8 -strftime 1 -use_localtime 1 -hls_flags temp_file -tune zerolatency -g 50 -keyint_min 50 -framerate 25 -preset superfast -sc_threshold 0 -profile:v main -pix_fmt yuv420p -var_stream_map "v:0" -hls_segment_filename webroot/hls/%v/offline-%s.ts webroot/hls/%v/stream.m3u8 <nil> INFO[0000] RTMP server is listening for incoming stream on port 1935. INFO[0000] Starting public web server on port 8181 Enabling passthrough video PANI[0041] exit status 1 panic: (*logrus.Entry) (0x52af2a0,0xc000250070) goroutine 101 [running]: github.com/sirupsen/logrus.Entry.log(0xc00013a000, 0xc0005c8570, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /Users/bradleyhilton/go/pkg/mod/github.com/sirupsen/logrus@v1.6.0/entry.go:259 +0x335 github.com/sirupsen/logrus.(*Entry).Log(0xc000250000, 0xc000000000, 0xc000082ee0, 0x1, 0x1) /Users/bradleyhilton/go/pkg/mod/github.com/sirupsen/logrus@v1.6.0/entry.go:287 +0xeb github.com/sirupsen/logrus.(*Logger).Log(0xc00013a000, 0xc000000000, 0xc000082ee0, 0x1, 0x1) /Users/bradleyhilton/go/pkg/mod/github.com/sirupsen/logrus@v1.6.0/logger.go:193 +0x7d github.com/sirupsen/logrus.(*Logger).Panic(...) /Users/bradleyhilton/go/pkg/mod/github.com/sirupsen/logrus@v1.6.0/logger.go:234 github.com/sirupsen/logrus.Panic(...) /Users/bradleyhilton/go/pkg/mod/github.com/sirupsen/logrus@v1.6.0/exported.go:129 main.verifyError(...) /Users/bradleyhilton/go/src/github.com/username_0/owncast/utils.go:38 main.fireThumbnailGenerator(0xc00020cff0, 0xb) /Users/bradleyhilton/go/src/github.com/username_0/owncast/thumbnailGenerator.go:71 +0x4dd main.startThumbnailGenerator.func1(0xc000218910, 0xc00020cff0, 0xb, 0xc00027e6c0) /Users/bradleyhilton/go/src/github.com/username_0/owncast/thumbnailGenerator.go:21 +0x4f created by main.startThumbnailGenerator /Users/bradleyhilton/go/src/github.com/username_0/owncast/thumbnailGenerator.go:17 +0x97 exit status 2 ``` Here is my streaming preferences from ProPresenter 7.1.1: <img width="385" alt="Screen Shot 2020-06-18 at 5 24 19 PM" src="https://user-images.githubusercontent.com/850391/85077720-90216e00-b188-11ea-998b-0d13da049ed2.png"> However, when streaming from OBS at 1920x1080 60fps it doesn't crash. <issue_comment>username_1: Interesting! That's a good find. If you want to uncomment https://github.com/username_1/owncast/blob/master/thumbnailGenerator.go#L68 you can see the actual ffmpeg invocation. It should be using the most recent segment generated and build a static image out of it. If you want to send along that invocation and that `.ts` file that it's trying to use I can reproduce it on my side and find what it's unhappy about. Also, as an experiment, could you try turning off `passthrough` in the config file? I'm curious if re-encoding the incoming stream would fix it. 
<issue_comment>username_0: [stream-20200618-1592522663.ts.zip](https://github.com/username_1/owncast/files/4801684/stream-20200618-1592522663.ts.zip) `ffmpeg -y -i webroot/hls/0/stream-20200618-1592522663.ts -ss 00:00:01.000 -vframes 1 webroot/thumbnail.png` Curiously, I noticed that I was getting a ton of dropped frames when running locally. And generally when the dropped frames went up this happened. I turned off the `passthrough` and it happened a lot less but still occurred every so often. <issue_comment>username_0: Does the server need to panic when the thumbnail generation fails? Or can that fail silently or without panic? <issue_comment>username_1: Not at all, I've defaulted to panics simply so it would draw attention to things that fail so they can be fixed. <issue_comment>username_1: Awesome, thank you for sending that along, it helped me figure out what's going on with creating a thumbnail from that specific segment. It tries to pull a thumbnail 1 second into a segment, and this particular segment is less than a second long. Changing it to pull from 00:00:00 fixes it: `ffmpeg -i stream-20200618-1592522663.ts -ss 00:00:00.000 -vframes 1 test.png` It would actually make the most sense to find a way to pull the last frame out of the segment instead of a specific timestamp. The last frame would be most relevant for a thumbnail, anyway. And then regardless if a segment is a minute long or less than a second long it won't care. I'm going to look into that. Thanks again! <issue_comment>username_1: As for thumbnail generation causing additional issues with dropped frames, I guess it could make sense, since it's doing more work at that particular point in time. Since thumbnail generation is low-priority maybe there's a way we can tell it to not work so hard. I'll look into that too. <issue_comment>username_0: Makes sense to me. I keep getting it now that I've moved to a DO $5 machine. And what's really interesting is that whenever I start streaming, it will be good for a while but then it suddenly starts dropping a ton of frames. Here's a screen recording of it: [Screen Recording 2020-06-18 at 7.17.25 PM.zip](https://github.com/username_1/owncast/files/4801787/Screen.Recording.2020-06-18.at.7.17.25.PM.zip) <issue_comment>username_1: I just pushed a couple thumbnail related changes: * Use JPG instead of PNG since it seems it's a little faster. * Limit ffmpeg to using 1 thread. * Pull the first frame out of the segment instead of a hardcoded location to fix the original crash. I'd still like to pull the *last* frame, but that requires seeking, and would take longer. ``` ffmpeg -hide_banner -threads 1 -y -t 1 -i stream-20200618-1592522663.ts -f 0.09s user 0.02s system 97% cpu 0.110 total ffmpeg -hide_banner -y -t 1 -i stream-20200618-1592522663.ts -f image2 1 0.18s user 0.05s system 161% cpu 0.140 total ``` It takes longer, but takes less CPU to do so. Hopefully these changes make some difference. I agree that turning it off completely for people is a valid option, as well as a customizable time (it's hardcoded for every 20 seconds now), but I'd like to see if these changes help. Let me know if these help!<issue_closed> <issue_comment>username_0: So far locally, it is working amazing! 👏 However, my DO droplet isn't doing so hot. Both OBS and ProPresenter show they are working fine for about six seconds but then they drop down to under 100 bits per second. 
So, I don't know if it is the droplet, my internet, or owncast :/ I have poor upload speed, so I'm going to try again tomorrow when I'm somewhere that has ample upload speed. Will close this issue as it is fixed, so far, and the above paragraph is about something else (seemingly). <issue_comment>username_1: Awesome, every little bit helps. Thanks so much for helping with this. I look forward to continue to squeeze more performance out of things so it can work well on a $5 machine. I'm running it under a Linode $5 machine and while it runs, it certainly pegs things. Let me know if there's any other performance increases you can think of when trying it out in your particular environments.
<issue_start><issue_comment>Title: On select devices, Android apps with Joda-Time as a dependency won't install username_0: This may be a little cryptic, but I'll try to be as specific as possible. The issue I'm experiencing is at least writing a ticket about. When attempting to install Applications via .apk download (this excludes pushing code to device from Android Studio and remains untested on apps coming from Google Play), the installing dialog gets as far as telling me it's installing with a progress bar, but then finishes with "App not installed." error. I have tested and successfully reproduced this issue several .apks. The ones with Joda-time as Gradle dependencies always fail in this manner on these specific devices. I have never seen this happen before with Joda-Time and any combination of devices and apps, which is frankly a lot. I have also never seen this happen with any other library. Joda-time releases this is occurring on are (at least) `2.3`, `2.7`, and `2.8`. The devices in question are 1. ASUS Tablet, Model ME173x running Android 4.2.2 Jellybean. 2. Samsung phone, Model Galaxy Nexus running Android 4.2.2 Jellybean. Looking at the devices' logs, I see a consistent error and string. I'll paste two instances of this issue. ``` Exception reading res/drawable-hdpi-v4/address.png in /data/app/vmdl-879103662.tmp java.lang.SecurityException: META-INF/CERT.SF has invalid digest for org/joda/time/tz/data/America/Argentina/Tucuman in /data/app/vmdl-879103662.tmp at java.util.jar.JarVerifier.invalidDigest(JarVerifier.java:131) at java.util.jar.JarVerifier.verifyCertificate(JarVerifier.java:350) at java.util.jar.JarVerifier.readCertificates(JarVerifier.java:258) at java.util.jar.JarFile.getInputStream(JarFile.java:378) at android.content.pm.PackageParser.loadCertificates(PackageParser.java:450) at android.content.pm.PackageParser.collectCertificates(PackageParser.java:641) at com.android.server.pm.PackageManagerService.installPackageLI(PackageManagerService.java:8520) at com.android.server.pm.PackageManagerService.access$2200(PackageManagerService.java:185) at com.android.server.pm.PackageManagerService$5.run(PackageManagerService.java:6390) at android.os.Handler.handleCallback(Handler.java:725) at android.os.Handler.dispatchMessage(Handler.java:92) at android.os.Looper.loop(Looper.java:153) at android.os.HandlerThread.run(HandlerThread.java:60) 06-06 12:31:55.719 461-484/? E/PackageParser﹕ Package com.secret.package has no certificates at entry res/drawable-hdpi-v4/address.png; ignoring! ``` ``` 06-05 16:19:20.961 372-391/? 
W/PackageParser﹕ Exception reading assets/sprite_check.png in /data/app/vmdl-1783207272.tmp java.lang.SecurityException: META-INF/CERT.SF has invalid digest for org/joda/time/tz/data/America/Argentina/Tucuman in /data/app/vmdl-1783207272.tmp at java.util.jar.JarVerifier.invalidDigest(JarVerifier.java:131) at java.util.jar.JarVerifier.verifyCertificate(JarVerifier.java:350) at java.util.jar.JarVerifier.readCertificates(JarVerifier.java:258) at java.util.jar.JarFile.getInputStream(JarFile.java:378) at android.content.pm.PackageParser.loadCertificates(PackageParser.java:446) at android.content.pm.PackageParser.collectCertificates(PackageParser.java:634) at com.android.server.pm.PackageManagerService.installPackageLI(PackageManagerService.java:7859) at com.android.server.pm.PackageManagerService.access$1900(PackageManagerService.java:172) at com.android.server.pm.PackageManagerService$5.run(PackageManagerService.java:5995) at android.os.Handler.handleCallback(Handler.java:725) at android.os.Handler.dispatchMessage(Handler.java:92) at android.os.Looper.loop(Looper.java:137) at android.os.HandlerThread.run(HandlerThread.java:60) 06-05 16:19:20.961 372-391/? E/PackageParser﹕ Package com.different.secret.package has no certificates at entry assets/sprite_check.png; ignoring! ``` Once, I received a error slightly different. This one had a warning level of `error`, whereas the other two had a warning level of `warning`. ``` 06-05 16:27:21.563 6624-9367/? E/Finsky﹕ [589] CertificateUtils.collectCertificates: Error while collecting certificates java.lang.SecurityException: META-INF/CERT.SF has invalid digest for org/joda/time/tz/data/America/Argentina/Tucuman in /storage/emulated/0/Download/name-of-my-application-plus-version.number.apk at java.util.jar.JarVerifier.invalidDigest(JarVerifier.java:131) at java.util.jar.JarVerifier.verifyCertificate(JarVerifier.java:350) at java.util.jar.JarVerifier.readCertificates(JarVerifier.java:258) at java.util.jar.JarFile.getInputStream(JarFile.java:378) at com.google.android.vending.verifier.CertificateUtils.loadCertificates(CertificateUtils.java:57) at com.google.android.vending.verifier.CertificateUtils.collectCertificates(CertificateUtils.java:24) at com.google.android.vending.verifier.PackageVerificationService.getPackageInfo(PackageVerificationService.java:986) at com.google.android.vending.verifier.PackageVerificationService.access$000(PackageVerificationService.java:67) at com.google.android.vending.verifier.PackageVerificationService$WorkerTask.doInBackground(PackageVerificationService.java:371) at com.google.android.vending.verifier.PackageVerificationService$WorkerTask.doInBackground(PackageVerificationService.java:350) at android.os.AsyncTask$2.call(AsyncTask.java:287) at java.util.concurrent.FutureTask.run(FutureTask.java:234) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1080) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:573) at java.lang.Thread.run(Thread.java:838) ``` Another interesting note is that the error seems to be tripping up on a given file in assets/ or res/, but those don't belong to the applications themselves. I have no idea where they come from. The recurring String that exists in all of these Logs are the message with the `SecurityException`, which is `META-INF/CERT.SF has invalid digest for org/joda/time/tz/data/America/Argentina/Tucuman`. A simple search revealed somebody else having this issue posted to a Gist. It is dated late 2011. 
https://gist.github.com/skayred/1403312 Again, I'm not sure if this even is a Joda-time issue, but it at least constitutes a heads up. Let me know if you have any questions I may be able to help with. <issue_comment>username_1: I'm curious: Does this problem still present itself if you're using [joda-time-android](https://github.com/username_1/joda-time-android)? <issue_comment>username_0: Pardon the slow response, I'm working on it! <issue_comment>username_0: This problem does persist when I drop in joda-time-android:2.8.0 <issue_comment>username_1: Could it be somehow related to this old issue with compiling with JDK 7? http://stackoverflow.com/a/8037732/60261 <issue_comment>username_0: We're looking into some signing/digest configurations. <issue_comment>username_0: I'm not really sure where to leave this for now. We did update some jar signing code, and it did seem to correct the issue. Again, I'm not really sure where this leaves the issue we originally encountered, or why it has a correlation with joda-time. If I discover anything else, I'll report back.<issue_closed>
<issue_start><issue_comment>Title: [Nomination] Nikita for WasmSwap UI username_0: **Who are you nominating?**

@username_1

**What are you nominating them for?**

For his work on the amazingly beautiful WasmSwap UI!

**Please provide links to their work.**

- https://www.wasmswap.org/
- https://github.com/Wasmswap/wasmswap-interface
<issue_comment>username_1: Thanks so much for the nomination! It was a pleasure to work on the UI, and I'm looking forward to building some new exciting features for the WasmSwap UI (spoiler alert: pools are coming up soon!)
<issue_comment>username_2: man i love this. looking forward to aping into poo coins by the bucket full
<issue_comment>username_3: Nikita has crushed it leading the WasmSwap UI. He has spent tons of hours working with our designers and getting the front end not only working but looking amazing! His code is super clean and will serve as a great example for future dApps.<issue_closed>
<issue_start><issue_comment>Title: Sentinel synchronization issue username_0: The configuration is as follows:

conf.version = 1
id = redis-shake-sentinel
# log file; if not configured, output is printed to stdout (e.g. /var/log/redis-shake.log)
log.file =/data/redis/redis-shake/redis-shake-sentinel.log
pid_path =/data/redis/redis-shake
source.type = sentinel
source.address = shopmaster:slave@192.168.10.109:26359;192.168.10.109:26369;192.168.10.109:26379
source.password_raw = shop@leadeon.cn
source.auth_type = auth
target.type = sentinel
target.address = mymaster:master@192.168.10.110:26359;192.168.10.110:26369;192.168.10.110:26379
target.password_raw = dbbasic@leadeon.cn
target.auth_type = auth

When the source is set to pull data from a slave, the sync process disconnects while the source side is going through a master/slave failover; the log shows that the slave node cannot be obtained.
<issue_comment>username_1: Same question here: in sentinel mode, redis-shake exits after a master/slave failover.
<issue_comment>username_2: Sentinel mode and failover are currently not supported.<issue_closed>
<issue_start><issue_comment>Title: Always lay down pip.conf on RPCD plays username_0: This fix ensures that a proper pip.conf is laid down before installing RPC-O components. It furthermore ensures that, if the deploy/upgrade.sh script errors out, manually continuing the playbook installation will still install the right pip packages based on the correct OSA version. Closes-Bug: #1391
<issue_comment>username_1: If you want to double-check your formatting, check [the preview](https://staging.developer.rackspace.com/staging.horse/build-14a0674fa5/docs/private-cloud/release-notes/).
<issue_comment>username_2: 👎 Add these as dependencies on the role (in meta/main.yml - for example: https://github.com/rcbops/rpc-openstack/blob/master/rpcd/playbooks/roles/kibana/meta/main.yml#L30-L31 ) Additionally, setup-logging.yml is not required, since kibana and elasticsearch already depend on pip_lock_down, and logstash has no pip packages. The MaaS dependency is being changed here: https://github.com/rcbops/rpc-openstack/pull/1405 TL;DR just add the dependency to meta/main.yml for the rpc_support and horizon_extension roles.
<issue_comment>username_0: Closing in favor of the other PR
<issue_start><issue_comment>Title: A vulnerability assessment solution should be enabled on your virtual machines but policy does not target vmss username_0: Hello, If my understanding is correct, this page lists policies applicable to scalesets, however the policy linked for vulnerability assessment for example targets only virtual machine and so does not apply to scaleset. Is it an error in the policy, the docs, or my understanding ? :) Thanks ! --- #### Document Details ⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.* * ID: 520e01f9-2a9b-acf6-01a5-3a9233a373cb * Version Independent ID: d835791e-94b2-781b-6838-901ccea6ff33 * Content: [Azure security baseline for Virtual Machine Scale Sets](https://docs.microsoft.com/en-us/security/benchmark/azure/baselines/virtual-machine-scale-sets-security-baseline) * Content Source: [benchmark/azure/baselines/virtual-machine-scale-sets-security-baseline.md](https://github.com/MicrosoftDocs/security-benchmark-docs-pr/blob/live/benchmark/azure/baselines/virtual-machine-scale-sets-security-baseline.md) * Service: **virtual-machine-scale-sets** * GitHub Login: @msmbaldwin * Microsoft Alias: **mbaldwin** <issue_comment>username_1: @username_0 Thanks for the feedback! I have assigned the issue to content author to check and update the document as appropriate. <issue_comment>username_1: @msmbaldwin Can you please check and add your comments on this doc update request as applicable.
<issue_start><issue_comment>Title: pronly-false-suggestion: changed includes file reported in parent file username_0: <issue_comment>username_1: Docs Build status updates of commit _[af648a3](https://github.com/OPS-E2E-PPE/E2E_DocFxV3/commits/af648a3c904a849ccef0e7c0e8082657ce0d58a5)_: ### :white_check_mark: Validation status: passed File | Status | Preview URL | Details ---- | ------ | ----------- | ------- [E2E_DocsBranch_Dynamic/pr-only/includes/skip-level.md](https://github.com/OPS-E2E-PPE/E2E_DocFxV3/blob/pronly-F-includechange1-suggestion/E2E_DocsBranch_Dynamic/pr-only/includes/skip-level.md) | :white_check_mark:Succeeded | [View](https://ppe.docs.microsoft.com/en-us/E2E_DocFxV3/pr-only/skiplevel?branch=pr-en-us-32410) | For more details, please refer to the [build report](https://opbuilduserstoragepubdev.blob.core.windows.net/report/2022%5C3%5C26%5C8b165afe-a866-8c48-e5c3-ead94440f27e%5CPullRequest%5C202203262121432463-32410%5Cworkflow_report.html?sv=2020-08-04&se=2022-04-26T21%3A22%3A22Z&sr=b&sp=r&sig=bISlUNpppMVTdZjYv8rBzF8ioLv2XDe6X1n2pod22BM%3D). **Note:** Broken links written as relative paths are included in the above build report. For broken links written as absolute paths or external URLs, see the [broken link report](https://docs-portal-pubdev-wus.azurewebsites.net/#/repos/8b165afe-a866-8c48-e5c3-ead94440f27e?tabName=brokenlinks). For any questions, please:<ul><li>Try searching the docs.microsoft.com <a href="https://review.docs.microsoft.com/en-us/help/?branch=main">contributor guides</a></li><li>Post your question in the <a href="https://teams.microsoft.com/l/channel/19%3a7ecffca1166a4a3986fed528cf0870ee%40thread.skype/General?groupId=de9ddba4-2574-4830-87ed-41668c07a1ca&amp;tenantId=72f98bf-86f1-41af-91ab-2d7cd011db47">Docs support channel</a></li></ul>
<issue_start><issue_comment>Title: Add multiple series support to legends, and fix legend styles. username_0: Certain combinations of scatterplot and plot markers and styles aren't displayed correctly in legends. We really should let the marks themselves define what the legend marker should look like. As a happy side effect, we can also allow marks to control what their appearance should be when they contain multiple series. <issue_comment>username_0: Upon deeper reflection, it seemed preferable to continue to keep code affecting mark legend appearance in the HTML backend. But we did split the mark-specific logic up in a way that should aid clarity. Closed in 89059e307326972d6a73d6a55829f7a693188793.<issue_closed>
<issue_start><issue_comment>Title: Functionality to disable quote escaping username_0: I would like to suggest (or submit a pull request for, if you would accept one) the ability to disable the quote escaping via a directive attribute escape-quotes="false". For example, I may want to be able to output ="123456789" so Excel will read this as a formula and not display it in scientific notation.
<issue_comment>username_1: I agree with @username_0. I have a similar issue when trying to force a numeric string to be effectively a string - because of leading zeroes - instead of a number.
<issue_comment>username_2: Actually, if you need this feature right now and (like me) don't have time to wait until the author accepts my pull request... you just have to do these steps:

1. Open the file build/ng-csv.js
2. Comment out line number 68. It should look like this: `// data = data.replace(/"/g, '""'); // Escape double qoutes`
3. Enjoy =D

BTW: I needed my account field to look like this inside the CSV, in order to display correctly without exponential values in Excel: `"=""xxxxxxxxxxxxxxxxxxxx"""` where xxx... is the number
<issue_start><issue_comment>Title: Ability to disable changing of a field if Field Mapping was loaded from a config. username_0: The scenario for the utility network is that we would like to stub out the XML config files; in them we need to set the Asset Type and Asset Group fields to a default value, and we would not like the user to override these.
<issue_comment>username_1: Most likely we would just add "Editable=false" to the config files so the field is not editable in the UI. I would use python/xslt to generate these stub files; I'd like to work on setting those up as part of a 'dev package' for DA if that makes sense...
<issue_comment>username_2: @username_1 how is this implemented? Is there a tag I put in the XML, and at which node level?
<issue_comment>username_1: For Field elements, the UI IsEnabled property is bound to the IsEnabled Xml attribute: ![image](https://cloud.githubusercontent.com/assets/4749060/24212116/92140f94-0f04-11e7-91ff-4ed11c3c825b.png)
<issue_comment>username_1: The result is that the Method panel is not enabled: ![image](https://cloud.githubusercontent.com/assets/4749060/24212169/c1b9e534-0f04-11e7-8e3a-e152386cdfb5.png)
<issue_comment>username_0: moved to wiki - https://github.com/Esri/data-assistant/wiki/General-Wiki
<issue_comment>username_2: verified<issue_closed>
<issue_start><issue_comment>Title: Using with Multiple Frames username_0: I have a website that uses frames. SurfKeys only seems to work within the active frame. Is there a way to have it work across all frames/iframes loaded in the current window? <issue_comment>username_1: You need press `w` to switch to the frame that you'd like.<issue_closed>
<issue_start><issue_comment>Title: Customizable service worker path username_0: To support wider scenarios it would be great to make the service worker path a parameter. For example, adding different channels for anonymous and logged-in users would then become possible.
<issue_comment>username_1: Just created a PR for this (#67)
<issue_comment>username_1: This feature is now live in version v0.0.11, closing.<issue_closed>
<issue_start><issue_comment>Title: It would be great if setting an example value in a custom type ModelConverter could be reflected in the generated JSON username_0: Example: we are using a JAXB XmlAdapter in our API that de-/serializes java.util.Currency from/to the corresponding ISO currency code String. In order to get swagger to define Currency properties as JSON data type string we register a custom ModelConverter, which amongst others contains the instruction to map the class Currency to this Property:

```java
public class CurrencyProperty extends io.swagger.models.properties.AbstractProperty {
    public CurrencyProperty() {
        this.setType("string");
        this.setExample("USD"); // ISO 4217 currency code
    }
}
```

The resulting JSON works as far as our Currency fields come out as string (rather than a `$ref":"#/definitions/Currency` and a complex data type that includes currencyCode, defaultFractionDigits, symbol ...). There is not, however, any mention of the type example anywhere. We need to manually annotate all Currency fields with `@ApiModelProperty(example="USD")`. Would it be possible for the example set in a ModelConverter to be used as default across all occurrences of that type?
<issue_comment>username_1: Yes, using `string` as an example really doesn't work here. Will try to get this addressed in an update to the swagger-core and swagger-models.<issue_closed>
<issue_comment>username_1: Follow #1107
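As a side note for anyone landing on this thread: below is a rough, untested sketch of how a converter like the one described above might be wired up so that every java.util.Currency property picks up the same type and example. It is written against the swagger-core 1.5.x ModelConverter interface as I understand it; the class name CurrencyModelConverter is illustrative (not from this issue), and CurrencyProperty refers to the class shown above.

```java
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import java.util.Currency;
import java.util.Iterator;

import io.swagger.converter.ModelConverter;
import io.swagger.converter.ModelConverterContext;
import io.swagger.models.Model;
import io.swagger.models.properties.Property;

// Illustrative converter: maps java.util.Currency to the CurrencyProperty defined in the issue above.
public class CurrencyModelConverter implements ModelConverter {

    @Override
    public Property resolveProperty(Type type, ModelConverterContext context,
                                    Annotation[] annotations, Iterator<ModelConverter> chain) {
        // Simple raw-class check; real code may also need to unwrap Jackson JavaType instances.
        if (type instanceof Class && Currency.class.isAssignableFrom((Class<?>) type)) {
            return new CurrencyProperty();
        }
        // Delegate everything else to the rest of the converter chain.
        return chain.hasNext() ? chain.next().resolveProperty(type, context, annotations, chain) : null;
    }

    @Override
    public Model resolve(Type type, ModelConverterContext context, Iterator<ModelConverter> chain) {
        // Currency never becomes a standalone model; let the chain handle other types.
        return chain.hasNext() ? chain.next().resolve(type, context, chain) : null;
    }
}
```

Registration would then be a one-liner at startup, e.g. `io.swagger.converter.ModelConverters.getInstance().addConverter(new CurrencyModelConverter());`.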
<issue_start><issue_comment>Title: Streaming Context is stopped prematurely, which prevents sequence of batches to be executed fully, visible on slower machines username_0: Hi, I am struggling with running SparkStreaming tests on slower boxes, and apparently the streaming context is being closed to soon, (before every batch could be processed), my code: ```scala class StreamToFileDumperSparkTest extends StreamingActionBase { test("smoke integration test for dumper transformation") { val input = List(List(data1), List(data2), List(data3)) runAction(input, (s: DStream[String]) => s.foreachRDD(/*save some files*/)) // here I do waiting for files to be available val result = sc.textFile(some_path).collect().sorted result should equal(inputData) } } ``` Exception that shows my suspicion about streaming context be closed to soon is following: ``` 15/11/17 12:02:24 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[15] at foreachRDD at StreamToFileDumper.scala:57), which is now runnable Exception in thread "streaming-job-executor-0" java.lang.Error: java.lang.InterruptedException at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1151) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.InterruptedException at java.lang.Object.wait(Native Method) at java.lang.Object.wait(Object.java:503) at org.apache.spark.scheduler.JobWaiter.awaitResult(JobWaiter.scala:73) at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:559) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1822) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1835) at org.apache.spark.SparkContext.runJob(SparkContext.scala:1912) at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1124) at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1065) at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1065) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108) at org.apache.spark.rdd.RDD.withScope(RDD.scala:306) at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1065) at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply$mcV$sp(PairRDDFunctions.scala:989) at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:965) at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:965) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108) at org.apache.spark.rdd.RDD.withScope(RDD.scala:306) at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:965) at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$3.apply$mcV$sp(PairRDDFunctions.scala:951) at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$3.apply(PairRDDFunctions.scala:951) at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$3.apply(PairRDDFunctions.scala:951) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108) at 
org.apache.spark.rdd.RDD.withScope(RDD.scala:306) at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:950) at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$2.apply$mcV$sp(PairRDDFunctions.scala:909) at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$2.apply(PairRDDFunctions.scala:907) at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$2.apply(PairRDDFunctions.scala:907) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108) at org.apache.spark.rdd.RDD.withScope(RDD.scala:306) at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:907) at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$2.apply$mcV$sp(RDD.scala:1444) at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$2.apply(RDD.scala:1432) at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$2.apply(RDD.scala:1432) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147) at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108) at org.apache.spark.rdd.RDD.withScope(RDD.scala:306) at org.apache.spark.rdd.RDD.saveAsTextFile(RDD.scala:1432) at com.nokia.ph.kinesis2s3.spark.StreamToFileDumper.save(StreamToFileDumper.scala:45) at com.nokia.ph.kinesis2s3.spark.StreamToFileDumper$$anonfun$process$1.apply(StreamToFileDumper.scala:57) at com.nokia.ph.kinesis2s3.spark.StreamToFileDumper$$anonfun$process$1.apply(StreamToFileDumper.scala:57) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:42) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:40) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:40) at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:399) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:40) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:40) at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:40) at scala.util.Try$.apply(Try.scala:161) at org.apache.spark.streaming.scheduler.Job.run(Job.scala:34) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:218) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:218) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:218) at scala.util.DynamicVariable.withValue(DynamicVariable.scala:57) at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:217) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ... 2 more 15/11/17 12:02:24 INFO JobScheduler: Stopped JobScheduler ``` <issue_comment>username_1: Huh that certainly is weird - what sort of speed boxes? I'll take a look this week but I'm travelling so it might be a bit before I can work on a fix. 
<issue_comment>username_0: It's:

```
Linux svl-build21 3.13.0-36-generic # Ubuntu 14.04
6 GB RAM
2x Intel(R) Xeon(R) CPU E5540 @ 2.53GHz
```

Tests are executed through a Jenkins slave. So far, as a workaround, I extend StreamingActionBase in the following way:

```scala
trait GracefulStreamingActionBase extends StreamingActionBase {
  override def withStreamingContext(outputStreamSSC: TestStreamingContext)(block: (TestStreamingContext) => Unit): Unit = {
    ...
    outputStreamSSC.stop(stopSparkContext = false, stopGracefully = true)
    ...
  }

  override def runActionStream(ssc: TestStreamingContext, numBatches: Int) {
    ...
    // This actually sucks, since it will always wait this amount of time...
    ssc.awaitTerminationOrTimeout(5000)
    Thread.sleep(3000) // Give some time for the forgetting of old RDDs to complete
    ...
  }
}
```
<issue_comment>username_1: Thanks - and sorry about the bug. I'll see what I can do in the way of a proper solution.
<issue_comment>username_2: So... this is my bad. In StreamingSuiteBase, because non-action tests generate output it can wait for the output to indicate that the correct number of intervals have run... `while (output.size < numExpectedOutput...` But when testing an action there is no output, so what to replace it with? I have one idea - will give it a test and send a PR if it works.
<issue_comment>username_2: I just put in a PR - @username_0 would you please test my changes and let @username_1 know if it looks good? Thanks.
<issue_comment>username_0: @username_2 sure, I will give it a spin
<issue_comment>username_0: @username_2 after I patched in your pull request it seems to be working. I repeated it several times to make sure there is no timing issue, and it works fine. Thanks. Please fix adding the streamingListener.
<issue_comment>username_2: Yes, looks like I made a merge error (I'm running an older version of spark-testing-base). I'll update the PR. Thanks for testing this.
<issue_comment>username_0: it's been merged already, closing<issue_closed>
<issue_start><issue_comment>Title: GLFW_CURSOR_HIDDEN does not hide cursor on X11 (metacity wm) username_0: When using GLFW 3.1 with Ubuntu, hiding the mouse cursor via glfwSetInputMode() does not have any effect. The same code does hide the mouse cursor under Windows. With GLFW 3.0, everything worked as supposed. Is this a bug? <issue_comment>username_1: This is a duplicate of #309.<issue_closed>
<issue_start><issue_comment>Title: Update Stories “אֲנִי_אוֹהֵב_אֶת_הַגֶּשֶׁם” username_0: Automatically generated by Netlify CMS <issue_comment>username_0: 👷 Deploy Preview for *cranky-golick-b48ff2* processing. 🔨 Explore the source changes: 272c942c401107357d5f2db963bb1507169fde40 🔍 Inspect the deploy log: [https://app.netlify.com/sites/cranky-golick-b48ff2/deploys/612e12785b5e8c000882764f](https://app.netlify.com/sites/cranky-golick-b48ff2/deploys/612e12785b5e8c000882764f?utm_source=github&utm_campaign=bot_dl)
<issue_start><issue_comment>Title: Please support aria-current username_0: Browsers are "getting there" with support for aria-current: https://rawgit.com/w3c/aria/master/aria/aria.html#aria-current Support is in for (most of) Chrome: https://bugs.chromium.org/p/chromium/issues/detail?id=609362 and Webkit/VO: https://bugs.webkit.org/show_bug.cgi?id=155469 Still waiting on FF: https://bugzilla.mozilla.org/show_bug.cgi?id=1104947 Would be nice if AT could support this, too, so that devs don't have to hack in a roving aria-label="current page" or similar.
<issue_comment>username_0: Update: JAWS 18 Beta now supports aria-current. Just FYI. ;)
<issue_comment>username_1: Adding the browse mode label.
<issue_comment>username_2: Some thought will need to be put into the UX for this. But I think we can prioritise this as P2 for now. @username_3 We will need to add this to the aria 1.1 project.
<issue_comment>username_3: I think we can basically just add a new state "current" and report it whenever it is present.
<issue_comment>username_2: Problem is that the [spec](https://www.w3.org/TR/wai-aria-1.1/#aria-current) for `aria-current` has several values other than `true`: `page`, `step`, `location`, `date`, `time`. I'm not sure what the use case is for these, however the [expected implementation](http://ljwatson.github.io/design-patterns/aria-current/) seems to be along the lines of "current page".
<issue_comment>username_3: My bad. I neglected to realise aria-current now has several values. I discussed this with several people. I think we should report "current page", "current location", etc. as the expected implementation suggests. I have my concerns about this; e.g. it seems counter-intuitive that we report a "current step" but no other steps get reported as "step". Nevertheless, this is what other screen readers are doing and it does seem like this is the best experience we can get here.
<issue_comment>username_4: VoiceOver also supports it, cf. https://ljwatson.github.io/design-patterns/aria-current/<issue_closed>
<issue_comment>username_5: If I read the spec correctly, this should also be used where the current page is indicated but is not a link, for example in a breadcrumb. I just implemented this with aria-current="page" on a span and it doesn't seem to work in NVDA next and Firefox.
<issue_comment>username_3: It does work on a div, but not a span. It seems both Firefox and Chrome are pruning spans from the tree even when they have aria-current. We'll need to file bugs against both browsers.
<issue_comment>username_5: I just filed a bug against Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=1365904
<issue_comment>username_3: Filed Chrome issue: https://bugs.chromium.org/p/chromium/issues/detail?id=730917
<issue_comment>username_3: Verified fixed in Chrome Version 62.0.3201.2 (Official Build) canary (64-bit).
<issue_comment>username_3: Fixed for Firefox 65.
<issue_comment>username_6: I found that in my single-page react application, when the url changes because of a link within the site (e.g. on the nav), NVDA reads the old active nav item. It's almost as if, in a single-page react app, even though the url and the DOM have the correct attributes on the correct elements, NVDA doesn't update until I refresh the page.
<issue_comment>username_3: What browser did you test in?
This works as expected for me in Firefox: `data:text/html,<button aria-current="true">a</button><br><button>b</button><script>document.addEventListener("click", function(event) { document.querySelector("[aria-current=true]").removeAttribute("aria-current"); event.target.setAttribute("aria-current", "true"); });</script>` Pressing the button that isn't current makes it current and that gets reflected in browse mode. This doesn't work in Chrome, however. Investigation note: My guess is they're not firing IA2_EVENT_OBJECT_ATTRIBUTE_CHANGED when aria-current changes, but I haven't verified this. <issue_comment>username_7: Hi jcstech, Using NVDA/Chrome i am able to replicate this issue. I am facing same issue in my vue single page application. <issue_comment>username_8: @username_7 I filed a Chromium bug re changes not being conveyed https://bugs.chromium.org/p/chromium/issues/detail?id=1099323 <issue_comment>username_8: Support is scheduled to land in Chrome v89. I tested it with NVDA and Chrome Canary and it works great. https://bugs.chromium.org/p/chromium/issues/detail?id=1099323
<issue_start><issue_comment>Title: Icon request: icon-patreon and icon-issuu username_0: As you did for many other social networks and portals, I think it would be really useful if you also included these two: https://www.patreon.com/ https://issuu.com/ ![patreon_navigation_logo_mini_orange](https://cloud.githubusercontent.com/assets/11670332/14229386/dbe0f6e8-f932-11e5-9315-ac4b093bf664.png) ![download](https://cloud.githubusercontent.com/assets/11670332/14229387/df80e83a-f932-11e5-9ab8-ff3cad23df98.png)<issue_closed>
<issue_comment>username_1: Duplicate of #1993 and #3257, please +1 those requests. Please also [search](https://github.com/FortAwesome/Font-Awesome/search?type=Issues) before opening new requests. Closing here
<issue_start><issue_comment>Title: malloc error with --hash_mask_bits username_0: ```
Download & build, run the demo commands adding --hash_mask_bits to the arguments.

Training proceeds fine, but testing of the model gives the malloc error:

$ ./sofia-ml --learner_type pegasos --loop_type stochastic --lambda 0.1 --iterations 100000 --dimensionality 150000 --training_file demo/demo.train --model_out demo/model --hash_mask_bits 8
hash_mask_ 255
Reading training data from: demo/demo.train
Time to read training data: 0.061278
Time to complete training: 52.3639
Writing model to: demo/model
Done.

$ ./sofia-ml --model_in demo/model --test_file demo/demo.train --results_file demo/results.txt --hash_mask_bits 8
hash_mask_ 255
sofia-ml(6235) malloc: *** error for object 0x800000: pointer being freed was not allocated
*** set a breakpoint in malloc_error_break to debug
Reading model from: demo/model
Done.
Reading test data from: demo/demo.train
Time to read test data: 0.06114
Time to make test prediction results: 0.008274
Writing test results to: demo/results.txt
Done.

========
$ g++ --version
i686-apple-darwin10-g++-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5659)
```

Original issue reported on code.google.com by `jel...@gmail.com` on 18 Jun 2010 at 6:43<issue_closed>
<issue_comment>username_1: No longer encountered after the patch
<issue_start><issue_comment>Title: how to run multiple jhipster apps on one machine username_0: Can I run multiple jhipster apps on one machine if each app has a different port and a different database too?
<issue_comment>username_1: Yes, as it is based on spring-boot
<issue_comment>username_0: I tried running a jhipster app on the default config (port 8080) and created a new one with a different port:

- server.port: 8083
- metrics.spark.port: 9998
- metrics.graphite.port: 2002

but running the second one shows an error like this:

transport error 202: bind failed: Address already in use
ERROR: JDWP Transport dt_socket failed to initialize, TRANSPORT_INIT(510)

Did I miss something?<issue_closed>
<issue_comment>username_2: Please use Stackoverflow for this kind of question.
<issue_comment>username_0: [SOLVED] Found it, I forgot to change the jvmArguments address in pom.xml. Thx
<issue_start><issue_comment>Title: Extension issue username_0: - Issue Type: `Bug` - Extension Name: `LiveServer` - Extension Version: `5.6.1` - OS Version: `Windows_NT x64 10.0.19042` - VS Code version: `1.58.2` :warning: We have written the needed data into your clipboard. Please paste! :warning:
<issue_start><issue_comment>username_0: Currently running the test continually waiting for a flake... <issue_comment>username_1: Going through the code it looks like we scale the hazelcast rc to 2 replicas, and poll on the logs of one of the pods for the string "Members [2]". The events indicate that the 2 replicas are created and healthy: ``` Aug 22 06:56:43.192: INFO: hazelcast-cfzit jenkins-e2e-minion-group-3ge2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-08-22 06:48:07 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-08-22 06:48:20 -0700 PDT } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-08-22 06:48:07 -0700 PDT }] Aug 22 06:56:43.192: INFO: hazelcast-h0a56 jenkins-e2e-minion-group-i6mn Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2016-08-22 06:48:26 -0700 PDT } {Ready True 0001-01-01 00:00:00 +0000 UTC 2016-08-22 06:48:39 -0700 PDT } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2016-08-22 06:48:26 -0700 PDT }] ``` So I think the scope of this is retriced to the one hazelcast example, and that the second scaled up member is not able to join the group so "Members [2]" never shows up in logs. Given the scope (correct me if I'm wrong), I don't think it is a P0, still important to fix the flake but not release blocking. <issue_comment>username_0: After doing some debugging with @username_2, we think it's a thread safety issue in the hazelcast library. <issue_comment>username_0: I'm going to add a skip to this test until the upstream issue is fixed. <issue_comment>username_2: @username_1 I agree with your analysis. This has been happening at least since June 22 https://github.com/kubernetes/kubernetes/issues/27850 probably around when we started up the flake issue creation munger. <issue_comment>username_2: Noo..... 👎 <issue_comment>username_0: /sadpanda <issue_comment>username_0: It looks like the problematic code didn't get removed =( ```java nodes.parallelStream().forEach(tcpCfg::addMember); ``` I opened a new issue: https://github.com/pires/hazelcast-kubernetes-bootstrapper/issues/10 <issue_comment>username_0: Here's a link to the line: https://github.com/pires/hazelcast-kubernetes-bootstrapper/blob/master/src/main/java/com/github/pires/hazelcast/HazelcastDiscoveryController.java#L155 <issue_comment>username_0: I believe the original failure is fixed with the update. This looks like a new type of flake.<issue_closed> <issue_comment>username_2: We can close and allow it to be reopened then.
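For reference, a minimal sketch of the kind of change discussed above for the bootstrapper: adding discovered members to the TCP/IP join config serially instead of via `parallelStream()`, since the suspicion in this thread was a thread-safety problem while building that config. Only `TcpIpConfig.addMember()` is real Hazelcast API here; the helper class and parameter names are hypothetical.

```java
import com.hazelcast.config.TcpIpConfig;
import java.util.List;

final class DiscoveryConfigHelper {
    // Instead of: nodes.parallelStream().forEach(tcpCfg::addMember);
    // add the discovered pod addresses one at a time from a single thread.
    static void addMembers(TcpIpConfig tcpCfg, List<String> nodes) {
        for (String node : nodes) {
            tcpCfg.addMember(node);
        }
    }
}
```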
<issue_start><issue_comment>Title: Data link loss reaction in all modes (when it's configured) username_0: **Describe problem solved by this pull request** The data link loss reaction is disabled by default. I got multiple reports of confused users that configured a reaction to data link loss but found that the reaction is executed only in certain unexpected undocumented modes. **Describe your solution** Define data link loss a general failsafe that applies in all modes. **Test data / coverage** I did SITL testing of all the cases I could think of. **Additional context** I found an unused flag `stay_in_failsafe` that got obsolete when removing the Casa Outback Challange failsafes: https://github.com/PX4/PX4-Autopilot/pull/14307 <issue_comment>username_0: Reminder to myself to explicitly test that taking over by RC in a data link failsafe works as expected. <issue_comment>username_0: I checked and this pr has issues with the user trying to regain control via RC when the data link is still lost. I'm investigating but it seems like there's no simple answer to this with the current architecture. I'm putting it to WIP and see what's needed to solve the case.
<issue_start><issue_comment>Title: How to append objects to existing array using spec username_0: input:

{ "abc": ["1", "2", "3"], "def": ["4", "5", "6"] }

expected:

{ "abc": ["1", "2", "3", "4", "5", "6"] }

How do we merge two object arrays?<issue_closed>
<issue_comment>username_0: resolved.
<issue_comment>username_1: How did you do it?
<issue_comment>username_2: Spec
```
[
  {
    "operation": "shift",
    "spec": {
      "abc": {
        "*": "abc[]"
      },
      "def": {
        "*": "abc[]"
      }
    }
  }
]
```
<issue_comment>username_3: Hi, what about
```
{
  "itineraties": [
    {"to": "X"},
    {"to": "Y"}
  ],
  "travelers": [
    {"name": "username_3"},
    {"name": "username_2"}
  ]
}
```
Transformed to (permutation?)
```
{
  "magic": [
    {"to": "X", "name": "username_3"},
    {"to": "X", "name": "username_2"},
    {"to": "Y", "name": "username_3"},
    {"to": "Y", "name": "username_2"}
  ]
}
```
thanks =)
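For anyone who wants to check the spec above locally, here is a rough, self-contained sketch using the Jolt Java library (com.bazaarvoice.jolt). The class name and the inline JSON strings are just illustrative; Chainr.fromSpec, JsonUtils.jsonToList/jsonToMap and toJsonString are the standard Jolt entry points as far as I know.

```java
import com.bazaarvoice.jolt.Chainr;
import com.bazaarvoice.jolt.JsonUtils;

public class MergeArraysExample {
    public static void main(String[] args) {
        // The shift spec from the comment above, appending both arrays into "abc[]".
        String spec = "[{\"operation\":\"shift\",\"spec\":{\"abc\":{\"*\":\"abc[]\"},\"def\":{\"*\":\"abc[]\"}}}]";
        String input = "{\"abc\":[\"1\",\"2\",\"3\"],\"def\":[\"4\",\"5\",\"6\"]}";

        Chainr chainr = Chainr.fromSpec(JsonUtils.jsonToList(spec));
        Object output = chainr.transform(JsonUtils.jsonToMap(input));

        // Expected (per the thread): {"abc":["1","2","3","4","5","6"]}
        System.out.println(JsonUtils.toJsonString(output));
    }
}
```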
<issue_start><issue_comment>Title: Fix metadata functions getProcedures() and getFunctions() to ignore search_path username_0: * [X] Have you added an explanation of what your changes do and why you'd like us to include them? * [X] Have you written new tests for your core changes, as applicable? * [X] Have you successfully run tests with your changes locally? ---- Closes #2173. See also discussion on #1633. This PR changes those two metadata functions to ignore the search_path. Previously without a schema pattern they restricted the results to the active search path. The rest of the changes update the tests to reflect that behavior and subsequently clean up and improve them a bit. The last commit adds some more detail to the getProcedures() test to validate the result set columns similar to how were were already validation the result set of getFunctions(). <issue_comment>username_1: Using try/catch in the tests will not allow me to backpatch this ... :( <issue_comment>username_0: Force pushed to add the PR details to the change log as well. The main CI action only runs on the latest version of PG server. I have it running against the omni matrix with all the other server versions on my fork here: https://github.com/username_0/pgjdbc/actions/runs/915786302. Locally it worked fine on all the versions I checked and looks like it's 2/3 done clearing the full matrix. <issue_comment>username_0: Which branch are you going to back patch into? I thought we had already completely dropped everything less than JDK 8. <issue_comment>username_1: 42.2.21 is still what folks are using until we release 42.3.0 which 1) won't be stable and 2) doesn't seem to be imminent <issue_comment>username_0: Dang. I really thought we were done with those ancient JDKs. My fault for trying to improve the tests ;-) The fix itself applies clean but the test won't work on the old JDK and without a change won't pass either as they expect search path changes to be reflected. Speaking of which, how do we test this anyway? None of Actions or Travis CI runs those old JDKs and I don't have either 6 or 7 installed right now either. If you think it's worthwhile to backpatch this I can get it running and "ancient-ize" the tests so it applies uniformly. IIRC, with the gradle changes we'd have to build on 8+ targeting 1.6 as gradle itself won't run on there either.
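To illustrate the behavioural change from a caller's point of view, a small standalone JDBC sketch (the connection URL and credentials are placeholders): after this change, a null schema pattern returns procedures/functions from all schemas rather than only those on the current search_path, and a caller that wants a narrower result passes a schema pattern explicitly.

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class MetadataExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/test", "user", "secret")) {
            DatabaseMetaData md = conn.getMetaData();

            // Null schema pattern: with this PR, results are no longer limited to the search_path.
            try (ResultSet rs = md.getProcedures(null, null, "%")) {
                while (rs.next()) {
                    System.out.println(rs.getString("PROCEDURE_SCHEM") + "." + rs.getString("PROCEDURE_NAME"));
                }
            }

            // Explicit schema pattern: restrict the result set to a single schema.
            try (ResultSet rs = md.getFunctions(null, "public", "%")) {
                while (rs.next()) {
                    System.out.println(rs.getString("FUNCTION_SCHEM") + "." + rs.getString("FUNCTION_NAME"));
                }
            }
        }
    }
}
```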
<issue_start><issue_comment>Title: Include aliased tags in autocomplete username_0: Makes it so autocomplete searches aliases too. See https://danbooru.donmai.us/forum_topics/11547 for details. Related: #1831 What it looks like: ![autocomp example](https://cloud.githubusercontent.com/assets/3627189/6993528/22965598-dac5-11e4-973a-9f43f94999b4.png) <issue_comment>username_0: Note: the cached post_count in the tag_aliases table gets updated once per day with a daily maintenance job. This means the post counts (only for the aliased tags) can be up to 24 hours out of date. There are other ways to do it but I don't think them being 24 hours out of date is much of a problem. <issue_comment>username_1: Is there anything more to add to this? Forum topic mentions coloring antecedent name gray - I don't thing that's necessary, but perhaps wrapping it in `<span class='autocomplete-antecedent'>` might be a good idea to allow some customizing. <issue_comment>username_1: Huh. Was that feature omitted for the last deploy, 2.77, or it somehow doesn't work properly? <issue_comment>username_0: It wasn't omitted - if you go to the tag aliases api you can see the post count column filled with the default 0 value. A possibility I can think of is that the function to update all aliases on the site daily timed out. I forgot to add `without_timeout` since it was so fast on my server, but on the production site it might be slow since there are more aliases. I'll add it now. <issue_comment>username_2: What's probably going on is your dropdown is cached locally. If you open up a javascript console and enter this: $.localStorage.removeAll(); it'll probably reset the cache and you'll see the aliases. <issue_comment>username_1: Nope, clearing local storage doesn't help. Does it work for you? <issue_comment>username_0: Oh right, I forgot about that. It's not the only issue though: the post_count column is still unset at 0 in [the tag_aliases table](http://danbooru.donmai.us/tag_aliases.json). So they're not showing up in autocomplete since 0-post tags are excluded. <issue_comment>username_0: The daily job still didn't work after the update. @username_2 Any idea why it's not running? If you run `TagAlias.update_cached_post_counts_for_all` directly in the rails console is there an error? <issue_comment>username_1: Ping @username_2. Newly approved aliases are using this feature just fine, it's clearly the daily update job that's failing. <issue_comment>username_2: I just ran it manually. It seems to execute very fast since there's less than 10k aliases. <issue_comment>username_1: Huh. How come it fails as a part of daily jobs, then? Could it be that it's actually the `DailyMaintenance.run` procedure that fails at some earlier task and never gets to it? Is there any exception logging for daily jobs? I see that output is set to `/var/log/whenever.log`, does that catch error stream as well? <issue_comment>username_0: Tag subscriptions and the json tag cache and the stuff before those all seem to run fine, so by process of elimination it would be `ForumSubscription.process_all!` causing the error I guess? It seems the `forum_subscriptions` table doesn't specify `not null` for any of its fields so maybe there's a null in there somewhere causing an error. <issue_comment>username_2: So the alias autocomplete is working now? <issue_comment>username_1: Yep: http://puu.sh/hNiOv.png <issue_comment>username_0: It's working but not updating automatically so values will eventually become outdated. 
<issue_comment>username_2: Yeah looks like the forum subscription mailer was erroring out. Fixed in c31b869a01a49ada43c3c1f253d80177bb794ebe
<issue_start><issue_comment>Title: Deploy username_0: # Description <!-- Please include a summary of the change and which issue is fixed. --> <!-- A #ticketNumber will be sufficient, delete if not applicable Fixes #(issue) --> ## Type of change <!-- Please delete options that are not relevant. --> - [x] Bug fix (non-breaking change which fixes an issue) - [x] New content (non-breaking change which adds functionality) <!-- If this is your first time contributing and you want to get the shiny Documentation Contributor role on our Discord, please add your Discord username below -->
<issue_start><issue_comment>Title: GriefDefender cancels EntitySpawnEvent username_0: Hi,

a user of my AngelChest plugin reported that holograms do not spawn. I hacked a bit into the EventHandler and found out that GriefDefender cancels the EntitySpawnEvent:

```
[18:50:05 INFO]: [EventLogger] ============== EVENT CANCELLED ==============
[18:50:05 INFO]: [EventLogger] Event: CreatureSpawnEvent
[18:50:05 INFO]: [EventLogger] Class: org.bukkit.event.entity.CreatureSpawnEvent
[18:50:05 INFO]: [EventLogger] has been cancelled by
[18:50:05 INFO]: [EventLogger] Plugin: GriefDefender
[18:50:05 INFO]: [EventLogger] Class: com.griefdefender.GDBootstrap
[18:50:05 INFO]: [EventLogger] =============================================
```

This is how AngelChest spawns the armor stand: https://github.com/JEFF-Media-GbR/Spigot-AngelChest/blob/master/src/main/java/de/jeff_media/AngelChest/Hologram.java#L36

Is there any way to prevent GriefDefender from cancelling the EntitySpawnEvent?
<issue_comment>username_1: GD does not deny creature spawns by default. The user turned off spawning, which is why GD is cancelling the event. The user just needs to run gddebug and allow the id that is spawning.
<issue_comment>username_0: Thank you, I will forward that information and then reply again
<issue_comment>username_0: https://griefdefender.github.io/debug/?vpR8d69PAj

Quoting the user:

```
This was generated by gddebug, once I got a grip on how to use it better, and according to the GD guys, GD isn't preventing the spawn
```

As you can see in the output above, the event is 100% cancelled by GriefDefender. I am using this to detect which plugin has cancelled an event: https://gist.github.com/aadnk/5563794#file-cancellationdetector-java
<issue_comment>username_1: The user used a filter with their username so it isn't going to show everything. As I stated before, GD does not turn off `entity-spawn` by default. The user set the flag to false and is now having issues. They need to report it in GD's discord.
<issue_comment>username_0: Alright, thanks, I'm going to tell him
<issue_comment>username_0: BTW I cannot find a link to your discord anywhere<issue_closed>
<issue_comment>username_0: Okay, removing either LevelledMobs or GriefDefender solved this. Must be something happening between those two plugins. We got it solved by adding ARMOR_STAND to the blacklisted items of LevelledMobs.
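As a side note for other plugin authors running into this, a small hypothetical Bukkit listener (class and message strings are made up) can confirm whether a creature spawn arrives already cancelled and log its spawn reason; it will not tell you which plugin cancelled it - that is what the CancellationDetector gist linked above is for:

```java
import org.bukkit.event.EventHandler;
import org.bukkit.event.EventPriority;
import org.bukkit.event.Listener;
import org.bukkit.event.entity.CreatureSpawnEvent;
import org.bukkit.plugin.java.JavaPlugin;

public final class SpawnCancelLogger extends JavaPlugin implements Listener {

    @Override
    public void onEnable() {
        getServer().getPluginManager().registerEvents(this, this);
    }

    // MONITOR runs after other plugins; ignoreCancelled = false means we still
    // see events that have already been cancelled by someone else.
    @EventHandler(priority = EventPriority.MONITOR, ignoreCancelled = false)
    public void onCreatureSpawn(CreatureSpawnEvent event) {
        if (event.isCancelled()) {
            getLogger().info("Cancelled spawn: " + event.getEntityType()
                    + " (reason: " + event.getSpawnReason() + ") at " + event.getLocation());
        }
    }
}
```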
<issue_start><issue_comment>Title: Empty view won't show when adapter item count is zero. username_0: I set the attribute:

swipe:recyclerviewEmptyView="@layout/tour_main_overview"

and the item count is zero, but the empty layout flashes for a second and disappears.
<issue_comment>username_1: Have you tried the demo and set a zero-count adapter on the recyclerview? I tried and it works fine.
<issue_comment>username_0: Can you please give me the link to that class? I can't find it...
<issue_comment>username_1: The MainActivity in the demo. You can remove ``ultimateRecyclerView.enableLoadmore();`` and make stringList an empty list in that class.
<issue_comment>username_0: ...But I want the enableLoadmore() method somewhere in my code...
<issue_comment>username_1: You can enable load more. But in the demo, the load-more feature will load data into the adapter, so the adapter is not empty any more.
<issue_comment>username_0: I'm confused! Please add a snippet that shows how to do this in a way that works. Thank you.
<issue_comment>username_1: Do you insert some items into the adapter when loading more? If you do, the adapter is not empty after loading more.
<issue_comment>username_0: I'm inserting items after loading more, but the problem is that before this, I can't see the empty view.<issue_closed>
<issue_comment>username_2: #276 - this should be done for the load-more issue. @username_0
<issue_start><issue_comment>Title: Request / Reply pattern? username_0: Is it possible to implement a req/res pattern where I put something on the queue, then wait for that job to complete to get the result?

Thx!
<issue_comment>username_1: If you check the section in the README called "Returning job completions", it may be what you are looking for.
<issue_comment>username_0: So, bull's way would be to create two queues, one for requests and one for responses? That doesn't sound very performant to me, do you think that redis + bull is the right solution for this kind of pattern?
<issue_comment>username_1: not sure how it can be made more performant, consider that it becomes a very stable system where the requester and receiver do not need to be online at the same time...
<issue_comment>username_0: my system was using bull for queue management and that was fine, but now I really need req/rep patterns and I don't think that redis is the best solution for this, I'm inclined to use RabbitMQ. Do you have any thoughts on this scenario?
<issue_comment>username_1: I have no experience with RabbitMQ, but you should also take a look at ZeroMQ if performance is important for you: http://zeromq.org/<issue_closed>
<issue_comment>username_0: I will take a look, thanks for your time.
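For reference, a rough sketch of a request/reply flow on a single Bull queue using `job.finished()`; the queue name, Redis URL and payload are placeholders, and it assumes a reasonably recent Bull 3.x where `Job#finished()` resolves with the processor's return value:

```javascript
const Queue = require('bull');

// Placeholder Redis connection; adjust for your setup.
const requests = new Queue('requests', 'redis://127.0.0.1:6379');

// Worker side: whatever the processor returns becomes the job's result.
requests.process(async (job) => {
  return { echoed: job.data.payload, handledAt: Date.now() };
});

// Requester side: enqueue, then wait for completion to get the result back.
async function requestReply(payload) {
  const job = await requests.add({ payload });
  return job.finished(); // resolves with the processor's return value
}

requestReply('ping').then((reply) => console.log(reply));
```

Bull also emits `completed` / `global:completed` events, which is closer to the two-queue style mentioned in the thread if you don't want to hold a promise open per request.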
<issue_start><issue_comment>Title: BandedMatrix with upper/subdiagonals far away from the main diagonal username_0: May I please ask a question :) (Sorry if this is not the right place to do so.) Is it possible to define a sparse circulant matrix in Julia? (For instance, a matrix stemming from the spatial discretization of a one-dimensional heat equation with periodic boundary condition.) If so, how? Thank you very much, in advance! <issue_comment>username_1: My suggestion is to modify `Circulant` in https://github.com/JuliaMatrices/ToeplitzMatrices.jl to support a storage format of `SparseVector`.
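For completeness, one plain-SparseArrays way to build such a matrix, independent of BandedMatrices.jl or the ToeplitzMatrices.jl suggestion above, is to assemble the periodic second-difference operator directly; `n`, the stencil values and the wrap-around entries below are just an illustrative sketch:

```julia
using SparseArrays

# 1-D heat equation Laplacian with periodic boundary conditions:
# tridiagonal [1 -2 1] stencil plus the two wrap-around corner entries.
# Scale by 1/Δx^2 as needed for your grid.
function periodic_laplacian(n)
    A = spdiagm(0 => fill(-2.0, n),
                1 => fill(1.0, n - 1),
               -1 => fill(1.0, n - 1))
    A[1, n] = 1.0   # wrap-around coupling
    A[n, 1] = 1.0
    return A
end

A = periodic_laplacian(8)
```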
<issue_start><issue_comment>Title: Include required libraries in pkg-config output (SDL2) username_0: <issue_comment>username_1: Can this be merged? <issue_comment>username_2: Yep! <issue_comment>username_1: @username_0 can you please rebase it? <issue_comment>username_0: OK, done. <issue_comment>username_1: @username_3 Is a new release planned anytime soon? <issue_comment>username_3: Don't know - @username_2 and/or @icculus decide that. <issue_comment>username_1: Thank you! :) <issue_comment>username_1: Ahh there was a release, but it doesn't include this commit! :/ So when is the *next* release planned @username_2 / @icculus?
<issue_start><issue_comment>Title: fix creating processEngine with instance of ProcessEngineConfigurationIm... username_0: I get an error trying to create the engine bean using an instance of `org.activiti.engine.impl.cfg.StandaloneProcessEngineConfiguration`, which is a descendant of `org.activiti.engine.impl.cfg.ProcessEngineConfigurationImpl` but, unlike `org.activiti.spring.SpringProcessEngineConfiguration`, has no transaction manager.

The error occurs in `org.activiti.spring.ProcessEngineFactoryBean` when it tries to cast the configuration bean to `org.activiti.spring.SpringProcessEngineConfiguration`:

```
Caused by: java.lang.ClassCastException: org.activiti.engine.impl.cfg.StandaloneProcessEngineConfiguration cannot be cast to org.activiti.spring.SpringProcessEngineConfiguration
	at org.activiti.spring.ProcessEngineFactoryBean.configureExternallyManagedTransactions(ProcessEngineFactoryBean.java:72)
```

The same code works on previous versions of Activiti because of the appropriate type checking:

```java
public class ProcessEngineFactoryBean implements FactoryBean<ProcessEngine>, DisposableBean, ApplicationContextAware {

  // ...

  protected void initializeTransactionExternallyManaged() {
    if (processEngineConfiguration instanceof SpringProcessEngineConfiguration) { // remark: any config can be injected, so we cannot have SpringConfiguration as member
      SpringProcessEngineConfiguration engineConfiguration = (SpringProcessEngineConfiguration) processEngineConfiguration;
      if (engineConfiguration.getTransactionManager() != null) {
        processEngineConfiguration.setTransactionsExternallyManaged(true);
      }
    }
  }

  // ...
}
```

I hope that check was not deleted intentionally but was removed accidentally during refactoring [here](https://github.com/Activiti/Activiti/commit/93e45b8d3637c0f858a6dd751c34c966f59a0b56#diff-535b61785ec17bf42ecf9fc8864ad66fL68), when `initializeTransactionExternallyManaged()` was renamed to `configureExternallyManagedTransactions()`.

So, I want it to be there, to be able to use any `ProcessEngineConfigurationImpl` (`StandaloneProcessEngineConfiguration` in my case).
<issue_comment>username_1: Related: http://forums.activiti.org/content/javalangclasscastexception-standaloneinmemprocessengineconfiguration-cannot-be-cast
<issue_comment>username_1: Valid point. That shouldn't have gone through. Thanks for fixing it!
<issue_comment>username_2: Does Activiti release minor versions, or do we need to (officially) wait for 5.18.0 for this fix? (By the way, I know I can get the sources from here and build them myself, but that's not an option for our current build process.)
<issue_start><issue_comment>Title: Data Transporter ignoring State/Status Codes during transfer username_0: When migrating data from 1 environment to another and checking the box to map the Statuscode field, all records in the target environment become Active (Statecode=0/Statuscode=1) regardless of status settings in the Source environment. Wondering if this is related to the fact that it looks like MS recently moved the actual state change function to an async function after setting the new state/status codes. In the tool, it appears only Statuscode is mapped but not the Statecode. Since the actual state change is asynchronous, no error is detected by the tool? Just guessing though.
<issue_start><issue_comment>Title: Use m.thread over m.in_reply_to username_0: Following up on matrix-org/matrix-doc#3440

We want to move away from using `m.in_reply_to` to build threads, and instead use a new relation type `m.thread`.

- [x] `io.element.thread` should be used in place of `m.thread` as the relation type before the MSC has shipped
- [x] Make sure that `m.thread` can be combined with `m.in_reply_to` to allow quote replies in threads
- [x] Remove all support for `io.element.in_thread` and `m.in_thread`<issue_closed>
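For illustration, an event combining the two relations might look roughly like the following; the field names follow the shape MSC3440 eventually settled on, the event IDs are placeholders, and `is_falling_back: false` marks a genuine quote reply rather than a fallback-only reply:

```json
{
  "type": "m.room.message",
  "content": {
    "msgtype": "m.text",
    "body": "Reply inside a thread",
    "m.relates_to": {
      "rel_type": "m.thread",
      "event_id": "$thread_root_event_id",
      "is_falling_back": false,
      "m.in_reply_to": {
        "event_id": "$event_being_quoted"
      }
    }
  }
}
```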
<issue_start><issue_comment>Title: Fix getCurrent to work with folders and files with '.' in their names. username_0: I ran into trouble when trying to use a folder with '.' in its name. After searching, I found that the problem is in the getCurrent() function in helpers. The simple implementation does not allow dots in file or folder names, except for the file extension. This patch solves the problem.
<issue_comment>username_1: Thanks @username_0, this worked beautifully. This is my file structure:

```
Readmore.js/
  _data.json
  _layout.jade
  index.jade
```

Without this patch, Harp either won't read, or won't expose, values in `_data.json`.
<issue_start><issue_comment>Title: Relation annotations username_0: Migrated from gitlab, originally by @bartv on Jun 27, 2016, 10:17 New relation syntax: ```<Type>.<attribute> (<arrity>)? (--|<variable>(,<variable>)?) <Type>.<attribute> (<arrity>)?``` Example usage: ``` entity RelationType: string name end implement RelationType using std::none connects_to = std::RelationType(name="connected_to") Client.server [1] connects_to Server.clients [0:] ``` <issue_comment>username_0: Migrated from gitlab, originally by @username_1 on Aug 24, 2016, 17:20 mentioned in commit 75e65782f7ac75b7fae1a37770e67cea9a845582 <issue_comment>username_1: Looking forward (to issue #108) We now have annotations, which are can be any value. We have no relation types in the strict sense (annotations are not types, they put no constraint on source and target type and they are not instantiated for each relation.) This has rather far reaching consequences for #108 (and tosca compliance, if when we get to that) I think that full relation types have one severe disadvantage: - relations with full implementations make the model less readable. There is no clear guideline whether to implement stuff in relations or in entities. Adding relation implementations makes the code less predictable, thus less readable. And some advantages - Tosca compliance may require it - Constraining source and target type may allow for more type checking in #108, making state machines more readable, but, it may also add undue typing complications. Perhaps clear documentation gets us farther. I think it would be wise to constrain the annotation to be instances of a specific type (std::RelationType for example), but beyond that, annotations instead of types might be a good idea. (perhaps we should think about an inheritance like mechanism, where each annotation can imply several other annotations)<issue_closed>
<issue_start><issue_comment>Title: Support for asciidoc content in tables (for definition properties) username_0: I am using GFM in property descriptions. Some of the syntax is working, e.g. italics, bold, monospace. However, some isn't working, e.g. headers and lists.

I don't have much experience with asciidoc, but looking at the [quick reference](http://asciidoctor.org/docs/asciidoc-syntax-quick-reference/): _Table with column containing AsciiDoc content_, there is an extra `a` beside the column width. I am guessing this is missing from the generated adoc files for description columns.

I hope this information is enough, but if not please let me know and I will try my best.
<issue_comment>username_1: You can add the ```a``` flag in the ```withMarkupSpecifiers(AsciiDoc, ...)``` parameter (for the Description column only) in the various tables that need it. Please note that your documentation, if it contains advanced markup for property descriptions, will then render correctly only using AsciiDoc output. Markdown and Confluence Wiki rendering of tables containing advanced formatting is not supported by these languages. Moreover, long property descriptions don't always fit well in tables, so our advice is to keep your property descriptions short when possible.
<issue_comment>username_2: Can I close the issue?
<issue_comment>username_0: I was using the CLI and had to edit the generated adoc file. So such a change would still be useful if there is no harm.
<issue_comment>username_2: Ok. You are right. The AsciiDoc cell specifier is missing.

Btw, if you use and like Swagger2markup, we would love to get your GitHub star.
<issue_comment>username_3: I'm also waiting for this enhancement. When will it be applied?
<issue_comment>username_2: I won't have time to implement this. Please provide a PR if you need this feature urgently.
<issue_comment>username_4: @username_2 I would like to fix this low-hanging fruit during Hacktoberfest. I would add the `a` specifier in the `TableComponent` such that every table may contain `.adoc` content. Is that okay for you?
<issue_comment>username_2: Yes, sure. That would be awesome. I guess this must be fixed in the markup-document-builder<issue_closed>
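To make the `a` specifier concrete, here is a small hand-written AsciiDoc table of the kind the generator would need to emit (the column titles and cell content are made up); the `a` in the `cols` attribute is what lets the Description cell contain block-level AsciiDoc such as a list:

```asciidoc
[cols="1,3a", options="header"]
|===
| Property | Description

| color
| One of:

* red
* green
* blue
|===
```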
<issue_start><issue_comment>Title: Composer cleanup username_0: dev minimum stability with the prefer-stable option.

knp/paginator-bundle `^2.4.2` to prevent the following error:

```
  [Symfony\Component\DependencyInjection\Exception\ServiceNotFoundException]
  The service "knp_paginator.helper.processor" has a dependency on a non-existent service "templating.helper.router".
```
<issue_comment>username_1: I will revert this commit, as some of Sonata's bundles do not get the latest commit.
<issue_comment>username_0: @username_1 Which one? We could fix the composer.json instead.
<issue_comment>username_1: @username_0 you can see it in the build https://codeship.com/projects/84753/builds/6211305
<issue_comment>username_0: Are you talking about the deprecation fix? Well, for example, if a fixed package has `2.2.2` as its last stable version, just add `^2.2.3` as the requirement. This will get the dev-master version until a new stable release is pushed.
<issue_comment>username_0: Will take a look right now.
<issue_comment>username_1: The old configuration makes sure to get all dev versions ;)
<issue_comment>username_1: and stable versions for the other bundles
<issue_comment>username_0: I also did that because not all Sonata packages are on a dev version. I could work on correcting composer.json to get the tests working with the latest versions, or just revert it. Your choice. :+1:
<issue_comment>username_0: Well, only `sonata-project/seo-bundle` apparently. :-P

It's your choice, just let me know whether I should work on it or not.
<issue_comment>username_1: I think reverting will help to release Sonata's bundles one by one
<issue_comment>username_0: Well, I won't touch composer.json on sonata-sandbox then, and let you play with it. ;-)

Another thing, more or less related: it could be great to define new versioning rules with branch management like Symfony does, and to complete the contributing docs before releasing a new minor version of this bundle, couldn't it?

Where is the best place to talk about it? I was thinking about starting a PR/issue on sonata-admin to start the discussion.
<issue_start><issue_comment>Title: Add seo_description to twitter card. username_0: See https://dev.twitter.com/cards/types/summary for reason for change. `twitter:description -> required=true` <issue_comment>username_0: I'm currently learning about how the test code works, but until then, no testing code has been added. <issue_comment>username_1: Twitter will [use `og:description` if present](https://dev.twitter.com/cards/markup), so this is unnecessary. Thank you for the PR though :+1:
<issue_start><issue_comment>Title: resolve csv with only header username_0: <issue_comment>username_1: Hi @username_0 - thanks for the PR! Can you explain and possibly provide an example to illustrate the problem that this PR addresses? Thanks! <issue_comment>username_0: Hi @username_1 - thank you for following up. An error occurred when the input csv file is empty. For example, I used the example [here](https://github.com/tea-lang-org/tea-lang/tree/master/examples/AR_TV) with an empty csv file (only has headers `,ID,Condition,Score`) Error message: ``` Traceback (most recent call last): File "run.py", line 39, in <module> results = tea.hypothesize(['Score', 'Condition'], ['Condition:AR > TV']) File "/Users/rockpang/PycharmProjects/tea/venv/lib/python3.8/site-packages/tea/api.py", line 173, in hypothesize result = vardata_factory.create_vardata(dataset_obj, relationship, assumptions, study_design) File "/Users/rockpang/PycharmProjects/tea/venv/lib/python3.8/site-packages/tea/vardata_factory.py", line 56, in create_vardata return self.__create_relate_vardata(dataset, expr, assumptions, design) File "/Users/rockpang/PycharmProjects/tea/venv/lib/python3.8/site-packages/tea/vardata_factory.py", line 430, in __create_relate_vardata test_result = execute_test(dataset, design, expr.predictions, combined_data, test) File "/Users/rockpang/PycharmProjects/tea/venv/lib/python3.8/site-packages/tea/helpers/evaluateHelperMethods.py", line 1540, in execute_test stat_result = test_func(dataset, predictions, combined_data) File "/Users/rockpang/PycharmProjects/tea/venv/lib/python3.8/site-packages/tea/helpers/evaluateHelperMethods.py", line 595, in mannwhitney_u t_stat, p_val = mann_whitney_exact(data[0], data[1], alternative="greater") File "/Users/rockpang/PycharmProjects/tea/venv/lib/python3.8/site-packages/tea/helpers/evaluateHelperMethods.py", line 548, in mann_whitney_exact assert(cum_prob[len(cum_prob) - 1] <= 1.0 + epsilon) AssertionError ``` This PR only returns the proper statistical tests by tea without executing it. <issue_comment>username_2: In this case, Tea should only output the set of valid tests but not attempt to execute them. @username_0 Can you please add an example to your PR (e.g., under *examples/prereg*)? Feel free to copy the Tea script and (empty) data file from an existing example. Additionally, adding a brief description about the issue and how to reproduce it (using the provided example) would be helpful. Thanks, René <issue_comment>username_0: Hi @username_2 and @username_1, thanks for following up, and apologize for the lack of clarity. Here is the code to reproduce the issue. Tea tries to execute valid tests when input dataset is empty. 
``` import tea # https://github.com/tea-lang-org/tea-lang/blob/master/examples/AR_TV/ar_tv_long.csv # To reproduce the issue, remove the data and only keep the header `,ID,Condition,Score` data_path = "./ar_tv_long.csv" variables = [ { 'name': 'ID', 'data type': 'ratio' }, { 'name': 'Condition', 'data type': 'nominal', 'categories': ['AR', 'TV'] }, { 'name': 'Score', 'data type': 'ordinal', 'categories': [1,2,3,4,5] } ] experimental_design = { 'study type': 'experiment', 'independent variables': 'Condition', 'dependent variables': 'Score' } assumptions = { 'Type I (False Positive) Error Rate': 0.01969 } tea.data(data_path, key='ID') tea.define_variables(variables) tea.define_study_design(experimental_design) tea.assume(assumptions) results = tea.hypothesize(['Score', 'Condition'], ['Condition:AR > TV']) ``` <issue_comment>username_1: Was able to reproduce (from repo version, not latest on pip - time for a new release, @emjun?). Thanks!
<issue_start><issue_comment>Title: Atom-typescript fails to load tsconfig with glob if invalid symlinks are in source tree username_0: I had an npm module added to my project with `npm link`, which created a junction (Windows) for the module in node_modules. However, after that package no longer existed in my global package cache, atom-typescript failed to load the tsconfig in my project. I guessed it was silently crashing somewhere while resolving the glob expression; however, even with an empty filesGlob it was still not working, which is slightly confusing (I didn't try it without a filesGlob config entry at all, though).
<issue_start><issue_comment>Title: utilities.widths responsive 'mobile' doesn't work as expected username_0: I was digging through _utilities.widths, and noticed that the classes are created using these for loops: ``` @if (variable-exists(mq-breakpoints)) { @each $inuit-bp-name, $inuit-bp-value in $mq-breakpoints { @include mq($from: $inuit-bp-name) { @include inuit-widths($inuit-fractions, \@#{$inuit-bp-name}); } } } ``` When I try using the class `u-1/3@mobile`, I would expect it to create `max-width` media query. But in this loop we are only using `mq($from...)`. I fixed this by creating an if/else within the @each loop: ``` @if (variable-exists(mq-breakpoints)) { @each $inuit-bp-name, $inuit-bp-value in $mq-breakpoints { $isMobile: ($inuit-bp-name == mobile); @if $isMobile { @include mq($until: $inuit-bp-name) { @include inuit-widths($inuit-fractions, \@#{$inuit-bp-name}); } } @else { @include mq($from: $inuit-bp-name) { @include inuit-widths($inuit-fractions, \@#{$inuit-bp-name}); } } } } ``` This isn't a final solution, but I was hoping you guys could incorporate this into the framework. <issue_comment>username_1: I prefer the mobile first approach myself. And I don't think we should mix the media queries in the way that you suggest. I agree that if you name your breakpoints as `mobile`, `tablet`, `desktop` etc then it makes it more confusing. Alternatively you could leave out the `mobile` breakpoint and assume that `u-1/2` will be in effect on mobile and `u-1/4@tablet` on tablets and so on. <issue_comment>username_2: I should mention that the current implementation of responsive widths is just a default solution which is likely to be dropped in favor of a custom solution by any developer project-wise. Hence this condition: ```scss @if (variable-exists(mq-breakpoints)) { ... } ``` That is, if you want to use another tool as Sass-MQ, you can do so. So, being a default solution, we want it [as simple as possible](https://github.com/inuitcss/inuitcss/blob/develop/CONTRIBUTING.md#simplicity-is-paramount) even more. We don't want to overcomplicate the things, so we take a mobile first approach by default and keep it simple by not providing extra Sass functionality to be able to choose between mobile first and desktop first. IMO, inuitcss shouldn't introduce Sass complexity here. What does @inuitcss/core think? <issue_comment>username_3: @username_2 fully agree. <issue_comment>username_4: and make a settings file in your project named something like `_settings.sass-mq.scss`. There you can have following breakpoints (just an example): ```sass $mq-breakpoints: ( md: 320px, lg: 769px, xlg: 980px, xxlg: 1300px ); ``` (md = medium, lg = large, xl = extralarge, xxl = extraextralarge). Then in your html you'd write classes in mobile first approach: ```html <div class="layout__item u-1/2 u-3/4@md u-2/5@lg u-1/3@xlg"> ``` This is how I'd personally do it, which helps me not having to write any Sass logic/functionality in my project. Also, I can use this file in my starter kit so that I have the same workflow across different projects. <issue_comment>username_0: Ah gotcha. This makes a lot of sense. I was accustomed to using the old [inuit-widths](https://github.com/inuitcss/trumps.widths-responsive/blob/master/_trumps.widths-responsive.scss), and thought it was a bug. This approach really simplifies it. Thanks again!<issue_closed> <issue_comment>username_4: @username_0 No problems, and glad we could help ;)
<issue_start><issue_comment>Title: "HOVER" TouchEvent should not be fired on deviced not supporting cursor username_0: I had a problem where buttons stayed on there "over" state due to the reception of an "HOVER" TouchEvent just after the release of said buttons. After a few exchange with Jeff, he came to the conclusion that "Hover" TouchEvent probably shouldn't be fired if Mouse.supportsCursor is false. Here is the reference on the Starling forums : http://forum.starling-framework.org/topic/touchevent-hover-event-fired-after-ended-event<issue_closed> <issue_comment>username_1: You're right, that artificial `HOVER` makes only sense on devices that support a mouse cursor. Thanks a lot for the report! :smile:
<issue_start><issue_comment>Title: Run imagemin-cli on _site/images username_0: <issue_comment>username_1: you may want to parse the Jekyll config and know where the jekyll build destination is. That way this package would work regardless of Jekyll configuration settings. <issue_comment>username_0: This package is opinionated to our structure. Our destination folder is always the Jekyll default of `/_site`. We could look at adding configurability (and, may need to for introducing features that some Jekyll sites may not be *ready* for), but I don't think we need it to merge this.
<issue_start><issue_comment>Title: Changelogs username_0: ## What's done * `CHANGELOG.md` file is automatically generated from GitHub data (not just Git alone) * Docs updated in `CONTRIBUTING.md` so that maintainers can generate the changelog themselves too
<issue_start><issue_comment>Title: Drag and drop from Explorer onto VS Code multiple times resulted in "associated text model is undefined" error. username_0: - VSCode Version: 1.2.1 - OS Version: 10586.318 Steps to Reproduce: 1. Open VS Code (only instance with `--disable-extensions`) 2. Drag a file in to VS Code to open 3. Close file 4. Remove from file list (multiple times) 5. Repeat 2-4 several times. 6. Encounter the following error: ![image](https://cloud.githubusercontent.com/assets/494055/16129352/57075c7a-33b9-11e6-9d3e-b54c0eabae4b.png) 7. From then on, unable to open that file despite it being a valid file. Restarting VS Code fixes the problem. <issue_comment>username_1: @username_0 does it reproduce using our insiders build? We are releasing preview releases of the next stable VS Code version for everyone to try and give feedback. These preview releases are not 100% tested and might be unstable but contain our latest features and bugfixes. You can give our preview releases a try from: http://code.visualstudio.com/Download#insiders <issue_comment>username_0: @username_1 Thanks for pointing me to the preview release. I'll run that for a while and see if I happen to observe this again.<issue_closed> <issue_comment>username_1: Ok please comment if it reproduces. We can reopen then. <issue_comment>username_0: Yes, things seem to work better now. I'll let you know if I see any further problems.
<issue_start><issue_comment>Title: update from the cli takes forever username_0: When running the update command from the CLI, the command does not finish and seems to work forever. Asking for info shows this: ![image](https://user-images.githubusercontent.com/1598254/93983900-7002db80-fd83-11ea-9ea7-84efc3d5e9d5.png) It seems that the update function cannot detect any cate versions newer than 2.0.0.
<issue_start><issue_comment>Title: <Domain Class>.get no longer treats 'null' as null keyword. Tries to resolve as ObjectId. username_0: Environment: Grails 3.1.4, MongoDb 5.0.5

Given the following gsp code:

`<g:select optionKey="id" optionValue="name" name="rating" from="${ratings}" value="${client?.rating?.id}" noSelection="['null':'-Unrated-']" />`

This code in the controller will fail:

`skill.rating = Rating.get(params.rating)`

with:

`Message: invalid hexadecimal representation of an ObjectId: [null]`

Workaround is:

`<g:select optionKey="id" optionValue="name" name="rating" from="${ratings}" value="${client?.rating?.id}" noSelection="['':'-Unrated-']" />`

and

`skill.rating = params.rating ? Rating.get(params.rating) : null`<issue_closed>
<issue_start><issue_comment>Title: Can't get Example Code to compile username_0: I'm getting started on a FastLED/OctoWS2811 LED project. I opened up the Example sketch in the FastLED folder called OctoWS2811Demo. I thought that'd be a great place to get going. The example code, however, does not compile. I did a fresh install of Arduino 1.6.12 and Teensyduino, but the problem remains. The error looking for OctoWS2813 seems like there's a typo in the library or something? Anyway, here's the error msg: In file included from /Applications/Arduino.app/Contents/Java/hardware/teensy/avr/libraries/FastLED/examples/Multiple/OctoWS2811Demo/OctoWS2811Demo.ino:3:0: /Applications/Arduino.app/Contents/Java/hardware/teensy/avr/libraries/FastLED/FastLED.h: In static member function 'static CLEDController& CFastLED::addLeds(CRGB*, int, int)': /Applications/Arduino.app/Contents/Java/hardware/teensy/avr/libraries/FastLED/FastLED.h:359:12: error: 'OCTOWS2813' was not declared in this scope case OCTOWS2813: { static COctoWS2811Controller<RGB_ORDER,WS2813_800kHz> controller; return addLeds(&controller, data, nLedsOrOffset, nLedsIfOffset); } ^ Error compiling for board Teensy 3.2 / 3.1. <issue_comment>username_1: This has already been fixed - pull the most recent version of the library. (See #346 for the pull request where I took the fix)<issue_closed> <issue_comment>username_0: Thanks, that did it! I mistakenly thought that a fresh install of Teensyduino would've started me with the most up to date libraries. Still learning all the ins and outs. Thanks for the fast reply.
<issue_start><issue_comment>Title: Remove relative time from JS logs username_0: Stop printing relative times (like "[ 0.007s]") in the JavaScript logs. These times aren't really necessary now that we're printing absolute times since #108. The only remaining benefit of relative times was that they allowed to distinguish logs from multiple instances in case they work in parallel and their logs are squashed together, but in practice this was only relevant in a specific scenario (reading Chrome OS system logs from a device that is locked and has the NaCl-based smart card applications installed both in-session and on the Login Screen). Assuming this a rare scenario that will anyway disappear with the NaCl deprecation, it seems fine to reduce the log clutter by removing the relative times. This change contributes to the logging improvements tracked by #146. <issue_comment>username_0: I actually remembered there was one reason why I kept these relative times originally. PTAL again - I've updated the commit description; I still think it's beneficial to get rid of these times.
<issue_start><issue_comment>Title: Functorize over the regex backend username_0: This would be nice. In particular, it may allow to implement the whole thing on top of native javascript regex (which are probably faster when used in javascript). <issue_comment>username_0: After https://github.com/username_0/tyre/pull/18, we are now in a much better position to do this, since we only use "vanilla" regular expression operators. <issue_comment>username_0: So, I've made two early prototype to investigate this a bit: - https://github.com/username_0/tyre/compare/functorized Uses a functor to abstract over the backend. The issue with that version is typed regex written with one backend will be incompatible with any other backend. The upside is that the API is unchanged once the functor is applied. - https://github.com/username_0/tyre/compare/poly Add a type parameter to the `Tyre.t` type depending on the backend. This means there might be a chance to write backend-agnostic regexs, although that would require more work and I'm not sure the final result will be so nice. The downside is that we would require new functions for the compilation aspects, and the API will require some changes. It also seems a bit cumbersome. <issue_comment>username_0: If anyone is interested and find one version or the other better, I'm open to suggestions.
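Without having looked at either branch in detail, an `.mli`-style sketch of what the functorized option might look like from the outside - every name below is purely illustrative and not the actual tyre API:

```ocaml
(* A backend exposes just the "vanilla" regex operations tyre needs. *)
module type BACKEND = sig
  type re
  type compiled
  val char    : char -> re
  val seq     : re list -> re
  val alt     : re -> re -> re
  val group   : re -> re
  val compile : re -> compiled
  val exec    : compiled -> string -> string array option
end

(* Typed combinators, parameterised by the backend. *)
module Make (B : BACKEND) : sig
  type 'a t
  val int  : int t
  val pair : 'a t -> 'b t -> ('a * 'b) t
  val exec : 'a t -> string -> 'a option
end
```

One consequence mentioned in the thread falls straight out of this shape: `'a Make(Re_backend).t` and `'a Make(Js_backend).t` are distinct types, so typed regexes written against one backend are not usable with another.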
<issue_start><issue_comment>Title: TypeError: Cannot read properties of undefined (reading 'create') username_0: I am implementing the sdk on react native project with react native version 0.64.2, and NOT using expo. This is the initialization of the sdk, but can not read create method: import { Adjust, AdjustConfig } from 'react-native-adjust' `constructor(props) { super(props); const adjustConfig = new AdjustConfig("{YourAppToken}", AdjustConfig.EnvironmentSandbox); Adjust.create(adjustConfig); } componentWillUnmount() { Adjust.componentWillUnmount(); }` This is the error shown: <img width="567" alt="Screen Shot 2022-02-16 at 5 04 53 PM" src="https://user-images.githubusercontent.com/19738414/154559675-803e1e38-236a-4f0a-a076-65eac6366057.png"> or if I try to get sdk version by: `Adjust.getSdkVersion(function(sdkVersion) { console.log("Adjust SDK version: " + sdkVersion) })` <img width="588" alt="Screen Shot 2022-02-17 at 10 57 50 AM" src="https://user-images.githubusercontent.com/19738414/154559696-3fc07223-5408-407c-9b3b-d3b4cb0b9165.png"> <issue_comment>username_1: Hi @username_0, Which platform are you running your app on (iOS / Android)? <issue_comment>username_1: Hi @username_0, Any update on this one? <issue_comment>username_0: @username_1 it is on android <issue_comment>username_0: ![image](https://user-images.githubusercontent.com/19738414/155173476-7f3a2b9a-124e-42c0-a2de-1fb71b480143.png) <issue_comment>username_0: to manually link the android side, here is what I did: - added to `settings.gradle`: `include ':react-native-adjust' project(':react-native-adjust').projectDir = new File("$rootDir/../node_modules/react-native-adjust/android")` - added to `build.gradle`: `implementation project(':react-native-adjust')` but still get `module_adjust` as undefined ... <img width="963" alt="Screen Shot 2022-02-22 at 8 30 15 AM" src="https://user-images.githubusercontent.com/19738414/155175849-130aa007-6e9f-43a2-ac1f-c9398799e56b.png"> <issue_comment>username_2: Hi @username_0, Did you also include the package in your MainApplication.java file as described in [this post](https://medium.com/@bala.krishnan/react-native-auto-linking-on-android-65a850bb9ed9)? <issue_comment>username_0: Looking in to it @username_2 <issue_comment>username_1: Thank you @username_0 for an update. I'm going to close this ticket, but in case you still have any further questions, feel free to comment / reopen. Cheers!<issue_closed>
<issue_start><issue_comment>Title: Update dependencies and peer dependencies username_0: #### Describe the desired behavior Check the current/latest version of all dependencies and peer dependencies. Update the package.json to include the latest. #### Describe the current behavior Using old versions of dependencies. #### Is this request related to a current issue? The newest version of NPM fails on npm install when there are peer dependency issues. #### Additional Context After updating the versions of dependencies, make sure that things still work as expected.
<issue_start><issue_comment>Title: Ajax Form / Form Handlers username_0: This was removed in v3 as we no longer support the jQuery plugin version. We have a version in the creative-dot website project that we can probably add back into this repo. <issue_comment>username_0: We need to create a version that is Pure JS and uses fetch. <issue_comment>username_0: Regardless add back in the PHP Class.
<issue_start><issue_comment>Title: sandbox.useFakeServer does not fake XMLHttpRequest in IE9 username_0: Sinon version: 1.14.1
Environment: Windows 7, IE9
Example URL: http://jsbin.com/gaxine

**What happens?**
The example alerts with `false` in IE9 because the XHR request does not return the expected fake response. When using the `sandbox` (together with sinon-qunit, but also standalone) to create a fake server by calling `sinon.sandbox.useFakeServer()`, sinon fakes only `XDomainRequest` but not `XMLHttpRequest`, because `sinon.xhr.supportsCORS` is set to `false` (see https://github.com/cjohansen/Sinon.JS/issues/584).

**What is expected to happen?**
The example should alert with `true`, like it does in other browsers such as IE10/11, Firefox, Chrome and Safari. I would expect XMLHttpRequest to be mocked as well, since it is a valid scenario in IE9 even though it does not support CORS.

My suggested solution would be to remove the else clause [here](https://github.com/cjohansen/Sinon.JS/commit/e8de34b5ec92b622ef76267a6dce12674fee6a73#diff-2e4c83d83d3d96b1f41ec5b83a73881eR87) to always fake `XMLHttpRequest`, regardless of the value of `sinon.xhr.supportsCORS`.
<issue_comment>username_0: This is still not working with sinon 1.17.2. Updated example: http://jsbin.com/nojojohaxo
<issue_comment>username_1: @username_0 thank you for updating with more information! I like the idea of your suggestion, but I want to make sure I understand why there was a conditional to begin with. Perhaps there are reasons I don't yet know of. I'll add a couple of labels, and perhaps I or someone else will find time to investigate this.
<issue_comment>username_2: The same happens when testing with jsdom: sinon.xhr.supportsCORS is false, which basically disables the handling of xhr responses.
<issue_comment>username_3: https://github.com/sinonjs/sinon/blob/master/lib/sinon/util/fake_server.js#L87

Why not make this an option for the user? I use a lib for XHR, and it uses XMLHttpRequest in IE9, not XDomainRequest.
<issue_comment>username_3: ```
sinon.useFakeXDomainRequest = sinon.useFakeXMLHttpRequest
```
For now, I use this to fix the test case in IE9. It would be great to have a choice while creating the server. It may look like this:
```
this.server = sinon.fakeServer.create({xhr: sinon.useFakeXMLHttpRequest()})
```
Will try to make a PR later
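To make the workaround concrete, a minimal test-side sketch; the URL, payload and surrounding test code are placeholders, and it assumes sinon 1.x where `fakeServer.respondWith`/`respond` behave as documented:

```javascript
// Apply the alias suggested above before creating the server, so that on
// IE9 the XMLHttpRequest fake is installed instead of the XDomainRequest one.
sinon.useFakeXDomainRequest = sinon.useFakeXMLHttpRequest;

var server = sinon.fakeServer.create();
server.respondWith("GET", "/data", [
  200,
  { "Content-Type": "application/json" },
  '{ "ok": true }'
]);

// ... run the code under test that issues the GET request ...

server.respond();   // flush queued requests with the canned response
server.restore();   // undo the fakes when the test is done
```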
<issue_start><issue_comment>Title: Don't log missing PVC for stateless processes username_0: # Description Fixes: https://github.com/FoundationDB/fdb-kubernetes-operator/issues/1037 ## Type of change *Please select one of the options below.* - Bug fix (non-breaking change which fixes an issue) # Discussion - # Testing local # Documentation - # Follow-up -
<issue_start><issue_comment>Title: Url username_0: <issue_comment>username_1: Why do we need to bring j2objccontrib.org in to it at all? Given that we're already so tightly integrated with GitHub, I don't see that changing... so I'm not sure what value this adds? <issue_comment>username_0: Just easier to remember urls really <issue_comment>username_1: I still think we're better off with the GitHub urls and people are more likely to be comfortable clicking on that. Unless we're building out the website, I don't think it has much value to add in another url. <issue_comment>username_0: fair enough
<issue_start><issue_comment>Title: [KitchenSink] Landing page & guide username_0: ← #490 ## requirements Showroom should have a landing page and a dev guide following new mockups by @alessandravilla ## specs Mockups: - Landing: [](https://zpl.io/ZDHQSr) - Guide: [](https://zpl.io/ZDHQSr) ## misc {optional: other useful info}<issue_closed>
<issue_start><issue_comment>Title: Please upgrade NuGet package for gRPC with new version username_0: The current version (0.6.1) of gRPC available via NuGet is not compatible with Google Protocol Buffers C# 3.0.0-alpha4.

gRPC 0.6.1 generates a service stub that does not compile: the trouble is in the ``Marshaller`` variable - the ``ParseFrom`` method is part of the message's ``Parser`` property rather than the message itself:
```
static readonly Marshaller<global::Mynamespace.MyMessage> __Marshaller_MyMessage = Marshallers.Create((arg) => arg.ToByteArray(), global::Mynamespace.MyMessage.ParseFrom);
```
<issue_comment>username_0: In addition, the ``RouteGuideServer`` example cannot be built because it references a non-existent NuGet package: https://github.com/grpc/grpc/blob/master/examples/csharp/route_guide/RouteGuideServer/packages.config
<issue_comment>username_1: The gRPC C# beta NuGet packages were pushed a few hours ago. Official announcement & docs are coming soon.<issue_closed>
<issue_start><issue_comment>Title: Strange behavior with a Bernoulli/sigmoid model username_0: I'm trying to fit a simple model to some data. The data are binary outcomes clearly being driven by a sigmoidal function, I want to find the parameters for the shape of that function (i.e., center and slope). The data look like this: ![sample_data](https://cloud.githubusercontent.com/assets/9464950/16098592/d35996a0-3321-11e6-80f2-c9c9e4d1d4df.png) Here is my code: ```python import numpy as np import pandas as pd import pymc3 as pm import theano.tensor as T import matplotlib.pyplot as plt def normcdf(x): return (1 + T.erf(x / T.sqrt(2))) / 2 with pm.Model() as model: data = pd.read_csv('sample_data.csv') a = pm.Normal(name='a', mu=0, sd=10) b = pm.Normal(name='b', mu=0, sd=10) x = (np.asarray(data.delta) - a) / T.exp(b) p = normcdf(x) X = pm.Bernoulli(name='X', p=p, observed=np.asarray(data.response)) trace = pm.sample(2000, njobs=3) pm.traceplot(trace) plt.savefig('traceplot.png') ``` The sampler seems to finish suspiciously quickly, and invariably produces nonsense, like this: ![traceplot](https://cloud.githubusercontent.com/assets/9464950/16099162/bf71b70a-3324-11e6-9d73-f1cdbb2c2f0e.png)<issue_closed>
<issue_start><issue_comment>Title: phpunit 9.5.8 username_0: --- Debug Info: - homebrew updater version: 1.0.6 - formula new file size: 4,458,067 bytes - formula fetch time: 3.4 seconds Pull request opened by [homebrew-updater](https://github.com/username_0/homebrew-updater) project. Open a new [issue](https://github.com/username_0/homebrew-updater/issues) to monitor new formula.
<issue_start><issue_comment>Title: Support the new BCL DateOnly and TimeOfDay structs username_0: New DateOnly and TimeOfDay structs are being introduced to .NET 6 as alternatives to DateTime (https://github.com/dotnet/runtime/issues/49036). MySqlConnector issue: https://github.com/mysql-net/MySqlConnector/issues/963. <issue_comment>username_1: Please note that the final names for these types are `DateOnly` and `TimeOnly` and that they are now merged into the main branch for the next version of .NET 6 (preview 4 likely). Please rename the title of this issue accordingly. Thanks. <issue_comment>username_2: They are [coming](https://github.com/dotnet/core/issues/6098#issuecomment-840815510) in .NET 6 Preview 4. <issue_comment>username_0: @username_2 yes. Note that it may be a good idea to hold off here (and in Npgsql) until support is added in EF Core itself (https://github.com/dotnet/efcore/issues/24506, https://github.com/dotnet/efcore/issues/24507), this way the specification tests will already be added upstream. I hope to make that happen pretty quickly after the preview4 release. <issue_comment>username_3: Any news? <issue_comment>username_0: FYI support in EF Core has been added for Sqlite, including functional tests. <issue_comment>username_2: We will implement it in the near future.<issue_closed>
<issue_start><issue_comment>Title: Improve indexing performance - DSC with external imports username_0: When file contains Import-DSCResource pointing to a non-installed resource, parsing takes long time and 'Unable to load resource' errors appear. Therefore, we won't parse these files by default - a new configuration option will be added 'ParsePowershellDSCWithExternalImports' with default = false.<issue_closed>
<issue_start><issue_comment>Title: Debug mode stops code execution without breakpoints. username_0: _From @username_3 on April 5, 2016 7:10_ Hey I am running {N} 1.5.2 (I cannot update to a newer version because of a plugin that we are currently using) and whenever I start the app with tns debug android the node debugger runs in chrome and stops at different places where there are no breakpoints. It slows down everything! Anyone had the same issue? _Copied from original issue: NativeScript/NativeScript#1885_ <issue_comment>username_1: Hi @username_3 Indeed 1.5.x is quite old and I would suggest to upgrade to a newer version. If the plugin in question is the only reason to stay with 1.5.x I would love to help you with the migration. Please let us know which is the plugin or even better send us the URL to the plugin repo. <issue_comment>username_2: @username_3, feel free to reopen the issue if you need any help or face any problems.<issue_closed> <issue_comment>username_3: I found the solution. I had to clear the DNS cache and it worked fine since then.
<issue_start><issue_comment>Title: Time-series models in edward username_0: Hi, I'm interested in doing some time-series state-space modelling in edward. So I started implementing a simple autoregressive model AR(k) with a toy dataset. I started out with fixed values for the observation noise and for the variance in the latent process, and everything seemed to work great and fast (code for this example is available here: [ARk_fixed_noises.py](https://github.com/username_0/edward/blob/example/timeseries/examples/ARk_fixed_noises.py) ). However, if I place Inverse-Gamma priors on these two variances and estimate their distributions, then I need to increase the number of samples significantly in order to obtain any decent results for the inferred states, and then MFVI (naturally) becomes quite slow even for this very small toy example (code with InvGamma priors: [ARk_estimate_obs_noise.py](https://github.com/username_0/edward/blob/example/timeseries/examples/ARk_estimate_obs_noise.py) and [ARk_estimate_both.py](https://github.com/username_0/edward/blob/example/timeseries/examples/ARk_estimate_both.py) ). So may questions are: - Did anyone implemented time-series models in edward before? - Do you have any intuition why introducing these two univariate Inverse-Gamma priors can make such a huge difference in the behavior of the inference algorithm? (I have similar models implemented in Stan with ADVI and inference is fast...). Am I doing something wrong? Any suggestions are welcome :-) <issue_comment>username_1: hi @username_0, this is great! we've been meaning to add some time-series models to our examples and website. we would love to work with you on these. a quick question before we begin to dig into your examples: does ADVI in Stan work well for the Inverse Gamma prior case? (my intuition says that it shouldn't work very well in that case.) <issue_comment>username_0: Hi @username_1, Actually, in my Stan code I had a half-Cauchy prior on the variances, but I just changed that to a Inverse Gamma and it still works great. Once the Stan model is compiled, it converges in just a few seconds (for a dataset that is pretty much the same as the one that I'm trying in edward), and it gives good results. Here is the Stan model and the R script that uses it, in case you want to have a look. I was pretty much trying to get something similar to this working in edward as a starting point... [kf_simple_invgamma.stan](https://github.com/username_0/edward/blob/example/timeseries/examples/kf_simple_invgamma.stan) [kf_simple_invgamma.R](https://github.com/username_0/edward/blob/example/timeseries/examples/kf_simple_invgamma.R) <issue_comment>username_2: @username_0 thanks for posting this code, exactly what I was looking for. What's your end goal? Just AR? I recently added a Euler-Maruyama scheme PyMC3 and will try to do something similar here, if possible. Would there be interest in contributing such support for SDEs in Edward? <issue_comment>username_3: @username_0: i think this works, no? ```python mu = Normal(mu=1.0, sigma=10.0) beta = Normal(mu=1.0, sigma=10.0) noise_proc = InverseGamma(alpha=1.0, beta=1.0) noise_obs = InverseGamma(alpha=1.0, beta=1.0) x = [0] * N x[0] = Normal(mu=mu, sigma=10.0) # fat prior on x for n in range(1, N): x[n] = Normal(mu = mu + beta * x[n-1], sigma=noise_proc) y = Normal(mu=x, sigma=noise_obs) ``` @username_2: yes, SDEs would be awesome. 
would be very interesting to explore it especially in the realm of ABC, where the likelihood is intractable (due to a complex high-dimensional SDE). <issue_comment>username_4: @username_3, are there any updates on the native modeling language that can provide what ADVI needs? <issue_comment>username_3: essentially we need an attribute to all random variables, with a default constrained-to-unconstrained transformation. we can then do ADVI by leveraging this attribute to perform automated transformations. i thought `tf.contrib.distributions` implemented this, although now that i look i no longer see it(?). <issue_comment>username_5: I would like to model time series in edward so I tried out a modified version of the code provided by @username_3 above. Specifically, ``` mu = 0. beta_true = 0.9 noise_obs = 0.1 T = 64 x_true = np.random.randn(T)*noise_obs for t in range(1, T): x_true[t] += beta_true*x_true[t-1] mu = Normal(mu=0., sigma=10.0) beta = Normal(mu=0., sigma=2.0) noise_proc = tf.constant(0.1) #InverseGamma(alpha=1.0, beta=1.0) noise_obs = tf.constant(0.1) #InverseGamma(alpha=1.0, beta=1.0) x = [0] * T x[0] = Normal(mu=mu, sigma=10.0) # fat prior on x for n in range(1, T): x[n] = Normal(mu=mu + beta * x[n-1], sigma=noise_proc) qmu = PointMass(params=tf.Variable(0.)) qbeta = PointMass(params=tf.Variable(0.)) inference = ed.MAP({beta: qbeta, mu: qmu}, {x: x_true}) ``` however, I get the error `TypeError: unhashable type: 'list'` which I assume comes from `x` being a list `Normal` variables. Is there something obvious I'm just missing? This should just be a large hierarchical model, do I have to feed each `x[i]` separately? <issue_comment>username_3: Since `x` is a list of random variables, I think you'd have to, e.g., `data={xt: xt_true for xt, xt_true in zip(x, x_true)}`. We could think of ways to vectorize this. It's easy to vectorize across data points, so each `x[t]` consists of N data points, implying the list `x` comprises of N data points each of `T` steps. `x[t]`'s dependence on `x[t-1]` makes it a little difficult to vectorize across time steps. <issue_comment>username_5: Thanks @username_3. I'll try feeding in each time separately for now. This will be fine for univariate series, I'm wondering if it'll be able to scale to the series I'm interested in analyzing. I agree that vectorizing over replicates of series is easy but the dependency makes it difficult. It may be that for higher dimensional series it'd be better to just make an explicit `Distribution` that handles the joint distribution of all the `x[t]`. I'll let you know how things work out with this solution in the mean time. <issue_comment>username_3: Do let me know if that works! If building up a new random variable turns out to be necessary for very long time series, then we should definitely make note of it. <issue_comment>username_5: The code you suggested generates the following error: `TypeError: Data value has an invalid type.` It looks like the `Inference` constructor doesn't handle the case when the values being fed into a node is of type `numpy.float` (or even just `float`). I tried to make each `x_true[t]` a 1d numpy array but that generated a different error saying that the sizes were incompatible. I'm assuming this is because each `x[t]` is a `Normal` which is probably a scalar. I'm happy to try to add the case of a missing float, just wanted to check that I wasn't either duplicating effort or that it would be part of a larger issue. Thanks. 
<issue_comment>username_3: nice catch—that would be a fine contribution. it is not duplicating effort anywhere. <issue_comment>username_5: cool. I think I have it working. Will submit a PR shortly.
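For later readers, here is roughly what the per-timestep feed discussed above looks like once `Inference` accepts plain floats (Edward 1.x / TF 1.x era API; this reuses the `x`, `x_true`, `mu`, `beta`, `qmu`, `qbeta` names from the snippet earlier in the thread):

```python
import numpy as np
import edward as ed

# One observed value per latent time step, fed as a dict keyed by the
# random variables themselves, as suggested above.
data = {xt: np.float32(xt_true) for xt, xt_true in zip(x, x_true)}

inference = ed.MAP({mu: qmu, beta: qbeta}, data=data)
inference.run(n_iter=1000)

# After convergence the point estimates live in the PointMass variables
# (qmu, qbeta) defined earlier in the thread.
```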
<issue_start><issue_comment>Title: Custom Location Page: Add indicator graph to admin interface username_0: For a country/location page we should be able to display an indicator - data is coming from CPS <issue_comment>username_1: What indicators should be available ? Only the ones for which we have charts in normal country pages or for all indicators ? <issue_comment>username_0: I would say for all the indicators. @cjhendrix ?<issue_closed>
<issue_start><issue_comment>Title: [unity] Fix multiple nested prefab override issue. username_0: Related to #1273.
http://ko.esotericsoftware.com/forum/Unity-Nested-prefab-overrides-11476
<issue_comment>username_0: `An asset is marked with HideFlags.DontSave but is included in the build:
Asset: 'Assets/xxxxxx/xxxxxxx.prefab'
Asset name: Skeleton Prefab Mesh "xxxxxxxxx"
(You are probably referencing internal Unity data in your build.)`

The AssetBundle build failed with this message. I'm looking for a solution.
<issue_start><issue_comment>Title: Graph2D: Mark regions with background color username_0: Hello, is there a way to mark regions (by Y-values) with different background-colors in a Graph2D? Example: mark the area between 0 and 30 in red and 30-60 in blue. I would not like to do it with more graphs + shadows, because the areas should be infinite to right and left x-values. Thanks in advance.<issue_closed> <issue_comment>username_1: @username_0 I don't think this is possible at the time. Feel free to create a new feature-request issue.
<issue_start><issue_comment>Title: Getting NSInvalidArgumentException on iOS 8.0, AFNetworking 2.6.0, XCode 7.0.1 username_0: Getting this error when compiling my IOS 8 app locally. 2015-10-02 14:04:40.845 App Name[12130:1274441] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'data parameter is nil' *** First throw call stack: ( 0 CoreFoundation 0x0000000104ddcf65 __exceptionPreprocess + 165 1 libobjc.A.dylib 0x00000001046f7deb objc_exception_throw + 48 2 CoreFoundation 0x0000000104ddce9d +[NSException raise:format:] + 205 3 Foundation 0x00000001028a429d +[NSJSONSerialization JSONObjectWithData:options:error:] + 67 4 App Name 0x0000000101231c28 __28-[LoginController authUser:]_block_invoke + 328 5 App Name 0x0000000101226431 __20+[User login:block:]_block_invoke215 + 113 6 Ap Name 0x00000001012e697b __64-[AFHTTPRequestOperation setCompletionBlockWithSuccess:failure:]_block_invoke_3 + 91 7 libdispatch.dylib 0x0000000105575ef9 _dispatch_call_block_and_release + 12 8 libdispatch.dylib 0x000000010559649b _dispatch_client_callout + 8 9 libdispatch.dylib 0x000000010557e34b _dispatch_main_queue_callback_4CF + 1738 10 CoreFoundation 0x0000000104d3d3e9 __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ + 9 11 CoreFoundation 0x0000000104cfe939 __CFRunLoopRun + 2073 12 CoreFoundation 0x0000000104cfde98 CFRunLoopRunSpecific + 488 13 GraphicsServices 0x0000000107d38ad2 GSEventRunModal + 161 14 UIKit 0x000000010327b676 UIApplicationMain + 171 15 App Name 0x000000010122d04f main + 111 16 libdyld.dylib 0x00000001055ca92d start + 1 ) libc++abi.dylib: terminating with uncaught exception of type NSException (lldb)<issue_closed>
<issue_start><issue_comment>Title: Allow `rkt app rm` on a stopped pod. username_0: When the pod is stopped, don't call the app-add/app-rm entrypoints, because there is not much we can/need to do when the whole pod is stopped.

Fixes #3221.

This is one approach to fixing it; the other way is to pass the -1 pid to the entrypoints and make a contract saying `if the pid is negative, then it means the pod is not running`. I am OK with both.

cc @username_1 @username_2 @s-urbaniak
<issue_comment>username_1: I think I like this approach better, so we don't complicate stuff by passing `-1` to stage1.
<issue_comment>username_1: lgtm
<issue_comment>username_2: This should be fine. Just as a note of caution in case we start passing this around other places, negative pid numbers are typically used to refer to a PGID (e.g. by `kill`).
<issue_start><issue_comment>Title: Added DataParallelTable. Added docs. username_0: See the docs for an explanation and usage. After chatting with @username_1, it seems like he would prefer that the lua source goes in cunn, rather than nn (which makes sense to me since this is a GPU only related module). It means that a) CMakeLists needs to change so that all lua source is copied and b) I needed to add docs. <issue_comment>username_0: It would be nice if we could do a proper code-review on this one. I've been using it in my production code for about a week now and it seems to be OK. I've also run cuda-memcheck on test_DataParallelTable.lua, but I'm not 100% sure if there aren't other nasty corner cases. <issue_comment>username_1: i have a curious idea i think net:syncParameters() can be avoided in a way: keep a self.synced boolean variable in accGradParameters mark it to false in :updateOutput, if it is false, force-sync. what do you think? <issue_comment>username_0: Yeah, I thought that might work... However, it assumes that you're calling forward, backward, forward, etc. One might be inclined to call backward multiple times before performing the gradient step (I can't think of a reason of the top of my head, but its a possibility). So to keep it general, I figured the syncParameters call seemed to be the easiest API. <issue_comment>username_1: even in your use-case there's a clear boundary on when to sync and when not to sync right, i.e. in the case of forward-backward-backward-backward-forward. I dont mind this new change, but it would become 100% natural to use DataParallelTable if something like that is introduced. <issue_comment>username_0: Any updates? Seems silly to waste all that work... <issue_comment>username_1: i've reviewed the code. just debating on how to make it seamless to the user to use. Is there a possibility that you can remove syncParameters() by integrating it into the forward/backward phases? If not it's ready for merge. <issue_comment>username_0: OK, so I was actually thinking about taking it one step further. Speaking to Wojciech, it seems like the magic hack to get DataParallel scaling is to only update the weights every few mini-batches. ie you let the models diverge and then every few batches you average the parameters. I haven't tried it but I plan to do so this week. I would also need to train a model to completion to make sure the performance is not degraded... Therefore, I would like to keep the API for this stage the way it is and maybe make it even more "user-unfriendly", but enable the user to sync the parameters when they choose to do so. What do you think? You probably wont like it, but personally I'd rather open up the sync() function to the user so that a) they have the freedom to do as described and b) they are aware of the PCIe overhead of moving weights around (which from my experience so far is pretty bad). I dream of having training be 2x faster :-) Right now I'm getting 1.6x faster on 2 GPUs which is a little disappointing. <issue_comment>username_1: @username_0 your comments sound good. I'll merge it at the current state then. And wrt comments about longer times between sync, Sixin is the man for that. Go talk to him. http://arxiv.org/abs/1412.6651 <issue_comment>username_1: This has waited long enough, I've merged it in. Jono see my comment inline, the new copy kernels have landed. <issue_comment>username_0: Sorry, it's been a hectic weekend. I'll try and get around to it when I have time.
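The auto-sync idea floated in the thread above boils down to a dirty-flag: mark the replicated parameters stale after a backward pass and broadcast them lazily on the next forward, instead of requiring an explicit syncParameters() call. Below is a rough, self-contained Lua sketch of that pattern; the names (LazySync, beforeForward, afterBackward) are hypothetical and this is not the API that was actually merged.

```lua
-- Dirty-flag sketch: broadcast parameters lazily on forward instead of via an
-- explicit syncParameters() call.
local LazySync = {}
LazySync.__index = LazySync

-- broadcastFn is whatever copies the weights from the primary GPU to the replicas.
function LazySync.new(broadcastFn)
   return setmetatable({ dirty = false, broadcast = broadcastFn }, LazySync)
end

function LazySync:beforeForward()
   if self.dirty then
      self.broadcast()   -- replicas are stale: copy weights before the forward pass
      self.dirty = false
   end
end

function LazySync:afterBackward()
   self.dirty = true     -- gradients were accumulated; weights will change next step
end

return LazySync
```

The trade-off discussed in the thread still applies: an explicit sync call keeps the PCIe transfer cost visible to the user and makes it easy to deliberately let replicas diverge for several mini-batches before averaging.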
<issue_start><issue_comment>Title: 'tslint failure' is harsh wording username_0: _From @jrieken on October 25, 2016 9:54_ The tslint extension gives not-very-encouraging code actions, a la 'Fix xyz tslint failure'. I think that's a little too harsh and some more positive wording should be used. _Copied from original issue: Microsoft/vscode#14394_<issue_closed>
<issue_start><issue_comment>Title: Include thumbnail deletion in the delete method username_0: Take all the subfolders mentioned in the options and delete the file from each of them. <issue_comment>username_1: I think you'd have to use a conditional to check for imageVersions first, as not everyone will specify alternate image versions? <issue_comment>username_0: @username_1 yes. thanks.
<issue_start><issue_comment>Title: Add a test file username_0: <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_1: test internal-perf please <issue_comment>username_1: test internal-perf please <issue_comment>username_0: test-internal perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please <issue_comment>username_0: test internal-perf please
<issue_start><issue_comment>Title: add simple tests for result type checks username_0: Some checks for compliant GroupBy objects are done while executing operations. However, others need to be done at the end, since...

* custom functions (UDFs) can return anything

Some useful checks...

- [ ] correct GroupBy (e.g. a similar groupings check as in ops)
- [x] result must be a GroupBy instance or "literal"
- [ ] is it okay for the result to just be something with an index matching the original data? (likely no?)
<issue_start><issue_comment>Title: Switch zend_print_zval_r to use smart_str username_0: Instead of directly writing to stdout. This allows doing a print_r into a string without using output buffering. The motivation for this is bug #67467: print_r() in return mode will still dump the string to stdout (causing a potential information leak) if a fatal error occurs. <issue_comment>username_1: As you remove two exported symbols, and thus break the internal API, I would have merged it against 7.1. <issue_comment>username_2: Would be great to get this merged. <issue_comment>username_0: Merged as https://github.com/php/php-src/commit/1b29e0cacd4670d2f92456302bc5a3199b408a06 :)
<issue_start><issue_comment>Title: Security Vulnerability Password in Launch File username_0: Found a hardcoded password in the [navigation launch file](https://github.com/utra-robosoccer/soccer_ws/blob/master/soccerbot/launch/modules/navigation.launch). @Shah-Rajab has already changed the password. We should consider updating all our passwords.<issue_closed>
<issue_start><issue_comment>Title: Mongo Panache error with '$in' queries on a CustomID field username_0: ### Describe the bug
While using CustomIds, a $in query doesn't work (and returns an empty array).
I pushed a sample project to reproduce this issue: https://github.com/username_0/quarkusPanacheIssueCustomID
- I made two services, one that uses the CustomID field, and another that uses a regular field and works
- I have tested it with both Long and String CustomIDs and the behavior is the same

I have discussed this issue with @username_1

### Expected behavior
A $in query based on a CustomID field should retrieve the documents with these ids.
For example:
```java
    @Path("/findByField")
    public ArrayList<UserModel> listByOtherField(@QueryParam("id") ArrayList<String> listIds) {
        ArrayList<UserModel> result = new ArrayList<UserModel>();
        PanacheQuery<PanacheMongoEntityBase> query = UserDAO.find("{'customId' :{$in: [?1]}}", listIds);
```

### Actual behavior
An empty array is returned for a query based on a CustomID field, but it works fine for another field.

### How to Reproduce?
1. Create a custom ID on a field by using the @BsonId annotation on a class extending **PanacheMongoEntityBase**
2. Add a function that uses this class (for example UserDAO) and add a **find** query (e.g. UserDAO.find("{'userId' :{$in: [?1]}}", listIds);)

In my sample project, I've added a ".http" file that contains requests that can be executed:
https://github.com/username_0/quarkusPanacheIssueCustomID/blob/master/src/test/resources/sampleRequests.http

IntelliJ allows you to execute .http file queries natively; in VSCode you have to use the RESTClient extension.

### Output of `uname -a` or `ver`
_No response_

### Output of `java -version`
openjdk version "16.0.2" 2021-07-20

### GraalVM version (if different from Java)
_No response_

### Quarkus version or git rev
2.2.1.final

### Build tool (ie. output of `mvnw --version` or `gradlew --version`)
Apache Maven 3.8.1

### Additional information
_No response_
<issue_comment>username_0: I think it is related to the fact that a custom ID is stored as "_id" in the Mongo database...
So my query cannot find any field that is named "CustomId".

I made a loop to bypass this behavior: loop over my IDs and then, for each one, make a "findById" request.

Maybe an improvement would be to add a "findByIds" method that could take a list of ObjectId values?

So finally, I would consider this issue not to be an issue, and maybe I could write an addendum for the documentation just to explain this? :)
<issue_comment>username_1: Yes, so your query must find on `_id` and not `userId`, at least on the current version of MongoDB with Panache.
<issue_comment>username_1: Today you use:
```
UserDAO.find("{'userId' :{$in: [?1]}}", listIds);
```

And replacing userId by _id will do the trick:

```
UserDAO.find("{'_id' :{$in: [?1]}}", listIds); // native MongoDB query syntax
UserDAO.find("_id in ?1", listIds);            // equivalent PanacheQL form
```

Today, we allow using the object field name instead of the database field name in the case of `@BsonProperty`, but not when using `@BsonId`. I think it's an oversight worth fixing.

This issue will occur not only for $in but for all queries based on a custom ID when using the object field name instead of _id.
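To make the workaround above concrete, here is a minimal sketch of an entity with a custom ID and the two query forms the maintainer points to. It assumes the quarkus-mongodb-panache extension is on the classpath; `UserModel`, `findByIds`, and `findByIdsNative` are hypothetical names mirroring the report, not API from the sample project.

```java
import io.quarkus.mongodb.panache.PanacheMongoEntityBase;
import org.bson.codecs.pojo.annotations.BsonId;
import java.util.List;

public class UserModel extends PanacheMongoEntityBase {
    @BsonId
    public String userId;   // custom ID: persisted as "_id" in MongoDB
    public String name;

    // PanacheQL form: query on "_id", not on the Java field name "userId".
    public static List<UserModel> findByIds(List<String> ids) {
        return UserModel.<UserModel>find("_id in ?1", ids).list();
    }

    // Equivalent native MongoDB query form.
    public static List<UserModel> findByIdsNative(List<String> ids) {
        return UserModel.<UserModel>find("{'_id': {$in: [?1]}}", ids).list();
    }
}
```

A repository-style `UserDAO extends PanacheMongoRepositoryBase<UserModel, String>` would use the same query strings; the important part is that the database field name `_id` appears in the query, not the Java field name.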
<issue_start><issue_comment>Title: dockerfile update username_0: Combine the node install commands into a single RUN. Avoids problematic behavior that occurs when the nodesource curl and nodejs install commands are cached separately. Following up on a suggestion from https://github.com/codeclimate/codeclimate-duplication/pull/80/files#r50307883 @codeclimate/review <issue_comment>username_1: :+1:
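For illustration only, the kind of change described above looks like the sketch below; the exact base image, node version, and setup-script URL in the actual Dockerfile may differ, so treat those as placeholders.

```dockerfile
# Single layer: the nodesource setup script and the nodejs install can no longer
# be cached separately, so a cache hit can't pair a stale apt setup with a new install.
RUN curl -sL https://deb.nodesource.com/setup_4.x | bash - && \
    apt-get install -y nodejs && \
    rm -rf /var/lib/apt/lists/*
```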
<issue_start><issue_comment>Title: Comment out slack username_0: Comment out the Slack adapter as there are no good defaults. The service will continue to work. This will need a corresponding change in https://github.com/StackStorm/st2-packages/pull/206/files cc @username_3 to uncomment the lines. Longer term, I'd like to ask @username_1 to add all supported chat adapters' environment variables, commented out, so that a user (or script) can uncomment them for a given provider. <issue_comment>username_1: Ack. <issue_comment>username_2: I've fixed the tests on this one, but the bootstrap scripts also need to be changed. Do we really care that much about these lines to go through the hassle of changing them in two repos, backporting to stable and then testing that it still works everywhere? <issue_comment>username_1: Added the adapter blocks as well as the doc links for each one. <issue_comment>username_3: That was my question too, but sometimes @username_0 is insistent on such things. Can you merge to 1.3 as well?
<issue_start><issue_comment>Title: Use original Ada grammar username_0: We were using the branch `better-with-highlighting` on @aroben's fork of https://github.com/aroben/ada.tmbundle. textmate/ada.tmbundle#2 is now fixed and we can use the original repository again. Discussed in [#2357 (Comment)](https://github.com/github/linguist/issues/2357#issuecomment-98129828). <issue_comment>username_1: :+1:
<issue_start><issue_comment>Title: Pointing manipulation functions username_0: We will need a set of common routines for various pointing operations: aberration calculation, positional astronomy operations, coordinate transforms, etc. These are free functions that we should put into a new source file in toast.tod<issue_closed> <issue_comment>username_0: Many of these have already been added to toast.pointing_math. Closing this now since the broad scope has been fulfilled and more specific tickets can be opened as needed.
<issue_start><issue_comment>Title: Stream API - Do not receive anything for account, confirms, OPU etc. username_0: Hi again, I tried to use the stream API based on the stream_ig.py file in the sample folder. The price update part works perfectly; on the other hand, the account update part does not. In debug mode, I only get PROBE from Lightstreamer (I tried to update the account by closing positions and making a deposit, and still nothing was received). The following messages repeat indefinitely:
```
DEBUG:trading_ig.lightstreamer:Waiting for a new message
DEBUG:trading_ig.lightstreamer:Received message ---> <PROBE>
DEBUG:trading_ig.lightstreamer:PROBE message
```
In the same way, I cannot get the TRADE part working; I do not receive anything with the following parameters:
```
subscription_trades = Subscription(
        mode="DISTINCT",
        items='TRADE:'+self.accountId,
        fields=["CONFIRMS","OPU", "WOU"],
        )
```
Thanks for your help,
<issue_comment>username_1: Hi, nice to know that the price update is working... because according to https://github.com/ig-python/ig-markets-api-python-library/issues/24 it was broken. I don't use this part of the API so I can't help, but feel free to post here what you noticed (or [using email](https://github.com/username_1/)). Kind regards <issue_comment>username_2: - `items` needs to be a list, so items=['TRADE:'+self.accountId]
- Check #42, there is another bug

Fixing these two things got it to work for me. <issue_comment>username_3: I had a recent issue, so I've moved the ACCOUNT and TRADE subscriptions into the first positions (before adding the MARKET) and now I can receive them again. <issue_comment>username_4: Closing this. Price and account updates are working OK in the ``/sample/stream_ig.py`` example
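Putting username_2's fix together with the usual setup from the project's sample script, a corrected TRADE subscription might look like the sketch below. The `ig_stream_service` and `account_id` names are assumptions carried over from sample/stream_ig.py, so double-check them against the current sample before relying on this.

```python
from trading_ig.lightstreamer import Subscription

account_id = "ABC123"  # placeholder: your IG account id

def on_trade_update(item_update):
    # CONFIRMS / OPU / WOU payloads arrive here as they happen
    print("trade update:", item_update)

subscription_trades = Subscription(
    mode="DISTINCT",
    items=["TRADE:" + account_id],          # note: a list, not a bare string
    fields=["CONFIRMS", "OPU", "WOU"],
)
subscription_trades.addlistener(on_trade_update)

# `ig_stream_service` is assumed to be an already-connected IGStreamService,
# created exactly as in the sample script.
ig_stream_service.ls_client.subscribe(subscription_trades)
```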
<issue_start><issue_comment>Title: Styles (transformations) not applied, when isOpen set to true initially username_0: When the initial value of `isOpen` is set to `true` on the `<Menu>` component, the transformation of the main content does not happen and the console spits out `Element with ID 'page-wrap' not found` and `Element with ID 'outer-container' not found` <issue_comment>username_0: I did manage to make this work by setting the initial state from the `isOpen` prop, so in `menuFactory`:
```
getDefaultProps: function getDefaultProps() {
      return {
        id: '',
        noOverlay: false,
        onStateChange: function onStateChange() {},
        outerContainerId: '',
        pageWrapId: '',
        styles: {},
        width: 300,
        breakpoint: 960,
        isOpen: false
      };
    },

    getInitialState: function getInitialState() {
      return { isOpen: this.props.isOpen };
    },

    componentDidMount: function componentDidMount() {
      window.onkeydown = this.listenForClose;

      if (this.props.isOpen) {
        this.toggleMenu();
      }
    },
```
Opening the menu initially via the `isOpen` prop: `<Menu isOpen ... />`

For a moment I thought this was an antipattern ([Props in `getInitialState` is an Antipattern](https://facebook.github.io/react/tips/props-in-getInitialState-as-anti-pattern.html)), but clearly it is not, according to the last part of that React guide, as we are properly using the `isOpen` prop to "seed data for the component's internally-controlled state".

One concern here is the fact that all of this doesn't pass the tests related to the initial `isOpen` state:

1. menuFactory when rendered successfully is initially closed
2. menuFactory open state change should not occur when parent component state changes
3. menuFactory open state change should not occur when receiving new props if isOpen prop is undefined
<issue_comment>username_1: Hi @username_0, I'm glad that fixes it, but I'd need to look into this a bit more as those tests cover a lot of edge cases people have reported while using `isOpen`. I don't have a lot of time to spend on this at the moment (sorry I haven't got back to you about your PR yet either) but will get round to it as soon as I can. Thanks so much for your detailed reporting! :) <issue_comment>username_2: +1 also confirming this bug :) <issue_comment>username_1: Hey @username_0 and @username_2, This should be fixed in v1.9.2. I agree it seems not to count as an antipattern to reference the `isOpen` prop (when it exists) in this case, as it is used to seed initial data. If you're interested, the way I solved the issue of those failing tests was to check for the presence of the `isOpen` prop inside `getInitialState`, rather than setting it as a default prop (see https://github.com/username_1/react-burger-menu/commit/5a2d98214c81589ae78b91ef090a9319abc5e349).<issue_closed>
<issue_start><issue_comment>Title: Re-evaluate macOS min supported version username_0: <!-- ⚠️ If you do not respect this template, your issue will be closed --> <!-- ⚠️ Make sure to browse the opened and closed issues to confirm this idea does not exist. -->

### What is the expected enhancement?

Tweedledum is moving to supporting macOS>=10.15 and making anything < 10.15 best-effort support. See: https://github.com/boschmitt/tweedledum/issues/159#issuecomment-915434883 In Qiskit we've normally supported macOS >=10.9 and built all our binary packages accordingly. However, we're now at a bit of an impasse because we've had tweedledum as a hard requirement since the 0.18.0 release. For the pending tweedledum 1.1.1 release @boschmitt built the wheels by hand to continue supporting the older macOS versions for us. But the question of macOS version support still stands, and moving forward I think we're going to see more of this (for example mthree, which is a downstream consumer of terra, only supports macOS>=10.14: https://pypi.org/project/mthree/#description). The open question is which versions of macOS we support moving forward. I'm not proposing we change our binary builds (we should continue to build for 10.9), but maybe we document a higher minimum version to have a seamless install experience. I'm personally not a mac user so I'm not necessarily aware of all the implications of doing this. <issue_comment>username_1: 85% of all OSX users would still be covered with this change: https://gs.statcounter.com/macos-version-market-share/desktop/worldwide <issue_comment>username_2: 78% sounds worryingly low to me, to be honest. I've kept the Macbook I got from Imperial on 10.14, for example, because 10.15 introduced new annoying things for any non-bundled C toolchain, and more restrictions on what processes can and can't do. That said, as long as we keep the binary builds using the 10.9 API, and it's just a documentation note that <10.15 may not be properly supported, I think that'll be fine. <issue_comment>username_0: We discussed this last week among @Qiskit/terra-core and came to the conclusion that we still want to provide full support for older macOS versions, mostly to support older macOS hardware, since Apple is pretty aggressive about culling support for older hardware with newer macOS releases. Requiring macOS users to have a Mac from 2012 or newer wasn't something we wanted to do. As for tweedledum support, this means we might have to move it back to optional and come up with a different phase oracle solution. This would also potentially preclude future expansion of the use of tweedledum (this was part of the move to make it a hard requirement in #6588, the wide use of the phase oracle for Grover's, and the potential use of tweedledum in more of Qiskit, including the transpiler). I think for now we've decided to try to maintain the same Qiskit macOS support policy, so I'm going to close this, and we can look at tweedledum specifically as part of #6981 or a new issue. But we're not at a critical point yet because tweedledum 1.1.1 fixed the packaging issues (by building the wheels manually).<issue_closed>
<issue_start><issue_comment>Title: Regression: order of tags not taken into account username_0: Since version 1.7.0, there is the following regression. If tag 1.0.1 and tag 1.0.2 are created for the release 1.0 branch, the last tag (1.0.2 in this case) should be taken into account before computing the next tag. However, the ordering is now lexicographic, and 1.0.1 is taken into account first. The previous version (1.6) ordered the tags based on their age.<issue_closed> <issue_comment>username_0: Fixed in 1.7.1
<issue_start><issue_comment>Title: [akka-http] Incorrect rejection is sent to the client username_0: I suspect it might be a bug. I have unauthenticated routes and authenticated routes in my Akka-Http DSL. Below is simplified version of the code ```scala val nakedRoutes = pathPrefix("user") { post { entity(as[UserNew]) { user => (validate(EmailValidator.getInstance().isValid(user.email), s"Invalid email address ${user.email}") & validate(!user.password.isEmpty, "Password should not be empty")) { complete { UserWire(new ObjectId(), user.firstName, user.lastName, user.email, user.company, user.role, new ObjectId()) } } } } } val myAuthorization: Directive[Tuple1[String]] = (headerValueByName("X-API-Token") | parameter("token")).tflatMap[Tuple1[String]]{ case Tuple1(token) => if (!token.isEmpty) provide(token.reverse) else complete(StatusCodes.Forbidden -> "API token is not provided") } val authenticatedRoutes = myAuthorization { user => get { complete { "" } } } val routes = (decodeRequest & encodeResponse) { authenticatedRoutes ~ nakedRoutes } ``` The idea is that user makes POST /user request, gets token and uses it for authenticated routes. The problem I have is that when validation of email fails I get incorrect rejection ``` Request is missing required HTTP header 'X-API-Token' ``` Why doesn't it throw last occurred rejection ? How does it pick what rejection to return to the client ? If I refer to [old Spray's doc](http://spray.io/documentation/1.2.3/spray-routing/key-concepts/rejections/#rejection-cancellation) I'd expect rejections generated by `myAuthorization` to be canceled because `post` directive allowed the request to pass. <issue_comment>username_1: You could see this as a general ambiguity problem in your code, expanding your route leads to: ```scala val routes = myAuthorization { user => get { complete { "" } } } ~ post { entity(as[UserNew]) { user => (validate(EmailValidator.getInstance().isValid(user.email), s"Invalid email address ${user.email}") & validate(!user.password.isEmpty, "Password should not be empty")) { // ... } } } ``` Both route alternatives generally apply but are rejected for different reasons. The logic of rejection cancellation you cite is that a former rejection *of the same kind* can be cancelled if a later directive *of the same kind* passes. But that's not the case here as authorization and validation produce different kinds of rejections. In this case the default behavior is to *handle the first rejection* that occurred. As you have found out yourself you can override the rejection handling behavior in many ways. Usually, you can also achieve the desired effect by reordering your directives in a way that puts the most important discriminators on outer levels while nesting more specific discriminators. In your case, your quite specific `myAuthorization` will be invoked in a catch-all manner that would apply to all requests. Instead, you could "guard" authentication by a path directive that prevents other requests from running into the authentication problem. AFAICS this is no bug, so I'm closing this for now but feel free to continue the discussion here, on the ML, or on SO. /cc @sirthias<issue_closed> <issue_comment>username_0: Thank you for detailed answer! What do you mean by guarding authentication with path ? Do you mean putting auth endpoints under another URI segment such as `/authenticated/user` ? Also could you please explain why following code fixes my problem ? 
```scala
val myAuthorization: Directive[Tuple1[String]] = (headerValueByName("X-API-Token") | parameter("token")).tflatMap[Tuple1[String]]{
    case Tuple1(token) =>
      if (!token.isEmpty) provide(token.reverse)
      else complete(StatusCodes.Forbidden -> "API token is invalid")
  }.recover { case rejections => reject(ValidationRejection("API token is not provided")) }
```
When I use the code above I get proper rejections when the user posts a malformed email. Also, out of curiosity, I tried rejecting as `case rejections => reject(rejections.head)` and it resulted in the wrong behavior again. So it's quite confusing.
<issue_comment>username_1: Yes.

```scala
.recover { case rejections => reject(ValidationRejection("API token is not provided")) }
```

This means that you are discarding any existing rejections and replacing them with your `ValidationRejection("API token is not provided")`.

`recover { case rejections => reject(rejections.head) }` is almost the definition of the default behavior which you don't like.

If you want some rejections to take precedence over other ones you need to inspect the complete list of rejections and choose one that suits your needs.
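One concrete way to act on that last suggestion, shown here as a hedged sketch rather than the canonical answer, is to install a custom RejectionHandler whose first case handles ValidationRejection, so a validation failure anywhere in the rejection list is reported before the missing-header/missing-parameter rejections:

```scala
import akka.http.scaladsl.model.StatusCodes
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.{RejectionHandler, ValidationRejection}

object PreferValidationRejections {
  // Cases added first take precedence, and anything unmatched falls back to the
  // default handler, so validation problems win over missing-credential rejections.
  implicit val handler: RejectionHandler =
    RejectionHandler.newBuilder()
      .handle { case ValidationRejection(msg, _) =>
        complete(StatusCodes.BadRequest -> msg)
      }
      .result()
      .withFallback(RejectionHandler.default)
}
```

Bringing that implicit into scope where the route is sealed (or passing it to `handleRejections`) changes which rejection is reported without touching the route structure itself.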
<issue_start><issue_comment>Title: Updates/additional graph data username_0: Adds new metadata to nodes:
- Ongoing count of learners/sharers for each skill
- SubmittedAt attribute for each learner
- SubmittedBy for each skill, based on when it was initially created.

To test:
- Run the build step
- Check the outputted data
- Confirm the visualisation still works.
<issue_comment>username_1: @username_0 I've merged the current version as I am keen to use some of the values. Could you still add these two improvements:
- would be nice to make submittedBy, firstSubmittedOn, submittedAt consistent, like: submittedBy, firstSubmittedAt (so change firstSubmittedOn and submittedAt both into firstSubmittedAt; we distinguish node type based on _labelClass)
- use the date of the first time a skill appears (even if it is only from the multiple choice answers) and use that as firstSubmittedAt
<issue_start><issue_comment>Title: TextInputBase does not support mouse drag to mark text username_0: Current behavior: Holding the left mouse button and moving to a different location where the button is released will only update the cursor to the new position. Expected behavior: All text between the mouse-down and mouse-up events should be selected, with the cursor at the mouse-up position. <issue_comment>username_1: +1 this issue. Very noticeable in the Blishpad module in the multiline textbox.<issue_closed>
<issue_start><issue_comment>Title: Please publish a newer version to npm username_0: The current version that gets installed via `npm install protractor-console-plugin` has no dependencies listed in the package.json, meaning `q` does not get installed, leading to failures.<issue_closed> <issue_comment>username_1: Fixed. Sorry about that one! <issue_comment>username_0: No problem, thanks for fixing it so quickly!
<issue_start><issue_comment>Title: copy unsupported accel username_0: # Before submitting

- [ ] Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
- [ ] Did you make sure to update the docs?
- [ ] Did you write any new necessary tests?

## What does this PR do?

Fixes #51

## PR review

Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in Github issues there's a high chance it will not be merged.

## Did you have fun?

Make sure you had fun coding 🙃
<issue_comment>username_0: In principle, we shall run all accelerators in sequence; each accelerator serves the notebooks it can render/serve, and the last one will commit the remaining ones even if they are empty...
<issue_start><issue_comment>Title: Use XmlElementWrapper name when wrapping repeated elements with JAXB XmlElementWrapper annotation username_0: The following model: ```java @XmlRootElement(name = "monster") public class Monster { public String name; @XmlElementWrapper(name = "children") @XmlElement(name = "child") public java.util.List<String> child; } ``` produces the following swagger.json spec: ```json { "properties" : { "name" : { "type" : "string" }, "children" : { "type" : "array", "xml" : { "name" : "child", "wrapped" : true }, "items" : { "type" : "string" } } }, "xml" : { "name" : "monster" } } ``` This pullrequest will fix the swagger.json spec to: ```json { "properties" : { "name" : { "type" : "string" }, "children" : { "type" : "array", "xml" : { "name" : "children", "wrapped" : true }, "items" : { "type" : "string" } } }, "xml" : { "name" : "monster" } } ``` <issue_comment>username_1: OK but the current behavior _is_ the desired behavior. We don't want the inner name to be the same as the wrapper.
<issue_start><issue_comment>Title: Hello username_0: Hi, I'm not a programmer at all but I'm trying to learn with this. I got everything to work, but I am trying to figure out how I could post to Slack via a webhook instead of Twitter. Any ideas? <issue_comment>username_1: Hi, I'm not sure, but I have made a LoLChaMa Slack Bot version once. I used the Gateway service, one of the AWS services. The Gateway service is not free, but if you have an AWS free trial, you can try it for free. Here is an article I read for creating the Slack Bot, but it's in Japanese: http://dev.classmethod.jp/cloud/aws/slack-integration-blueprint-for-aws-lambda/ I think you can find help on Google with keywords like these: AWS, Lambda, Gateway, Slack Bot, Incoming Webhook. I hope this will help you! <issue_comment>username_0: You wouldn't happen to have the code used to send the Riot API data to Slack, would you? <issue_comment>username_1: Sorry, I'm not good at English. Yes, I have written some code using Riot API data, but I forgot where the code is.
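For the original question about posting to Slack instead of Twitter: a Slack Incoming Webhook only needs an HTTP POST with a small JSON payload, so something like the Python sketch below is usually enough. The webhook URL is a placeholder you get from Slack when you create the integration, and the message text is just an example.

```python
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def post_to_slack(text):
    # Incoming webhooks accept a JSON body with at least a "text" field.
    resp = requests.post(SLACK_WEBHOOK_URL, json={"text": text}, timeout=10)
    resp.raise_for_status()

post_to_slack("LoLChaMa: summoner moved up a division!")
```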
<issue_start><issue_comment>Title: Adds more tests by moving print to a separate file username_0: #67 <issue_comment>username_1: wow @username_0 this is great! Minor nitpick, can we remove the `.vscode/launch.json` file? <issue_comment>username_0: @username_1 woops sorry about that. This should be all set now. <issue_comment>username_1: awesome :+1: thanks again @username_0! happy #hacktoberfest 😸