repo_name | issue_id | text
---|---|---|
jhipster/jhipster-registry | 558169619 | Title: Links for logs and loggers are incorrect
Question:
username_0: <!--
- Please follow the issue template below for bug reports and feature requests.
- If you have a support request rather than a bug, please use [Stack Overflow](http://stackoverflow.com/questions/tagged/jhipster) with the JHipster tag.
- For bug reports it is mandatory to run the command `jhipster info` in your project's root folder, and paste the result here.
- Tickets opened without any of these pieces of information will be **closed** without any explanation.
-->
##### **Overview of the issue**
Links for logs and loggers seem to be swapped. When clicking on logs, the logger screen is displayed and vice-versa.
##### **Motivation for or Use Case**
The links should display information per navigation bar.
##### **Reproduce the error**
I checked out the v6.0.2 tag from GitHub and ran JHipster in development mode (`./mvnw -Pdev,webpack`).
##### **Related issues**
None that I could find.
##### **Suggest a Fix**
Index: src/main/webapp/app/layouts/navbar/navbar.component.html
IDEA additional info:
Subsystem: com.intellij.openapi.diff.impl.patch.CharsetEP
<+>UTF-8
===================================================================
--- src/main/webapp/app/layouts/navbar/navbar.component.html (revision c6e8a17a35495d7487af6781d0e35bc9b7844205)
+++ src/main/webapp/app/layouts/navbar/navbar.component.html (date 1580433952757)
@@ -103,7 +103,7 @@
</a>
</li>
<li>
- <a class="dropdown-item" routerLink="admin/logs" routerLinkActive="active" (click)="collapseNavbar()">
+ <a class="dropdown-item" routerLink="admin/logfile" routerLinkActive="active" (click)="collapseNavbar()">
<fa-icon icon="file-alt" [fixedWidth]="true"></fa-icon>
<span>Logs</span>
</a>
@@ -115,7 +115,7 @@
</a>
</li>
<li>
- <a class="dropdown-item" routerLink="admin/logfile" routerLinkActive="active" (click)="collapseNavbar()">
+ <a class="dropdown-item" routerLink="admin/logs" routerLinkActive="active" (click)="collapseNavbar()">
<fa-icon icon="tasks" [fixedWidth]="true"></fa-icon>
<span>Loggers</span>
</a>
##### **JHipster Registry Version(s)**
v6.0.2
##### **Browsers and Operating System**
Chrome, MacOS
- [x] Checking this box is mandatory (this is just to show you read everything)
Answers:
username_1: I confirm it!
This is a bug introduced by ticket #391.
When I synchronized the registry with JHipster 6.5.0, the links for logs and loggers were overwritten.
Status: Issue closed
|
godotengine/webrtc-native | 728825911 | Title: RPC calls failing when integrating webrtc-native into bomber-rtc project
Question:
username_0: In prototyping multiplayer with Godot, I originally had an example working that was based off of the bomberman example codebase doing NAT punchthrough with a rendezvous server using Godot's ENet implementation. However, after doing some research and realizing how naive my implementation was, and not wanting to dive into implementing ICE on top of ENet (just yet), I figured it was worthwhile seeing if WebRTC would serve my purposes.
Using a combination of the `godot-example-project`'s `webrtc-signaling` example, @Faless 's `bomber-rtc` example, `webrtc-native`, and my existing codebase, I went to work transitioning everything over to WebRTC. For what it's worth, I'm currently testing with 3.2.3-stable, and have four different machines interacting:
- Windows 10 (local)
- Ubuntu 20.04 LTS (local)
- OSX 10.14.5 (local)
- Ubuntu 20.04 LTS (digital ocean droplet running headless)
The issues I have seem to stem from having more than two peers connecting. I went through a pretty lengthy process of trying different combinations of systems being the "host" (for what that means in the webrtc sense with `server_compatibility = true`), with and without the non local headless instance to determine if it was an issue with ICE/NAT punchthrough.
The issues always seemed to come up when the ICE candidate process had completed and `connected_ok` was called from `gamestate.gd`. I've greatly simplified `gamestate`, essentially halting it from moving on to the `register_player` calls, as this was what was occasionally failing. Instead, after receiving `connected_ok`, I have each of the "clients" start a loop of RPC calls that just sends a string `ping+n`, where `n` is an incrementing number. What should happen is that the other connected clients, as well as the server, receive:
```
ping+1
ping+2
ping+3
ping+4
ping+5
ping+6
...
```
Instead, what happens is that occasionally one of these rpc calls seems to get dropped, and instead of `ping+n`, I'll just get a blank newline written to the debug log, like this:
```
ping+1
ping+2
ping+3
ping+5
ping+6
...
```
The drops seem random, and just because the drop seems to happen on one receiver (be it another client, or the server), another receiver won't necessarily miss the same call, which leads me to believe there is _something_ happening on the webrtc-native rpc reception side that's causing the miss.
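A missing-sequence check like the one implied above can be sketched as follows (illustrative Python only; the actual test ran as GDScript RPCs inside Godot, and the function name is my own):

```python
# Sketch: given the ping messages actually received, report which
# sequence numbers were dropped. Purely illustrative -- the real test
# ran as GDScript RPCs in Godot, not Python.
def dropped_pings(received, expected_count):
    seen = set()
    for msg in received:
        if msg.startswith("ping+"):
            seen.add(int(msg.split("+", 1)[1]))
    return [n for n in range(1, expected_count + 1) if n not in seen]

# Matches the logs above: ping+4 never arrived.
print(dropped_pings(["ping+1", "ping+2", "ping+3", "ping+5", "ping+6"], 6))
# → [4]
```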
The reason I've narrowed this down to webrtc-native is because last night I went through and generated HTML5 exports on the 3 local machines in order to utilize the same webrtc high level multiplayer code base, but not use webrtc-native, and I have not been able to recreate the issue once. I probably ran ~20 iterations, and was switching back and forth between HTML5 and native export, with the native always failing a few times in sending 25 ping rpc calls following `connected_ok`.
I understand this is a pretty large wall of text. I can certainly put together a reproducible test if someone else has at least three separate machines that can do HTML5 and native tests, if need be, but I figured it was worthwhile to post my findings and get a conversation going first.
Answers:
username_1: I think I was able to produce something identical to this in https://github.com/godotengine/godot-demo-projects/pull/667 |
bbonnin/zeppelin-mongodb-interpreter | 298259685 | Title: error while integrating mongodb interpreter into zeppelin
Question:
username_0: org.apache.thrift.TApplicationException: Internal error processing getFormType
at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.recv_getFormType(RemoteInterpreterService.java:337)
at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.getFormType(RemoteInterpreterService.java:323)
at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:446)
at org.apache.zeppelin.interpreter.LazyOpenInterpreter.getFormType(LazyOpenInterpreter.java:111)
at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:387)
at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:329)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Answers:
username_1: @username_0 could you provide the version of Zeppelin you are using ? Thanks !
username_0: zeppelin version 0.7.3
Thanks
username_0: Hi, I am using Zeppelin 0.7.3 and I'm facing a problem while executing a query using the MongoDB interpreter.
Thanks
username_2: Hi, I faced the same problem while developing my interpreter on Zeppelin 0.8.0. I wonder if you solved this problem, and what's your Zeppelin version? Thanks!
username_2: I solved this problem by not using the Spring framework to develop the interpreter... |
kemitix/thorp | 512997534 | Title: Create and use a cache of hashes for local files
Question:
username_0: The cache would enable a quick check of the file's last-modified time and use the cached hash rather than re-calculating it. On remote storage, or even on a non-SSD drive, this can be slow for a lot of large files. |
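A sketch of the mtime-keyed hash cache proposed in the thorp issue above (the function name, cache layout, and digest choice are my assumptions, not thorp's actual design; thorp itself is Scala — Python is used here for brevity):

```python
# Sketch of an mtime-keyed hash cache: recompute the digest only when
# the file's last-modified time has changed. Hypothetical layout, not
# thorp's actual implementation; MD5 is just an example digest.
import hashlib
import os

def cached_md5(path, cache):
    mtime = os.path.getmtime(path)
    entry = cache.get(path)
    if entry is not None and entry[0] == mtime:
        return entry[1]                    # cache hit: skip re-hashing
    with open(path, "rb") as f:
        digest = hashlib.md5(f.read()).hexdigest()
    cache[path] = (mtime, digest)          # remember mtime alongside hash
    return digest
```

The second lookup for an unchanged file returns straight from the cache, which is the whole point for large files on slow storage.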
BEEmod/BEE2-items | 495407379 | Title: lightbridges that have the properties of the custom fizzlers
Question:
username_0: basically what it says, but orange ones act like normal fizzlers, green ones act solid to physics objects, white ones don't let anything through, and you get the idea
Answers:
username_1: Can't do that, and why would you want it?
Status: Issue closed
|
Azure/azure-powershell | 419327387 | Title: I need to create Python Azure Function using Az PowerShell
Question:
username_0: Hi,
I need to create a Python Azure Function using an Az PowerShell command. Could you please let me know which command does this?
The az CLI command I am using is below, but I need an Az PowerShell equivalent:
az functionapp createpreviewapp -n buhler-uat-we-snapit-diehole-algorithm -g buhler-uat-snapit-dieholealgorithm -l "westeurope" -s buhleruatsnapitalgo --runtime python --is-linux
Answers:
username_1: @sphibss @sisrap Can you please respond about Functions support in Azure PowerShell?
username_2: @ahmedelnably - can help you here.
username_3: @username_0 a couple of weeks ago we published a first preview of the Az.Functions module for PowerShell. Please give it a try and let us know what you think.
`Install-Module -Name Az.Functions -AllowPrerelease`
username_4: Closing this because of the new Az.Functions module
Status: Issue closed
username_3: Hi @username_0
The Azure PowerShell product team is looking for customers who are willing to try some of the newest Azure PowerShell features for Azure Functions and provide feedback.
If you are interested in talking with Azure PowerShell product team on this topic, fill out this short survey (https://microsoft.qualtrics.com/jfe/form/SV_0PATZ9XFWS8S82V?Q_CHL=github) and include your email at the end. We will contact you to set up a Teams call if you are a good fit for this study.
Thank you!
Damien |
projectacrn/acrn-hypervisor | 866548290 | Title: ACRN panic when boot acrn-hypervisor in MAXTANG WHL WL10 board
Question:
username_0: **Describe the bug**
When I follow the ACRN document [Getting Started Guide for ACRN Industry Scenario with Ubuntu Service VM](https://projectacrn.github.io/2.0/getting-started/rt_industry_ubuntu.html) with ACRN v2.0 on a MAXTANG WHL WL10 board, ACRN fails to boot.
**Platform**
cpu: Intel(R) Core(TM) i5-8265U
service vm: ubuntu20
grub: grub2.04
**Codebase**
I've tried ACRN v2.0 v2.2 v2.4.
**Scenario**
Industry
**To Reproduce**
Steps to reproduce the behavior:
1. install service vm ubuntu20
2. compile acrn hypervisor using `make all BOARD_FILE=misc/acrn-config/xmls/board-xmls/whl-ipc-i5.xml SCENARIO_FILE=misc/acrn-config/xmls/config-xmls/whl-ipc-i5/industry.xml RELEASE=0`, this step produce no error or warning.
3. compile kernel without error.
4. update grub
5. reboot
6. see following error

**Expected behavior**
service vm should boot successfully.
**Is there something wrong with my way to boot ACRN?**
Answers:
username_1: Can you check/confirm the following:
* Number of CPUs on your platform?
* Check bios options (hyperthreading turned off, all VT turned on, e.g. VT-x, VT-d)
* Version of Ubuntu used, is that 18.04 or 20.04?
I don't know if this is what can cause this issue but you need to patch Grub in 18.04.
username_0: Thanks @username_1
After I configured the BIOS with hyperthreading disabled, ACRN boots successfully now. Thanks a lot~
username_1: Thanks for the confirmation @username_0 !
@NanlinXie @dbkinder , this is an example of an error message that could be improved. And perhaps ACRN shouldn't panic if there are more processors than what it was configured to use.
username_0: In the [Getting Started Guide for ACRN Industry Scenario with Ubuntu Service VM](https://projectacrn.github.io/2.0/getting-started/rt_industry_ubuntu.html) for ACRN v2.0, the Service VM should be installed on NVMe while the RTVM should be installed on SATA.
Is that necessary?
I installed both the Service VM and the RTVM on SATA, with the Service VM on `/dev/sda5` and the RTVM on `/dev/sda6`. However, I came across the following error when launching the RTVM. I don't know if this has anything to do with installing the Service VM and RTVM on the same device.

username_1: Hi @username_0 , you will need to adjust the launch script and also make sure that you have a bootloader properly configured if you do that. Where did you put the ESP (Grub bootloader)?
Having said that, you will find that sharing the same physical disk (just different partition) will adversely affect the realtime performance of your RTVM. The reason we install the Service VM and RTVM rootfs on different partitions is so that we can pass-through the storage controller to the RTVM and we avoid sharing that resource.
username_0: Hi @username_1
My ESP is in /dev/sda3.

and I modified RTVM grub file like this.

where the UUID and PARTUUID have been updated according to the RTVM's root partition `/dev/sda6`.
**I'm confused about how ACRN finds the RTVM's image.**
Here is my RTVM launch script, in which I commented out the disk pass-through.

username_2: @username_0
This is expected; it is a setup-related issue:
SATA was passed through to the RTVM, but the SOS is also using this SATA disk.
We prefer the RTVM installed on one whole SSD (if there is no spare SSD on your board, please use a USB drive), then pass the whole SSD through to the RTVM; OVMF will launch it based on the BDF info in the launch script.
username_1: Hi @username_0 , I'm not sure what you call the RTVM Grub config, but in your case I suspect you have only one Grub instance for the entire system, and that's causing problems here. Here is what the Getting Started Guide sets up when you use two separate disks, in terms of boot sequences:
1. Service VM: platform UEFI firmware -> Grub (ESP partition on the **NVMe** disk) -> rootfs on NVMe (`/dev/nvme0p2`)
2. User VM (RTVM): virtual UEFI firmware (`OVMF.fd`) -> Grub (on the **SATA** disk) -> rootfs on SATA (`/dev/sda2`)
The key is that **each** VM (Service VM and RTVM) have their own Grub, and each is configured to pick up the right kernel and set the rootfs to the right disk partition.
There are a couple of options (well, maybe three actually) for you if you want to use a single disk:
1. Create a virtual disk image based on Ubuntu in which we will replace the stock kernel by a realtime kernel (that virtual disk will contain an ESP partition with its own Grub that we would configure for the RTVM, using the UUID and PARTUUID of the virtual disk)
2. Use a dedicated partition (like you are doing today):
1. Use the `-k` and `-B` options from `acrn-dm` to use the RTVM kernel (that is located somewhere in your Service VM filesystem) and set the `bootargs` value to point at the partition where the RTVM rootfs is
2. Treat the partition as an entire disk and install a copy of grub in there... it probably involves splitting that partition into at least two partitions, one to act as the ESP and the other to host the RTVM rootfs.
Note that the realtime performance in **any** of those scenarios is likely to be sub-optimal because you are sharing the same physical resource and the Service VM will therefore interfere with the RTVM (for storage access).
We do not have a good set of instructions for you for any of these set-ups as described above but if you pick one, we can guide you through it. My recommendation is that we try first with option 2.1 (using the `-k` and `-B` options of `acrn-dm` and point at the SATA partition for the RTVM rootfs).
We will also need to restore your Grub installation to point at the correct Service VM partition.
username_0: Thanks! @username_1 @username_2
I'd like to try option 2.1 first. I have two questions:
1. Where should `bootargs` be set, and what should its value be: UUID or BDF?
2. _We will also need to restore your Grub installation to point at the correct Service VM partition._ Do you mean I should update the file `/etc/grub.d/40_custom` of the SOS?
username_1: The `-B, --bootargs` parameter should be set to something like: `root=PARTUUID=<UUID of rootfs partition> rw rootwait nohpet console=hvc0 console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M consoleblank=0 clocksource=tsc tsc=reliable x2apic_phys processor.max_cstate=0 intel_idle.max_cstate=0 intel_pstate=disable mce=ignore_ce audit=0 isolcpus=nohz,domain,1 nohz_full=1 rcu_nocbs=1 nosoftlockup idle=poll irqaffinity=0`
Re: Grub: yes, you need to make sure that the `/etc/grub.d/40_custom` file that's on the **Service VM partition (i.e. /dev/sda5?)** is updated to use the Service VM kernel parameters and correct PARTUUID (from `/dev/sda5`).
username_0: Thanks! After following the above tips, I updated the acrn-dm command parameters; however, my usage is not correct. Here is my script.

Status: Issue closed
username_0: Can I install RTVM in a usb flash disk, and pass through USB to RTVM?
username_0: **Describe the bug**
When I follow ACRN document [Getting Started Guide for ACRN Industry Scenario with Ubuntu Service VM](https://projectacrn.github.io/2.0/getting-started/rt_industry_ubuntu.html) with ACRN v2.0 using MAXTANG WHL WL10 board. ACRN fail to boot
**Platform**
cpu: i5: Intel(R) Core(TM) i5-8265U
service vm: ubuntu20
grub: grub2.04
**Codebase**
I've tried ACRN v2.0 v2.2 v2.4.
**Scenario**
Industry
**To Reproduce**
Steps to reproduce the behavior:
1. install service vm ubuntu20
2. compile acrn hypervisor using `make all BOARD_FILE=misc/acrn-config/xmls/board-xmls/whl-ipc-i5.xml SCENARIO_FILE=misc/acrn-config/xmls/config-xmls/whl-ipc-i5/industry.xml RELEASE=0`, this step produce no error or warning.
3. compile kernel without error.
4. update grub
5. reboot
6. see following error

**Expected behavior**
service vm should boot successfully.
**Is there something wrong with my way to boot ACRN?**
username_0: Hi @username_1 , I changed the launch script like this:
![Uploading boot_rtvm_error1.jpg…]()
However I came across same error as same as https://github.com/projectacrn/acrn-hypervisor/issues/5970#issuecomment-826776579
username_2: @username_0 Yes. you can passthrough USB controller to RTVM.
username_3: About ‘Fix address space of post launched rfc v2 #5914’
My issue is OVMF can’t drive NVME ptdev at the pre-merge test env, but I can’t reproduce it on my whl-ipc-i7.
If you use /user/share/acrn/bios/OVMFD_debug.fd, you may see more serial output, and find out the where is the problem.
By the way, have you tried virtio-blk? In my case, vm can launch with virtio-blk device, because it is nvme issue.
Status: Issue closed
|
blakeohare/crayon | 724063870 | Title: Migrate these libraries to core functions
Question:
username_0: In an effort to redo how CNI works, the following libraries should be rewritten as native-free libraries that depend on common core functions:
- DateTime
- Environment
- FileIOCommon
- Image Encoder
- Json
- Matrices
- ProcessUtil
- Random
- Resources
- SRandom
- TextEncoding
- UserData
- Web
- Xml
Answers:
username_0: They're all done now!
Status: Issue closed
|
scrapy/scrapy | 73950960 | Title: Allow FEED_EXPORT_FIELDS to be specified per Item rather than for the whole project
Question:
username_0: This setting was added in this commit: https://github.com/scrapy/scrapy/commit/1534e8540bf083c8d7beb0264cccea5488ee0250
But it would be more useful to define per Item. Maybe change it to a dict and have the Item name as the key? Or a new attribute for the Item class?
Answers:
username_1: `FEED_EXPORT_FIELDS` option was added for 2 main reasons:
1. It is not possible to know all possible fields in advance (because items are processed one-by-one), but we need to know them to write a CSV header. With a single Item class, the set of its defined fields can be used. But with several item classes this doesn't work, and `FEED_EXPORT_FIELDS` allows you to specify which fields should be in the result.
2. In Scrapy master it is possible to yield raw dicts instead of Item instances; `FEED_EXPORT_FIELDS` allows you to define fields to export for the cases where dicts have different keys (e.g. when keys for empty values are omitted).
(1) won't work if you define `FEED_EXPORT_FIELDS` per Item class.
A set of fields defined in Item is essentially a per-item `FEED_EXPORT_FIELDS`.
Do you want to skip some fields on export time even if they are defined?
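Point (1) above — needing the full column list before the first row is written when dict items have differing keys — can be illustrated with the stdlib alone (this is plain `csv`, not Scrapy's actual `CsvItemExporter`; the item data is made up):

```python
# Sketch: with dict items whose keys differ, the CSV header (and column
# order) must be fixed up front -- which is what FEED_EXPORT_FIELDS
# supplies. Plain stdlib csv, not Scrapy's exporter code.
import csv
import io

items = [
    {"title": "A", "price": 10},        # no "url" key
    {"title": "B", "url": "http://b"},  # no "price" key
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["title", "price", "url"],
                        restval="")     # missing keys become empty cells
writer.writeheader()
writer.writerows(items)
print(buf.getvalue())
```

Without an explicit field list, no single item's keys would have sufficed for the header.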
username_0: I see what you mean. I have a requirement at the moment that feed exports are in CSV format and that the fields remain in the same order (currently using a custom exporter to do this). Spiders only use one item each, so defining this via the Item is not an issue. Maybe define these fields in the Spider instead?
username_1: In scrapy master it is possible to override settings per-spider - see http://doc.scrapy.org/en/master/topics/settings.html#settings-per-spider.
username_0: I keep forgetting this! Thank you!
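For reference, the per-spider override mentioned above might look like this sketch (`custom_settings` and the `FEED_EXPORT_FIELDS` key are real Scrapy names; the spider name and fields are made up):

```python
# Sketch of a per-spider FEED_EXPORT_FIELDS override via Scrapy's
# custom_settings attribute. Spider name and fields are illustrative,
# and a real spider would subclass scrapy.Spider.
class ProductSpider:
    name = "products"
    custom_settings = {
        "FEED_EXPORT_FIELDS": ["title", "price", "url"],  # fixed CSV column order
    }
```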
Status: Issue closed
|
metal3-io/baremetal-operator | 811463280 | Title: Resetting status.errorCount manually?
Question:
username_0: Is there any way of re-setting `errorCount` manually inside bmh object?
Maybe using some kind of special annotation?
```
{"level":"info","ts":1613681703.0899441,"logger":"controllers.BareMetalHost","msg":"done","baremetalhost":"default/master-2","provisioningState":"deprovisioning","requeue":false,"after":18790.040866384}
```
Waiting for ~5h is a bit too much :)
Would you be willing to accept for review PR that add special annotation for clearing error count and message?
Answers:
username_1: /kind feature
/cc @username_2
username_2: The error count was meant to introduce an exponential backoff in case of error, so clearing out the message doesn't seem the correct thing to do to manage an erroneous situation: there should be an intervention (manual or automatic) to fix the problem, with a subsequent re-evaluation (reconcile) that will take care of clearing the error if the right conditions are met. Can you please provide more details about your specific scenario? What kind of error? What prevented the reconcile from retriggering?
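For illustration only, an error-count-driven exponential backoff looks like this (these are NOT baremetal-operator's actual constants or code; the 5-hour cap merely echoes the ~5h requeue reported in the logs above):

```python
# Hypothetical exponential-backoff schedule driven by an error count.
# Not baremetal-operator's real formula; base and cap are made up.
def backoff_seconds(error_count, base=10.0, cap=5 * 3600):
    """Delay doubles with each recorded error, up to a fixed cap."""
    return min(base * (2 ** error_count), cap)

for n in (0, 1, 2, 5, 11):
    print(n, backoff_seconds(n))  # grows until capped at 18000 s (5 h)
```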
username_0: @username_2 The Image field on the bmh object was incorrectly set in this particular case; this triggered multiple reconciles and eventually a backoff.
I think that we need some kind of backup plan to remove these errors without re-provisioning the bmh.
username_0: @username_2 Any ideas how to approach this?
username_2: And fixing the Image field on the bmh object didn't retrigger the reconcile loop and cleared out the errorCount? If not, that's the issue to be fixed
username_0: @username_2 No, to re-trigger I had to pause/resume reconciliation via the annotation; after that, a successful de-provisioning cleared errorCount as expected.
username_0: @username_2 So this happened again in a different setup: stuck at backoff with no apparent way to trigger a reconcile.
```
{"level":"info","ts":1615829787.7539394,"logger":"controllers.BareMetalHost","msg":"publishing event","baremetalhost":"worker-1","reason":"InspectionError","message":"No lookup attributes were found, inspector won't be able to find it after introspection, consider creating ironic ports or providing an IPMI address"}
{"level":"info","ts":1615829788.0157337,"logger":"controllers.BareMetalHost","msg":"done","baremetalhost":"worker-1","provisioningState":"inspecting","requeue":false,"after":11686.964158174}
```
This error happened because ironic did not provide a free port to the `worker-1` node, but that is a different story, not related to this feature.
cc: @username_3
Can we please consider implementing this in some way?
For additional context, the ironic error happened due to the Ironic pod being unexpectedly restarted.
```
"error":"action \"registering\" failed: failed to validate BMC access: failed to create port in ironic: The service is currently unable to handle the request due to a temporary overloading or maintenance. This is a temporary condition. Try again later."
```
username_2: I guess it could make sense to have an explicit mechanism to force re-triggering a reconciliation loop for a specific BMH, considering that in some cases the remediation task could be an activity external to the resource, carried out by a user; but I'm still wondering why amending the Image field didn't work, as it is a normal spec field?
username_3: Any edit of the host resource should trigger reconciliation. If we want a less crude way to do it, we could add an annotation API to reset the value, but maybe it would be better to reduce the longest wait time that we use?
username_0: But this approach would also greatly help with debuggability;
I'm currently struggling with this long back-off blocking things and the inability to disable it during debugging.
username_2: I'm not very keen on such an approach (if the intention is to zero the value), as it would give the user the ability to modify a field that is managed by the operator - and usually the ErrorCount goes in pair with the ErrorMessage, so it wouldn't be enough to clear it.
I think that in such cases what it's required it's something like a "wake up" annotation that will set it to 1, forcing thus at the same time a re-evaluation of the current status without losing the current error condition
username_3: That makes sense. I was mostly pointing out that *any* edit to the host should trigger reconciliation, so we don't need a special API just to trigger a retry. If we wanted an API to let the user say "I think I have cleared the error on this host, try again" then an annotation would make sense instead of a spec field, because the controller could delete the annotation.
username_0: @username_2 So how do we set ErrorCount to 1?
Via an annotation?
username_2: The idea sounds fine to me, but I think we should prepare first a design proposal on https://github.com/metal3-io/metal3-docs to be discussed and approved by the community (I could prepare a draft tomorrow in case)
username_0: @username_2 Would be great, thanks!
username_4: @username_0 Please open a new bug and upload logs so we can figure out what the root cause is here.
As Andrea and Doug pointed out, modifying the BMH to fix the erroneous configuration will cause an immediate reconcile, and the fact that another reconcile is scheduled in 5 hours time is not relevant.
There is definitely work still to be done on retrying provisioning. When a bad image is set I believe we currently retry in a loop without incrementing error count past 1 (it goes from provisioning->deprovisioning->ready->provisioning with the successful deprovisioning clearing the error count). Ideally the error count should continue to increment as long as the image hasn't changed.
Clearly you are seeing something else, where the deprovisioning is failing repeatedly. That shouldn't happen if ironic is working and the BMC is reachable, so we need to look into it.
username_0: @username_4 For the most part https://github.com/metal3-io/metal3-docs/pull/171 seems like a solution, let's keep this one open until that design doc is implemented.
If I encounter similar issue to this I will do another more specific issue with logs.
username_4: No, that is absolutely not a solution. In fact, that the existence of such workarounds tends to prevent people from reporting obvious bugs is one reason not to implement them.
username_0: @username_4 The existence of such a workaround helps people fix issues in production clusters without recompiling code and without growing new gray hairs.
Let's not forget that contributions to code work both ways; sometimes the community just needs a workaround to solve current active problems without waiting quite some time for a proper fix.
username_4: After a lot of back-and-forth on metal3-io/metal3-docs#171 we have concluded that a retry can be triggered at any time by making any change to the BMH resource (e.g. add an annotation of your choice, not one recognised by the bmo), and we have no information to suggest that this mechanism would not work as intended when the underlying problem is fixed. Please open another bug if information comes to light suggesting otherwise.
/close |
EthicalNYC/website | 245258609 | Title: Design: What page width for the site?
Question:
username_0: 960px? Fluid? Other?
Answers:
username_0: From Renee: To be determined – we ultimately want the site to be mobile responsive, which (I imagine) will drive the proper answer to this question. In the meantime, I suggest you create a width that is appropriate for the three columns.
Status: Issue closed
|
Esri/military-tools-geoprocessing-toolbox | 192081539 | Title: Highest Points and Find Local Peaks in Pro 1.2 are symbolizing properly but not labeling
Question:
username_0: ## Expected Behavior
Running Highest Points or Find Local Peaks should produce an output that has an orange triangle for the marker, with a label that gives the value of that point from the elevation surface
## Current Behavior
Tools both run properly and symbolize properly, but there is no label with the point.
Answers:
username_1: Known issue in core geoprocessing as recent as Pro 1.3. Not much we can do for our tools until this one is fixed in core.
Status: Issue closed
|
flutter/flutter | 769188340 | Title: Can't run on Ubuntu 20.10
Question:
username_0: <!-- Thank you for using Flutter!
If you are looking for support, please check out our documentation
or consider asking a question on Stack Overflow:
* https://flutter.dev/
* https://api.flutter.dev/
* https://stackoverflow.com/questions/tagged/flutter?sort=frequent
If you have found a bug or if our documentation doesn't have an answer
to what you're looking for, then fill out the template below. Please read
our guide to filing a bug first: https://flutter.dev/docs/resources/bug-reports
-->
## Steps to Reproduce
<!-- You must include full steps to reproduce so that we can reproduce the problem. -->
1. `flutter create bug`
2. `cd bug`
3. `flutter run`
**Expected results:** <!-- what did you want to see? -->
The app running! It was working before on Ubuntu 20.04. Also, flutter-gallery 'snap version' isn't working any more!
**Actual results:** <!-- what did you see? -->
```
Launching lib/main.dart on Linux in debug mode...
Building Linux application...
Error waiting for a debug connection: The log reader stopped unexpectedly.
Error launching application on Linux.
```
<details>
<summary>Logs</summary>
<!--
Run your application with `flutter run --verbose` and attach all the
log output below between the lines with the backticks. If there is an
exception, please see if the error message includes enough information
to explain how to solve the issue.
-->
```
$ flutter run --verbose
[ +102 ms] executing: [/home/username_0/snap/flutter/common/flutter/] git -c log.showSignature=false log -n 1 --pretty=format:%H
[ +57 ms] Exit code 0 from: git -c log.showSignature=false log -n 1 --pretty=format:%H
[ ] e24e763872c1bcd34e9a5b1e6baaa98defff7fc5
[ ] executing: [/home/username_0/snap/flutter/common/flutter/] git tag --points-at e24e763872c1bcd34e9a5b1e6baaa98defff7fc5
[ +22 ms] Exit code 0 from: git tag --points-at e24e763872c1bcd34e9a5b1e6baaa98defff7fc5
[ +7 ms] executing: [/home/username_0/snap/flutter/common/flutter/] git describe --match *.*.* --long --tags e24e763872c1bcd34e9a5b1e6baaa98defff7fc5
[ +45 ms] Exit code 0 from: git describe --match *.*.* --long --tags e24e763872c1bcd34e9a5b1e6baaa98defff7fc5
[ ] 1.25.0-8.0.pre-88-ge24e763872
[ +59 ms] executing: [/home/username_0/snap/flutter/common/flutter/] git rev-parse --abbrev-ref --symbolic @{u}
[ +9 ms] Exit code 0 from: git rev-parse --abbrev-ref --symbolic @{u}
[ ] origin/master
[ ] executing: [/home/username_0/snap/flutter/common/flutter/] git ls-remote --get-url origin
[ +9 ms] Exit code 0 from: git ls-remote --get-url origin
[ ] https://github.com/flutter/flutter.git
[ +61 ms] executing: [/home/username_0/snap/flutter/common/flutter/] git rev-parse --abbrev-ref HEAD
[Truncated]
See https://flutter.dev/docs/get-started/install/linux#android-setup for more details.
[✓] Linux toolchain - develop for Linux desktop
• clang version 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
• cmake version 3.10.2
• ninja version 1.8.2
• pkg-config version 0.29.1
[!] Android Studio (not installed)
• Android Studio not found; download from https://developer.android.com/studio/index.html
(or visit https://flutter.dev/docs/get-started/install/linux#android-setup for detailed instructions).
[✓] Connected device (1 available)
• Linux (desktop) • linux • linux-x64 • Linux
! Doctor found issues in 2 categories.
```
</details>
Answers:
username_1: The same happens in Ubuntu 18.04
username_2: tried to run the counter app on Ubuntu 20.04 with the latest master
<details>
<summary>logs</summary>
```bash
[ +4 ms] Launching lib/main.dart on Linux in debug mode...
[ +5 ms] /home/francesco/snap/flutter/common/flutter/bin/cache/dart-sdk/bin/dart --disable-dart-dev
/home/francesco/snap/flutter/common/flutter/bin/cache/artifacts/engine/linux-x64/frontend_server.dart.snapshot --sdk-root
/home/francesco/snap/flutter/common/flutter/bin/cache/artifacts/engine/common/flutter_patched_sdk/ --incremental --target=flutter --debugger-module-names --experimental-emit-debug-metadata --output-dill
/tmp/flutter_tools.DNDTRL/flutter_tool.QXNDPQ/app.dill --packages /home/francesco/projects/issue/.dart_tool/package_config.json -Ddart.vm.profile=false -Ddart.vm.product=false --enable-asserts
--track-widget-creation --filesystem-scheme org-dartlang-root --initialize-from-dill build/152a2ce78bff4a6f74a44ed554717a64.cache.dill.track.dill --flutter-widget-cache
--enable-experiment=alternative-invalidation-strategy
[ +15 ms] Building Linux application...
[ +12 ms] <- compile package:issue/main.dart
[ +4 ms] executing: [build/linux/debug/] cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug /home/francesco/projects/issue/linux
[ +964 ms] -- The CXX compiler identification is Clang 6.0.0
[ +15 ms] -- Check for working CXX compiler: /snap/flutter/38/usr/bin/clang++
[ +162 ms] -- Check for working CXX compiler: /snap/flutter/38/usr/bin/clang++ -- works
[ +2 ms] -- Detecting CXX compiler ABI info
[ +123 ms] -- Detecting CXX compiler ABI info - done
[ +5 ms] -- Detecting CXX compile features
[ +480 ms] -- Detecting CXX compile features - done
[ +9 ms] -- Found PkgConfig: /snap/flutter/38/usr/bin/pkg-config (found version "0.29.1")
[ ] -- Checking for module 'gtk+-3.0'
[ +32 ms] -- Found gtk+-3.0, version 3.22.30
[ +98 ms] -- Checking for module 'glib-2.0'
[ +22 ms] -- Found glib-2.0, version 2.56.4
[ +59 ms] -- Checking for module 'gio-2.0'
[ +33 ms] -- Found gio-2.0, version 2.56.4
[ +60 ms] -- Checking for module 'blkid'
[ +28 ms] -- Found blkid, version 2.31.1
[ +64 ms] -- Checking for module 'liblzma'
[ +25 ms] -- Found liblzma, version 5.2.2
[ +66 ms] -- Configuring done
[ +50 ms] -- Generating done
[ +1 ms] -- Build files have been written to: /home/francesco/projects/issue/build/linux/debug
[ +3 ms] executing: ninja -C build/linux/debug install
[ +18 ms] ninja: Entering directory `build/linux/debug'
[+20536 ms] [1/6] Generating /home/francesco/projects/issue/linux/flutter/ephemeral/libflutter_linux_gtk.so, /home/francesco/projects/issue/linux/flutter/ephemeral/flutter_linux/fl_basic_message_channel.h,
/home/francesco/projects/issue/linux/flutter/ephemeral/flutter_linux/fl_binary_codec.h, /home/francesco/projects/issue/linux/flutter/ephemeral/flutter_linux/fl_binary_messenger.h,
/home/francesco/projects/issue/linux/flutter/ephemeral/flutter_linux/fl_dart_project.h, /home/francesco/projects/issue/linux/flutter/ephemeral/flutter_linux/fl_engine.h,
/home/francesco/projects/issue/linux/flutter/ephemeral/flutter_linux/fl_json_message_codec.h, /home/francesco/projects/issue/linux/flutter/ephemeral/flutter_linux/fl_json_method_codec.h,
/home/francesco/projects/issue/linux/flutter/ephemeral/flutter_linux/fl_message_codec.h, /home/francesco/projects/issue/linux/flutter/ephemeral/flutter_linux/fl_method_call.h,
/home/francesco/projects/issue/linux/flutter/ephemeral/flutter_linux/fl_method_channel.h, /home/francesco/projects/issue/linux/flutter/ephemeral/flutter_linux/fl_method_codec.h,
/home/francesco/projects/issue/linux/flutter/ephemeral/flutter_linux/fl_method_response.h, /home/francesco/projects/issue/linux/flutter/ephemeral/flutter_linux/fl_plugin_registrar.h,
/home/francesco/projects/issue/linux/flutter/ephemeral/flutter_linux/fl_plugin_registry.h, /home/francesco/projects/issue/linux/flutter/ephemeral/flutter_linux/fl_standard_message_codec.h,
/home/francesco/projects/issue/linux/flutter/ephemeral/flutter_linux/fl_standard_method_codec.h, /home/francesco/projects/issue/linux/flutter/ephemeral/flutter_linux/fl_string_codec.h,
/home/francesco/projects/issue/linux/flutter/ephemeral/flutter_linux/fl_value.h, /home/francesco/projects/issue/linux/flutter/ephemeral/flutter_linux/fl_view.h,
/home/francesco/projects/issue/linux/flutter/ephemeral/flutter_linux/flutter_linux.h, _phony_
[ +20 ms] [ +83 ms] executing: [/home/francesco/snap/flutter/common/flutter/] git -c log.showSignature=false log -n 1 --pretty=format:%H
[ ] [ +44 ms] Exit code 0 from: git -c log.showSignature=false log -n 1 --pretty=format:%H
[ ] [ ] 9ff4326e1fb5375e9adae37367338a2de5daa438
[ ] [ +1 ms] executing: [/home/francesco/snap/flutter/common/flutter/] git tag --points-at 9ff4326e1fb5375e9adae37367338a2de5daa438
[ ] [ +16 ms] Exit code 0 from: git tag --points-at 9ff4326e1fb5375e9adae37367338a2de5daa438
[ ] [ +2 ms] executing: [/home/francesco/snap/flutter/common/flutter/] git describe --match *.*.* --long --tags 9ff4326e1fb5375e9adae37367338a2de5daa438
[ ] [ +32 ms] Exit code 0 from: git describe --match *.*.* --long --tags 9ff4326e1fb5375e9adae37367338a2de5daa438
[ ] [ ] 1.26.0-1.0.pre-64-g9ff4326e1f
[ ] [ +64 ms] executing: [/home/francesco/snap/flutter/common/flutter/] git rev-parse --abbrev-ref --symbolic @{u}
[ ] [ +6 ms] Exit code 0 from: git rev-parse --abbrev-ref --symbolic @{u}
[Truncated]
+sssssssssdmydMMMMMMMMddddyssssssss+ Terminal: gnome-terminal
/ssssssssssshdmNNNNmyNMMMMhssssss/ CPU: Intel i5-7200U (4) @ 2.500GHz
.ossssssssssssssssssdMMMNysssso. GPU: Intel HD Graphics 620
-+sssssssssssssssssyyyssss+- Memory: 6149MiB / 7685MiB
`:+ssssssssssssssssss+:`
.-/+oossssoo+/-.
```
</details>
everything works fine, although this shows up in the logs
```bash
[+1213 ms] libEGL warning: DRI2: failed to create dri screen
[ +18 ms] libEGL warning: DRI2: failed to create dri screen
```
username_3: Hi @username_0
Just tried to reproduce on the latest `master` channel, no issues
<details>
<summary>logs</summary>
```bash
taha@pop-os:~/AndroidStudioProjects/mybug$ flutterm run -d linux -v
[ +51 ms] executing: [/home/taha/Code/flutter_master/] git -c
log.showSignature=false log -n 1 --pretty=format:%H
[ +27 ms] Exit code 0 from: git -c log.showSignature=false log -n 1
--pretty=format:%H
[ ] 9ff4326e1fb5375e9adae37367338a2de5daa438
[ ] executing: [/home/taha/Code/flutter_master/] git tag --points-at
9ff4326e1fb5375e9adae37367338a2de5daa438
[ +9 ms] Exit code 0 from: git tag --points-at
9ff4326e1fb5375e9adae37367338a2de5daa438
[ +1 ms] executing: [/home/taha/Code/flutter_master/] git describe --match *.*.*
--long --tags 9ff4326e1fb5375e9adae37367338a2de5daa438
[ +19 ms] Exit code 0 from: git describe --match *.*.* --long --tags
9ff4326e1fb5375e9adae37367338a2de5daa438
[ ] 1.26.0-1.0.pre-64-g9ff4326e1f
[ +33 ms] executing: [/home/taha/Code/flutter_master/] git rev-parse --abbrev-ref
--symbolic @{u}
[ +4 ms] Exit code 0 from: git rev-parse --abbrev-ref --symbolic @{u}
[ ] origin/master
[ ] executing: [/home/taha/Code/flutter_master/] git ls-remote --get-url
origin
[ +3 ms] Exit code 0 from: git ls-remote --get-url origin
[ ] https://github.com/flutter/flutter.git
[ +27 ms] executing: [/home/taha/Code/flutter_master/] git rev-parse --abbrev-ref
HEAD
[ +3 ms] Exit code 0 from: git rev-parse --abbrev-ref HEAD
[ ] master
[ +28 ms] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required,
skipping update.
[ ] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required,
skipping update.
[ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping
update.
[ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[ +1 ms] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping
update.
[ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping
update.
[ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping
update.
[ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required,
skipping update.
[ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required,
skipping update.
[ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required,
skipping update.
[ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required,
skipping update.
[ +38 ms] executing: /home/taha/Code/sdk/platform-tools/adb devices -l
[ +10 ms] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required,
skipping update.
[ ] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required,
skipping update.
[Truncated]
• VS Code at /usr/share/code
• Flutter extension version 3.17.0
[✓] Connected device (2 available)
• Linux (desktop) • linux • linux-x64 • Linux
• Chrome (web) • chrome • web-javascript • Google Chrome 87.0.4280.88
! Doctor found issues in 1 category.
```
</details>
@username_0 @username_1
It seems you guys are using the snap version of Flutter
For Flutter on snap related issues, please file an issue on https://github.com/canonical/flutter-snap/issues
If it reproduces with git clone version of Flutter, please provide details
https://flutter.dev/docs/get-started/install/linux
Thank you
username_0: Hi @username_2 , just to make sure, did you create a new flutter project or use an old copy?
username_0: @username_3 I think it's an issue with the boilerplate code because @username_2 uses the snap version and it works for him!
username_2: @username_0 as a matter of fact I do
I know it's a long shot, but it sometimes happens w/ master... would you mind giving `flutter run upgrade -f` a try
username_0: you mean `flutter upgrade -f`?
username_0: ```
$ flutter upgrade -f
Flutter is already up to date on channel master
Flutter 1.26.0-2.0.pre.64 • channel master • https://github.com/flutter/flutter.git
Framework • revision 9ff4326e1f (9 hours ago) • 2020-12-17 06:19:05 +0000
Engine • revision 6edb402ee4
Tools • Dart 2.12.0 (build 2.12.0-157.0.dev)
```
username_3: Hi @username_0
Does the issue persist with the upgrade?
Can you please provide `flutter run --verbose` and a complete reproducible minimal code sample
Thank you
username_4: I removed the flutter snap package and installed flutter manually, no issue now.
username_0: Hi @username_3 Yes, it does!
<details>
<summary>flutter run -v</summary>
```
[ +101 ms] executing: [/home/username_0/snap/flutter/common/flutter/] git -c log.showSignature=false log -n 1 --pretty=format:%H
[ +53 ms] Exit code 0 from: git -c log.showSignature=false log -n 1 --pretty=format:%H
[ ] cda1fae6b6f17e7178c7668cfe1a8b714840f4b3
[ +1 ms] executing: [/home/username_0/snap/flutter/common/flutter/] git tag --points-at cda1fae6b6f17e7178c7668cfe1a8b714840f4b3
[ +21 ms] Exit code 0 from: git tag --points-at cda1fae6b6f17e7178c7668cfe1a8b714840f4b3
[ +5 ms] executing: [/home/username_0/snap/flutter/common/flutter/] git describe --match *.*.* --long --tags cda1fae6b6f17e7178c7668cfe1a8b714840f4b3
[ +39 ms] Exit code 0 from: git describe --match *.*.* --long --tags cda1fae6b6f17e7178c7668cfe1a8b714840f4b3
[ ] 1.26.0-1.0.pre-84-gcda1fae6b6
[ +61 ms] executing: [/home/username_0/snap/flutter/common/flutter/] git rev-parse --abbrev-ref --symbolic @{u}
[ +11 ms] Exit code 0 from: git rev-parse --abbrev-ref --symbolic @{u}
[ ] origin/master
[ ] executing: [/home/username_0/snap/flutter/common/flutter/] git ls-remote --get-url origin
[ +5 ms] Exit code 0 from: git ls-remote --get-url origin
[ ] https://github.com/flutter/flutter.git
[ +65 ms] executing: [/home/username_0/snap/flutter/common/flutter/] git rev-parse --abbrev-ref HEAD
[ +6 ms] Exit code 0 from: git rev-parse --abbrev-ref HEAD
[ ] master
[ +65 ms] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[ +2 ms] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[ +88 ms] executing: /home/username_0/Android/Sdk/platform-tools/adb devices -l
[ +56 ms] List of devices attached
[ +5 ms] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[ +1 ms] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[ +19 ms] Downloading linux-x64/linux-x64-flutter-gtk tools...
[+1283 ms] Content https://storage.googleapis.com/flutter_infra/flutter/1be6f414e7db52304b5e94a6725260c7f32f8dec/linux-x64/linux-x64-flutter-gtk.zip
md5 hash: dqSECrYxWPG+xTqbrKnerg==
[+293698 ms] Downloading linux-x64/linux-x64-flutter-gtk tools... (completed in 295.0s)
[ ] executing: unzip -o -q
/home/username_0/snap/flutter/common/flutter/bin/cache/downloads/storage.googleapis.com/flutter_infra/flutter/1be6f414e7db52304b5e94a6725260c7f32f8dec/l
inux-x64/linux-x64-flutter-gtk.zip -d /home/username_0/snap/flutter/common/flutter/bin/cache/artifacts/engine/linux-x64
[+1245 ms] Exit code 0 from: unzip -o -q
/home/username_0/snap/flutter/common/flutter/bin/cache/downloads/storage.googleapis.com/flutter_infra/flutter/1be6f414e7db52304b5e94a6725260c7f32f8dec/l
inux-x64/linux-x64-flutter-gtk.zip -d /home/username_0/snap/flutter/common/flutter/bin/cache/artifacts/engine/linux-x64
[ +65 ms] Downloading linux-x64-profile/linux-x64-flutter-gtk tools...
[ +826 ms] Content
https://storage.googleapis.com/flutter_infra/flutter/1be6f414e7db52304b5e94a6725260c7f32f8dec/linux-x64-profile/linux-x64-flutter-gtk.zip md5 hash:
gVpm7iiDoJrBfKmEDkf7Ng==
^[[B[+142955 ms] Downloading linux-x64-profile/linux-x64-flutter-gtk tools... (completed in 143.8s)
[ ] executing: unzip -o -q
[Truncated]
<asynchronous suspension>
#14 AppContext.run (package:flutter_tools/src/base/context.dart:149:12)
<asynchronous suspension>
#15 runInContext (package:flutter_tools/src/context_runner.dart:72:10)
<asynchronous suspension>
#16 main (package:flutter_tools/executable.dart:89:3)
<asynchronous suspension>
[ +254 ms] ensureAnalyticsSent: 252ms
[ +1 ms] Running shutdown hooks
[ ] Shutdown hook priority 4
[ +11 ms] Shutdown hooks complete
[ ] exiting with code 1
```
</details>
`flutter build linux --debug && ./build/linux/debug/bundle/bugs` & `flutter build linux --release && ./build/linux/release/bundle/bugs` works! It's just the run command that doesn't work. I'll try to manually install flutter...
username_0: I cloned the repo and ran `flutter doctor` and other commands like `flutter channel`, and I'm waiting for flutter to download the dart sdk! not sure if this is by design or a bug or there is some issue on my laptop!
username_0: It works now!
username_0: Maybe the issue was because I chose "Ubuntu on wayland" when I log in! I wasn't able to run the snap version of flutter-gallery until I changed that setting! I think so!
Or maybe it was because I have a minimal installation of Ubuntu and needed to run `sudo apt install clang cmake ninja-build liblzma-dev` as found by `flutter doctor`?
Anyway, I can't re-install the snap version of flutter to know for sure because it'll download another 200MB and I'm really tired of downloading! Hopefully someone else will be able to solve the mystery! :)
Thank you.
Status: Issue closed
username_3: Hi @username_0
Glad it works for you now. Generally, snap packages should download everything necessary even if you're using a minimal version of Ubuntu. Since @username_4 also had the same issue, I would think it might be a snap Flutter issue. If the problem persists using the Snap Flutter version, please feel free to file an issue in their dedicated GitHub [repository](https://github.com/canonical/flutter-snap/issues).
Given your last message I feel safe to close this issue, if you disagree please write in the comments and I will reopen it.
Thank you
username_0: Does it work if you choose "Ubuntu on wayland" when you login? :thinking:
username_5: Hi, could I please ask you to try a full re-install of the snap and see if it fixes things:
```
snap remove --purge flutter
rm -rf ~/snap/flutter
snap install flutter --classic
flutter channel dev
flutter upgrade
flutter config --enable-linux-desktop
```
username_0: Hi @username_5
I believe I did that before (with the --edge flag) and it didn't work at all! I really don't want to do it again because my internet speed isn't fast, but I may try in a few days.
I think it's a wayland issue or Ubuntu 20.10 issue!
The snap version of "flutter gallery" used to work on Ubuntu 20.04 on wayland (as well as flutter itself), but now I get:
```
$ flutter-gallery
/bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
(flutter_gallery:7808): Gtk-WARNING **: 17:00:39.491: cannot open display: :0
```
when I run flutter-gallery on wayland and not when I use the default display (x11?!)
username_0: @username_5 It seems it did purge flutter and installed the git version afterwards! Will try these steps later on.
username_0: Hi @username_5,
I tried to run the snap version of flutter on x11 and wayland but it still doesn't run!
These are the commands I used:
```
sudo snap remove --purge flutter
rm -rf ~/snap/flutter
sudo snap install flutter --classic
flutter channel dev
flutter upgrade
flutter config --enable-linux-desktop
flutter create isbuggy
```
<details>
<summary>flutter run -v</summary>
```
$ flutter run -v
[ +101 ms] executing: [/home/username_0/snap/flutter/common/flutter/] git -c log.showSignature=false log -n 1 --pretty=format:%H
[ +52 ms] Exit code 0 from: git -c log.showSignature=false log -n 1 --pretty=format:%H
[ ] 63062a64432cce03315d6b5196fda7912866eb37
[ +1 ms] executing: [/home/username_0/snap/flutter/common/flutter/] git tag --points-at 63062a64432cce03315d6b5196fda7912866eb37
[ +17 ms] Exit code 0 from: git tag --points-at 63062a64432cce03315d6b5196fda7912866eb37
[ ] 1.26.0-1.0.pre
[ +58 ms] executing: [/home/username_0/snap/flutter/common/flutter/] git rev-parse --abbrev-ref --symbolic @{u}
[ +8 ms] Exit code 0 from: git rev-parse --abbrev-ref --symbolic @{u}
[ ] origin/dev
[ ] executing: [/home/username_0/snap/flutter/common/flutter/] git ls-remote --get-url origin
[ +6 ms] Exit code 0 from: git ls-remote --get-url origin
[ ] https://github.com/flutter/flutter.git
[ +72 ms] executing: [/home/username_0/snap/flutter/common/flutter/] git rev-parse --abbrev-ref HEAD
[ +5 ms] Exit code 0 from: git rev-parse --abbrev-ref HEAD
[ ] dev
[ +88 ms] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[ +3 ms] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[ +104 ms] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[ +1 ms] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[ +1 ms] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[ +105 ms] Skipping pub get: version match.
[ +119 ms] Found plugin integration_test at /home/username_0/snap/flutter/common/flutter/packages/integration_test/
[ +295 ms] Found plugin integration_test at /home/username_0/snap/flutter/common/flutter/packages/integration_test/
[Truncated]
[ +256 ms] ensureAnalyticsSent: 254ms
[ +3 ms] Running shutdown hooks
[ ] Shutdown hook priority 4
[ +11 ms] Shutdown hooks complete
[ ] exiting with code 1
```
</details>
then:
```
snap refresh flutter --edge
flutter upgrade
flutter run -v
```
and still not working on wayland and x11.
Note: It seems I hadn't used flutter on wayland on Ubuntu 20.04.
username_5: Thanks @username_0, it appears this may not be snap specific. We’re seeing the same from another user using the flutter git repo directly: https://github.com/flutter/flutter/issues/72703#issuecomment-749529246
username_0: welcome but that issue seems a bit different to me! (logger vs. linker)
username_5: Oh I see, sorry I just saw that the end of the trace looked the same, missed the message just above.
username_5: I've just pushed a new version of the snap to edge, please could you see if it fixes the issue for you:
```
snap refresh flutter --edge
```
username_5: I've just pushed a new version of the snap, please could you see if it fixes the issue for you:
```
snap refresh flutter --stable
```
username_0: Hi @username_5 Yes, it does fix it but closing the release version of the app doesn't finish it; I have to press `q` in the terminal to finish it!
Also, the app renders a black screen for about 2 seconds in debug mode and keeps showing a black screen in release mode unless I press something or move the mouse!
```
$ flutter run --release -v
[ +100 ms] executing: [/home/username_0/snap/flutter/common/flutter/] git -c log.showSignature=false log -n 1 --pretty=format:%H
[ +55 ms] Exit code 0 from: git -c log.showSignature=false log -n 1 --pretty=format:%H
[ ] 63062a64432cce03315d6b5196fda7912866eb37
[ ] executing: [/home/username_0/snap/flutter/common/flutter/] git tag --points-at 63062a64432cce03315d6b5196fda7912866eb37
[ +23 ms] Exit code 0 from: git tag --points-at 63062a64432cce03315d6b5196fda7912866eb37
[ ] 1.26.0-1.0.pre
[ +65 ms] executing: [/home/username_0/snap/flutter/common/flutter/] git rev-parse --abbrev-ref --symbolic @{u}
[ +10 ms] Exit code 0 from: git rev-parse --abbrev-ref --symbolic @{u}
[ ] origin/dev
[ ] executing: [/home/username_0/snap/flutter/common/flutter/] git ls-remote --get-url origin
[ +14 ms] Exit code 0 from: git ls-remote --get-url origin
[ ] https://github.com/flutter/flutter.git
[ +62 ms] executing: [/home/username_0/snap/flutter/common/flutter/] git rev-parse --abbrev-ref HEAD
[ +6 ms] Exit code 0 from: git rev-parse --abbrev-ref HEAD
[ ] dev
[ +65 ms] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[ +2 ms] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'LinuxEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[ +94 ms] Artifact Instance of 'AndroidGenSnapshotArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'AndroidInternalBuildArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'IOSEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterWebSdk' is not required, skipping update.
[ +1 ms] Artifact Instance of 'WindowsEngineArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSEngineArtifacts' is not required, skipping update.
[ +1 ms] Artifact Instance of 'LinuxFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'MacOSFuchsiaSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerSDKArtifacts' is not required, skipping update.
[ ] Artifact Instance of 'FlutterRunnerDebugSymbols' is not required, skipping update.
[ +110 ms] Skipping pub get: version match.
[ +124 ms] Found plugin integration_test at /home/username_0/snap/flutter/common/flutter/packages/integration_test/
[ +275 ms] Found plugin integration_test at /home/username_0/snap/flutter/common/flutter/packages/integration_test/
[ +5 ms] Generating /home/username_0/Coding/projects/waw/android/app/src/main/java/io/flutter/plugins/GeneratedPluginRegistrant.java
[ +127 ms] Launching lib/main.dart on Linux in release mode...
[ +13 ms] Building Linux application...
[ +24 ms] executing: [build/linux/release/] cmake -G Ninja -DCMAKE_BUILD_TYPE=Release /home/username_0/Coding/projects/waw/linux
[ +60 ms] -- Configuring done
[ +2 ms] -- Generating done
[ ] -- Build files have been written to: /home/username_0/Coding/projects/waw/build/linux/release
[ +14 ms] executing: ninja -C build/linux/release install
[ +17 ms] ninja: Entering directory `build/linux/release'
[+3581 ms] [1/5] Generating /home/username_0/Coding/projects/waw/linux/flutter/ephemeral/libflutter_linux_gtk.so,
/home/username_0/Coding/projects/waw/linux/flutter/ephemeral/flutter_linux/fl_basic_message_channel.h,
/home/username_0/Coding/projects/waw/linux/flutter/ephemeral/flutter_linux/fl_binary_codec.h,
/home/username_0/Coding/projects/waw/linux/flutter/ephemeral/flutter_linux/fl_binary_messenger.h,
/home/username_0/Coding/projects/waw/linux/flutter/ephemeral/flutter_linux/fl_dart_project.h,
/home/username_0/Coding/projects/waw/linux/flutter/ephemeral/flutter_linux/fl_engine.h,
[Truncated]
• Framework revision 63062a6443 (3 weeks ago), 2020-12-13 23:19:13 +0800
• Engine revision 4797b06652
• Dart version 2.12.0 (build 2.12.0-141.0.dev)
[✓] Linux toolchain - develop for Linux desktop
• clang version 6.0.0-1ubuntu2 (tags/RELEASE_600/final)
• cmake version 3.10.2
• ninja version 1.8.2
• pkg-config version 0.29.1
[✗] Flutter IDE Support (No supported IDEs installed)
• IntelliJ - https://www.jetbrains.com/idea/
• Android Studio - https://developer.android.com/studio/
• VS Code - https://code.visualstudio.com/
[✓] Connected device (1 available)
• Linux (desktop) • linux • linux-x64 • Linux
! Doctor found issues in 1 category.
```
username_5: @username_0, do you see the same behaviour when running the built application from `build/linux/debug/bundle/` / `build/linux/release/bundle/`?
username_0: @username_5 Yes for the debug version, and no for the release version!
Using the commands:
```
flutter build linux --debug && ./build/linux/debug/bundle/waw
flutter build linux --release && ./build/linux/release/bundle/waw
```
Note: I have a similar issue that started when I upgraded to Ubuntu 20.10: the clock doesn't update whenever I watch a video in full-screen mode. So maybe an Ubuntu issue? Or a snap issue (not sure if the clock is related to the gnome-calendar snap!)
username_5: Ok, if you see any of these issues when running directly from `build/linux/xxx/bundle/` it won't be snap-specific. I think it's fair to say this bug can be closed now. Could you please [log a separate bug](https://github.com/flutter/flutter/issues/new?assignees=&labels=&template=2_bug.md&title=) for the new issue you're facing. Thanks for all your help! |
ssu-see/see.ssu.ac.kr | 64591764 | Title: Requests for when you work [please be sure to read this]
Question:
username_0: Never work directly on the server. It conflicts with the local environment.
(You shouldn't do it in the first place, but if you did, please make sure to push it to GitHub.)
@준형 @Luavis Changing the server environment is one thing, but
there are things you absolutely must tell us about in advance.
(That said, I'm still learning too, and again I apologize for making so many mistakes last time.)
Above all, collaboration is far more important than skill.
When applying changes, please fork the repo, do your work, then open a pull request before applying them.
Please check the Travis results.<issue_closed>
Status: Issue closed |
cmdcolin/travigraphjs | 456654226 | Title: Data display sometimes includes bad time (unix epoch)
Question:
username_0: Could be related to network failure as it's not super reproducible
Answers:
username_1: It seems it's because sometimes when you're querying new jobs that haven't finished yet, their `finished_at` field is `null`, which is probably interpreted as `0` by the graphing library
username_0: Thanks, I'll see if i can at least filter these out!
Status: Issue closed
username_0: Should be added now |
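The fix username_0 mentions amounts to dropping builds whose `finished_at` is still `null` before charting. A minimal sketch (Ruby purely for illustration; only the `finished_at` field name comes from the thread, the rest is assumed):

```ruby
# Keep only builds that have actually finished, so a missing
# finished_at is never coerced to 0 (the Unix epoch) on the x-axis.
def finished_builds(builds)
  builds.select { |b| !b["finished_at"].nil? }
end
```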
wuhan2020/map-viz | 561465868 | Title: Merge the two line charts
Question:
username_0: ## 描述功能要求
### 如果有的话,请附图

[Figma](https://www.figma.com/file/Dq4SyxvTwO7lDQVnwox36G/wuhan-v1-rapid-prototype?node-id=0%3A28)
## Additional information
Answers:
username_0: With the two charts merged together, the deaths and recoveries can't be made out clearly; the difference in magnitude is too large. So I'm closing the issue.
Status: Issue closed
|
prometheus/snmp_exporter | 734715655 | Title: cant add APC Swtiched Rack PDU
Question:
username_0: <!--
Please note: GitHub issues should only be used for feature requests and
bug reports. For general discussions and support, please refer to one of:
- #prometheus on freenode
- the Prometheus Users list: https://groups.google.com/forum/#!forum/prometheus-users
For bug reports, please fill out the below fields and provide as much detail
as possible about your issue. For feature requests, you may omit the
following template.
If you include CLI output, please run those programs with additional parameters:
snmp_exporter: `-log.level=debug`
snmpbulkget etc: `-On`
-->
### Host operating system: output of `uname -a`
### snmp_exporter version: output of `snmp_exporter -version`
<!-- If building from source, run `make` first. -->
### What device/snmpwalk OID are you using?
modules:
apcpdu:
version: 1
walk:
- .1.3.6.1.4.1.318.1.1.26.2.1.3 # rPDU2IdentName
- .1.3.6.1.4.1.318.1.1.26.2.1.4 # rPDU2IdentLocation
- .1.3.6.1.4.1.318.1.1.26.2.1.8 # rPDU2IdentModelNumber
- .1.3.6.1.4.1.318.1.1.26.2.1.9 # rPDU2IdentSerialNumber
- .1.3.6.1.4.1.318.1.1.26.4.3.1.4 # rPDU2DeviceStatusLoadState
- 1.3.6.1.4.1.318.1.1.26.4.3.1.5 # rPDU2DeviceStatusPower
- 1.3.6.1.4.1.318.1.1.26.4.3.1.6 # rPDU2DeviceStatusPeakPower
- 1.3.6.1.4.1.318.1.1.26.4.3.1.9 # rPDU2DeviceStatusEnergy
- 1.3.6.1.4.1.318.1.1.26.4.3.1.12 # rPDU2DeviceStatusPowerSupplyAlarm
- 1.3.6.1.4.1.318.1.1.26.4.3.1.13 # rPDU2DeviceStatusPowerSupply1Status
- 1.3.6.1.4.1.318.1.1.26.4.3.1.14 # rPDU2DeviceStatusPowerSupply2Status
- 1.3.6.1.4.1.318.1.1.26.4.3.1.16 # rPDU2DeviceStatusApparentPower
### If this is a new device, please link to the MIB(s).
### What did you do that produced an error?
The generator doesn't build the snmp.yml file. I am using these MIBs: https://www.se.com/us/en/download/document/APC_POWERNETMIB_432/
### What did you expect to see?
A successful snmp.yml file.
### What did you see instead?
./generator generate
level=info ts=2020-11-02T18:06:50.747Z caller=net_snmp.go:142 msg="Loading MIBs" from=../mibs/apc/powernet432.mib
level=info ts=2020-11-02T18:06:50.752Z caller=main.go:52 msg="Generating config for module" module=apcpdu
level=error ts=2020-11-02T18:06:50.752Z caller=main.go:130 msg="Error generating config netsnmp" err="cannot find oid '.1.3.6.1.4.1.318.1.1.26.2.1.3' to walk"
Answers:
username_1: It makes more sense to ask questions like this on the [prometheus-users mailing list](https://groups.google.com/forum/#!forum/prometheus-users) rather than in a GitHub issue. On the mailing list, more people are available to potentially respond to your question, and the whole community can benefit from the answers provided.
Status: Issue closed
|
quasilyte/go-ruleguard | 677921898 | Title: breaking api change in ruleguard.go -> Context -> Report ?
Question:
username_0: See https://github.com/username_1/go-ruleguard/pull/64/files#r469483412 for the exact location of the problematic change
Here is how the error materializes itself in a project that uses go-ruleguard (in this case, go-critic):
```
[16:55:23] : [Step 5/6] #21 51.74 /go/pkg/mod/github.com/go-critic/go-critic@v0.5.0/checkers/ruleguard_checker.go:81:3: cannot use func literal (type func(ast.Node, string, *ruleguard.Suggestion)) as type func(ruleguard.GoRuleInfo, ast.Node, string, *ruleguard.Suggestion) in field value
[16:55:23] : [Step 5/6] #21 57.86 make: *** [GNUmakefile:183: lint-deps] Error 2
```
Typical dependency chain is as follows:
<project> -> golangci-lint -> go-critic -> go-ruleguard
if <project> uses go get -u to install packages, the dependency tree gets updated and go-critic can't build so the build breaks.
I think this is would be considered a breaking change by semver rules, but given it's all pre-release, I guess all bets are off. Up to you to decide how to best address this :)
Answers:
username_1: Am I right that we basically need to bump at least the minor version to make things work?
username_0: Looking at:
- https://golang.org/cmd/go/#hdr-Add_dependencies_to_current_module_and_install_them
I think bumping the minor version would protect folks using the `-u=patch` option but not folks using `-u`, so unfortunately not everything is in your hands; this would reduce the number of users running into issues but not eliminate it. I'm not sure it's possible to eliminate the problem entirely, though.
- https://semver.org/#spec-item-4
specifies that with version 0.x.x all bets are off.
in my experience (mostly with NPM), pre-release packages are easier to manage using the following dependency scheme:
`<X.Y.Z>-preview.<a>` or to the extreme, `<X.Y.Z>-preview.<a.b.c>` (this is what nuget does) where `X.Y.Z` is the version you expect the package to be released as stable and `<a>` or `<a.b.c>` is the pre-release versioning. I'm still kinda new to the golang world, so I don't think I know all the intricacies of how `go get` works. And I believe as a maintainer you have the right to pick what works best for you :)
username_1: I'll start by updating the ruleguard version in go-critic. This should fix the golangci build.
username_1: Release `v0.2.0` is out and we're also using Go modules now.
username_0: That's awesome, thanks for the quick turn around :)
Status: Issue closed
|
openshift/origin | 82076180 | Title: Container status errors look ugly in details sidebar
Question:
username_0: Better to have the message than not have it, but we could present it better.

/cc @liggitt
Answers:
username_0: We might just close this. We have individual pod pages now and don't show pod details in the sidebar.
username_0: Container state reason is supposed to be a brief message.
```
// ContainerStateWaiting is a waiting state of a container.
type ContainerStateWaiting struct {
// (brief) reason the container is not yet running.
Reason string `json:"reason,omitempty"`
// Message regarding why the container is not yet running.
Message string `json:"message,omitempty"`
}
```
I haven't seen a message like this for a while. Closing. We can reopen if we see it again.
Status: Issue closed
|
postmanlabs/postman-app-support | 345663751 | Title: Postman App Performance Lag
Question:
username_0: I'm using the desktop native app on a Windows 10 machine. I've closed all tabs within the collection and rebooted my machine. I'm authoring in form-data fields specifically in the Body section of a POST request of the collection and noticing a very long lag (~10-20 seconds) from keystroke to the character appearing in the text box. My requests are at 176 in that one collection so I understand I may be pushing the tool to the limits. However, please advise best practices to improve the performance of the tool on our end. Is there a way we should say author outside and can import it in? All of my content is written, I'm just pulling it in from another source via a combination of bulk edit and copy/paste.
Answers:
username_1: I have the "new" desktop Windows app, I have installed it / uninstalled it and it runs poorly. My laptop is a 4 core CPU with 16 GB of RAM running Windows 10 Enterprise.
It takes several minutes for it to respond to a mouse click. When it does finally respond, it takes ages to search. The interceptor feature was removed. This is a horrible experience that I would not wish on my worst enemy. When does a company decide to remove critical functionality? Is it so you can make $$$ from support cases? I really wish you added to the Chrome version rather than deprecating it. You seem to be moving backward in time, apps are going away from the desktop, not to the desktop.
username_2: I'm a paying client (Postman Pro) and the one who reported the original lag issue. I'm running Windows 10 Enterprise on a laptop with a Core i7 processor and 16 GB of RAM, 64-bit OS.
username_3: We are making some fundamental improvements to Postman's response rendering including using the Monaco editor for better memory optimization and rendering speeds. You can try out these updates in the Canary channel here: https://www.getpostman.com/downloads/canary
<img width="1790" alt="Screenshot 2019-06-12 12 19 53" src="https://user-images.githubusercontent.com/653409/59379813-72ffc700-8d0c-11e9-9da4-5c9ce0492dc4.png">
You can test it out with an API that we have created on the Postman Echo service. Examples here: https://documenter.getpostman.com/view/55577/S1ZudWWm?version=latest
username_4: It's 2020, and I think it is still the same. I use Postman extensively. Insomnia is still a better experience when switching tabs. I am sorry, but you can feel it too. IMHO.
username_5: Still lagging on windows, as high as a 1.5 second delay after typing things for things to appear in the URL bar.
username_6: We have made some performance improvements in the past few months.
Can anyone having issues please try the most recent version of the application and let us know if they are still seeing an issue?
username_7: Windows 10. Core i7 and 16GB of RAM.
Postman lags so much.

username_8: I had a similar issue with a not so big postman file (1 collection with 30 requests) on Windows 10, the IDE was irregularly laggy when travelling through the app.
Changing the "Performance options" settings to "best performance" solved the problem for me. I did not spend time identifying which option caused the lags (maybe text smoothing, idk).

That could be an idea.
username_9: Merging this ticket with https://github.com/postmanlabs/postman-app-support/issues/8761
Status: Issue closed
|
amro/gibbon | 320157887 | Title: How can I check if email is a member?
Question:
username_0: Hello, I am trying to make it so that I can check if an email is a member, and if they are a member, add them to a group. If they're not a member, then subscribe the email. I have everything working EXCEPT how to check if they are on the list. Here is what I have...
```ruby
list_id = ENV['MAILCHIMP_LIST_ID']
@gibbon = Gibbon::Request.new(api_key: ENV['mailchimp_api_key'], symbolize_keys: true)
@member_id = Digest::MD5.hexdigest(params[:Email])
if @member_id.present? # THIS IS WHAT I CAN'T FIGURE OUT: need to change it to check whether the response is good (member is present)
  @gibbon.lists(list_id).members(@member_id).update(body: { interests: { 'interest': true } })
else
  @gibbon.lists(list_id).members.create(body: { email_address: params[:Email], status: 'subscribed', double_optin: false, interests: { '86c0a59e15': true } })
end
```
The hexdigest gives me a string even if the member is not created. Any thoughts?
Answers:
username_1: Sorry it took me a little while to respond, @username_0. Gibbon throws errors for API responses that are not "success." Wrap the call in a begin/rescue and look for the 404 and handle it as a "not subscribed." That's how I would handle this.
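A sketch of that begin/rescue shape (this assumes Gibbon raises `Gibbon::MailChimpError` with a `status_code` attribute for non-success responses; double-check the error class against the gem version you're on, and the interest id is just the one from the question):

```ruby
require 'digest'

# Update interests if the email is already on the list; on a 404
# from the member lookup, treat it as "not subscribed" and create it.
def subscribe_or_update(gibbon, list_id, email)
  member_id = Digest::MD5.hexdigest(email.downcase)
  gibbon.lists(list_id).members(member_id).retrieve # raises unless subscribed
  gibbon.lists(list_id).members(member_id).update(body: { interests: { '86c0a59e15': true } })
rescue Gibbon::MailChimpError => e
  raise unless e.status_code == 404 # anything else is a real failure
  gibbon.lists(list_id).members.create(body: { email_address: email, status: 'subscribed' })
end
```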
username_0: Worked like a charm! Thanks
Status: Issue closed
|
snabbco/snabb | 209127719 | Title: Proposal: writing tests also using another language
Question:
username_0: TL;DR: at Igalia we'd like to _also_ use another language for writing tests, probably Python.
### Current status
In addition to unit tests in Lua code, integration tests are mostly written using Bash shell scripts.
In the `next` branch there are currently 100 shell scripts, totaling around 3100 lines. In the `src/program/lwaftr` directory of [our fork](https://github.com/Igalia/snabb) alone there currently 22 shell scripts, totaling around 1200 lines. Most of those scripts are used for tests, called by `make test` via 18 `selftest.sh` files (stats courtesy of [loccount](https://gitlab.com/esr/loccount)).
### Our situation
While we keep adding and enhancing integration tests, our dissatisfaction with Bash shell scripts grows. The tests now include sizable amounts of logic, and we frequently find ourselves fighting the various peculiarities and limitations of the language.
We are at a point were we feel it would be more productive to start using a more consistent tool for this job.
### Alternatives
After evaluating the alternatives, we are considering two candidates: Lua and Python.
Lua would be the natural choice, being the main language of the project. However its features for handling subprocesses are not very good, and that is obviously the main thing happening in integration tests. Also, libraries would have to be added for specific features we need, like JSON encoding/decoding.
Python is a strong candidate: it has robust subprocess handling, an extensive standard library and most of us know it well. Its syntax is slightly more verbose than shell scripts for running system commands, but that could be corrected by using [a small, one-file library](http://amoffat.github.io/sh/).
### Feasibility
Our tests are currently run both manually and automatically, via `snabb-bot` and Hydra. As far as I know there is no formal list of test dependencies: the closest thing is the list in a [Dockerfile](https://github.com/eugeneia/snabbswitch-docker/blob/master/image/eugeneia/snabb-nfv-test/Dockerfile#L5).
Ensuring the availability of another interpreter in all the environments tests are run should not be a problem.
### Impact on the project
We are not proposing to establish any kind of policy for future tests, much less rewriting the current ones: we would like to have the _option_ to also use another language in tests. The current structure would remain in place: the `selftest.sh` scripts would call other test scripts, whatever the language they are written in.
### Request for comment
Do you think it would be acceptable to have tests written in another language too? Would you benefit from this proposal? Would you like to suggest other languages? Which ones, and why?
Please comment. Thank you.
Answers:
username_1: Good writeup! To me it makes sense to keep `snabb` minimal but to be more liberal with related tooling e.g. tests, benchmarks, log analysers, etc. It's definitely important to nail down the dependencies but this can naturally by done by making them run under Hydra.
So 👍 from me on writing test suites and related tools using the technologies that you consider most appropriate and providing a nix expression to make them run with the right dependencies. This way we can always run the tests under the Hydra CI with consistent dependency versions, lazy people like me can use nix to install the dependencies automatically in development environments, and people can also privately use any other methods they like personally e.g. `apt-get`, `yum`, `docker`, `make install`, etc.
Sound reasonable?
Further braindump...
I have been pondering this quite a bit lately because I am working on Snabb-related development tools that I want to write in R and Pharo - domain-specific tools for statistical analysis and interactive visualization - and I will need a solution for making these easy to run including dependencies. I see nix as the solution here but I am working out the exact details.
For one simple example, I have an R program called `timeliner` that reads a Snabb "shm" folder and produces statistics and visualizations. The basic source code for this program looks like:
```
#!/usr/bin/env Rscript
... R code ...
```
and the issue is that it only actually runs if you have a suitable version of R installed and also suitable versions of all the libraries that it depends on. This is potentially a problem because I really have no idea which Linux distros satisfy that criteria. So the simple solution is to make a `timeliner-nix` wrapper script that asks nix to provide the necessary dependencies. This is a simple two-liner:
```
#!/usr/bin/env nix-shell
#!nix-shell -i Rscript -p 'with pkgs; with rPackages; [ R dplyr ggplot2 bit64 ]'
```
... and so if you are in doubt about the dependencies you can just install nix and run the `timeliner-nix` wrapper to automatically get the same versions that I am using. If you are sure you have the dependencies right then you can run `timeliner` directly instead and you don't need nix.
That's not as convenient as deploying Snabb - one binary with no dependencies - but seems like a reasonable compromise for optional development tools.
username_0: Yes, quite reasonable, thank you. :-) The implementation plan is in [this issue](https://github.com/Igalia/snabb/issues/749). |
facebookresearch/pytorch3d | 642284502 | Title: Support for multiple lighting source for rendering
Question:
username_0: ## Motivation
Currently the renderer forward method only support one lighting per batch, this limits the usage of this library for more complex neural rendering training.
For example, to obtain a visually appealing rendering of a object with smooth texturing, it is not possible to contrast shadows with a single lighting source. For example:

Multi-lighting configuration is a common functionality in OpenGL and its related third-party libraries, such as pyrender.
## Pitch
It seems like the easiest way of adding this new feature is to construct a multi-lighting class that accepts the current lighting objects as a list.
```python
lights = MultiLights([
directional_lights_a,
point_lights_b,
])
shader = shader_cls(..., lights=lights)
```
This would require modifying current diffuse and specular functions, but should be doable.
Answers:
username_1: @nikhilaravi, I wanted to ask whether this is still on the TODO list?
Status: Issue closed
username_0: @nikhilaravi I am closing this now due to no response. |
mganss/HtmlSanitizer | 180724186 | Title: .NET 4
Question:
username_0: Why did this nice library drop support for .NET 4.0?
The dependency (AngleSharp) does support .NET 4.
Answers:
username_1: I assume it's from #72, specifically, https://github.com/username_2/HtmlSanitizer/pull/72#issuecomment-223939723
Status: Issue closed
username_2: That was easy, just added a net40 section to project.json 😺
username_0: Yes indeed! Curious how it will work in the build system after project.json |
Hounddog/hounddog.github.com | 9223439 | Title: Test case has an Error
Question:
username_0: 1) AlbumRestTest\Controller\AlbumRestControllerTest::testCreateCanBeAccessed
Undefined variable: id
/var/www/zf2-tutorial/module/AlbumRest/src/AlbumRest/Controller/AlbumRestController.php:43
/var/www/zf2-tutorial/vendor/zendframework/zendframework/library/Zend/Mvc/Controller/AbstractRestfulController.php:191
/var/www/zf2-tutorial/vendor/zendframework/zendframework/library/Zend/Mvc/Controller/AbstractRestfulController.php:153
/var/www/zf2-tutorial/vendor/zendframework/zendframework/library/Zend/EventManager/EventManager.php:464
/var/www/zf2-tutorial/vendor/zendframework/zendframework/library/Zend/EventManager/EventManager.php:208
/var/www/zf2-tutorial/vendor/zendframework/zendframework/library/Zend/Mvc/Controller/AbstractController.php:107
/var/www/zf2-tutorial/vendor/zendframework/zendframework/library/Zend/Mvc/Controller/AbstractRestfulController.php:104
/var/www/zf2-tutorial/module/AlbumRest/test/AlbumRestTest/Controller/AlbumRestControllerTest.php:60
FAILURES!
Tests: 6, Assertions: 5, Errors: 1.<issue_closed>
Status: Issue closed |
FAForever/fa | 1047048338 | Title: Tanks, Fatboy tracks animations.
Question:
username_0: The animations of tracked vehicle is slowing down during the match until it completely stops.
how to reproduce:
Play a game with many units; during the mid/late game, track animations will start to degrade and eventually stop animating altogether.
Answers:
username_0: Could it just be replaced? Units like the Monkeyking and other vehicles with legs do not suffer from this. Could new elements simply replace the tracks and run on the same code as the animated legs?
formio/formio.js | 598548705 | Title: Common Fields on a wizard
Question:
username_0: Using the form builder, is it possible to create a single wizard which shows 4 common fields (like a header fields) in all three steps of the wizard?
I can add 4 fields on each step to have it visible on all steps but I would like to avoid duplication. |
nestauk/beis-indicators | 956009522 | Title: Landing on `/accessibility` returns 404
Question:
username_0: Landing elsewhere and then navigating to `/accessibility` works fine, but landing on `/accessibility` gives a 404 - Not found.
I tried exporting and serving locally and indeed `/accessibility` is not scraped by Sapper.<issue_closed>
Status: Issue closed |
HackGT/live-site | 365259447 | Title: Random disappearing letters in navbar site title on Safari on Mac
Question:
username_0: On Safari on Mac, random letters disappear from the [HackGTeeny](live.teeny.hack.gt) navbar site title element.
<img width="1440" alt="screen shot 2018-09-30 at 12 58 18 pm" src="https://user-images.githubusercontent.com/5790137/46262455-7c733700-c4cf-11e8-8b64-6b0d6a064c3f.png">
_Reported on HackGTeeny live site feedback form_ |
nuwave/lighthouse | 370143337 | Title: Using @group in "extend type" causes incorrectly nested types when using @paginate
Question:
username_0: **Describe the bug**
When using `@paginate` directive within an `extend type Query @group` the emitted schema is incorrect.
Introspection/Playground Schema:

Query examples:
```graphql
{
users(count: 5) {
data { id email }
}
}
# Error: Cannot query field \"id\" on type \"UserPaginator\".
```
```graphql
{
users(count: 5) {
data { data { id email } }
}
}
# Error: "Argument 1 passed to Nuwave\\Lighthouse\\Schema\\Types\\PaginatorField::dataResolver() must implement interface Illuminate\\Contracts\\Pagination\\LengthAwarePaginator, instance of App\\User given, called in test/vendor/nuwave/lighthouse/src/Schema/Directives/Fields/FieldDirective.php on line 52",
```
**Expected behavior**
Query example 1 should work fine and emitted schema should only contain `UserPaginator` and not `UserPaginatorPaginator`
**Schema**
```graphql
type User {
id: ID!
email: String!
}
type Query {
me: User @auth
}
extend type Query @group {
users: [User!]! @paginate
}
```
**Environment**
Lighthouse Version: 2.3
Laravel Version: 5.7
PHP Version: 7.1
**Additional context**
Issue was introduced with Lighthouse 2.3. Downgrading to 2.2 fixes the issue.
Answers:
username_1: Added a pull-request that solves the issue for me and passes the unit test in the previous comment.
Hope it helps.
Status: Issue closed
|
DiscordDungeons/Bugs | 281450713 | Title: quests problem
Question:
username_0: Can't start quests or view any topic quest related
```
DiscordAPIError: Missing Access
at item.request.gen.end (/home/mackan/RPGBot/node_modules/discord.js/src/client/rest/RequestHandlers/Sequential.js:71:65)
at then (/home/mackan/RPGBot/node_modules/snekfetch/src/index.js:218:21)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
```

Answers:
username_1: having the same problem
TypeError: message.client.hasEmbedPerms is not a function
at Quest._showCurrent (/home/mackan/RPGBot/src/Commands/Commands/User/Quest.js:123:37)
at Quest._sendMainOptions (/home/mackan/RPGBot/src/Commands/Commands/User/Quest.js:209:16)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)

username_2: Also having the same problem
TypeError: message.client.hasEmbedPerms is not a function
at Quest._showCurrent (/home/mackan/RPGBot/src/Commands/Commands/User/Quest.js:123:37)
at Quest._sendMainOptions (/home/mackan/RPGBot/src/Commands/Commands/User/Quest.js:209:16)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
username_3: Same issue as #490
Status: Issue closed
username_0: Now it is working... At least the response

username_3: Yeah, I fixed it. |
rcds2017/Colmar | 281728438 | Title: Simplify: Double classes
Question:
username_0: https://github.com/rcds2017/Colmar/blob/master/index.html#L35
While I don't understand why there need to be two classes applied, you can do so more easily like this: class="main-contents container" |
github/pages-gem | 189221369 | Title: Proposal: Add amp-jekyll
Question:
username_0: This plugin enables you to generate AMP versions of your Jekyll posts alongside the normal HTML posts. AMP pages load super fast, especially on older devices.
While it is possible to only write posts in AMP (I've tried), you cannot use your own JavaScript or tags like `<form>` which aren't supported. Yes, the page may load fast by default, but it loses lots of functionality when using a desktop (hence the name Accelerated *Mobile* Pages).
Answers:
username_1: Interested in this plugin too.
username_2: I wouldn't be opposed to adding AMP support, especially if it was automatic and transparent (e.g., just activate the plugin and it works), similar to how Jekyll Feed, Jekyll Sitemap, etc. work.
Regardless of the implementation, we wouldn't be able to consider any plugin without unit and integration tests for the desired behavior.
username_3: Related: https://github.com/jekyll/jekyll/issues/3041
Status: Issue closed
|
AndTheDaysGoBy/ForFun | 288450995 | Title: Locations
Question:
username_0: Locations aren't necessarily of the form:
"city, state ZIP"/"city, state"
They could merely be "state" (where state is given completely, not just as an abbreviation), or "Remote" or some other phrase. What must be done is to parse the "location" term, and come up with a procedure to determine its parts.
I believe the best course is to gsub() the string first, then check the remainder(s) against pre-made lists of cities/states.
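A rough sketch of that plan in Ruby (gsub to normalize, then pattern checks against the formats above; the regexes and the state list are illustrative, not the repo's actual data):

```ruby
STATES = ["California", "New York", "Texas"].freeze # pre-made list, truncated here

# Break a raw location string into its parts where possible:
# "city, state ZIP" / "city, state" / "state" / "Remote" / anything else.
def parse_location(raw)
  text = raw.gsub(/\s+/, " ").strip
  return { remote: true } if text.casecmp("Remote").zero?
  if (m = text.match(/\A(?<city>[^,]+),\s*(?<state>[A-Za-z. ]+?)\s*(?<zip>\d{5})?\z/))
    return { city: m[:city], state: m[:state], zip: m[:zip] }
  end
  return { state: text } if STATES.include?(text)
  { raw: text } # some other phrase; handle separately
end
```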
Answers:
username_0: Fixed by adding a new method parseLocations().
Status: Issue closed
|
PorktoberRevolution/ReStocked | 409534186 | Title: RA-2 Relay Antenna - Holes in mesh
Question:
username_0: Holes in mesh where supports meet the dish. Around other legs too. Just couldn't capture all in the same pic.

A gap in mesh that runs up the side of the dish

Answers:
username_1: something stupid I did thinking I was clever. fixed with b2bcb6f
Status: Issue closed
username_0: Confirmed |
whatwg/participate.whatwg.org | 286668039 | Title: Alternative flow for managing participation
Question:
username_0: Currently in order to manage participation, a company has to create a dedicated Github organization and make sure all its members are people that are allowed to contribute to WHATWG standards.
While that's feasible and free-of-charge, it can create confusion for companies that already have an active Github organization that's not related to standard participation.
It'd be great if an alternative flow were enabled that doesn't require a full-fledged separate GH org. One alternative could be a separate repository on an existing org, where all the repo admins are tracked as people who can contribute to WHATWG standards.
Answers:
username_1: Looking at https://developer.github.com/v3/repos/collaborators/ this seems like an experimental API. On top of that it's not entirely clear to me if we could access such data. I think currently all that information is privileged.
username_2: Right, when I last looked into this, most of this sort of data is very private and can't be accessed by @whatbot. (E.g. team membership, even for public teams.)
Organizations was the only public thing I could figure out. But if you see something else in the GitHub API that is publicly accessible, let me know. You can test by just accessing it in your browser: see e.g. https://api.github.com/users/username_2/orgs.
username_2: So if GitHub was willing to add features for us, this could be solved with either:
- The ability to create public teams for an org, which are readable through the GitHub API with no privileges
- Some kind of "sub-organization" that might help organizations reduce the confusion about having multiple organizations.
The first seems nicer, and would also solve #12.
username_3: I'm not 100% on the context here, but would [nested teams](https://github.com/blog/2378-nested-teams-add-depth-to-your-team-structure) help? Inside the whatwg org, you could have a `participants` team which itself has a set of sub-teams named after the companies, each company team can have a team admin who can add their colleagues. Would this solve the issue?
username_1: @username_3 I think for Mozilla it would not, as we plan to eventually have automated control over who gets to join and who has to leave the organization that controls our standards participation. Having to manually manage that for lots of people will lead to mistakes. And I don't think WHATWG is comfortable allowing bots from all the organizations that join us to poke around and make those changes.
username_0: Is it possible to use the nested teams idea as an alternative optional flow? For me, I only have a handful of people that will be contributing, so managing it manually won't be a problem. (and creating a full fledged organization seems like an overkill)
username_2: Nested teams seems like a potential workaround, but it feels wrong to have this be "under the WHATWG" instead of "under the company". Companies would also need to be careful to keep @whatbot on their teams and never accidentally kick it off, further making it feel less like their space and more like ours.
@username_0, we could build a whole alternate flow around nested teams, but we'd need some convincing that it's worth the engineering time. So far all you've said is that "a full fledged organization seems like an overkill", but I don't really understand that. Organizations are free, easy to create, easy to administer, and several companies are already using them in this manner. They seem much less like overkill than teams, which in GitHub's data model are rather complex entities.
I guess #12 is an argument for spending engineering effort on the nested-teams workflow.
username_0: Right now my legal team's opinion is that we'll need to create a separate paid organization for this purpose (As according to GH ToS: "the ToS specify that an entity can have only one free account per entity").
The cost is not huge but not negligible either (9-21$ per member per month, I'm not yet sure which plan we actually need for that). This adds friction to the signing process as I figure out if the existing org account can upgrade itself, or I need to create a new one.
At the same time, are nested teams at the organizations more visible than simple teams? I thought the reason we didn't go with org teams is that the information is behind a login.
I also poked at the GH APIs and I think there's a simpler alternative to nested teams: Repo collaborators.
It seems easy to extract contributors from a specific public repo.
e.g. `curl -H "Accept: application/vnd.github.v3+json" https://api.github.com/repos/WICG/starter-kit/contributors` will give you a list of everyone who contributed to this specific public repo.
So an organization can specify a specific repo that every contributor to that repo will also get permissions to contribute to the WHATWG. It'll be a bit tricky to manage memberships (e.g. removing a contributor will require history rewriting), but it seems like a simple alternative to get started.
username_2: The teams proposed above would be under the WHATWG organization and have whatbot as an administrator, so they would be visible to whatbot through the API.
We could indeed hack up something based on any world-readable piece of data, such as collaborators on a repo, a Twitter list administered by an employee, or a file at a .well-known location on the entity's website. All of these require nontrivial engineering effort to build into the workflow, and will need to be supported indefinitely. So I'm not eager to pick any in particular unless we are sure it is long-term the best option. So far everything proposed seems pretty terrible, but I agree that between the $9 cost and the requirement to publicize membership one by one (#12), I can see why organizations are suboptimal.
Still, I would advise anyone wanting to be able to sign and contribute in the near future to go that route for now. Any alternative we come up with is not going to be fast to decide on or code up, test, stage, and deploy. (I'd estimate on the order of 3 weeks-2 months.)
username_2: Currently I'm leaning toward allowing companies to enter a URL that contains a text file with a list of GitHub usernames they want to allow to contribute. The contents of that URL could be administered any way they want; e.g. it could be the githubusercontent.com version of a .txt file in some repo they control.
How does that sound to folks? I'm not sure how fast I could make it work, but I do want to walk back a bit from my previous longer estimates of 3 weeks-2 months, as upon further reflection on the situation it seems like the potential TOS violation/cost of rectifying that is pretty serious, and so this should be treated as more urgent.
I'm still leery about supporting too many alternate workflows, e.g. organization (optimal for Google), this URL thing (optimal for a number of others perhaps), and perhaps in the future something based on GitHub teams if they become public. So let's make sure this new proposal makes people happy.
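For illustration, a minimal sketch of parsing such a file (Python; the one-username-per-line format with `#` comments is my assumption, not something specified above):

```python
def parse_allowlist(text: str) -> set:
    """Parse a newline-separated username list, skipping blanks and comments."""
    users = set()
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            users.add(line)
    return users

# In the real workflow the bot would fetch this text from the URL the
# entity registered, e.g. a raw githubusercontent.com link.
example = """
# Employees cleared to contribute
alice
bob
"""
allowed = parse_allowlist(example)
print(sorted(allowed))  # ['alice', 'bob']
```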
username_0: I'm happy with a txt file based alternative flow.
username_2: As an update, I was able to start coding on this today, and have made significant progress. I hope to have something to show this week.
username_0: Update on my end: managed to (finally) sign the participant agreement by creating a new GitHub org, using the team plan, paying the minimum $25/month.
I'd still highly appreciate if we could get rid of this requirement as it creates a monetary barrier to participation, as well as a legal and bureaucratic one.
@username_2 - hope you're feeling better. Anything I can help with to make that happen?
username_2: So sorry about that; it's still a high priority to fix. I am feeling better, but now have less time due to various events I'm traveling to.
If people are interested, I could upload my half-baked code for this and others could try to contribute to that branch. Otherwise, I'll get back to it as soon as I'm able.
username_1: Is this still something that people need?
Status: Issue closed
|
vue-microfrontends/root-config | 851547606 | Title: Is this a dynamic DOM injection problem??
Question:
username_0: When I reload the demo [link](https://vue.microfrontends.app/rate-doggos) in the browser two or three times, I found a problem

after two or three times

dynamic injection of the DOM is disorderly
Is that a bug??
Status: Issue closed
Answers:
username_1: Good question. This is expected when you're (1) not using single-spa-layout, and (2) don't manually create `<div id="single-spa-application:@vue-mf/navbar"></div>` elements.
When you need to control positioning in the dom, single-spa-layout is recommended. See https://single-spa.js.org/docs/layout-overview |
sebastianwachter/ScratchCard | 111049768 | Title: add example/demo
Question:
username_0: gh-pages demo then, maybe?
Status: Issue closed
Answers:
username_1: The entire project is an example. You may use the provided JavaScript file in your own project. All you need is to add a <canvas> to your HTML, give it a width and a height, and set its id to "myCanvas".
username_0: gh-pages demo then, maybe?
Status: Issue closed
username_1: done http://username_1.github.io/ScratchCard/ |
kembolanh/kembo | 459424814 | Title: Herniated discs in young people: causes and prevention
Question:
username_0: No longer just a disease of the elderly, herniated discs in young people are on the rise and require appropriate prevention and treatment measures.
https://imedicare.vn/thoat-vi-dia-dem-o-nguoi-tre-tuoi-nguyen-nhan-va-cach-phong-tranh/
• Herniated disc support belt
• What is lower back pain syndrome |
dependabot/gomodules-extracted | 474530398 | Title: Update extracted go modules
Question:
username_0: The current extracted implementation of Go Modules is outdated and has some buggy behaviour.
I suggest re-extracting the current implementation and updating this repo and anything that depends on it.
Currently, we use the go modules helper and dependabot has problems updating a specific dependency, something that doesn't happen with native go modules in go 1.12.5.
Answers:
username_0: Stacktrace of dependabot failing
```
Fetching go_modules dependency files for user/repo
Parsing dependencies information
- Updating github.com/go-kit/kit…/open/dependabot/vendor/ruby/2.6.0/gems/dependabot-go_modules-0.107.38/lib/dependabot/go_modules/file_updater/go_mod_updater.rb:94:in `handle_subprocess_error': go: finding github.com/go-kit/kit v0.9.0 (Dependabot::DependencyFileNotParseable)
go: downloading github.com/go-kit/kit v0.9.0
go: extracting github.com/go-kit/kit v0.9.0
build github.com/user/repo: cannot load github.com/go-kit/kit: cannot find module providing package github.com/go-kit/kit
from /open/dependabot/vendor/ruby/2.6.0/gems/dependabot-go_modules-0.107.38/lib/dependabot/go_modules/file_updater/go_mod_updater.rb:59:in `block (2 levels) in updated_go_sum_content'
from /open/dependabot/vendor/ruby/2.6.0/gems/dependabot-common-0.107.38/lib/dependabot/shared_helpers.rb:141:in `with_git_configured'
from /open/dependabot/vendor/ruby/2.6.0/gems/dependabot-go_modules-0.107.38/lib/dependabot/go_modules/file_updater/go_mod_updater.rb:51:in `block in updated_go_sum_content'
from /open/dependabot/vendor/ruby/2.6.0/gems/dependabot-common-0.107.38/lib/dependabot/shared_helpers.rb:37:in `block (2 levels) in in_a_temporary_directory'
from /open/dependabot/vendor/ruby/2.6.0/gems/dependabot-common-0.107.38/lib/dependabot/shared_helpers.rb:37:in `chdir'
from /open/dependabot/vendor/ruby/2.6.0/gems/dependabot-common-0.107.38/lib/dependabot/shared_helpers.rb:37:in `block in in_a_temporary_directory'
from /usr/local/lib/ruby/2.6.0/tmpdir.rb:93:in `mktmpdir'
from /open/dependabot/vendor/ruby/2.6.0/gems/dependabot-common-0.107.38/lib/dependabot/shared_helpers.rb:34:in `in_a_temporary_directory'
from /open/dependabot/vendor/ruby/2.6.0/gems/dependabot-go_modules-0.107.38/lib/dependabot/go_modules/file_updater/go_mod_updater.rb:50:in `updated_go_sum_content'
from /open/dependabot/vendor/ruby/2.6.0/gems/dependabot-go_modules-0.107.38/lib/dependabot/go_modules/file_updater.rb:29:in `updated_dependency_files'
from ./dependabot_script.rb:182:in `block in <main>'
from ./dependabot_script.rb:121:in `each'
from ./dependabot_script.rb:121:in `<main>'
```
`go.mod` file in user/repo
```
module github.com/user/repo
go 1.12
require (
github.com/user/repo1 v0.0.0-20190330214741-eddf924cad02
github.com/user/repo2 v0.0.0-20190330203204-d206a5439aa6
github.com/BurntSushi/toml v0.3.1 // indirect
github.com/alicebob/gopher-json v0.0.0-20180125190556-5a6b3ba71ee6 // indirect
github.com/alicebob/miniredis v2.5.0+incompatible
github.com/dgrijalva/jwt-go v3.2.0+incompatible
github.com/erizocosmico/flagga v1.0.0
github.com/erizocosmico/flaggax v1.0.0
github.com/garyburd/redigo v1.6.0
github.com/go-kit/kit v0.8.0
github.com/gomodule/redigo v2.0.0+incompatible // indirect
github.com/gorilla/mux v1.7.0
github.com/inconshreveable/mousetrap v1.0.0 // indirect
github.com/pkg/errors v0.8.1
github.com/prometheus/client_golang v0.9.2
github.com/sony/gobreaker v0.0.0-20181109014844-d928aaea92e1
github.com/spf13/cobra v0.0.3
github.com/spf13/pflag v1.0.3 // indirect
github.com/yuin/gopher-lua v0.0.0-20190206043414-8bfc7677f583 // indirect
golang.org/x/net v0.0.0-20190225153610-fe579d43d832
)
```
username_1: Thanks! I'll take a look at re-extracting.
Status: Issue closed
username_0: Thank you @username_1 for fixing this, much appreciated! |
wpvi/todos | 273646452 | Title: Build a Library page on vi.wordpress.org
Question:
username_0: To introduce and promote the plugins/themes translated by the WPVN Team, we need to create a page on vi.wordpress.org listing them, with credits for the top contributors.
Criteria:
- Plugin: popular with a large install count OR easy to use, serving important purposes when building a site.
- Theme: easy to install, rarely buggy, works smoothly, and is easy to customize.
Answers:
username_1: I don't think this is necessary, since WordPress's own plugin pages already have very detailed stats/ratings/reviews. Shouldn't we use that as "the only source of truth"?
username_2: In my opinion this is a "nice to have" rather than something that should be a high priority.
username_3: I agree with @username_1.
username_0: I will remove this section from the current page.
Status: Issue closed
|
bigspring/monolith | 59460899 | Title: Create styled list shortcode
Question:
username_0: Shortcode to render styled lists as per utility css clases (tick, chevron, etc)
Answers:
username_1: ADD TO MCE-BUTTON.JS
```
// List Shortcode
{
    text: 'List',
    onclick: function() {
        editor.windowManager.open( {
            title: 'Insert list style',
            body: [
                {
                    type: 'listbox',
                    name: 'listboxListTypes',
                    label: 'List Type',
                    'values': [
                        {text: 'Chevrons', value: 'chevron'},
                        {text: 'Caret List', value: 'caret'},
                        {text: 'Tick List', value: 'tick'}
                    ]
                },
            ],
            onsubmit: function( e ) {
                editor.insertContent( '[list type="' + e.data.listboxListTypes + '" ][/list]');
            }
        });
    }
}, // end list shortcode
```
ADD TO INC/SHORTCODES
```
/**
 * Renders wrapper div to create different types of lists
 * @param array $atts
 * @param string $content
 * @return string
 */
function list_shortcode( $atts, $content = null ) {
    extract( shortcode_atts( array(
        'type' => '', /* no-bullet, ticks, chevron etc */
    ), $atts ) );
    $output = '<div class="monolith-list ' . $type . '">';
    $output .= apply_filters( 'the_content', $content );
    $output .= '</div>';
    return $output;
}
add_shortcode( 'list', 'list_shortcode' );
```
Status: Issue closed
|
spring-projects/spring-security | 199148513 | Title: AclAuthorizationStrategyImpl does not check reachable granted authorities when checking principal's authorities to determine right
Question:
username_0: AclAuthorizationStrategyImpl does not check reachable granted authorities when checking principal's authorities to determine right
Spring Security has been configured using role hierarchies but Spring Security ACL does not consider this when evaluating a principals authorities to determine right.
### Actual Behavior
From the `AclAuthorizationStrategyImpl.securityCheck(Acl, int)` method:
```
// Iterate this principal's authorities to determine right
if (authentication.getAuthorities().contains(requiredAuthority)) {
    return;
}
```
### Expected Behavior
The `AclAuthorizationStrategyImpl.securityCheck(Acl, int)` method should do something along the lines of:
```
List<GrantedAuthority> grantedAuthorities = (List<GrantedAuthority>) roleHierarchy
        .getReachableGrantedAuthorities(authentication.getAuthorities());

// Iterate this principal's reachable authorities to determine right
if (grantedAuthorities.contains(requiredAuthority)) {
    return;
}
```
### Configuration
SpringACLConfig.java
...
@Bean
public AclAuthorizationStrategyImpl aclAuthorizationStrategy() {
AclAuthorizationStrategyImpl aclAuthorizationStrategy = new AclAuthorizationStrategyImpl(
new SimpleGrantedAuthority("ROLE_ACL_OWNERSHIP_ADMIN"), // grant ACL authority to CHANGE_OWNERSHIP
new SimpleGrantedAuthority("ROLE_ACL_AUDITING_ADMIN"), // grant ACL authority to CHANGE_AUDITING
new SimpleGrantedAuthority("ROLE_ACL_GENERAL_ADMIN")); // grant ACL authority to CHANGE_GENERAL
aclAuthorizationStrategy.setSidRetrievalStrategy(new SidRetrievalStrategyImpl(roleHierarchy()));
return aclAuthorizationStrategy;
}
...
### Version
Using io.spring.platform:platform-bom:Athens-SR1 & org.springframework.boot:spring-boot-gradle-plugin:1.4.2.RELEASE |
Arcana/node-dota2 | 330515314 | Title: Retrieve previous rank in profile card
Question:
username_0: I noticed that the profileCards have a new property to get previous rank. Is it possible to update the protobuf in the current library to retrieve this data?
Answers:
username_0: @username_1 : Thanks! Btw, do you think there is a way to automate updates when a new protobuf is released to take care of these kinds of issues?
username_1: I've been thinking about it recently but I haven't found a way yet :/ The issue is that if something changes in the protobufs that breaks compatibility, it will break things. To be honest, I think the best way to go about it would be to find a way to automatically generate the various functions straight from the protobuf definitions. That way, if they change, the method definitions would get updated automatically as well. I *think* @paralin has done something similar in Go, but I haven't looked at it in detail yet
Status: Issue closed
username_1: New version pushed to npm, should be fine now. |
PaddlePaddle/PaddleHub | 823589838 | Title: PaddleOCR problem recognizing small images
Question:
username_0: 1) PaddleHub 2.0.4, PaddlePaddle 2.0.1
2) System environment: Windows, Python 3.7.6
- Reproduction info: the loaded model is chinese_ocr_db_crnn_server; the image to be recognized is 100*25 pixels
- Error message: `W0306 15:45:24.032341 24408 analysis_predictor.cc:1145] Deprecated. Please use CreatePredictor instead.`
Answers:
username_1: Hello, what you provided is not an error message; it is a warning and does not affect usage.
json4s/json4s | 230163663 | Title: Parsing Mutable Maps
Question:
username_0: ### json4s version: 3.5.0
### scala version: 2.11.8
### jdk version: 1.8.0_131
Is extracting to a Mutable Map supported? When I try to run the code below I get an error
`java.lang.ClassCastException: scala.collection.immutable.Map$Map1 cannot be cast to scala.collection.mutable.Map`
```
import org.json4s.jackson.JsonMethods._
import scala.collection.mutable
object JsonDeserializerTesting {
implicit val formats = org.json4s.DefaultFormats
def main(args: Array[String]): Unit = {
val mapJson = """{"name":"<NAME>", "address": {"city":"Chicago"}}"""
val parsedMap = parse(mapJson)
val extractedMap = (parsedMap \ "address").extract[mutable.Map[String, AnyRef]]
println(extractedMap.getClass)
}
}
```
Parsing to a mutable.Seq works; the test case in the project seems to be parsing to an immutable.Map.
https://github.com/karthicks/json4s/blob/c57d71062f852ab50b786fb5f888623b5b8e1c8b/tests/src/test/scala/org/json4s/ExtractionExamplesSpec.scala#L72
Thanks,
Jeetu
Status: Issue closed
Answers:
username_1: 3.5.3 released |
j-easy/easy-random | 1046734143 | Title: Use discovered non-default constructor by generating random parameters for it
Question:
username_0: I've been looking but can't see it - what's the reason for not reflecting and using a discovered constructor that is not the default? Is there some design reason why this hasn't been done? If not - I'd be happy to submit a PR.
Answers:
username_0: Something like:
```
import java.lang.reflect.Constructor;
import java.util.Arrays;

import lombok.SneakyThrows;
import org.jeasy.random.EasyRandom;
import org.jeasy.random.api.ObjectFactory;
import org.jeasy.random.api.RandomizerContext;

class ReflectionConstructor implements ObjectFactory {

    private final EasyRandom easyRandom = new EasyRandom(); // supplies random constructor arguments
@Override
public <T> T createInstance(Class<T> type, RandomizerContext context) {
return (T) constructReflection(type, context);
}
@SneakyThrows
private <T> Object constructReflection(Class<T> type, RandomizerContext context) {
Constructor<?> complexConstructor = findComplexConstructor(type);
Object[] params = createParams(complexConstructor);
Object instance = complexConstructor.newInstance(params);
return instance;
}
private Object[] createParams(Constructor<?> constructor) {
Object[] randomParams = Arrays.stream(constructor.getParameters()).map(parameter ->
easyRandom.nextObject(parameter.getType())
).toArray();
return randomParams;
}
    /**
     * @return the constructor with the most number of arguments
     */
private <T> Constructor<?> findComplexConstructor(Class<T> type) {
Constructor<?>[] constructors = type.getConstructors();
int numParams = 0;
Constructor<?> complex = null;
for (Constructor<?> constructor : constructors) {
int parameterCount = constructor.getParameterCount();
if (parameterCount > numParams) {
numParams = parameterCount;
complex = constructor;
}
}
return complex;
}
}
``` |
ericam/christmasorigami | 149821309 | Title: Lady Ghost by <NAME>
Question:
username_0: Lady Ghost by <NAME><br>
http://www.origami-kids.com/blog/halloween/lady-ghost-by-niklas-kiuru.htm<br><p><img width="150" height="150" src="http://www.origami-kids.com/blog/wp-content/uploads/2013/09/Lady-Ghost-by-Niklas-Kiuru-2-150.jpg?d6f214" alt="Lady Ghost by <NAME> 2 150">Lady Ghost by <NAME>uru Designer: Niklas Kiuru Folder, Photo and Video: @Origamikids Complexity: Easy. Time to fold 10 min. Folded from a one Square Printer White paper How to fold: 1) Print the CP on a sheet of white … <a href="http://www.origami-kids.com/blog/halloween/lady-ghost-by-niklas-kiuru.htm">Continue reading <span>→</span></a></p>
<p>The post <a href="http://www.origami-kids.com/blog/halloween/lady-ghost-by-niklas-kiuru.htm">Lady Ghost by <NAME></a> appeared first on <a href="http://www.origami-kids.com/blog">Origami Blog</a>.</p>
<br><br>
via MixOrigami http://www.rssmix.com/<br>
April 20, 2016 at 09:04AM |
brikr/bthles | 417619976 | Title: More Auth options
Question:
username_0: Right now the FE automatically logs in via Anonymous auth. We should have a way to sign-in via Google or some of the other ways Firebase supports (GitHub, Twitter, etc.)
Status: Issue closed
Answers:
username_0: Finished between 19b37d1e4a94a02f269164c5dc54ca22a5f70554 and 0bc7d5e6e7f5533386d69161892f6e08cd91a907 |
CougsInSpace/CougSat1-Hardware | 312340739 | Title: Test basic solar panel function
Question:
username_0: Test current voltage output of solar cell
Answers:
username_1: Solar cells output 2.45 V open-circuit (Voc) and 25 mA short-circuit (Isc) under our grow light
Voltages of 2.7 have been recorded under the real sun through the window
Currents of 80mA have been recorded under the real sun through the window
Our SpectroLabs contact confirmed we will not see the rated 400mA without being in orbit or under an artificial sun with an AM0 spectrum.
Status: Issue closed
|
StanfordHCI/bang | 607639678 | Title: Server sometimes fails to pull and rebuild prod correctly
Question:
username_0: This has led to some updates (especially in `.ts`) not taking effect. Right now the best fix is to manually pull on `prod`.
Answers:
username_1: @username_0 just as a follow-up: how do you manually pull on prod? I'm still seeing that the server isn't reflecting the updated .ts file.
username_0: I think this is a non-issue and was caused by us committing a fix in the wrong place.
Status: Issue closed
|
NSEE/LPKF | 231403521 | Title: Rafael's printed circuit board
Question:
username_0: # About : updated
- Student name : <NAME>
- Student ID (R.A.) : 15.03178-0
- Major : Control and Automation
- Year : 3rd year
- Advisor name: <NAME>
- Project name : projfreq 2
- Course name : Digital Electronics
# PCB :
- Board name : projfreq 2
- Link to files :
https://www.dropbox.com/s/dkfwlkcbclwhmqv/projFreq%202%20-%20CADCAM.ZIP?dl=0
## Basic information
- Brief description of the board :
- Software used
- ( ) Altium
- ( ) Eagle
- ( ) Orcad
- ( ) UltiBoard
- (X) Other
- Prototyping type
- (X) Simple
- ( ) Complete
- Description :
- (X) Single-sided
- ( ) Double-sided
- ( ) SMD
- ( ) RF trace
- ( ) Mask
Answers:
username_1: Good morning, your board is ready. It can be picked up at H227
Thank you
Status: Issue closed
|
gisaia/ARLAS-web-components | 516079362 | Title: Chartype should not be a mandatory input
Question:
username_0: To fix it, create a default histogram in the default case of the switch:
```
public ngOnChanges(changes: SimpleChanges): void {
  if (this.histogram === undefined) {
    switch (this.chartType) {
      case ChartType.area: {
        this.histogram = new ChartArea();
        break;
      }
      case ChartType.bars: {
        this.histogram = new ChartBars();
        break;
      }
      case ChartType.oneDimension: {
        this.histogram = new ChartOneDimension();
        break;
      }
      case ChartType.swimlane: {
        if (this.swimlaneMode === SwimlaneMode.circles) {
          this.histogram = new SwimlaneCircles();
        } else {
          this.histogram = new SwimlaneBars();
        }
        break;
      }
      default: {
        break;
      }
    }
    this.setHistogramParameters();
  }
}
```
Status: Issue closed |
bitcoin/bitcoin | 279442248 | Title: block syncing incredibly slow
Question:
username_0: When syncing blocks, the progress is incredibly slow.
The CPU usage is less than 10%, network traffic is less than 10%, load factor is around 3
The whole system is very unresponsive; it takes sometimes a minute or more to run a cli command.
The system (ODROID MC1; ARM SBC) has 8 cores and 2GB RAM; RAM usage is around 1.1 GB, no swap
storage is a 480GB SSD
Bitcoin Core Daemon version v0.15.1.0-g7b57bc998f3
I verified that the system is not CPU throttling because of temperature.
e.g.:
```
2017-12-05 00:55:26 UpdateTip: new best=000000000000000000d00911c00d26abb641d6366082522f5562f715cde92d53 height=417300 version=0x20000001 log2_work=84.865016 tx=137411252 date='2016-06-21 04:05:21' progress=0.496572 cache=36.7MiB(328035txo)
2017-12-05 00:55:26 - Connect postprocess: 2.31ms [0.12s]
2017-12-05 00:55:26 - Connect block: 23700.01ms [3525.67s]
2017-12-05 00:55:37 - Load block from disk: 10556.34ms [293.62s]
2017-12-05 00:55:37 - Sanity checks: 7.94ms [0.41s]
2017-12-05 00:55:37 - Fork checks: 0.19ms [0.01s]
2017-12-05 01:00:16 - Connect 1472 transactions: 279395.86ms (189.807ms/tx, 53.854ms/txin) [3520.09s]
2017-12-05 01:00:16 - Verify 5188 txins: 279396.04ms (53.854ms/txin) [3520.11s]
2017-12-05 01:00:16 - Index writing: 0.09ms [0.00s]
2017-12-05 01:00:16 - Callbacks: 0.07ms [0.00s]
2017-12-05 01:00:16 - Connect total: 279752.57ms [3521.30s]
2017-12-05 01:00:16 - Flush: 17.99ms [0.94s]
2017-12-05 01:00:16 - Writing chainstate: 0.18ms [0.01s]
```
Maybe this is some kind of DOS attack from malicious peer nodes?!?
Or there is some multithreading-related (and/or ARM-related) bug in the code, because CPU and network load are very low, yet the system makes little progress and is nevertheless VERY unresponsive.
Whatever the reason, as it stands this is not usable at all.
Are you guys testing your software on ARM SBC? This is what I'm using, it's $50:
(ODROID HC1; 8 cores, 2GB RAM, SATA interface)
http://www.hardkernel.com/main/products/prdt_info.php?g_code=G150229074080
Answers:
username_1: 2017-12-05 00:55:37 - Load block from disk: 10556.34ms [293.62s] <-- it took 10 seconds to load the block to connect from disk. You should look into the speed/latency of your disk.
username_2: Your system isn't equipped to do what you think it can do. This level of performance is expected if you don't have an SSD drive. This is definitively not a bug.
username_0:
```
# hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   1714 MB in 2.00 seconds = 857.28 MB/sec
 Timing buffered disk reads: 776 MB in 3.01 seconds = 258.18 MB/sec
```
username_0: @username_2: I wrote above that I have a 480GB SSD ???
username_1: @username_0 No need to be hostile. Can you provide the output of mount? Is the Bitcoin datadir on the internal SD card or is it being stored on the SSD?
username_0: yes, datadir is on ssd. I use ext4 on lvm logical volume.
below is the smartmontools output, which looks ok to me:
```
smartctl --all /dev/sda
=== START OF INFORMATION SECTION ===
Model Family: SandForce Driven SSDs
Device Model: MKNSSDEC480GB
LU WWN Device Id: 0 015030 1a1739104
Firmware Version: 603ABBF0
User Capacity: 480,103,981,056 bytes [480 GB]
Sector Size: 512 bytes logical/physical
Rotation Rate: Solid State Device
Device is: In smartctl database [for details use: -P show]
ATA Version is: ATA8-ACS, ACS-2 T13/2015-D revision 3
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Tue Dec 5 21:28:34 2017 UTC
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
```
[...]
```
1 Raw_Read_Error_Rate 0x0032 095 095 050 Old_age Always - 0/3112556
5 Retired_Block_Count 0x0033 099 099 003 Pre-fail Always - 25
9 Power_On_Hours_and_Msec 0x0032 099 099 000 Old_age Always - 1450h+22m+01.500s
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 96
171 Program_Fail_Count 0x000a 100 100 000 Old_age Always - 0
172 Erase_Fail_Count 0x0032 100 100 000 Old_age Always - 0
174 Unexpect_Power_Loss_Ct 0x0030 000 000 000 Old_age Offline - 34
177 Wear_Range_Delta 0x0000 000 000 000 Old_age Offline - 0
181 Program_Fail_Count 0x000a 100 100 000 Old_age Always - 0
182 Erase_Fail_Count 0x0032 100 100 000 Old_age Always - 0
187 Reported_Uncorrect 0x0012 100 100 000 Old_age Always - 0
190 Airflow_Temperature_Cel 0x0000 052 057 000 Old_age Offline - 52 (Min/Max 19/57)
194 Temperature_Celsius 0x0022 052 057 000 Old_age Always - 52 (Min/Max 19/57)
195 ECC_Uncorr_Error_Count 0x001c 120 120 000 Old_age Offline - 0/3112556
196 Reallocated_Event_Count 0x0033 099 099 003 Pre-fail Always - 25
201 Unc_Soft_Read_Err_Rate 0x001c 120 120 000 Old_age Offline - 0/3112556
204 Soft_ECC_Correct_Rate 0x001c 120 120 000 Old_age Offline - 0/3112556
230 Life_Curve_Status 0x0013 100 100 000 Pre-fail Always - 100
231 SSD_Life_Left 0x0013 089 089 010 Pre-fail Always - 21474836481
233 SandForce_Internal 0x0032 000 000 000 Old_age Always - 53797
234 SandForce_Internal 0x0032 000 000 000 Old_age Always - 1004
241 Lifetime_Writes_GiB 0x0032 000 000 000 Old_age Always - 1004
242 Lifetime_Reads_GiB 0x0032 000 000 000 Old_age Always - 863
```
username_1: @username_0 what does the output of dd if=/path/to/bitcoin/datadir/blocks/blk00000.dat of=/dev/null say?
username_0: ```
[root@alarm alarm]# dd if=/dev/mapper/ssd-data of=/dev/null count=15 bs=4096
15+0 records in
15+0 records out
61440 bytes (61 kB, 60 KiB) copied, 15.5446 s, 4.0 kB/s
```
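For reference, a rough Python analogue of this `dd` throughput check (the device path below is the one from the run above and is only illustrative; any large file on the suspect disk works too):

```python
import os, time

def read_throughput(path: str, nbytes: int = 64 * 1024, bs: int = 4096) -> float:
    """Sequentially read nbytes from path in bs-sized chunks, like
    `dd if=... of=/dev/null bs=4096`, and return the rate in bytes/second."""
    start = time.monotonic()
    done = 0
    with open(path, "rb", buffering=0) as f:
        while done < nbytes:
            chunk = f.read(min(bs, nbytes - done))
            if not chunk:
                break
            done += len(chunk)
    elapsed = max(time.monotonic() - start, 1e-9)  # guard against a zero interval
    return done / elapsed

# Usage (reading block devices usually requires root):
# print(read_throughput("/dev/mapper/ssd-data"))
```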
So it seems to be indeed a hardware problem. I've never seen something like this before.
Sorry for the trouble caused!
Status: Issue closed
|
StompMarket/helpdesk | 444243855 | Title: User Reported Problem - Need A Copy Button On Catalog Product page |<EMAIL>
Question:
username_0: Partner : AT, User : <EMAIL>
Problem : Many a times we are adding a variation of a product. In current scenario I have to enter all the data again. I request you to provide a Copy button so that a copy is created and I can just change the relevant information
Router Link : /catalog/brands/27 |
Azure/autorest.java | 460043639 | Title: Repeated interface error in Compute v2018_10_01
Question:
username_0: repeated WithLocation interface definition.
```java
interface Definition extends DefinitionStages.Blank, DefinitionStages.WithLocation, DefinitionStages.WithLocation, DefinitionStages.WithCreate {
}
/**
* Grouping of VirtualMachine definition stages.
*/
interface DefinitionStages {
/**
* The first stage of a VirtualMachine definition.
*/
interface Blank extends WithLocation {
}
/**
* The stage of the virtualmachine definition allowing to specify Location.
*/
interface WithLocation {
/**
* Specifies resourceGroupName.
* @param resourceGroupName The name of the resource group
* @return the next definition stage
*/
WithLocation withExistingLocation(String resourceGroupName);
}
/**
* The stage of the virtualmachine definition allowing to specify Location.
*/
interface WithLocation {
/**
* Specifies location.
* @param location Resource location
* @return the next definition stage
*/
WithCreate withLocation(String location);
}
```
Answers:
username_1: Please reopen if this is still an issue for v4 generator.
Status: Issue closed
|
DeepBlueRobotics/RobotCode2018 | 290307466 | Title: Rename getGyro() to getGyroAngle()
Question:
username_0: https://github.com/DeepBlueRobotics/RobotCode2018/blob/532bc103c3b1446a6e62ee9c14928650ede07a49/Robot2018/src/org/usfirst/frc/team199/Robot2018/subsystems/Drivetrain.java#L199
Answers:
username_0:  [Rename getGyro() to getGyroAngle()](https://trello.com/c/XqiYgIOU/150-rename-getgyro-to-getgyroangle)
username_1: Resolved in newest pr
Status: Issue closed
|
Hypfer/valetudo-companion | 1015204707 | Title: Allow user to enter IP manually
Question:
username_0: Since some of my "IoT" stuff is in a separated subnet and passes a router, zeroconf with UDP broadcasts doesn't really work. I'd need to enter the IP manually to use this app. I think it could be integrated as some sort of "advanced" option.
Answers:
username_1: If you have to enter your IP manually, why use the app? Given all the app does is open the web-UI in your browser? You may just as well type the IP directly into your browser then, exact same functionality.
Status: Issue closed
username_1: Nah, no need to recreate the UI :)
The provisioning does use the API, but that involves connecting to the robot's network anyway so your network separation shouldn't be relevant then.
username_0: Indeed. Given the purpose of this companion app, I don't think I'll ever need it for my purposes. |
cosmos/cosmos-sdk | 341705363 | Title: Difference in Fee structure in Result and auth.StdTx
Question:
username_0: <!-- < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < < ☺
v ✰ Thanks for opening an issue! ✰
v Before smashing the submit button please review the template.
v Please also ensure that this is not a duplicate issue :)
☺ > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -->
## Summary of Bug
`auth.StdTx` allows users to pass in sdk.Coins into StdFee. However the `sdk.Result` struct seems to expect using a single coin for the fee.
```go
// Tx fee amount and denom.
FeeAmount int64
FeeDenom string
```
## Suggested Fix
Allow result to store multicoin fee
```go
// Tx fee amount and denom.
FeeAmount []int64
FeeDenom []string
```
Construct list such that amount list indices are correctly matched with denom list indices. May also want to sort the denoms in the same way that sdk.Coins does the sort.
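For illustration, the index-matching could be sketched like this (Python used purely for illustration; `split_fee` is a hypothetical helper, sorting denoms lexicographically the way `sdk.Coins` sorts):

```python
def split_fee(coins):
    """Split [(denom, amount), ...] into parallel amount/denom lists,
    sorted by denom so matching indices refer to the same coin."""
    ordered = sorted(coins, key=lambda c: c[0])
    denoms = [d for d, _ in ordered]
    amounts = [a for _, a in ordered]
    return amounts, denoms

amounts, denoms = split_fee([("steak", 5), ("atom", 10)])
print(amounts)  # [10, 5]
print(denoms)   # ['atom', 'steak']
```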
____
#### For Admin Use
- [ ] Not duplicate issue
- [ ] Appropriate labels applied
- [ ] Appropriate contributors tagged
- [ ] Contributor assigned/self-assigned
Answers:
username_1: Why is it expecting:
```
FeeAmount []int64
FeeDenom []string
```
and not
```
FeeCoins sdk.Coins
```
username_0: Yea, that works better. Just followed the initial model.
username_1: Cool! Good catch btw. Definitely seems like something we should change.
username_0: Dependent on: https://github.com/tendermint/tendermint/issues/1997
Status: Issue closed
username_0: Should be fixed in tendermint/tendermint#1861 |
dotnet/docfx | 607472830 | Title: DocFx Caching Old Content on Localhost
Question:
username_0: **Operation System**: (`Windows` or `Linux` or `MacOS`)
Window
**DocFX Version Used**:
2.51.0
**Template used**: (`default` or `statictoc` or contain custom template)
default
**Steps to Reproduce**:
1. Edit any page content.
2. Start the localhost server
3. Navigate to page that you made updates to. Notice sometimes the new updates don't appear.
**Expected Behavior**:
Updates should always appear after a save and localhost rebuild
**Actual Behavior**:
Looks like it's caching results from a previous localhost spin up. I don't want to have to clear my cache every time I work with DocFx. Is there any fix for this issue or a workaround? This is making debugging and quick edits overly difficult.
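For anyone wanting to rule out browser caching programmatically, here is a small sketch (Python; the default `docfx serve` address is an assumption about your setup):

```python
import urllib.request

def nocache_request(url: str) -> urllib.request.Request:
    """Build a request that asks caches to revalidate, so the response
    should reflect the latest build rather than a cached copy."""
    return urllib.request.Request(url, headers={
        "Cache-Control": "no-cache",
        "Pragma": "no-cache",
    })

# Usage against a running `docfx serve` (default port assumed):
# with urllib.request.urlopen(nocache_request("http://localhost:8080/index.html")) as r:
#     print(r.read()[:80])
```

If the fetched bytes are current but the browser still shows old content, the staleness is in the browser cache rather than in docfx.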
Answers:
username_1: @username_0 Can refresh (F5) or force refresh (Ctrl+F5)help?
username_2: I could not reproduce.
Pressing F5 in my browser displayed the new content without issues after a successful re-build.
Which `cache` are you clearing?
Steps I took:
1. Download [walkthrough3.zip](https://dotnet.github.io/docfx/tutorial/walkthrough/artifacts/walkthrough3.zip) and extract to `c:\walkthrough3`
2. Download [wkhtmltox-0.12.5-1.msvc2015-win64.exe](https://github.com/wkhtmltopdf/wkhtmltopdf/releases/download/0.12.5/wkhtmltox-0.12.5-1.msvc2015-win64.exe) and extract to `c:\wkhtmltox`
3. Copy `c:\wkhtmltox\bin\wkhtmltopdf.exe` to `c:\walkthrough3\wkhtmltopdf.exe`
4. Download [docfx.zip](https://github.com/dotnet/docfx/releases/download/v2.51/docfx.zip) and extract to `c:\docfx`
5. Open a command prompt and run the following commands:
```
cd C:\walkthrough3
c:\docfx\docfx.exe docfx.json --logLevel Verbose
```
6. Wait for build to succeed.
7. Open a new command prompt and run the following commands:
```
cd C:\walkthrough3
c:\docfx\docfx.exe serve
```
Result:
`Serving "C:\walkthrough3" on http://localhost:8080`
8. Navigate to `http://localhost:8080/_site` in a web browser and notice the text `This is the HOMEPAGE.`
9. Edit `C:\walkthrough3\index.md` and change the text `This is the HOMEPAGE.` to `This is the new HOMEPAGE.`
10. In the first command prompt run the following command:
`c:\docfx\docfx.exe docfx.json --logLevel Verbose`
11. Wait for build to succeed.
12. Press F5 in the web browser and notice the new text `This is the new HOMEPAGE.`
I tested in Firefox, Chrome, Internet Explorer, and Edge on Windows 10.
username_0: Thank you @username_1 for the suggestion and @username_2 for the detailed write-up and repro attempt. This is exactly what I am doing (with our own code), with mixed results: it doesn't happen all the time, just sometimes, for some unknown reason that I thought was a caching problem. That said, **Ctrl+Shift+F5** seems to help, for anyone else experiencing this problem.
Status: Issue closed
|
segment-integrations/analytics-ios-integration-nielsen-dcr | 271595028 | Title: Installs Empty Directory when using CocoaPods
Question:
username_0: When installing via the spec repo, or directly from git, an empty directory is installed and you cannot import the framework. My guess is this has to do with the podspec not defining `s.source_files`.
Answers:
username_0: I made a PR [here](https://github.com/segment-integrations/analytics-ios-integration-nielsen-dcr/pull/3) which fixes the empty directory but when trying to build I get "NielsenAppApi/NielsenAppApi.h file not found". I've added the Nielsen sdk to my build target and can properly import it in my swift code but for some reason the pod isn't finding it. Any idea if there's a build setting I need to update?
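For anyone hitting the same empty-directory symptom, here is a minimal sketch of the relevant podspec lines. The names, paths, and license here are assumptions for illustration, not the integration's actual podspec:

```ruby
Pod::Spec.new do |s|
  s.name         = 'Segment-Nielsen-DCR'
  s.version      = '0.0.1'
  s.summary      = 'Nielsen DCR integration for analytics-ios.'
  s.homepage     = 'https://github.com/segment-integrations/analytics-ios-integration-nielsen-dcr'
  s.license      = { :type => 'MIT' }
  s.source       = { :git => "#{s.homepage}.git", :tag => s.version.to_s }

  # Without a source_files declaration CocoaPods installs no files,
  # which produces the empty directory described above.
  s.source_files = 'Pod/Classes/**/*'
end
```

The separate "NielsenAppApi/NielsenAppApi.h file not found" error suggests the Nielsen SDK itself also needs to be visible to the pod target (for example as a dependency or vendored framework), but that part depends on how Nielsen distributes their SDK.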
username_1: @username_0 Can you provide a bit more context on what issues you are having? Do you have a basic app to reproduce the issue with that I can test out? Thanks!
username_1: Hi @username_0 , haven't heard anything here so I will close this out. Feel free to re-open if you have further info.
Status: Issue closed
|
minimul/qbo_api | 306158856 | Title: Faraday::TimeoutError - Net::ReadTimeout and request_id
Question:
username_0: I've been running along smoothly for a few weeks now, coding in earnest, and suddenly today I'm getting loads of timeout errors, which is a nuisance as I'm writing and deleting, so not getting a response is a big issue.
I see in the QBO documentation that if I add requestid parameter to the path then I can send the request again and it'll be idempotent. I also see in the QboAPI code that if I set `QboApi.request_id = true` then it'll add a request id to the path. But...
There's no way to resend the request without triggering a brand new call to `finalize_path`, which will call `uuid` again and get a new uuid, so the requestid parameter will be different each time. That means each POST will be treated as new, and I'll be creating, say, invoices with the same data over again. It's not idempotent.
Without changing the code in the gem, how can I resend a request with the same request_id?
Answers:
username_0: Ok, I can't find anything in the gem code to deal with timeout errors so I'll have to add it myself. But I'll need your advice on how to reorganise the code a bit
In this method, finalize_path is the bit that adds the request_id as a new uuid each time it's called.
```ruby
def request(method, path:, entity: nil, payload: nil, params: nil)
  raw_response = connection.send(method) do |req|
    path = finalize_path(path, method: method, params: params)
    case method
    when :get, :delete
      req.url path
    when :post, :put
      req.url path
      req.body = payload.to_json
    end
  end
  response(raw_response, entity: entity)
end
```
Obviously if I get a Faraday::TimeoutError and re-run the request, it'll call finalize_path again and get a new request_id.
The problem is that finalize_path is inside the block and to re-send the request with the same request_id it has to be outside the block. It would have to change to something like
```ruby
def request(method, path:, entity: nil, payload: nil, params: nil)
  path = finalize_path(path, method: method, params: params)
  tries = @configured_max_tries
  begin
    raw_response = connection.send(method) do |req|
      case method
      when :get, :delete
        req.url path
      when :post, :put
        req.url path
        req.body = payload.to_json
      end
    end
    response(raw_response, entity: entity)
  rescue Faraday::TimeoutError
    if tries > 0
      tries -= 1
      retry
    else
      raise
    end
  end
end
```
But how do I test it, and where would I put the specs in the suite?
username_1: I am going to be merging in @username_2 refactoring PRs first so I'll respond after.
username_2: @username_0 Any thoughts with the newest version?
fyi, I want to make the error handling more configurable as well
username_1: This gem will not handle this error.
It's my recommendation that each integration project [have a `request_handler`-like approach](https://username_1.com/migrating-a-quickbooks-online-app-from-oauth1-to-oauth2-using-the-qbo_api-gem.html) so that `Faraday::TimeoutError` can be handled there.
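A minimal sketch of such a request handler, with names like `with_retries` and the retry count chosen for illustration (none of this is qbo_api's actual API):

```ruby
# Retry a block up to `max_tries` times on a given error class,
# re-raising once the retries are exhausted.
def with_retries(error_class, max_tries: 3)
  tries = 0
  begin
    yield
  rescue error_class
    tries += 1
    retry if tries < max_tries
    raise
  end
end

# In an integration project this would wrap qbo_api calls, e.g.:
#   with_retries(Faraday::TimeoutError) { api.create(:invoice, payload: data) }
```

Note that, as discussed above, retrying a POST without a stable request_id is not idempotent, so this pattern is safest for reads.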
Status: Issue closed
|
jaegertracing/jaeger-operator | 1006456870 | Title: Operator should allow some components to be skipped for installation
Question:
username_0: ## Requirement
Flexibility of Jaeger components installation with k8s operator
## Problem
When using Jaeger Kubernetes Operator, there is no option to skip a component from being installed
## Proposal
Have the ability to skip installing a component. For example, update the component installation (https://www.jaegertracing.io/docs/1.26/operator/#production-strategy) to support an option like the one below:
```
spec:
strategy: production
agent:
install: "false" # default can be "true"
```
|
BenB196/ip-api-proxy | 494669096 | Title: Bug: Proxy does not properly access IPv6 queries
Question:
username_0: When sending IPv6 queries, the proxy returns "400 request is blank".
Example: 2001:0db8:85a3:0000:0000:8a2e:0370:7334
Status: Issue closed
Answers:
username_0: Fixed in release [v0.0.8](https://github.com/username_0/ip-api-proxy/releases/tag/v0.0.8). |
legacysurvey/legacysurvey | 464760610 | Title: Ensure that the "custom" theme is supported by recent versions of Nikola.
Question:
username_0: With Nikola v8.0.2, `nikola build` warns:
```
[2019-07-05T19:59:03Z] WARNING: Nikola: Cannot load theme "custom", using 'bootblog4' instead.
```
And the pages do not look like our standard legacysurvey.org theme.
Answers:
username_0: Note that the "custom" theme is based on [Readable](https://bootswatch.com/3/readable/) which is not available for Bootstrap 4-compatible CSS frameworks.
Under the hood, Readable **replaces** the base Bootstrap 3 CSS files, but the css files are still **called** *e.g.* `bootstrap.css`, as in the `themes/custom/assets/css` directory. |
Azure/azure-iot-sdks | 177559393 | Title: How to disable certificate validation in C library?
Question:
username_0: Hello,
Please suggest how I can disable certificate validation. We need custom certificate validation, so we want to disable the default certificate validation because we want to exclude the certificate start- and end-date validations. It would be great if someone could suggest any switch or flag available to disable it. If there is no switch, please point me to the file that does the certificate validation in the C library.
Answers:
username_1: @username_0 - There is currently no switch to disable certificate validation in the SDK.
However we do provide the option to pass your cert using the "TrustedCerts" via the SetOption API.
See the sample: [https://github.com/Azure/azure-iot-sdks/blob/master/c/iothub_client/samples/iothub_client_sample_amqp/iothub_client_sample_amqp.c](url)
The option is available on Linux & MBED. It's also available on Windows but only when using AMQP over websockets.
Regards.
username_0: The reason behind this is that I want to avoid the error below when I run the iothub sample AMQP over websocket. Below is the output I get after running the program. How is the certificate validation failing if I am not enabling it?
Starting the IoTHub client sample AMQP over WebSockets...
Info: IoT Hub SDK for C, version 1.0.14
IoTHubClient_SetMessageCallback...successful.
[1474350084:7637] NOTICE: Initial logging level 7
[1474350084:7685] NOTICE: Libwebsockets version: 1.6.0 f1cf5be
[1474350084:7701] NOTICE: IPV6 not compiled in
[1474350084:7774] NOTICE: libev support not compiled in
[1474350084:7812] NOTICE: ctx mem: 16916 bytes
[1474350084:8280] NOTICE: canonical_hostname = localhost
[1474350084:8377] NOTICE: per-conn mem: 160 + 2126 headers + protocol rx buf
IoTHubClient_SendEventAsync accepted data for transmission to IoT Hub.
IoTHubClient_SendEventAsync accepted data for transmission to IoT Hub.
IoTHubClient_SendEventAsync accepted data for transmission to IoT Hub.
IoTHubClient_SendEventAsync accepted data for transmission to IoT Hub.
IoTHubClient_SendEventAsync accepted data for transmission to IoT Hub.
Press any key to exit the application.
**[1474350086:3134] ERR: server's cert didn't look good, X509_V_ERR = 20: error:00000014:lib(0):func(0):SSL lib**
Error: Time:Tue Sep 20 05:41:55 2016 File:/home/avenger/azure-iot-sdk/azure-iot-sdk-2016-08-26/c/iothub_client/src/iothubtransportamqp.c Func:IoTHubTransportAMQP_DoWork Line:1933 AMQP transport authentication timed out.
[1474350115:0093] NOTICE: lws_context_destroy
[1474350115:0332] NOTICE: Initial logging level 7
[1474350115:0404] NOTICE: Libwebsockets version: 1.6.0 f1cf5be
[1474350115:0492] NOTICE: IPV6 not compiled in
[1474350115:0496] NOTICE: libev support not compiled in
[1474350115:0514] NOTICE: ctx mem: 16916 bytes
[1474350115:0574] NOTICE: canonical_hostname = localhost
[1474350115:0581] NOTICE: per-conn mem: 160 + 2126 headers + protocol rx buf
**[1474350116:2936] ERR: server's cert didn't look good, X509_V_ERR = 20: error:00000014:lib(0):func(0):SSL lib**
[1474350118:8769] NOTICE: lws_context_destroy
Confirmation[0] received for message tracking id = 0 with result = IOTHUB_CLIENT_CONFIRMATION_BECAUSE_DESTROY
Confirmation[1] received for message tracking id = 1 with result = IOTHUB_CLIENT_CONFIRMATION_BECAUSE_DESTROY
Confirmation[2] received for message tracking id = 2 with result = IOTHUB_CLIENT_CONFIRMATION_BECAUSE_DESTROY
Confirmation[3] received for message tracking id = 3 with result = IOTHUB_CLIENT_CONFIRMATION_BECAUSE_DESTROY
Confirmation[4] received for message tracking id = 4 with result = IOTHUB_CLIENT_CONFIRMATION_BECAUSE_DESTROY
username_1: @username_0
1) Which platform are you running when encountering this issue?
2) Which version of the SDK are you using?
Thanks.
username_0: Branch : 2016-08-26 release
Code is compiled in **Linux lcbs-dev-vm 3.19.0-68-generic #76~14.04.1-Ubuntu SMP x86_64 GNU/Linux** using cross compiler
Running Linux: **Linux 3.14.39ltsi-WR7.0.0.9_standard #4 SMP PREEMPT armv7l GNU/Linux**
username_0: @username_1
Please let me know if you have any idea why this is happening.
username_1: @username_0 - Have you tried this against the latest in develop?
username_1: @username_0 - Please let us know if you still have any issues. Regards.
Status: Issue closed
username_0: I am facing this issue:
Build branch: https://github.com/Azure/azure-iot-sdks/tree/2016-10-14
Starting the IoTHub client sample AMQP over WebSockets...
Info: IoT Hub SDK for C, version 1.0.17
IoTHubClient_SetMessageCallback...successful.
[18007:0843] NOTICE: Initial logging level 7
[18007:0946] NOTICE: Libwebsockets version: 1.6.0 f1cf5be
[18007:1017] NOTICE: IPV6 not compiled in
[18007:1022] NOTICE: libev support not compiled in
[18007:1033] NOTICE: ctx mem: 16916 bytes
[18007:1068] NOTICE: canonical_hostname = localhost
[18007:1193] NOTICE: per-conn mem: 160 + 2126 headers + protocol rx buf
IoTHubClient_SendEventAsync accepted data for transmission to IoT Hub.
IoTHubClient_SendEventAsync accepted data for transmission to IoT Hub.
IoTHubClient_SendEventAsync accepted data for transmission to IoT Hub.
IoTHubClient_SendEventAsync accepted data for transmission to IoT Hub.
IoTHubClient_SendEventAsync accepted data for transmission to IoT Hub.
Press any key to exit the application.
[**18008:3335] ERR: server's cert didn't look good, X509_V_ERR = 9: error:00000009:lib(0):func(0):PEM lib**
The root cause is that my device's date is not correct when it first boots; it is something like 1970-01-01. We don't have a separate time server to correct it. The validation is failing because the certificate's start or end date does not match the device date. Once I corrected the device date to the current date, it worked fine. For this reason I want to disable the certificate checks. I can do the certificate validation myself if required. I want to know where the code that validates the certificate lives, so that I can disable it, and also whether there is a flag I can use to simply disable the validation. I have removed the "TrustedCerts" option in the sample code but it still performs the validation. Please check and let me know.
username_2: Hi @username_0
We want to ensure the connection between devices and IoT Hub is secured, and we definitely DON'T recommend doing this.
Have you considered pinging a NTP server to adjust the device's time before connecting to IoTHub? We are doing this in our samples for microcontroller devices that don't have an RTC.
username_0: Hi @username_2
It's not just one device that we can sync with an NTP server. In production, there are many devices that connect and send/receive messages from the cloud, and my company won't allow our devices to connect to NTP servers run by free or third-party providers. Actually, I am already doing all the certificate validations manually before connecting to the cloud. We want to disable the default certificate validation, or we want custom certificate validation so that we can enable or disable some of the checks.
username_3: @username_0
We definitely DON'T recommend disabling certificate validation like Olivier mentioned.
Here is how you should be able to do it anyway, if needed for development or if you know exactly what your use case is.
Option 1:
- Look in wsio.c for the call to lws_client_connect (in the function wsio_open),
- Instead of passing the wsio_instance->use_ssl as argument, pass the value 2.
Option 2:
- We use libwebsockets for the websockets implementation (I see that you are running the WS code). The part of the code where you should look is wsio.c.
- The certificate load happens in the on_ws_callback function; see the case for LWS_CALLBACK_OPENSSL_LOAD_EXTRA_CLIENT_VERIFY_CERTS (the user argument contains the OpenSSL `SSL_CTX *`).
- You can call SSL_CTX_set_verify(user, SSL_VERIFY_NONE, NULL); to tell OpenSSL to not verify the certificate at all.
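As a sketch only, Option 2 might look like the following fragment inside that callback. This is not a drop-in patch (the variable name is an assumption), and it removes protection against man-in-the-middle attacks:

```c
/* Illustrative fragment inside wsio.c's on_ws_callback switch. */
case LWS_CALLBACK_OPENSSL_LOAD_EXTRA_CLIENT_VERIFY_CERTS:
{
    SSL_CTX* ssl_ctx = (SSL_CTX*)user;  /* `user` carries the OpenSSL context */

    /* Development/testing only: tell OpenSSL not to verify the server
       certificate at all. Do NOT ship this to production. */
    SSL_CTX_set_verify(ssl_ctx, SSL_VERIFY_NONE, NULL);
    break;
}
```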
As a note for a non websocket connection you would need to do something different depending on what TLS adapter you use.
Let us know if this works for you.
Also, I am not sure I understood this statement: " Actually I am already doing all the certificate validations manually before connect to the cloud. "
When using TLS, the certificate is typically sent by the server to the client, the client then checks the validity of the cert information that was just sent by the server.
Thus, I do not think you can validate the server cert upfront before connecting, so I have probably missed something here.
Let us know what your case is as we would like to help you the best way we can secure your connection to the cloud and avoid any attacks on your system.
Thanks,
/Dan
username_0: @username_3
I am doing certificate validation using the OpenSSL library before instantiating the IoT Hub library. The certificate validation I am doing is custom validation that I can easily disable or enable just by setting some flags.
I need help because most IoT devices don't have an RTC. What is your suggestion for handling those devices' connections to IoT Hub? I know we can connect to an NTP server, but there is no guarantee of the availability of those NTP servers. We cannot take any risk in connecting our devices to those NTP servers, or we would end up paying some vendor to keep the NTP servers available all the time.
Please share your thoughts on this problem. Do you know anyone who has handled this issue in a better way?
Thanks
-Raj
username_3: @username_0
AFAIK the only options are:
a) Keep track of time so that you are able to properly validate the cert information that is coming from the server on the wire. Also having the time allows you to generate tokens (SAS tokens in the case of IoTHub) with a limited validity in time.
To keep track of time there are usually 2 approaches that are used:
- get the time from an NTP.
- store the time and use a battery to power special circuitry/storage to keep that time (like it is done in PCs for example).
In many small devices the battery and extra HW are overkill and extra cost, and probably that is out of the question for you too.
If the NTP is not reliable there is not much you can do.
b) Do not keep track of time and validate as much as you can from the certificate you receive from the server (i.e. domain, title, etc, but not the expiry time).
Suggestion:
What I would suggest is putting in the NTP code and do a best effort to use it.
That means that if NTP is down or not available you can proceed to do the validation as if your device does not have time, and essentially disable the validation in the client. Also please note that without a time you would have to specify a very long lifetime for the tokens that are generated by the SDK to authenticate with the IoT Hub service (you would probably need to default to a hardcoded time if you cannot reach the NTP, and set the SAS token expiry time to be in the range of a hundred years, something very big, so that the tokens are not seen as expired).
Again, I have to say we do not recommend that, and the ideal would be to rely on the NTP. But this obviously is something you have to decide on based on how sensitive the data you are sending is, and many other factors.
If your device did reach an NTP then best would be to use it.
Having this mixed approach will mean writing more code and doing more testing but it would reduce man in the middle attack risk.
I hope this helps.
/Dan
Status: Issue closed
username_1: @username_0 - After the latest update from @username_3 please let us know if you have more inquiries.
username_0: Thanks. I can able to disable to certifocate using SSL_CTX_set_verify(user, SSL_VERIFY_NONE, NULL); |
aristanetworks/go-cvprac | 653323101 | Title: Deprecate POST /inventory/deleteDevices.do
Question:
username_0: We need to deprecate DeleteDevices(). The endpoint `/inventory/deleteDevices.do` is:
- Marked as deprecated starting 2020.1
- Replaced by `DELETE` on `/inventory/devices` starting 2020.1.0
- Planned deletion after 2020.3.0 train.
The initial plan for go-cvprac is to change/rename DeleteDevices() to DeleteDevicesByMac(), and add an additional method called DeleteDevicesBySerial(). These changes will reside in v3 branch and be tagged v3.x. Anyone updating to v3.x of go-cvprac will have a compile failure if they are using DeleteDevices()...users can easily address the change once they get the compile time error.
v3.x will only be supported on CVP >= 2020.1.0
Status: Issue closed |
home-assistant/core | 1030792310 | Title: Provide supply_temperature for secondaryCircuit
Question:
username_0: ### The problem
It would be great to show not only ```sensor.vicare_supply_temperature``` for the primary circuit, but also for the secondary circuit.
My guess is that you already show the value of ```heating.circuits.0.sensors.temperature.supply``` as ```sensor.vicare_supply_temperature```. But in case there are more circuits like ```heating.circuits.1.sensors...*```, they should also be shown by the vicare component.
### What is version of Home Assistant Core has the issue?
2021.10.0
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
vicare
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/vicare/
### Example YAML snippet
_No response_
### Anything in the logs that might be useful for us?
_No response_
### Additional information
_No response_ |
bcgov/wps | 575699025 | Title: Assess MSC Open Data
Question:
username_0: **Describe the task**
Assess MSC Open Data
https://eccc-msc.github.io/open-data/readme_en/
**Acceptance Criteria**
- [ ] first
- [ ] second
- [ ] third
**Additional context**
- Need SME involvement to determine what data we are interested in.
- Look at a small example to test potential?
Status: Issue closed |
openshift/cluster-monitoring-operator | 349277608 | Title: Duplicate RBAC rules in cluster-monitoring-operator-role.yaml
Question:
username_0: I'm guessing something is up with the manifest generation, but there are duplicate RBAC rules in https://github.com/openshift/cluster-monitoring-operator/blob/9a3ff15b5784580f11c9fe04e077e2524da2dc56/manifests/cluster-monitoring-operator-role.yaml#L40:15
Answers:
username_1: Yes we are aware of this. It’s due to the lazy script that combines them https://github.com/openshift/cluster-monitoring-operator/blob/master/hack/merge_cluster_roles.py. The fix is simple and I plan on getting to it shortly tm
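The fix could be as small as deduplicating entries while merging. A minimal sketch of that idea (illustrative only, not the actual `merge_cluster_roles.py` logic; the sample rules are made up):

```python
def dedupe_rules(rules):
    """Drop exact-duplicate RBAC rule entries while preserving order."""
    seen = []
    for rule in rules:
        if rule not in seen:  # dicts are unhashable, so use list membership
            seen.append(rule)
    return seen

rules = [
    {"apiGroups": [""], "resources": ["configmaps"], "verbs": ["get"]},
    {"apiGroups": [""], "resources": ["configmaps"], "verbs": ["get"]},
    {"apiGroups": [""], "resources": ["secrets"], "verbs": ["list"]},
]
print(len(dedupe_rules(rules)))  # 2
```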
Status: Issue closed
username_1: Fixed in #60 |
solo-io/gloo | 653568689 | Title: e2e test flake, appears to hit real k8s cluster when it shouldn't
Question:
username_0: **Describe the bug**
Sample failed log https://storage.googleapis.com/solo-public-build-logs/logs.html?buildid=b20a21e0-5ccf-4386-9b5d-a8af9e4ec3d6 , from https://github.com/solo-io/gloo/pull/3300#issuecomment-655732843
**Expected behavior**
No flakes
**Additional context**
This is the third or fourth time I've seen it in CI. Unit tests shouldn't flake or hit real clusters; this should be easy to resolve.
Answers:
username_0: again https://console.cloud.google.com/cloud-build/builds/3f02c52d-4140-412f-a14a-8646d7f02cf8;step=8?project=solo-public
username_1: https://storage.googleapis.com/solo-public-build-logs/logs.html?buildid=90d014a8-b29f-4cb4-b653-097f7ab59093 |
igorski/MWEngine | 379598304 | Title: Low volume for events added to sequencer
Question:
username_0: Seems like the volume for events added to the sequencer are a lot lower than live events
Answers:
username_1: There is a bit in the audio engine where summing live events multiplies their volume. I'm not entirely sure what the rationale was; I think it is lingering code from before the volume summing was refactored. There is no reason this should behave differently between live and synthesized events, as they belong to the same instrument.
username_1: Just to be clear: the sequenced events weren't "low in volume", the live ones were super loud (due to a scaling bug: instead of decreasing the volume when lowering a live event's volume, it would actually increase!). Keep this in mind if you're disappointed with the sudden decrease in violent impact :)
Give this feature branch a spin: https://github.com/username_1/MWEngine/pull/85
username_0: Thanks once again Igor! Will try this out tonight when I get home and give an update!
username_2: Ha, I was using live events with sequenced events yesterday and noticed this, now it makes sense what was happening.
Status: Issue closed
|
department-of-veterans-affairs/va.gov-team | 809575098 | Title: (WIP) [Cheetah MVP] Update appointment confirmation page
Question:
username_0: ## Item Description
Update the appointment scheduling confirmation page for COVID clinics.
## Acceptance Criteria
- [ ] Changes are behind va_online_scheduling_cheetah feature flag
- [ ] CONTENT - Copy matches **_TBD_**
- [ ] ANALYTICS - Track "Add to Calendar" clicks
- [ ] DESIGN - Designs match specs https://www.sketch.com/s/e3493682-f69f-4910-a45c-36720e9639d9/a/kaaOx9r#Inspector
Status: Issue closed
Answers:
username_0: @username_1 Here are the links to the documentation and screenshot folder. Can you please update with your changes? Thanks!
[Flow doc](https://github.com/department-of-veterans-affairs/va.gov-team/blob/master/products/health-care/appointments/va-online-scheduling/design/cheetah_flow.md)
[Screenshots](https://github.com/department-of-veterans-affairs/va.gov-team/tree/master/products/health-care/appointments/va-online-scheduling/design/cheetah-flow)
username_1: @username_0 done |
X-DataInitiative/tick | 556945439 | Title: In the learning of a Hawkes process, where can I set the beginning timestamp of the input history data?
Question:
username_0: And In Hawkes process simulation, which algorithms do you adopt? such as Ogata thinning algorithm, I can't find the introduction ,could you please tell me?
Answers:
username_1: Q1 : in tick, beginning and ending timestamps are the same for all sequences in one realization. It is not something we had in mind while implementing the algorithms.
Q2 : We use Ogata thinning algorithm, see https://github.com/X-DataInitiative/tick/blob/master/lib/cpp/hawkes/simulation/simu_point_process.cpp#L107 for its C++ implementation
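For intuition, here is a minimal illustrative Python version of Ogata's thinning for a univariate Hawkes process with an exponential kernel. This is a sketch only, not tick's actual (C++) implementation, and the parameter names are assumptions:

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Ogata thinning for intensity lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i))."""
    rng = random.Random(seed)
    events = []
    t = 0.0
    while t < T:
        # With an exponential kernel the intensity only decays between events,
        # so lambda just after time t upper-bounds it until the next candidate.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)          # candidate point
        if t >= T:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:    # thinning: accept w.p. lambda/lam_bar
            events.append(t)
    return events
```

Here alpha/beta < 1 keeps the process stable; the quadratic cost of re-summing the kernel can be avoided with the usual recursive update for exponential kernels.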
username_0: Thank you very much for your generous help, it helped me a lot to understand the code.
Status: Issue closed
|
AntixK/PyTorch-VAE | 1029727750 | Title: Torch Issue
Question:
username_0: I am trying to run this repo for the first time. I am getting the following error. Torch is installed and I am able to import torch outside of this script. Has anyone experienced a similar issue?
```
Traceback (most recent call last):
File "run.py", line 5, in <module>
from models import *
File "D:\github\PyTorch-VAE\models\__init__.py", line 1, in <module>
from .base import *
File "D:\github\PyTorch-VAE\models\base.py", line 2, in <module>
from torch import nn
ModuleNotFoundError: No module named 'torch'
```
Answers:
username_1: Please install pytorch from here - https://pytorch.org/
Status: Issue closed
|
tidyverse/dplyr | 231502426 | Title: Preloading errors
Question:
username_0: My .Rprofile in Ubuntu 16.04 64bit
```r
.First <- function() {
library(BiocInstaller)
library(pipeR)
library(plyr)
library(dplyr)
rp <- "http://mirrors.ustc.edu.cn/CRAN"
names(rp) <- "CRAN"
options(repos=rp)
BioC_rp <- "http://mirrors.ustc.edu.cn/bioc"
names(BioC_rp) <- "Anhui (China)"
options(BioC_mirror=BioC_rp)
}
.Last <- function() {
savehistory(file=".Rhistory")
}
```
My test code
```r
data <- iris[,1:4]
colnames(data)[4] <- "."
filter(data,Petal.Length == 1.4)
library(dplyr)
filter(data,Petal.Length == 1.4)
```
Error in filter(data,Petal.Length == 1.4): can not find object 'Petal.Length'
Status: Issue closed
Answers:
username_1: Yes. Because dplyr is loaded in your .Rprofile before the default packages attach, the stats package (which also provides a `filter()` function) ends up masking `dplyr::filter()`. This is one of the reasons that I don't recommend pre-loading packages in your .Rprofile.
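A common workaround, independent of load order, is to qualify the call explicitly (standard R namespace syntax, not specific to this issue):

```r
# Explicit namespace: resolves to dplyr's filter() regardless of
# what else is attached on the search path.
dplyr::filter(data, Petal.Length == 1.4)
```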
Francommit/win10_emulation_station | 916384592 | Title: [Improvement Request] Remove the '.' for scrappers.
Question:
username_0: Even on Windows, some programs such as Skraper won't look in folders that start with a period. So I suggest that in your script you use `mklink /D` to symlink ".emulationstation" to "emulationstation", then generate the systems file to point to the "emulationstation" folder.
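Sketched as a command (run from an elevated Command Prompt; the profile path is an assumption about where `.emulationstation` lives):

```
:: /D creates a directory symbolic link; the new "emulationstation" link
:: points at the existing ".emulationstation" folder, so scrapers that
:: ignore dot-folders can find the data.
mklink /D "%USERPROFILE%\emulationstation" "%USERPROFILE%\.emulationstation"
```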
Answers:
username_1: This is a valid point, but I honestly think it's more hassle than it's worth to set up symlinks in a big script that I want to cater to as many people as possible with as little error as possible.
It HAS been a while since I used a scraper myself, but I haven't had any problems with the default path that the emulation station folder is in. Can I ask how you were running the scraper? Maybe on CMD or an older version of PowerShell? PowerShell Core tends to treat things much more nicely than its predecessors.
coobird/thumbnailator | 743763005 | Title: How can I compress a GIF while keeping the GIF's animation?
Question:
username_0: ## Expected behavior
_Please describe what you are expecting the library to perform._
## Actual behavior
_Please describe the actual behavior you are experiencing, including stack
trace and other information which would help diagnose the issue._
## Steps to reproduce the behavior
_Please enter step-by-step instructions for reproducing the actual behavior.
Including code can be helpful in diagnosing issue, but please keep the code to
a minimal that will reproduce the behavior._
## Environment
_Please provide vendor and version information for the Operating System,
JDK, and Thumbnailator. Please feel free to add any other information
which may be pertinent._
- OS vendor and version: windows 10
- JDK 1.8
- Thumbnailator version: 0.4.11
Answers:
username_1: Thumbnailator does not support animated images (such as animated GIF) right now. Hopefully it will be supported in the future.
Closing as a duplicate of #30.
Status: Issue closed
username_0: For GIF compression, the only approach I could think of is dropping frames. I hope this tool will support it in the future!
ChicoState/Smart-Nap | 265840591 | Title: Alarm creation and convenient naming
Question:
username_0: Currently there is an onTouchListener set up on the AlarmEdit page which waits for the user to edit the alarm name field. If it is the first time editing it, the field is cleared and waits for input. However, the issue arises with getting the keyboard to move away from the screen following input.
Ideally, to make this intuitive, the keyboard should be removed from the screen after the user is done with input. In addition, it is important that the field continues to clear itself when the user attempts to make edits, as it already does.
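One conventional Android approach (a fragment, not Smart-Nap's actual code; `editText` and the surrounding Activity context are assumptions) is to dismiss the soft keyboard once input is finished:

```java
// Hide the soft keyboard after the user finishes editing the alarm name.
InputMethodManager imm =
        (InputMethodManager) getSystemService(Context.INPUT_METHOD_SERVICE);
imm.hideSoftInputFromWindow(editText.getWindowToken(), 0);
editText.clearFocus();  // return focus so the field stops capturing input
```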
BEEmod/BEE2-items | 219022351 | Title: Portal 1 Co-op Exit 4 door movement is broken
Question:
username_0: When the door tries to open, this happens:

The worldportals also don't connect - see #1799.
Answers:
username_0: You should just switch Portal 1 doors to use a combined model like buttons.
username_1: It's actually better to use the doors. They can easily reverse themselves, and can actually be jammed by objects, unlike animated models: if you put a cube in the way, it would be able to temporarily hold the door open.
username_0: You can't actually get through the jammed door though, there's a clip brush. |
sinfo/eventdeck | 96206414 | Title: Fetch resource by event
Question:
username_0: We need a special endpoint that can give us a list of speakers/companies/etc with valid participations for a certain event.
My opinion is that it should be:
`api/events/{id}/speakers`
`api/events/{id}/companies`
(...)
In the longer term I can see the other resource routes disappearing, as the collections grow larger and larger and I don't see the point of keeping a generic route for a resource, but more on that later.
Answers:
username_0: Added this feature as a query param on `speakers`; when it's fully working we should make the same change on `companies` too.
username_0: #296 will allow fetching members by the events they participated in.
Status: Issue closed
username_0: Think this is done. |
Azure/azure-openapi-validator | 248105746 | Title: A linter rule to verify the parameters defined in the global parameters section
Question:
username_0: Autorest treats the parameters defined in the global parameters section as client properties. Hence, one needs to be very sure when adding a parameter there.
In the 90% scenario, subscriptionId and api-version are the parameters that should be defined in the global parameters section.
However, one can define a parameter that is being referenced in multiple operations (example: resourceGroupName) in the global parameters section and apply the extension `"x-ms-parameter-location": "method"`. This will then not be a client property.
It would be nice to have a linter rule that validates this aspect and warns the user.
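For context, a correctly declared shared parameter in the global parameters section might look like this (an illustrative ARM-style sketch, not copied from any specific spec):

```json
"parameters": {
  "ResourceGroupNameParameter": {
    "name": "resourceGroupName",
    "in": "path",
    "required": true,
    "type": "string",
    "description": "The name of the resource group.",
    "x-ms-parameter-location": "method"
  }
}
```

With `"x-ms-parameter-location": "method"`, AutoRest generates resourceGroupName as a method parameter; without the extension it would become a client property, which is exactly the mistake the proposed rule should flag.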
[The CognitiveServices API has location as a global parameter](https://github.com/Azure/azure-rest-api-specs/blob/current/specification/cognitiveservices/resource-manager/Microsoft.CognitiveServices/2017-04-18/cognitiveservices.json#L1032) and SDKs have shipped with this.
Answers:
username_0: Another instance that proves that this should definitely be a linter rule.
[Someone added resourceGroupName and expressRouteConnectorName as global parameters and the reviewer merged it](https://github.com/Azure/azure-rest-api-specs-pr/pull/89/files#diff-97646710ea82ab92a7c2fcdae470581bR460). By having this linter rule we will make it easier for reviewers.
username_0: And one more PR in the management plane was merged incorrectly https://github.com/Azure/azure-rest-api-specs/commit/bc175dbf36fd6d8e4ccac18a3bfd102d43fc6672#diff-e895aa4ceab43a039f4baf6e7b2bf7c0R3146
username_1: Another example: https://github.com/Azure/azure-rest-api-specs/pull/1829
username_1: New example: https://github.com/Azure/azure-rest-api-specs/pull/1860
username_1: Another example of merged PR:
https://github.com/Azure/azure-rest-api-specs-pr/pull/263/files#diff-ab59bbd9ab18efab6f5800473e0d7bb6R720
username_2: Apparently I got bitten by this issue as well in https://github.com/Azure/azure-cli/pull/5086 but luckily @username_1 is watching out for this.
Adding a linter to rule to catch this in the future would be 💯.
username_1: Again: https://github.com/Azure/azure-rest-api-specs/pull/2052
username_1: @username_3 At some point we need to prioritize this, this is really time-consuming... :(
username_3: @username_4 @username_5 @mcardosos
This seems to be a rule that benefits our group and its efficiency. Can we please prioritize this?
username_4: @username_3 I've just tagged it with Sprint-113. We haven't been updating rules recently or scheduling any work for them, it'd be good to triage and select which ones we'd like to tackle next and how it prioritizes with other work.
username_3: Let's focus on SDK rules only.
username_1: Again https://github.com/Azure/azure-rest-api-specs/pull/2289
username_0: For Heaven's sake, please implement this linter rule.
This [Consumption API PR](https://github.com/Azure/azure-rest-api-specs/pull/2335/files#diff-823b312a7b7d9f3c3c976f374ba46fbcR2070) was merged incorrectly. It has resourceGroupName and budgetName as client properties. They should be method parameters.
Assign this issue on priority to someone and implement this linter rule.
/cc @username_3
username_1: Again https://github.com/Azure/azure-rest-api-specs/pull/2433
username_1: Seriously, now when I type "validator" in my Chrome bar, it suggests this issue directly....
username_1: Again https://github.com/Azure/azure-rest-api-specs/pull/2391
username_5: The new Rule [XmsParameterLocation](https://github.com/Azure/azure-openapi-validator/blob/master/src/dotnet/OpenAPI.Validator/Validation/XmsParameterLocation.cs) has been implemented. Refer [PR #149](https://github.com/Azure/azure-openapi-validator/pull/149) for details.
The dotnet classic open api validator is scheduled for release on Monday, March 19, 2018. The code changes are complete. Refer to [PR #150](https://github.com/Azure/azure-openapi-validator/pull/150) for further details.
All the existing errors have already been fixed in the specs repository. Refer [PR #2649](https://github.com/Azure/azure-rest-api-specs/pull/2649) for details.
There are no pending action items (of course the actual release is pending until the 19th) in this issue. Resolving it now.
Status: Issue closed
|
sebastianbergmann/php-file-iterator | 491577441 | Title: Too slow execution when tests folder contains many files (not even tests)
Question:
username_0: /Users/maksrafalko/tmp/phpunit-filter/first/second/second2.php
/Users/maksrafalko/tmp/phpunit-filter/first/second/second1.php
```
As you can see, `symfony/finder` encounters the `first/excluded` folder, sees that it is excluded, and *does not traverse* it further.
This is done thanks to [`FilterIterator`](https://www.php.net/manual/en/class.filteriterator.php) and its implementation in `symfony/finder`: [ExcludeDirectoryFilterIterator](https://github.com/symfony/finder/blob/master/Iterator/ExcludeDirectoryFilterIterator.php#L57-L59)
1. Do you think we need to implement something similar in `php-file-iterator`?
2. Do you have any objections or know something that is incompatible with the filter iterator?
Thank you.
https://github.com/symfony/finder/blob/1d4d30533fa8e343a85f6c51b0cba1ef5d041929/Iterator/ExcludeDirectoryFilterIterator.php#L57-L59
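For comparison, the same pruning idea expressed with Python's `os.walk` (a sketch of the concept only, not the PHP implementation): removing entries from `dirnames` in place stops the walker from ever descending into excluded directories, so a huge excluded folder is never traversed.

```python
import os

def iterate_files(root, excluded_dirs):
    """Yield file paths under `root`, never descending into excluded dirs."""
    for dirpath, dirnames, filenames in os.walk(root):
        # Pruning dirnames in place prevents os.walk from entering them.
        dirnames[:] = [d for d in dirnames if d not in excluded_dirs]
        for name in filenames:
            yield os.path.join(dirpath, name)
```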
Answers:
username_1: Sorry for not responding earlier. Yes, I am open to a pull requests that improves this. |
EminemJK/Banana | 488398303 | Title: About the timing of database connection disposal
Question:
username_0: Hello, could we discuss the timing of database connection disposal here, or perhaps over QQ in the evening after work?
Regarding connection creation, connections are currently created centrally in ConnectionBuilder.cs:
```csharp
public static IDbConnection CreateConnection(string dbAliase = DefaultAliase)
{
try
{
if (string.IsNullOrEmpty(dbAliase))
{
dbAliase = DefaultAliase;
}
DBSetting dBSetting;
if (!DBSettingDic.TryGetValue(dbAliase, out dBSetting))
{
throw new Exception("The key doesn't exist:" + dbAliase);
}
var conn = dBSetting.ConnectionString;
switch (dBSetting.DBType)
{
                    // The cases below create the connection directly
case DBType.SqlServer:
case DBType.SqlServer2012:
return new SqlConnection(conn);
case DBType.MySQL:
return new MySqlConnection(conn);
case DBType.SQLite:
return new SQLiteConnection(conn);
case DBType.Postgres:
return new NpgsqlConnection(conn);
case DBType.Oracle:
return new OracleConnection(conn);
}
            // Remaining code omitted
```
The Repo base class currently creates and uses connections directly in this way, but I found no code anywhere in the connection's usage lifecycle that disposes or closes it. This means connections stay occupied; once concurrent access exceeds the pool size, a cannot-connect-to-database exception occurs, because every connection in the pool is held and never released. I reached this conclusion by manually limiting the connection pool to 10 connections and testing.
```csharp
/// <summary>
/// IDbConnection
/// </summary>
public IDbConnection DBConnection
{
get
{
if (_dbConnection == null)
{
                // Creates the connection, but it is never released
_dbConnection = ConnectionBuilder.CreateConnection(dbAliase);
}
if (_dbConnection.State == ConnectionState.Closed && _dbConnection.State != ConnectionState.Connecting)
{
_dbConnection.Open();
}
return _dbConnection;
}
private set { this._dbConnection = value; }
}
```
My current workaround is to have the base Repo implement IDisposable and release the connection via `using` or manually, but this feels a bit cumbersome to write. So I'd like to discuss whether there is a better way to release the connection over its entire usage lifecycle.<issue_closed>
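For what it's worth, the scope-bound lifetime that C#'s `using` provides is the same idea as a context manager; a small language-neutral sketch in Python with a fake connection (illustrative only, not the Banana API):

```python
from contextlib import contextmanager

class FakeConnection:
    """Stand-in for an IDbConnection; tracks whether it was closed."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

@contextmanager
def open_connection():
    conn = FakeConnection()
    try:
        yield conn
    finally:
        conn.close()  # released on every exit path, including exceptions

with open_connection() as conn:
    pass  # use the connection here
print(conn.closed)  # True: closed as soon as the scope ends
```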
Status: Issue closed |
RSS-Bridge/rss-bridge | 1085278646 | Title: Twitter search failed with error 403
Question:
username_0: Error message: `Unexpected response from upstream.
cUrl error: (0)
PHP error: `
Query string: `action=display&bridge=Twitter&context=By+keyword+or+hashtag&format=Json&q=<some string here>`
Version: `dev.2021-04-25`
I recently upgraded rss-bridge to the latest version, and I'm getting a ton of exceptions now.
Answers:
username_1: Works fine on my instance.
https://feed.eugenemolotov.ru/?action=display&bridge=Twitter&context=By+keyword+or+hashtag&format=Html&q=hello
username_2: I get this too, and I wonder if this is related to the fix for #2366
username_3: I have a couple of keyword searches which work fine after applying #2366 .
username_2: I have a lot of feeds that work fine.
Every now and then (once a day?) I get a "403 curl (0)", and if I back out #2366 I get a "could not parse guest token" once a day.
I'm playing with the values, but it feels a bit random right now. I need a better testing strategy.
Note that I haven't seen both 403 and "could not parse guest token" simultaneously yet, but it could happen. "Once a day" doesn't make for a quick turnaround in testing.
username_3: We know what the guest token expiry time is because it is set by a cookie with a given expiry time.
The maximum number of guest token uses is more of a mystery, it is set at 100 at the moment and I have no issue with 90.
If anything the guest token expiry is more conservative than it needs to be, so I am unsure why some people are seeing 403 forbidden errors.
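The reuse policy under discussion (a token expires at a cookie-given time and is additionally capped at N uses) can be sketched as a small cache; the cap and TTL below are illustrative values, not the bridge's actual code:

```python
import time

class GuestTokenCache:
    """Reuse a guest token until it expires or hits a maximum use count."""
    def __init__(self, fetch_token, max_uses=100, ttl_seconds=3 * 3600):
        self._fetch = fetch_token        # callable returning a fresh token
        self._max_uses = max_uses
        self._ttl = ttl_seconds
        self._token = None
        self._uses = 0
        self._expires_at = 0.0

    def get(self):
        now = time.time()
        if self._token is None or self._uses >= self._max_uses or now >= self._expires_at:
            self._token = self._fetch()
            self._uses = 0
            self._expires_at = now + self._ttl
        self._uses += 1
        return self._token
```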
username_2: Cool, binary searching between 50 and 100 for now.
username_2: OK, so:
getApiContents() receives 403 from Twitter: (replaced the guest token with zzz)
HTTP/2 403
set-cookie: guest_id_marketing=v1%3Azzz; Max-Age=63072000; Expires=Fri, 22 Dec 2023 09:40:06 GMT; Path=/; Domain=.twitter.com; Secure; SameSite=None
set-cookie: guest_id_ads=v1%3Azzz; Max-Age=63072000; Expires=Fri, 22 Dec 2023 09:40:06 GMT; Path=/; Domain=.twitter.com; Secure; SameSite=None
set-cookie: guest_id=v1%3Azzz; Max-Age=63072000; Expires=Fri, 22 Dec 2023 09:40:06 GMT; Path=/; Domain=.twitter.com; Secure; SameSite=None
content-type: application/json;charset=utf-8
cache-control: no-cache, no-store, max-age=0
content-length: 79
content-encoding: gzip
x-response-time: 100
x-connection-hash: bdb5dc1...
date: Wed, 22 Dec 2021 09:40:06 GMT
server: tsa_f
There is a guest token in the outgoing headers
The guest token was obtained at 09:20 (local time), was used 3 times successfully, and triggered a 403 at 10:40 (local time)
That then triggers a PHP error in:
lib/error.php:24
#0 lib/contents.php(203): returnError()
#1 bridges/TwitterBridge.php(580): getContents()
#2 bridges/TwitterBridge.php(586): TwitterBridge->getApiContents()
#3 bridges/TwitterBridge.php(221): TwitterBridge->getRestId()
#4 bridges/TwitterBridge.php(241): TwitterBridge->getApiURI()
#5 actions/DisplayAction.php(135): TwitterBridge->collectData()
#6 index.php(40): DisplayAction->execute()
#7 {main}
index.php:40 class DisplayAction->execute - URI must include a scheme, host and path!
username_3: "URI must include a scheme, host and path!" doesn't fit with what I see when I try to use an expired token.
I see "Exception: Unexpected response from upstream."
username_2: "Exception: Unexpected response from upstream." is what's displayed to the rss-bridge client.
"URI must include a scheme, host and path!" is in the debug backtrace.
Status: Issue closed
username_4: Error message: `Unexpected response from upstream.
cUrl error: (0)
PHP error: `
Query string: `action=display&bridge=Twitter&context=By+keyword+or+hashtag&format=Json&q=<some string here>`
Version: `dev.2021-04-25`
I recently upgraded rss-bridge to the latest version, and I'm getting a ton of exceptions now.
svga/SVGAPlayer-Android | 744401879 | Title: A problem when using the File cache
Question:
username_0: When using the file cache in production, we ran into a problem. The log is as follows:
Caused by android.system.ErrnoException: open failed: ENOENT (No such file or directory)
at libcore.io.Posix.open(Posix.java)
at libcore.io.BlockGuardOs.open(BlockGuardOs.java:186)
at java.io.File.createNewFile(File.java:935)
at com.opensource.svgaplayer.h$f$a.run(SVGAParser.kt:2)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1113)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:588)
at java.lang.Thread.run(Thread.java:818)
The error is caused by createNewFile failing. Our approach is to make sure the target directory exists before calling in, yet the error still occurs at this point. Could exception handling be added around createNewFile?
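The defensive pattern being requested (ensure the parent directory exists, and treat a failed creation as a cache miss rather than a crash) looks roughly like this. This is a Python sketch of the idea only; the actual fix belongs in the library's Kotlin/Java cache code:

```python
import os

def create_cache_file(path):
    """Try to create a cache file; return False instead of raising on failure.

    A failure (missing parent, permissions, full disk, file already exists)
    simply disables caching for this entry instead of crashing the caller.
    """
    try:
        os.makedirs(os.path.dirname(path), exist_ok=True)
        with open(path, "x"):  # "x" mirrors createNewFile: fail if it exists
            pass
        return True
    except OSError:
        return False
```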
Answers:
username_1: Which version are you using? If it still occurs after you've made sure the directory exists, you may need to track down the specific cause; this error can come from more than one source. For example, when creating a directory and you're not sure whether the parent's own parent exists, it's best to create it with the mkdirs method; permission problems, insufficient storage space, and so on can all trigger this exception. I suggest adding some logging to collect more information.
username_0: We are on version 2.5.11. In the end, I extracted your SvgaParser, modified the caching code, and added a try-catch; the problem no longer occurs. Could you simply add a try-catch directly? With this many users, this problem is unavoidable, and a crash has a far bigger impact than a failed cache write.

username_2: 2.5.13 reports the same problem; basically all on vivo and OPPO devices running Android 5.0 and 6.0.
username_0: Bro, how did you deal with it?
username_1: Understood, I'll add this later.
username_0: Thanks, bro~
username_2: Does adding the try-catch affect file downloads?
username_2: I only saw this problem in Bugly a few days ago and then found this issue by searching; I haven't handled it yet.
username_0: It doesn't; it only affects caching.
Status: Issue closed
|
jaspardzc/Blog | 149231426 | Title: Profile Controller and Profile View
Question:
username_0: Basic Responsive Structure and Corresponding Actions
1. Portfolio Info
2. Education Info
3. Experiences Info
4. Projects Info
5. Skills Info
Answers:
username_0: 1. Skill Chart Brief Block - Completed
2. Industrial Experience Block - Completed
3. Education Brief Block - Completed
4. Recent Personal Projects Brief Block - Completed
username_0: Skill Chart Brief Block Refactored to Menu Tab with Material Design
Status: Issue closed
|
hassio-addons/addon-grocy | 563372907 | Title: Enable non admin users to use the addon (Feature Request)
Question:
username_0: # Problem/Motivation
It is currently not possible for non administrative users to gain access to the addon.
## Expected behavior
Non administrative users can navigate to the addon start page via the sidebar.
## Actual behavior
Currently not possible.
Thanks! Kay.
Status: Issue closed
Answers:
username_1: That is not an add-on limitation and thus cannot be addressed by the add-on. |
flutter/flutter | 828293468 | Title: [tool_crash] ProcessException: Nome di directory non valido. Command: C:\Users\Lenovo\Documents\flutter_SDK\bin\cache\dart-sdk\bin\pub.bat, OS error code: 267
Question:
username_0: ## Command
```
flutter doctor
```
## Steps to Reproduce
1. ...
2. ...
3. ...
## Logs
ProcessException: Nome di directory non valido.
Command: C:\Users\Lenovo\Documents\flutter_SDK\bin\cache\dart-sdk\bin\pub.bat, OS error code: 267
```
#0 _ProcessImpl._start (dart:io-patch/process_patch.dart:390:33)
#1 Process.start (dart:io-patch/process_patch.dart:36:20)
#2 LocalProcessManager.start (package:process/src/interface/local_process_manager.dart:39:20)
#3 ErrorHandlingProcessManager.start.<anonymous closure> (package:flutter_tools/src/base/error_handling_io.dart:638:33)
#4 _run (package:flutter_tools/src/base/error_handling_io.dart:532:20)
#5 ErrorHandlingProcessManager.start (package:flutter_tools/src/base/error_handling_io.dart:638:12)
#6 _DefaultProcessUtils.start (package:flutter_tools/src/base/process.dart:472:28)
#7 _DefaultProcessUtils.stream (package:flutter_tools/src/base/process.dart:491:35)
#8 _DefaultPub.batch (package:flutter_tools/src/dart/pub.dart:282:34)
<asynchronous suspension>
#9 _DefaultPub.get (package:flutter_tools/src/dart/pub.dart:222:7)
<asynchronous suspension>
#10 PubDependencies.update (package:flutter_tools/src/cache.dart:774:5)
<asynchronous suspension>
#11 Cache.updateAll (package:flutter_tools/src/cache.dart:562:9)
<asynchronous suspension>
#12 FlutterCommand.verifyThenRunCommand (package:flutter_tools/src/runner/flutter_command.dart:1109:7)
<asynchronous suspension>
#13 FlutterCommand.run.<anonymous closure> (package:flutter_tools/src/runner/flutter_command.dart:1009:27)
<asynchronous suspension>
#14 AppContext.run.<anonymous closure> (package:flutter_tools/src/base/context.dart:150:19)
<asynchronous suspension>
#15 AppContext.run (package:flutter_tools/src/base/context.dart:149:12)
<asynchronous suspension>
#16 CommandRunner.runCommand (package:args/command_runner.dart:197:13)
```
```
[✓] Flutter (Channel stable, 2.0.1, on Microsoft Windows [Versione 10.0.18363.1379], locale it-IT)
• Flutter version 2.0.1 at C:\Users\Lenovo\Documents\flutter_SDK
• Framework revision c5a4b4029c (6 days ago), 2021-03-04 09:47:48 -0800
• Engine revision 40441def69
• Dart version 2.12.0
[!] Android toolchain - develop for Android devices (Android SDK version 29.0.3)
• Android SDK at C:\Users\Lenovo\AppData\Local\Android\sdk
• Platform android-29, build-tools 29.0.3
• Java binary at: C:\Program Files\Android\Android Studio\jre\bin\java
• Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b01)
! Some Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses
[✓] Chrome - develop for the web
• Chrome at C:\Program Files (x86)\Google\Chrome\Application\chrome.exe
[✓] Android Studio (version 4.0)
• Android Studio at C:\Program Files\Android\Android Studio
• Flutter plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/9212-flutter
• Dart plugin can be installed from:
🔨 https://plugins.jetbrains.com/plugin/6351-dart
• Java version OpenJDK Runtime Environment (build 1.8.0_242-release-1644-b01)
[✓] Connected device (2 available)
• Chrome (web) • chrome • web-javascript • Google Chrome 88.0.4324.190
• Edge (web) • edge • web-javascript • Microsoft Edge 88.0.705.68
! Doctor found issues in 1 category.
```
## Flutter Application Metadata
No pubspec in working directory.
Answers:
username_1: Duplicate of https://github.com/flutter/flutter/issues/77376.
Status: Issue closed
|
mariajosebaeza/bunny-love | 197715332 | Title: Problems in the exercise
Question:
username_0: Hi!
The following errors were found:
1. The meta viewport tag is missing: `<meta name="viewport" content="initial-scale=1, maximum-scale=1">`
2. The header and the bunny columns that follow it are not responsive.
zzx02/Raspberry-Pi-Indy-Agent | 575155289 | Title: Timeout issue when running agent on Raspberry Pi
Question:
username_0: Hi,
I am trying to run Indy agents on a Raspberry Pi, but I am getting the following error:

I am using a Raspberry Pi 3B with 1 GB RAM.
barbeau/gpstest | 314964706 | Title: Visualizing/Downloading GNSS measurements log from Android Studio
Question:
username_0: Hi, I'm using GPSTest on a Huawei P10 with Android 7.0. I was interested in logging the raw measurements. I followed all the steps suggested here (USB driver, enabling the log by the app, selecting "no filters" and typing "GpsOutput" in Adroid Studio), but I don't visualize, at least, the measurements on the Android Monitor. This is the screenshot from my laptop. Please, I'm not expert at all with Adroid Studio, so I do not know what all those windows on the top refer to.

Is something missing from all the steps that I've followed?
Answers:
username_1: @username_0 Have you enabled USB debugging on your device (which is different than installing a USB driver)? See https://developer.android.com/studio/run/device.html#developer-device-options
username_0: yes, I did it, though it took me a while to understand why USB debugging kept getting disabled every time I quit the settings menu
username_1: If you don't do this you won't see anything in Logcat.
username_0: yes, I did it. I clicked ok to allow USB debugging, and then ok once again visualizing the RSA keycode. Moreover, once completed these steps, the tool "HiSuite" shows that the device is connected to the laptop. So, the steps I did in order are:
- open Android Studio (no device connected). Here the options "No Filters" and "GpsOutput" are still present from last time. Device offline.
- plug the USB cable
- enable USB debugging
- click "ok" two times. Now device recognized
- start GpsTest on the Huawei P10
- Settings, enable Measurements log. Going back to the main window the app confirms the success
- nothing on the logcat screen. Here, clicking on the "Device File Explorer", I've noticed that in the log folder, the permission is denied and there's an error. But I do not know if that one should be the right folder to store data, eventually.
Is any step missing to visualize/store the data?
I attach some figures to clear all the steps described.



username_1:
> in the log folder, the permission is denied and there's an error. But I do not know if that one should be the right folder to store data, eventually.

Currently GPSTest doesn't support logging to a file (that's ticketed in
https://github.com/username_1/gpstest/issues/66 for a future feature), so you
won't find a log file.
It sounds like you've completed all the steps correctly to see the output
in LogCat. It's possible that some devices don't output Logcat statements
for release builds of APKs (e.g., what you would download from Google Play).
You could test this by uninstalling GPSTest from Google Play, and then
building the source code of the project on your machine in Android Studio
and installing that APK on your device. Instructions are here if you want
to try this out:
https://github.com/username_1/gpstest#building-in-android-studio
username_0: thank you. I'll try this one and let you know if it works.
Regards,
Francesco
Status: Issue closed
username_1: I'm closing this issue for housekeeping as it doesn't seem to be related to the app, but let me know your results from running in Android Studio. |
rsalayo/OpeniT.Timesheet.Issues | 230948025 | Title: Feature Request: Time Remaining and Hours needed per day
Question:
username_0: An indicator showing how many hours you still need to work and how many hours per day are needed.
Example:
May Total: 154.81 hours
Time Remaining: 37.19 hours (6.20/day)

Answers:
username_1: Try hovering on Today’s progress bar and see if this helps.
username_2: Please make it 'always' visible, please?
Status: Issue closed
username_1: This is the best I can do for now...

username_2: Thanks! |
akkadotnet/Akka.Persistence.MongoDB | 695441788 | Title: Lots of Failed to persist event type [ ... ] with sequence number [...] for persistenceId [...]
Question:
username_0: Hey guys,
I was wondering if anyone had any tips or direction toward resolving this kind of issue, which I have been trying to solve for a while.
So basically, it seems that when the system is under load and lots of actors receive lots of events to persist, I often get this kind of error. I have tried to increase the timeout from 10s to 20s without any luck.
Thank you
```
[ERROR][09/07/2020 22:47:20][Thread 0005][akka://.....] Failed to persist event type [......] with sequence number [11] for persistenceId [....].
Cause: Akka.Pattern.OpenCircuitException: Circuit Breaker is open; calls are failing fast
---> System.TimeoutException: Execution did not complete within the time allotted 20000 ms
--- End of inner exception stack trace ---
at Akka.Pattern.Open.Invoke[T](Func`1 body)
at Akka.Persistence.Journal.AsyncWriteJournal.HandleWriteMessages(WriteMessages message)
```
Answers:
username_1: We solved that kind of error when recovering a lot of actors by using the Persistence Extras package https://github.com/petabridge/Akka.Persistence.Extras with its backoff supervisor.
If you are using Shard Regions with remember-entities set to true it might be a good idea to change the recovery strategy to `constant`. Otherwise the shard region will try to recover the entities as fast as possible
`akka.cluster.sharding.entity-recovery-strategy = "constant"`
username_2: @username_0 Indeed, if underlying storage is overloaded, back-off strategy should help to reduce number of attempts that your actors/journals are making when failing to persist events. See more here: https://getakka.net/articles/persistence/event-sourcing.html#failures
You may also consider explicit event batching with `PersistAll` or `PersistAllAsync` to reduce the load to the journal.
I am closing this assuming that you have already resolved the issue with configuration/storage overloading, but feel free to reopen if you still have questions.
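The back-off advice above amounts to a capped exponential delay between retries; a sketch with illustrative numbers (not Akka.NET's actual BackoffSupervisor implementation):

```python
def backoff_delays(base=0.2, factor=2.0, cap=30.0, attempts=8):
    """Capped exponential backoff schedule, in seconds."""
    delay = base
    out = []
    for _ in range(attempts):
        out.append(min(delay, cap))
        delay *= factor
    return out

print(backoff_delays())  # [0.2, 0.4, 0.8, 1.6, 3.2, 6.4, 12.8, 25.6]
```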
Status: Issue closed
username_0: Thank you for your recommendations @username_2 and @username_1 |
adobe/helix-static | 550087538 | Title: add support for Typekit proxying
Question:
username_0: A request made to `/hlx_fonts/eic8tkf.css` would be proxied to `https://use.typekit.net/eic8tkf.css`. In the result, all references to `https://use.typekit.net/` will be replaced with references to `/hlx_fonts/`.
A request made to `/hlx_fonts/af/d91a29/00000000000000003b9af759/27/l?primer=34645566c6d4d8e7116ebd63bd1259d4c9689c1a505c3639ef9e73069e3e4176&fvd=i4&v=3` would be forwarded by Fastly to `https://use.typekit.net/af/d91a29/00000000000000003b9af759/27/l?primer=34645566c6d4d8e7116ebd63bd1259d4c9689c1a505c3639ef9e73069e3e4176&fvd=i4&v=3`<issue_closed>
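The reference rewriting described above is a plain string substitution over the proxied CSS; a sketch (illustrative, not the actual helix-static code):

```python
def rewrite_typekit(css_text):
    """Rewrite Typekit references so fonts are served first-party."""
    return css_text.replace("https://use.typekit.net/", "/hlx_fonts/")

print(rewrite_typekit("@import url(https://use.typekit.net/eic8tkf.css);"))
# @import url(/hlx_fonts/eic8tkf.css);
```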
Status: Issue closed |
facebook/react | 495643128 | Title: Plans for handling `hidden` differently
Question:
username_0: I don't necessarily agree with the reasoning given in the spec but I'm more interested if the core team is aware of this conflict and if there are plans to resolve this somehow or simply ignore it.
Answers:
username_1: The plan is to have a first-class API for this. We haven’t quite gotten to that yet.
username_2: Closing as answered
Status: Issue closed
|
jlippold/tweakCompatible | 406735756 | Title: `Moveable9` working on iOS 11.4
Question:
username_0: ```
{
"packageId": "com.hackyouriphone.moveable9",
"action": "working",
"userInfo": {
"arch32": false,
"packageId": "com.hackyouriphone.moveable9",
"deviceId": "iPhone8,1",
"url": "http://cydia.saurik.com/package/com.hackyouriphone.moveable9/",
"iOSVersion": "11.4",
"packageVersionIndexed": false,
"packageName": "Moveable9",
"category": "HYI - Tweaks",
"repository": "HackYouriPhone",
"name": "Moveable9",
"installed": "1.0.0~beta29-28",
"packageIndexed": false,
"packageStatusExplaination": "This tweak has not been reviewed. Please submit a review if you choose to install.",
"id": "com.hackyouriphone.moveable9",
"commercial": false,
"packageInstalled": true,
"tweakCompatVersion": "0.1.0",
"shortDescription": "Arrange statusbar icons.",
"latest": "1.0.0~beta29-28",
"author": "<NAME> (tateu)",
"packageStatus": "Unknown"
},
"base64": "<KEY>
"chosenStatus": "working",
"notes": "Incompatible with typestatus"
}
``` |
MaryRys/Futurescape | 405054239 | Title: Create completed triumphs
Question:
username_0: ### User Experience
As a user, I would like to see triumphs that completed, that is displayed in a separate area from the listed triumphs.
### Dev Notes
Use the smash function and filter in App.js, as in `const completedTriumphs = triumphs.filter(x => x.isCompleted);`.
Send the isCompletedTriumph down to the isCompleted component and render the same triumph information as the All list. The "Untrack" button will no longer display, and Firebase shows isComplete: true and isFeatured: false.
### Acceptance Criteria
The only triumphs in the component will be unfeatured, complete and untracked.<issue_closed>
Status: Issue closed |
ant-design/ant-design | 389312214 | Title: controlled <Select showSearch />
Question:
username_0: <Option ... />
</Select>
```
<!-- generated by ant-design-issue-helper. DO NOT REMOVE -->
Status: Issue closed
Answers:
username_1: https://codesandbox.io/s/dreamy-butterfly-vdrne
`searchValue` of antd v4 Select is a controlled prop now.
username_2: v4.0.0 is still not working, and the class `ant-select-selection-selected-value` is also missing
username_3: Feel free to open a new issue please. |
nasa/osal | 677314579 | Title: Deprecate OS_open and OS_creat
Question:
username_0: **Is your feature request related to a problem? Please describe.**
For historical/backward compatibility reasons, the API of these two functions doesn't follow the typical flow. Rather than providing a `uint32` ID output buffer as the first argument with a separate int32 return code, they return the OSAL ID cast as an `int32` on success. For these functions, the caller is expected to check if the result is negative, and if so, consider it an error code. Whereas if it is non-negative, the caller is expected to cast it back to a `uint32` type and interpret it as an OSAL ID.
**Describe the solution you'd like**
These should be like all other OSAL APIs and pass back the ID separately from the return/error status.
**Describe alternatives you've considered**
Leave as is. But these two functions present a challenge when making a distinct type for OSAL IDs.
**Additional context**
In the current implementation,. these are just compatibility wrappers anyway. They both call `OS_OpenCreate()` internally, which provides both open (existing file) and creat (new file) based on the flags it was passed. The `OS_OpenCreate` function _does_ follow the correct pattern so one option would be to just expose this to the public API.
The other option is to create a new version of OS_open and OS_creat which follow the correct pattern. But in order to provide a transition they would have to use different names.
**Requester Info**
<NAME>, Vantage Systems, Inc.
Answers:
username_0: One other possible compromise is to keep these as-is returning `int32`, but provide another function to decode the return value into a status and ID of the correct type (e.g. `uint32`) rather than relying on the user to cast it themselves and do it properly.
This has been a source of confusion in the past, see #498.
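Such a decode helper is trivial to state; in Python pseudo-form (the real helper would be C, and `OS_SUCCESS == 0` is assumed here for illustration):

```python
OS_SUCCESS = 0  # assumed value, for illustration only

def decode_open_result(ret):
    """Split the legacy int32 return of OS_open/OS_creat into (status, obj_id).

    Negative values are error codes; non-negative values are the OSAL ID
    that callers previously had to cast back to uint32 themselves.
    """
    if ret < 0:
        return ret, None
    return OS_SUCCESS, ret
```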
username_0: Would like to get some feedback on this idea in the next CCB.
username_1: I'm in favor of the suggested approach (deprecate, expose OS_OpenCreate).
username_1: @acudmore @username_3 ping, deprecation proposal
username_1: CCB 2020-08-12: No objections, but solicited responses/review.
username_0: Also @CDKnightNASA - you might have some thoughts here as well?
Summary of questions are:
1. Does the inconsistency/confusion around this API warrant making a breaking change to try to fix it and make it like the others?
If (1) is yes, then:
2. Should we keep a separate `open` & `creat` routine or combine them into a single API, using flags to differentiate whether a file should be created or not? Or should we keep it 1:1 and offer a direct replacement for `OS_open` and `OS_creat`, to make it easier for apps to update?
If (1) is no, then:
3. Is it worth providing a new API to help users check/decode the `int32` value returned from these existing API calls? This would be backward compatible at least, as it doesn't change/remove any existing function, while alleviating the need for applications to directly cast/convert between OSAL IDs and other integer data types, which is the fundamental concern here.
username_2: No objections to just using OS_OpenCreate as this follows the normal API structure found in many file systems where the create option flag is passed in. Hopefully the change only impacts a limited number of applications.
username_0: Worth noting, in case anyone is unconvinced about the benefits of being type-strict, I found a mistake/bug in the unit tests here, that's probably been lurking for quite some time, where it fails to handle this return code properly:
https://github.com/nasa/osal/blob/8cfd6fe71a5506be8e463f26d92441785fd3e242/src/tests/file-api-test/file-api-test.c#L794-L795
This type of goof-up becomes very apparent/obvious when using a distinct type, and can actually trigger a compiler error if desired.
username_3: This will impact several apps, but I'm not opposed.
username_1: @username_0 - did you start implementing this change, or want me to take it?
username_0: A previously merged PR has already exposed the `OS_OpenCreate()` API in the public header as discussed, so the next step was to replace current CFE references to `OS_open`/`OS_creat`.
I hadn't started on that part yet because I needed to get nasa/cfe#868 in first, but now that is settled there shouldn't be any issue in replacing these refs now. I can submit a CFE ticket for that. If someone else is available and wants to take it on that would be fine, otherwise I'm happy to do it.
username_0: See nasa/cfe#893 for the CFE work to be done.
Status: Issue closed
|