<issue_start><issue_comment>Title: n_stable not work? username_0: Hi, I set n_stable = 5 and leave f_stable at default (0.001). Why did the iterations below not stop at the 6th epoch?

Epoch | Training Error | Validation Error | Time
-----------|------------------------|---------------------|----
1 | 2.887e-03 | 4.975e-05 | 98.1s
2 | 9.069e-07 | 4.949e-05 | 96.4s
3 | 7.560e-07 | 4.942e-05 | 95.8s
4 | 7.034e-07 | 4.938e-05 | 105.5s
5 | 6.737e-07 | 4.936e-05 | 105.8s
6 | 6.521e-07 | 4.934e-05 | 105.3s
7 | 6.365e-07 | 4.933e-05 | 103.3s
8 | 6.240e-07 | 4.931e-05 | 105.5s
9 | 6.138e-07 | 4.930e-05 | 105.4s
10 | 6.052e-07 | 4.930e-05 | 106.2s
11 | 5.983e-07 | 4.929e-05 | 106.9s

and it keeps on going... For further information, the Validation Error column was in green (while the Training Error column was in blue). I noticed that in some cases they change colour to black. What is the difference between green and black? <issue_comment>username_1: If the validation error goes up, then a minimum was reached, and the previous version is considered stable. If you want to terminate after 5 iterations use `n_iter=5`?<issue_closed>
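For reference, here is a minimal sketch of the two stopping criteria discussed above, assuming scikit-neuralnetwork's `Regressor` API (layer choices, the validation split, and the training settings are illustrative, not taken from this issue):

```python
from sknn.mlp import Regressor, Layer

# Stop when the validation error has not improved by more than f_stable
# for n_stable consecutive epochs (the behaviour asked about above).
nn_stable = Regressor(
    layers=[Layer("Rectifier", units=100), Layer("Linear")],
    valid_size=0.1,   # hold out 10% of the training data for validation
    n_stable=5,
    f_stable=0.001)

# Stop unconditionally after a fixed number of epochs, as suggested in the reply.
nn_fixed = Regressor(
    layers=[Layer("Rectifier", units=100), Layer("Linear")],
    n_iter=5)

# nn_stable.fit(X_train, y_train)
# nn_fixed.fit(X_train, y_train)
```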
<issue_start><issue_comment>Title: multiple anonymous defines username_0: For the record, I encountered this problem one day out of the blue and thankfully only lost a couple of hours before I tried disabling the browser extensions which fixed the problem (but left me without an excellent extension :( ) <issue_comment>username_1: Which browser extension(s) did you have to disable? <issue_comment>username_0: Exif Viewer on Chrome.
<issue_start><issue_comment>Title: Wrong example for toast username_0:
```js
var $toastContent = $('I am toast content');
Materialize.toast($toastContent, 5000);
```
The above example does not show the correct code to match the description. Thanks. <issue_comment>username_1: Fixed in 4fb1621<issue_closed>
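For anyone landing on this issue later: when a jQuery object is passed, Materialize expects the toast content to be built from actual markup, not from a bare text selector. A plausible corrected snippet is shown below; the exact wording of the fixed example lives in commit 4fb1621, so treat this as an illustration rather than the official docs text.

```js
// Wrap the message in an element so jQuery actually creates a node to display.
var $toastContent = $('<span>I am toast content</span>');
Materialize.toast($toastContent, 5000); // show the toast for 5000 ms
```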
<issue_start><issue_comment>Title: Upgrade WordPress to 4.2 username_0: Tested with hhvm perf.php --hhvm hhvm --repo-auth --i-am-not-benchmarking --wordpress ![image](https://cloud.githubusercontent.com/assets/90942/7310856/3ea48c28-e9e8-11e4-9491-bfe270ec2866.png) <issue_comment>username_1: Rather than committing the wordpress .gz file into this github repository, is there any way that this can be downloaded as needed from an authoritative site? True, no other framework does this in the oss-performance world, yet. However, doing the download when/if needed would save the git checkout time and bloat for all users of this repository. <issue_comment>username_2: We've tried that - it generally makes things unreliable and suddenly break. <issue_comment>username_2: @username_0 : how about without the extra args, and how do the stats (especially byte size and 200 rate compared to the other response codes) compare to without this diff? <issue_comment>username_1: Yes, I understand that. Our internal solution for problems like this has been to cache all of the 3rd party stuff on a local ftp server, but that won't really work in this situation, I agree. I withdraw my suggestion :) <issue_comment>username_0: { "Combined": { "Siege requests": 32753, "Siege wall sec": 0.36, "Siege RPS": 548.35, "Siege successful requests": 31153, "Siege failed requests": 0, "Nginx hits": 32953, "Nginx avg bytes": 29399.132734501, "Nginx avg time": 0.36017628137044, "Nginx P50 time": 0.351, "Nginx P90 time": 0.455, "Nginx P95 time": 0.492, "Nginx P99 time": 0.768, "Nginx 200": 29790, "Nginx 499": 163, "Nginx 301": 1400, "Nginx 404": 1600, "canonical": 1 } } VS { "Combined": { "Siege requests": 36931, "Siege wall sec": 0.32, "Siege RPS": 616.44, "Siege successful requests": 35331, "Siege failed requests": 0, "Nginx hits": 37131, "Nginx avg bytes": 30952.137566993, "Nginx avg time": 0.32134693382888, "Nginx P50 time": 0.318, "Nginx P90 time": 0.409, "Nginx P95 time": 0.439, "Nginx P99 time": 0.508, "Nginx 200": 33775, "Nginx 499": 156, "Nginx 301": 1600, "Nginx 404": 1600, "canonical": 1 } } Bear in mind there's a different theme so the bytes are different. Interestingly it seems a little faster. But close enough to prove it's working. <issue_comment>username_0: Interesting that there are more 301s. Thoughts? <issue_comment>username_2: Looks close enough given in the overall request increase, and near-constant average bytes. Please rebase.
<issue_start><issue_comment>Title: Can't I pipe a stream to an object stream? username_0:
```javascript
var through2 = require('through2');

process.stdin.pipe(through2(function (chunk, _, next) {
  this.push({ data: chunk.toString() });
  return next();
}))
.pipe(through2.obj(function (chunk, _, next) {
  //do something
})).pipe(process.stdout);
```
It throws "TypeError: Invalid non-string/buffer chunk". Must I stringify the output object when pushing and parse it in the next pipeline stage? <issue_comment>username_1: You have to enable objectMode on both through streams for this to work.<issue_closed>
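A minimal sketch of the fix suggested above: both through streams opt into object mode, and the last one converts back to a string so `process.stdout` can consume it (the transform bodies are illustrative).

```javascript
var through2 = require('through2');

process.stdin
  // through2.obj() enables objectMode, so this stream may push plain objects.
  .pipe(through2.obj(function (chunk, _, next) {
    this.push({ data: chunk.toString() });
    next();
  }))
  .pipe(through2.obj(function (obj, _, next) {
    // do something with obj, then emit text again so stdout can consume it
    this.push(JSON.stringify(obj) + '\n');
    next();
  }))
  .pipe(process.stdout);
```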
<issue_start><issue_comment>Title: Mutation Result not update in React 18 StrictMode username_0: <!-- Thanks for filing an issue on Apollo Client! Please make sure that you include the following information to ensure that your issue is actionable. If you don't follow the template, your issue may end up being closed without anyone looking at it carefully, because it is not actionable for us without the information in this template. **PLEASE NOTE:** Feature requests and non-bug related discussions are no longer managed in this repo. Feature requests should be opened in https://github.com/apollographql/apollo-feature-requests. --> **Intended outcome:** <!-- What you were trying to accomplish when the bug occurred, and as much code as possible related to the source of the problem. --> Trying to upgrade our project to React 18 RC.3. With simple `const [mutate, result] = useMutation()`. We are trying to show loading state and response data via `result.loading` and `result.data` **Actual outcome:** <!-- A description of what actually happened, including a screenshot or copy-paste of any related error messages, logs, or other output that might be related. Places to look for information include your browser console, server console, and network logs. Please avoid non-specific phrases like “didn’t work” or “broke”. --> `result.loading` and `result.data` are not being updated after the mutation is done. On further debugging, It seems that [React 18 invokes useEffect twice on newly mounted components](https://github.com/reactwg/react-18/discussions/19) on `StrictMode` which causes [`isMounted`](https://github.com/apollographql/apollo-client/blob/main/src/react/hooks/useMutation.ts#L141) to immediately be set to `false` and not update the [`result`](https://github.com/apollographql/apollo-client/blob/main/src/react/hooks/useMutation.ts#L99) when mutation is done or error. A quick fix is that we reset `isMounted` to `true` when the component remount. I know that `4.0.0` is coming for React 18 and this bug will be obsoleted, But the fix for this issue will allow us to migrate to React 18 without any hiccup (hopefully). **How to reproduce the issue:** <!-- If possible, please create a reproduction using https://github.com/apollographql/react-apollo-error-template and link to it here. If you prefer an in-browser way to create reproduction, try: https://codesandbox.io/s/github/apollographql/react-apollo-error-template Instructions for how the issue can be reproduced by a maintainer or contributor. Be as specific as possible, and only mention what is necessary to reproduce the bug. If possible, try to isolate the exact circumstances in which the bug occurs and avoid speculation over what the cause might be. --> 1. Update to `react@18.0.0-rc.3` and `react-dom@18.0.0-rc.3` 2. Wrap your Root Component with `StrictMode` 3. Call `useMutation` [Codesandbox URL ](https://codesandbox.io/s/apollo-react-18-mutation-issue-bz13b0?file=/src/App.js) **Versions** <!-- Run the following command in your project directory, and paste its (automatically copied to clipboard) results here: `npx envinfo@latest --preset apollo --clipboard` --> ```sh System: OS: Linux 5.10 Ubuntu 20.04.3 LTS (Focal Fossa) Binaries: Node: 16.14.0 - ~/.local/share/nvm/v16.14.0/bin/node Yarn: 1.22.15 - ~/.local/share/nvm/v16.14.0/bin/yarn npm: 8.5.1 - ~/.local/share/nvm/v16.14.0/bin/npm npmPackages: @apollo/client: ^3.5.10 => 3.5.10 ```
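A sketch of the quick fix the reporter describes: re-arm the mounted flag on every mount so the second StrictMode mount still receives result updates. The hook and variable names below are illustrative and not Apollo Client's actual internals.

```js
import { useEffect, useRef } from "react";

// StrictMode mounts, unmounts, then remounts the component, so the flag must be
// reset to true on every mount rather than only being set to false on unmount.
function useIsMounted() {
  const isMounted = useRef(true);
  useEffect(() => {
    isMounted.current = true;
    return () => {
      isMounted.current = false;
    };
  }, []);
  return isMounted;
}
```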
<issue_start><issue_comment>Title: Redesign PathwayTablePerspective username_0: PathwayTablePerspective is deprecated and rightfully so. Here are some design alternatives: Provide a defined field in the TablePerspectives where additional information can be held. Specifically, this would be a reference to the pathway datadomain and the specific pathway, but could equally be information about the type of data (e.g., is it data that should be shown as KM plots). This would also mean that such a new pathway table perspective is a fully independent TablePerspective which is stored in and primarily associated with the ATableBasedDataDomain. The biggest disadvantage of this approach is that it essentially limits us to one pathway per column, as we can't allow multiple pathways for one table perspective, as we did in the original pathway implementation. Therefore we have to make the decision that we do not intend to show multiple pathways in one column. The alternative would be to use a PathwayPerspective which holds (not inherits) a TablePerspective. This would associate the perspective primarily with the pathway DD. The biggest disadvantage is that this is very different from what we have now and would probably require a joint super-type of the PathwayPerspective and the TablePerspective. It should also be noted that we don't serialize table-perspectives at all at the moment. They are always dynamically generated based on a record and dimension perspective. However, this is also not ideal since computationally expensive statistics, for example, would have to be re-computed.<issue_closed>
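A rough sketch of the two alternatives, purely for illustration (the class and field names below are hypothetical, not Caleydo's actual API):

```java
// Alternative 1: a regular TablePerspective carries an optional reference to the
// pathway it should be shown with. Simple, but limits us to one pathway per column.
class TablePerspective {
    // existing record/dimension perspectives ...
    PathwayDataDomain pathwayDataDomain; // may be null
    PathwayGraph pathway;                // may be null
}

// Alternative 2: a perspective owned by the pathway data domain that holds
// (rather than inherits from) a TablePerspective.
class PathwayPerspective {
    PathwayGraph pathway;
    TablePerspective tablePerspective;
}
```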
<issue_start><issue_comment>Title: docs: add notes to deadlocks redundant rollback username_0: <!-- Thanks for wanting to fix something on Sequelize - we already love you! Please fill in the template below. If unsure about something, just do as best as you're able. If your PR only contains changes to documentation, you may skip the template below. --> Closes #14249 ### Pull Request Checklist _Please make sure to review and check all of these items:_ - [ ] Have you added new tests to prevent regressions? - [ ] Does `yarn test` or `yarn test-DIALECT` pass with this change (including linting)? - [x] Is a documentation update included (if this change modifies existing APIs, or introduces new ones)? - [ ] Did you update the typescript typings accordingly (if applicable)? - [ ] Does the description below contain a link to an existing issue (Closes #[issue]) or a description of the issue you are solving? - [ ] Did you follow the commit message conventions explained in [CONTRIBUTING.md](https://github.com/sequelize/sequelize/blob/main/CONTRIBUTING.md)? <!-- NOTE: these things are not required to open a PR and can be done afterwards / while the PR is open. --> ### Todos - [x] add notes to deadlocks redundant rollback <issue_comment>username_1: Hey @username_0 👋. Could you open your PR to the new documentation website instead? :) https://github.com/sequelize/website/ <issue_comment>username_0: Sure!
<issue_start><issue_comment>Title: Add support for new "downtime" API calls username_0: per http://docs.datadoghq.com/api/#schedule-downtime . <issue_comment>username_1: Not sure I'll have the time today, but I'll see if I can get to it this week. <issue_comment>username_1: hey @username_0 should be good to go, `v0.2.0` is now published on npm. let me know if you have any issues with any of it.<issue_closed> <issue_comment>username_0: beautiful, thanks.
<issue_start><issue_comment>Title: Incorrect deprecate warning when MarkerLayer::markRange passed persistent options username_0: I'm working on the Atom master branch to catch up with display-layer. cc @username_1 Then I noticed this warning.

Assigning custom properties to a marker when creating/copying it is deprecated. Please, consider storing the custom properties you need in some other object in your package, keyed by the marker's id property.

Only the following code shows the warning.
```coffeescript
editor.markBufferRange editor.getSelectedBufferRange(),
  invalidate: 'never'
  persistent: false
```
As far as I can tell, the persistent option is for `MarkerLayer`, but this option is validated by `Marker`'s `Marker.extractParams(options)`. Should we `delete options.persistent` before passing the options to `Marker.extractParams()`? <issue_comment>username_1: With #149 we have basically moved all the concerns about markers' lifetime and persistence to marker layers and, as a result, passing that option won't actually persist the marker across reloads. This is what the new API looks like (you can also take a look at some of the core packages, like `bookmarks`, to see how we've switched to marker layers):
```js
let markerLayer = editor.addMarkerLayer({persistent: true})
markerLayer.markRange(position)
```
You can also optionally pass a `maintainHistory` parameter to `addMarkerLayer` which, when `true`, makes sure to restore your markers when undos or redos are performed.<issue_closed> <issue_comment>username_0: Thanks, understood. So this API doc has to be updated; it says it accepts persistent options, but `@defaultMarkerLayer` is created without any options (`@defaultMarkerLayer = @displayLayer.addMarkerLayer()`) https://github.com/atom/atom/blob/master/src/text-editor.coffee#L1691-L1724 Also `TextEditor::markScreenRange` has to be updated. <issue_comment>username_1: Thanks for the heads-up, @username_0! 🙇 I'll make sure to update those docs as soon as possible. 📜 <issue_comment>username_1: Updated in atom/atom@e3790b8. <issue_comment>username_0: Thanks! Not only for this, but for all your heavy effort to improve Atom at a very fundamental layer!!
<issue_start><issue_comment>Title: os: File should be an interface username_0: By analogy with #13473, os.File should also be an interface. That would permit a program to use files in the abstract sense, including implementations not provided by the os package. <issue_comment>username_1: Related: #5636 (VFS layer), in which we were never able to find a design that fit everybody's needs. <issue_comment>username_2: https://godoc.org/golang.org/x/net/webdav#FileSystem This may be one of implementations. <issue_comment>username_3: I think I'd argue that io.File should be the interface but it's all moot now. <issue_comment>username_4: @username_3 How about follow interface? ```go type FileV2 interface{ io.ReadWriteCloser .... } func NewFileFromV2(v2 FileV2) *File{ } ``` <issue_comment>username_5: No. Please see my reply to a similar suggestion in #13473. Having version numbers in APIs is really ugly. <issue_comment>username_3: This is a long-term issue that we are not actively working on. <issue_comment>username_6: After several years of maintaining various Google filesystem implementations, I disagree strenuously with this proposal. Go interfaces act as bounds on types. On function inputs, they are lower bounds: every implementation must provide _at least_ the methods described in the interface. On function outputs, they are upper bounds: even if the implementation provides methods beyond those in the interface, they are not visible in the compile-time type system, and they are therefore also less visible in the documentation. Because they act as bounds, interfaces work best on the consumer side of an API: for example, `io.Reader` belongs in the package that defines `io.Copy`, because `Copy` consumes the output of a passed‐in `Reader`. On the other hand, interfaces do not belong on the implementation side of the API: `bytes.NewReader` returns a `*bytes.Reader`, not an `io.Reader`, because a `*bytes.Reader` provides methods that are not required of every `io.Reader`. Turning that observation back toward the `os` package, we see that the `os` package is clearly on the implementation side of `os.File`: there are several functions in the package that return a `*File`, but **zero** that accept one as an argument other than a method receiver (#13473 notwithstanding). If we were to make `os.File` an interface, that would make every further addition to the `os.File` API either a breaking change (because existing implementations would no longer match the new method) or a second-class, under-documented feature (because the new method would not be in the method set of the exported interface). That is a high cost. In exchange for that cost, we would get very little benefit. It is already possible today for any package to define and use an interface with an arbitrary subset of the methods provided by `*os.File`, and so it is already possible for programs to use “files in the abstract sense”, independent of the `os` package but including `*os.File`. <issue_comment>username_7: Would you agree with adding symbols to ioutil so that we can pass it filesystem implementations? <issue_comment>username_8: An alternative would be to split the implementation. Leave `os.File` in place, but also introduce an `io.File` interface that covers the methods generally associated with files. For example: ```go type File interface { Reader Writer Seeker ReaderAt WriterAt Syncer // Doesn't exist, but you never know. Maybe `Flusher` instead? 
Close } ``` This is analogous to the difference between `net.Conn` and the various specific connection types. Go 2 could then decide what direct usages of `*os.File`to replace with `io.File`. In this case, an extra `io.Directory` interface covering directory-specific `os.File` operations, such as `Readdir()`, might make sense as well, although that might be complicated by the reliance on `os.FileInfo`. <issue_comment>username_6: Yes to adding symbols, no to `ioutil`. (I think `ioutil` should be specific to OS files; see https://github.com/golang/go/issues/19660. For specific alternatives, see https://github.com/golang/go/issues/13473#issuecomment-379946014.) <issue_comment>username_1: `*os.File` currently has 20 methods. I don't think we want to introduce an interface with 20 or even 15 or 10 methods. And we don't want to go overkill on optional interfaces because experience with `http.ResponseWriter` and its optional interfaces show that wrappers too often hide methods that the wrapped value does implement. Or you have to have wrappers implement the world and define the each optional method to have a "Oh, nevermind, carry on as if I didn't implement this" return value. Also, changing `*os.File` to be an interface would be pretty infectious, for better or worse. ## Counterproposal If one of the primary motivations for this is to implement VFS packages that return files, the real problem is there's no way to instantiate arbitrary `*os.File` values that aren't backed by the filesystem. We could instead add constructor func(s) to return `*os.File` values given an interface value. For example, ```go // NewReaderAtFile returns an *os.File given the provided stat information and ReaderAt. // The file is assumed to be a regular file with size fi.Size(), backed by the data in contents. func NewReaderAtFile(fi FileInfo, contents io.ReaderAt) *os.File ``` You could imagine a few of those for various types (regular files, symlinks, directories). For misc methods like `File.Chdir` or `File.Truncate`, you could return an error by default, or document that the content interface value is checked for matching names & signatures. That might avoid the composition problem with ResponseWriters because it wouldn't be interfaces all the way down and people would pass around an `*os.File` with a concrete set of methods that could grow over time. <issue_comment>username_1: I addition, this counterproposal would have a nice solution to #13473 too. Instead of that bug's: ```go os.Stdout = bufio.NewWriter(os.Stdout) os.Stderr = bufio.NewWriter(os.Stderr) ``` You'd instead write something like: ```go os.Stdout = os.NewWritableFile(bufio.NewWriter(os.Stdout)) os.Stderr = os.NewWritableFile(bufio.NewWriter(os.Stderr)) ``` So the type of `os.Stdout` and `os.Stderr` would remain *os.File`, but it'd be easier to create custom implementations of `*os.File`. <issue_comment>username_6: @username_1, what would the `*os.File` returned by `NewReaderAtFile` do for things like the `Fd` method that really do rely on having something that the OS recognizes as a file? <issue_comment>username_1: Either return `-1` by default (in lieu of an error return value there) or delegate to the wrapped interface as described above. <issue_comment>username_6: I agree that that approach works around many of the maintainability problems of interfaces, but I still don't understand why it belongs in the `os` package (rather than, say, a subpackage in `io`). 
<issue_comment>username_9: Can we maybe start with making an overview of what is currently an interface and what is not? Example https://golang.org/src/os/types.go?s=369:411#L6 https://github.com/golang/go/blob/go1.11.2/src/net/http/fs.go#L93 Surely there is a perfect reason for why one is a struct and the other an interface. We only have one shot to make it so that everything gets to be in harmony again, so that developers can predict with almost 100% accuracy whether something is going to be a struct or an interface without looking at the code. I suggest we must first create a world map where we define interface or struct countries, but for me your proposal is already like a street name in an unknown country where you need to explain to the mailman to use the godoc stars to navigate his airplane. I think we are skipping a step here and need to zoom way out just to make sure we have this logic harmony now that we have the chance to finally fix the world map after 10 years. So basically what I am trying to argue is that if A and B are interfaces, C must be too. Not saying your solution is bad, I simply want a complete interface list that makes sense first. Thanks <issue_comment>username_0: I just want to be able to make them not be files at all. VFS is a different - important, but different - case.
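To illustrate username_6's point that consumers can already define their own file abstraction today, here is a small self-contained Go sketch; the interface and function names are made up for the example.

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// readSeekCloser is the subset of *os.File behaviour this consumer needs.
// Any implementation (an *os.File, an in-memory file, a VFS entry) will do.
type readSeekCloser interface {
	io.Reader
	io.Seeker
	io.Closer
}

// size reports the length of f by seeking to its end, then closes it.
func size(f readSeekCloser) (int64, error) {
	defer f.Close()
	return f.Seek(0, io.SeekEnd)
}

func main() {
	f, err := os.Open("/etc/hosts") // *os.File satisfies readSeekCloser
	if err != nil {
		fmt.Println(err)
		return
	}
	n, err := size(f)
	fmt.Println(n, err)
}
```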
<issue_start><issue_comment>Title: Get 'code' from iOS SDK username_0: According to the Web API 'Authorization Code Flow' I need a 'code' to be able to refresh my token. Is there any possibility to get the 'code' from the iOS SDK? <issue_comment>username_1: @username_0 what you want is like: ```objective-c NSURL *loginURL = [SPTAuth loginURLForClientId:kSpotifyClientId withRedirectURL:[NSURL URLWithString:kSpotifyCallbackURL] scopes:[self getAccessTokenScopes] responseType:@"code"]; ``` <issue_comment>username_0: I couldn't find the scope... but I tried with this code: ``` let loginURL = SPTAuth.loginURLForClientId(kClientID, withRedirectURL:NSURL(string:kCallbackURL), scopes: [SPTAuthSessionUserDefaultsKey] , responseType: "code") SPTRequest.requestItemAtURI(loginURL, withSession: session) { (error: NSError!, object: AnyObject!) in if error != nil { print("failed to get users playlists with error: \(error)") } print("code: \(object)") } ``` But I get this error: Undefined symbols for architecture arm64: "_SPTAuthSessionUserDefaultsKey", referenced from: Spotify.SpotifySession.getSpotifyCode () -> () in SpotifySession.o ld: symbol(s) not found for architecture arm64 clang: error: linker command failed with exit code 1 (use -v to see invocation) <issue_comment>username_0: I can't find that scope :https://developer.spotify.com/web-api/using-scopes/ In which header file is that located? <issue_comment>username_1: The docs is [here](https://developer.spotify.com/ios-sdk-docs/Documents/Classes/SPTAuth.html#//api/name/loginURLForClientId:withRedirectURL:scopes:responseType:). The one you were looking for was the web api, while this one is the iOS SDK. <issue_comment>username_0: You're correct, but the scopes in the iOS SDK are representing the ones for the Web API. The only scopes available in "SPTAuth.h" for iOS SDK are the following: ``` /** Scope that lets you stream music. */ FOUNDATION_EXPORT NSString * const SPTAuthStreamingScope; /** Scope that lets you read private playlists of the authenticated user. */ FOUNDATION_EXPORT NSString * const SPTAuthPlaylistReadPrivateScope; /** Scope that lets you modify public playlists of the authenticated user. */ FOUNDATION_EXPORT NSString * const SPTAuthPlaylistModifyPublicScope; /** Scope that lets you modify private playlists of the authenticated user. */ FOUNDATION_EXPORT NSString * const SPTAuthPlaylistModifyPrivateScope; /** Scope that lets you follow artists and users on behalf of the authenticated user. */ FOUNDATION_EXPORT NSString * const SPTAuthUserFollowModifyScope; /** Scope that lets you get a list of artists and users the authenticated user is following. */ FOUNDATION_EXPORT NSString * const SPTAuthUserFollowReadScope; /** Scope that lets you read user's Your Music library. */ FOUNDATION_EXPORT NSString * const SPTAuthUserLibraryReadScope; /** Scope that lets you modify user's Your Music library. */ FOUNDATION_EXPORT NSString * const SPTAuthUserLibraryModifyScope; /** Scope that lets you read the private user information of the authenticated user. */ FOUNDATION_EXPORT NSString * const SPTAuthUserReadPrivateScope; /** Scope that lets you get the birthdate of the authenticated user. */ FOUNDATION_EXPORT NSString * const SPTAuthUserReadBirthDateScope; /** Scope that lets you get the email address of the authenticated user. 
*/ FOUNDATION_EXPORT NSString * const SPTAuthUserReadEmailScope; FOUNDATION_EXPORT NSString * const SPTAuthSessionUserDefaultsKey; ``` <issue_comment>username_1: @username_0 wait, are you talking about access token scopes or the `code` response type in the OAuth flow? <issue_comment>username_0: I think I need to explain my problem in more detail... So, I'm currently using the SPTAuthViewController to enable login for the users. I don't want the login process in Safari, I want it within the application, as it currently is. The problem I have is refreshing my token thereafter. I have created an Amazon web-service Lambda function which, according to the Web API, takes a 'code'. So I don't want to create two URLs, which is the approach described in https://github.com/simontaen/SpotifyTokenSwap, but rather send in the 'code' which the iOS SDK is extracting (somehow), since this parameter is also needed in that GitHub approach. <issue_comment>username_2: Not sure I understand your problem, is it that you have to create two back-end endpoints? You could create one and differentiate on the input parameters or a query parameter, e.g. https://example.com/swapandrefresh?type=swap and https://example.com/swapandrefresh?type=refresh <issue_comment>username_2: Since there have been no further comments here I will close this issue. If this is still an issue please reopen.<issue_closed>
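For completeness, here is what the reporter's Swift call could look like when requesting an authorization code, mirroring username_1's Objective-C snippet. The client ID and callback constants are the reporter's own; the scope choice is illustrative, and note that `SPTAuthSessionUserDefaultsKey` is a storage key, not a scope.

```swift
// Build the authorization URL asking for a short-lived 'code' instead of a token.
let loginURL = SPTAuth.loginURLForClientId(
    kClientID,
    withRedirectURL: NSURL(string: kCallbackURL),
    scopes: [SPTAuthStreamingScope, SPTAuthUserReadPrivateScope],
    responseType: "code")
// Present loginURL (for example in SPTAuthViewController); the redirect back to
// kCallbackURL carries the code, which the back end exchanges and later refreshes.
```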
<issue_start><issue_comment>Title: OnPlayerGiveDamageActor - no publics called in main amx script username_0: I am not totally sure as to what is going on, so I'm submitting a bug in case you might know what causes this. I use the OnPlayerGiveDamageActor public in my amx script, and it works great, but only on the Windows build. The script is identical on Linux, yet it doesn't function there. I suspect it could be some issue inside sampgdk that I use (in streamer and mathplugin). I have locally updated mathplugin to the latest sampgdk 4.3, and I have also recompiled the streamer plugin on Linux with the same sampgdk; all works fine except on the Linux server - there OnPlayerGiveDamageActor is not called at all. My plugins are: MapAndreas.so TranslationPlugin.so socket.so mysql_static.so gvar.so KeyVal.so rwthread.so PlayerDataRecorder.so crashdetect.so streamer.so SampUtils.so QIterator.so nativechecker.so To my knowledge only samputils (math plugin) and streamer use sampgdk. So I'm wondering if this could be caused by sampgdk; I can run various tests as needed. <issue_comment>username_1: Indeed this might be a bug in sampgdk, I'll look into it on my Linux system. Meanwhile, can you see if OnPlayerGiveDamageActor is broken in other scripts or if it's just your particular gamemode? <issue_comment>username_0: OnPlayerGiveDamageActor is one of the last publics added in 0.3.7 - that might be related. I will test OnPlayerGiveDamageActor with and without plugins in a blank gamemode. <issue_comment>username_1: BTW you can also enable debug logging in sampgdk by setting the `SAMPGDK_LOG` variable: `SAMPGDK_LOG=+id ./samp03svr` and check what happens when `OnPlayerGiveDamageActor` gets called. If there is no such callback in the output it must be a bug. <issue_comment>username_0: It turned out this was a SA-MP issue in the end. Apparently SetActorInvulnerable is needed for the callback to be called on the Linux version (as it says in the wiki documentation), but it is called regardless on the Windows version. It behaves differently on Linux, basically, and I suspected sampgdk by mistake. On the good side, I have upgraded the math plugin to the latest sampgdk. I will close the bug report and report this to kalcor.<issue_closed>
<issue_start><issue_comment>Title: [WIP] RPC: Indicate which transactions are signaling opt-in RBF username_0: I'm not sure how users of bitcoind are expected to figure out which transactions could be replaced via opt-in RBF, so I took a first stab at exposing that via RPC. I've started by adding it to `getrawmempool` when called with the verbose=true argument. I've also exercised that code with some small modifications to the `replace-by-fee.py` rpc test. I'd appreciate some specific feedback on the following: - Which RPC calls should return this information? - Would it be helpful to distinguish between a transaction signaling RBF itself (with `nSequence < 0xfffffffe` on one of its inputs), versus inheriting the signal from some unconfirmed parent? - The answer can only be definitive for transactions that are in the mempool; transactions that are not in the mempool might have unknown parents which could signal RBF and we wouldn't know it. Is it worth trying to answer this question for transactions not in the mempool, and if so, what information should be returned? <issue_comment>username_1: I don't see a use-case for getting the RBF signal for confirmed transactions (maybe for potential reorgs risk calculations?). But I agree, without `-txindex`, parents RBF signal is probably unknown. <issue_comment>username_2: I don't see any use case for exposing this information. <issue_comment>username_3: NACK Very high risk of naive users attempting to use this to do risk mitigation and accepting zeroconf txs in cases where they're risking losses; if you care about whether a tx can be replaced you're in a position where you should be using specialist services. Note how easy it is to send replacable in practice txs by just having an ancestor tx have a low fee. <issue_comment>username_1: @username_3: for wallet users (`listtransaction`, `gettransaction`): I think there is a use case for self created wtxs where a user likes to know whether he did enable RBF or not. <issue_comment>username_3: True, but for those users knowing if a tx is RBF is already trivial - just check nSequence. Equally, we haven't even merged my obvious set RBF flag option, so what's the usecase? <issue_comment>username_1: IMO this is basically what this PR does. <issue_comment>username_3: No, this pull-req checks ancestors too, a critical difference thats applicable to "risk-scoring" BS but has little applicibility to your own transactions. <issue_comment>username_0: @username_3 I don't agree, but I can understand your concern about offering a feature that could mislead users. So rather than change existing RPC calls as I initially proposed, what about just adding two new RPC calls, one that would return the txid's of unconfirmed parents of a transaction and one that would return txid's of in-mempool descendants? That seems to me to be more broadly useful and basically addresses my underlying concern, that users who want to do calculations relating to whether a transaction might be replaced would have to reinvent the wheel to efficiently trace mempool chains. <issue_comment>username_0: I should add -- I think exposing mempool ancestors/descendants is also useful for code that would seek to use RBF efficiently (I'm not sure how else you'd figure out what fee to use, or whether you'll get rejected because you're replacing too many transactions, etc). <issue_comment>username_3: Is efficiency in that case actually a concern? 
I'd rather see that version proposed in conjunction with code that actually needs it, and benchmarks showing how badly it's needed. <issue_comment>username_0: How would an application even calculate what fee to use to rbf two transactions down to one? The only think I can think of would be to call getrawmempool and recursively search for transactions which depend on the given transactions. Dumping a 300mb mempool over rpc sounds like a losing proposition to begin with, never mind reimplementing the logic bitcoind already has for iterating. Is there a simpler approach I'm missing? <issue_comment>username_4: Concept ACK. The arguments against making this (or any basic tx attribute information) available to a user who is explicitly looking for it are beyond unconvincing. As users, we want to see what we just did. And we want to see details about the tx that just arrived that pays us. RPC's to return ancestors and descendants would be extremely useful too. Re: throwing a runtime error in ``IsRBFOptIn(...)``, maybe make it three-valued (yes, no, unknown)? Caller will probably end up doing that anyway. <issue_comment>username_5: needs rebase. <issue_comment>username_0: Closing in favor of #7292. <issue_comment>username_6: Pardon my tardy response. With due respect to the prior arguments, they're an argument against showing unconfirmed transactions at all by default. Not against this. An indicator for BIP125 (in particular, in listtransactions) would be both labor saving for users, and also make it more likely that parties looking at it get it right (and, for example, don't consider confirmed transactions to be BIP125 mempool replaceable). Moreover, it was always my expectation that it would work this way-- and this small affordance is an aspect of how I explained BIP125 to others without getting any protest so I expect many also expected this. Importantly, while many people do objectively confused things with unconfirmed transactions, the elegance of BIP125 was that it avoided the difficult task of educating people which so far many have failed with; by providing something that allowed their activities to continue without conflict. But it doesn't do so if getting access to the information is too burdensome. Because of this I consider this a release blocker. I was concerned that the patch for this would be complex, but it turns out to be quite clean... which I think makes this a no-brainer. <issue_comment>username_7: Needs rebase <issue_comment>username_0: Rebased and updated to extend this information to `listtransactions` and `gettransaction`. I also updated the `listtransactions.py` rpc test to exercise this code. This appears to merge cleanly into 0.12 as well. <issue_comment>username_0: On further thought I've removed the opt-in RBF signaling information in `getrawmempool`. I timed that RPC call -- without this change -- on a node with a large mempool and it took over 6seconds. Potentially multiplying that by the average chain length (limit of 25 by default) seems like a bad performance hit to suddenly introduce. I had an issue with the build not working after pulling the code out of `rpcblockchain.cpp`; the second commit which moves `policy/rbf.cpp` to libbitcoin_wallet seems to be a workaround. I imagine there may be a better way of fixing the build, but leaving it here for now to make travis happy and so the rest of this pull can be reviewed. We can replace or squash the second commit as appropriate... 
@cfields Thanks for looking at this; let me know if you figure out a better way to make everything work. <issue_comment>username_0: Sorry for the churn -- changed the name of the field in the RPC output to be "bip125-replaceable", and fixed the handling of confirmed transactions. <issue_comment>username_8: Needs rebase <issue_comment>username_0: Rebased (trivial conflict with #7164 in `listtransactions.py`). Also squashed down to one commit, and verified this still merges cleanly into 0.12. <issue_comment>username_7: utACK eaa8d27
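For readers wondering what the end result looks like, below is a hedged illustration of the new field as it might appear in `listtransactions` output; the surrounding fields are trimmed and the values are made up, and only the `bip125-replaceable` key (reporting "yes", "no", or "unknown", per the discussion above) is the point.

```
$ bitcoin-cli listtransactions
[
  {
    "category": "send",
    "amount": -0.01000000,
    "confirmations": 0,
    "bip125-replaceable": "yes"
  }
]
```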
<issue_start><issue_comment>Title: add --subroutinize/--no-subroutinize options username_0: This PR depends on https://github.com/googlei18n/ufo2ft/pull/61 If that's merged, the fontmake default behavior would remain the same as the current one: i.e., always run compreffor if a CFF table is present. But in addition, fontmake could gracefully skip running compreffor when the latter is not installed (it's still a bit tricky to compile compreffor on Windows, but I'm working on providing pre-compiled wheels for that). On the other hand, if the '-s' ('--subroutinize') option is explicitly provided, then ufo2ft raises ImportError when compreffor is not installed. Finally, when the '-S' ('--no-subroutinize') option is provided, fontmake tells ufo2ft to skip compreffor altogether, whether it's installed or not. Note that in makeotf subroutinization is optional, not the default. I'm ok with leaving fontmake's current default -- as long as it doesn't force you to install a 10GB Visual Studio and monkeypatch setuptools... <issue_comment>username_1: Or install a Linux VM? :smile: <issue_comment>username_0: Yeah I know ;) <issue_comment>username_2: I don't like this. It results in hard-to-debug situations where the same fontmake command can result in vastly different fonts. What do you think about changing the default to be subroutinize? <issue_comment>username_2: (imagine if we did the same for removeOverlaps...) <issue_comment>username_3: I was about to suggest the same (this is also true for the new ufo2ft default). <issue_comment>username_0: Ok, let's make --subroutinize True by default. I'll make wheels for compreffor so one doesn't need to compile it from source.
<issue_start><issue_comment>Title: (dbg, c, windows) failure: tests.vsprojects/Debug/h2_proxy_nosec_test.exe disappearing_server username_0: ``` E0311 14:04:14.080000000 6844 disappearing_server.c:174] assertion failed: status == GRPC_STATUS_UNIMPLEMENTED D0311 14:04:14.081000000 6844 test_config.c:73] Abort handler called. ``` https://grpc-testing.appspot.com/job/gRPC_master/15723/config=dbg,language=c,platform=windows/testReport/junit/(root)/tests/vsprojects_Debug_h2_proxy_nosec_test_exe_disappearing_server/ <issue_comment>username_0: Same for h2_full_test https://grpc-testing.appspot.com/job/gRPC_portability_master/916/language=c,scenario=windows_x86_vs2013/testReport/junit/(root)/tests/vsprojects_Debug_h2_full_test_exe_disappearing_server/ <issue_comment>username_1: Seen again at https://grpc-testing.appspot.com/job/gRPC_master/15931/config=opt,language=c,platform=windows/testReport/junit/(root)/tests/vsprojects_Release_h2_proxy_nosec_test_exe_disappearing_server/<issue_closed>
<issue_start><issue_comment>Title: NPE when getting the commit change log in Git username_0: In version 2.17.4:
```
java.lang.NullPointerException
    at org.eclipse.jgit.lib.ObjectIdOwnerMap.get(ObjectIdOwnerMap.java:131)
    at org.eclipse.jgit.revwalk.RevWalk.parseAny(RevWalk.java:837)
    at org.eclipse.jgit.revwalk.RevWalk.parseCommit(RevWalk.java:752)
    at net.nemerosa.ontrack.git.support.GitRepositoryClientImpl.range(GitRepositoryClientImpl.java:477)
    at net.nemerosa.ontrack.git.support.GitRepositoryClientImpl.graph(GitRepositoryClientImpl.java:177)
    at net.nemerosa.ontrack.extension.git.service.GitServiceImpl.getChangeLogCommits(GitServiceImpl.java:232)
```
<issue_comment>username_0: The repository declared in the project properties was not synched (indexation = 0). Therefore, the commits were not available at all. Consider forcing the sync upon change log computation (when fetching the commits).<issue_closed>
<issue_comment>username_0: Do not force the sync, but return a meaningful message:
* if the sync is currently running
* if the sync has never been done
<issue_comment>username_0: Linked to #377 <issue_comment>username_0: Fixed by #377<issue_closed>
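A minimal sketch of the guard described above, with hypothetical method and exception names (this is not Ontrack's actual code):

```java
// Before walking the Git log, fail with a clear message instead of an NPE when
// the repository has never been indexed or is still being indexed.
public GitChangeLog getChangeLog(GitConfiguration config) {
    SyncStatus sync = syncService.getStatus(config); // hypothetical lookup
    if (sync.isRunning()) {
        throw new GitRepositorySyncRunningException(config.getName());
    }
    if (sync.getIndexationCount() == 0) {
        throw new GitRepositoryNotSyncedException(config.getName());
    }
    return computeChangeLog(config);
}
```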
<issue_start><issue_comment>Title: Demo from README not working on astrolab machines username_0: ``` We're using: Py 3.4.3 on Scientific Linux. <issue_comment>username_1: Thanks! Should be fixed in https://github.com/username_1/multiband_LS/commit/7733e54d93a4c20346384301d59b190a77b72267 It is due to an API change in the most recent ``gatspy`` release – we no longer set the optimizer range by default, because some thought must go into choosing the right range for each specific problem.<issue_closed>
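Following that change, callers need to set the optimizer's period range themselves before fitting. Below is a small sketch of what that looks like with gatspy's multiband Lomb-Scargle model; the range and the data arrays are placeholders, not values from this issue.

```python
from gatspy import periodic

model = periodic.LombScargleMultiband(fit_period=True)
# The optimizer range is no longer set by default, so pick one that brackets
# the periods plausible for your data.
model.optimizer.period_range = (0.2, 1.3)
model.fit(t, y, dy, filts)  # t, y, dy, filts: your observation arrays
print(model.best_period)
```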
<issue_start><issue_comment>Title: uninitialized constant User::SharedAlbum (NameError) username_0: I want to create admin dashboard which showing me this error. I Configured devise in my App with 2 modules [admins, users] I think that makes some issues but not sure. ``` rails g administrate:dashboard User /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/activerecord-4.2.0/lib/active_record/inheritance.rb:158:in `compute_type': uninitialized constant User::SharedAlbum (NameError) from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/activerecord-4.2.0/lib/active_record/reflection.rb:271:in `compute_class' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/activerecord-4.2.0/lib/active_record/reflection.rb:267:in `klass' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/activerecord-4.2.0/lib/active_record/reflection.rb:661:in `source_reflection' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/activerecord-4.2.0/lib/active_record/reflection.rb:871:in `derive_class_name' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/activerecord-4.2.0/lib/active_record/reflection.rb:147:in `class_name' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/administrate-0.0.12/lib/generators/administrate/dashboard/dashboard_generator.rb:85:in `relationship_options_string' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/administrate-0.0.12/lib/generators/administrate/dashboard/dashboard_generator.rb:72:in `association_type' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/administrate-0.0.12/lib/generators/administrate/dashboard/dashboard_generator.rb:48:in `redundant_attributes_for' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/administrate-0.0.12/lib/generators/administrate/dashboard/dashboard_generator.rb:43:in `block in redundant_attributes' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/administrate-0.0.12/lib/generators/administrate/dashboard/dashboard_generator.rb:42:in `each' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/administrate-0.0.12/lib/generators/administrate/dashboard/dashboard_generator.rb:42:in `flat_map' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/administrate-0.0.12/lib/generators/administrate/dashboard/dashboard_generator.rb:42:in `redundant_attributes' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/administrate-0.0.12/lib/generators/administrate/dashboard/dashboard_generator.rb:38:in `attributes' from (erb):17:in `template' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/2.1.0/erb.rb:850:in `eval' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/2.1.0/erb.rb:850:in `result' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/thor-0.19.1/lib/thor/actions/file_manipulation.rb:116:in `block in template' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/thor-0.19.1/lib/thor/actions/create_file.rb:53:in `call' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/thor-0.19.1/lib/thor/actions/create_file.rb:53:in `render' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/thor-0.19.1/lib/thor/actions/create_file.rb:46:in `identical?' 
from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/thor-0.19.1/lib/thor/actions/create_file.rb:72:in `on_conflict_behavior' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/thor-0.19.1/lib/thor/actions/empty_directory.rb:113:in `invoke_with_conflict_check' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/thor-0.19.1/lib/thor/actions/create_file.rb:60:in `invoke!' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/thor-0.19.1/lib/thor/actions.rb:94:in `action' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/thor-0.19.1/lib/thor/actions/create_file.rb:25:in `create_file' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/thor-0.19.1/lib/thor/actions/file_manipulation.rb:115:in `template' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/railties-4.2.0/lib/rails/generators/named_base.rb:26:in `block in template' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/railties-4.2.0/lib/rails/generators/named_base.rb:60:in `inside_template' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/railties-4.2.0/lib/rails/generators/named_base.rb:25:in `template' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/administrate-0.0.12/lib/generators/administrate/dashboard/dashboard_generator.rb:25:in `create_dashboard_definition' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/thor-0.19.1/lib/thor/command.rb:27:in `run' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/thor-0.19.1/lib/thor/invocation.rb:126:in `invoke_command' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/thor-0.19.1/lib/thor/invocation.rb:133:in `block in invoke_all' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/thor-0.19.1/lib/thor/invocation.rb:133:in `each' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/thor-0.19.1/lib/thor/invocation.rb:133:in `map' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/thor-0.19.1/lib/thor/invocation.rb:133:in `invoke_all' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/thor-0.19.1/lib/thor/group.rb:232:in `dispatch' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/thor-0.19.1/lib/thor/base.rb:440:in `start' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/railties-4.2.0/lib/rails/generators.rb:157:in `invoke' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/railties-4.2.0/lib/rails/commands/generate.rb:13:in `<top (required)>' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/activesupport-4.2.0/lib/active_support/dependencies.rb:274:in `require' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/activesupport-4.2.0/lib/active_support/dependencies.rb:274:in `block in require' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/activesupport-4.2.0/lib/active_support/dependencies.rb:240:in `load_dependency' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/activesupport-4.2.0/lib/active_support/dependencies.rb:274:in `require' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/railties-4.2.0/lib/rails/commands/commands_tasks.rb:123:in `require_command!' 
from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/railties-4.2.0/lib/rails/commands/commands_tasks.rb:130:in `generate_or_destroy' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/railties-4.2.0/lib/rails/commands/commands_tasks.rb:50:in `generate' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/railties-4.2.0/lib/rails/commands/commands_tasks.rb:39:in `run_command!' from /home/fahidnasir/.rbenv/versions/2.1.5/lib/ruby/gems/2.1.0/gems/railties-4.2.0/lib/rails/commands.rb:17:in `<top (required)>' from bin/rails:4:in `require' from bin/rails:4:in `<main>' ```<issue_closed>
<issue_comment>username_0: it was fixed by refactoring the model's relationships. Apologies for that.<issue_closed> <issue_comment>username_1: @username_0 no problem!
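For anyone hitting the same NameError: the trace shows Active Record deriving a `SharedAlbum` class from an association name it cannot resolve, and the usual refactor is to point the association at a real class (or a real `:through` source) explicitly. The models below are hypothetical, since the actual relationships are not shown in this issue.

```ruby
class AlbumShare < ActiveRecord::Base
  belongs_to :user
  belongs_to :album
end

class User < ActiveRecord::Base
  has_many :album_shares
  # Naming the source explicitly stops Active Record from trying to load a
  # non-existent SharedAlbum constant derived from the association name.
  has_many :shared_albums, through: :album_shares, source: :album
end
```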
<issue_start><issue_comment>Title: tsc.exe : Unsuported local username_0: When building in VS 2015 Update 3 (14.0.25422.01) I get this error: error TS6049:Build:Unsupported locale 'sv-SE'. (error MSB6006: "tsc.exe" exited with code 1.) 1. I did not specify this local 2. I don't want this local. 3. Stop localizing this kind of stuff since no one wants it, and you obviously can't get it to work. **Build Output** ``` 3>Task "VsTsc" 3> C:\Program Files (x86)\Microsoft SDKs\TypeScript\1.8\tsc.exe --module CommonJS --sourcemap --target ES5 --noEmitOnError --locale sv-SE "file1.ts" "file2.ts" ... 3> C:\Code\Pri\Elitlamm\Main\Source\ElitlammWebMobileApplication\error TS6049:Build:Unsupported locale 'sv-SE'. 3>C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v14.0\TypeScript\Microsoft.TypeScript.targets(242,5): error MSB6006: "tsc.exe" exited with code 1. 3>Done executing task "VsTsc" -- FAILED. ``` **Visual Studio Version info** Microsoft Visual Studio Professional 2015 Version 14.0.25422.01 Update 3 Microsoft .NET Framework Version 4.6.01038 Installed Version: Professional LightSwitch for Visual Studio 2015 00322-40000-00000-AA915 Microsoft LightSwitch for Visual Studio 2015 Microsoft Visual Studio Tools for Applications 2015 00322-40000-00000-AA915 Microsoft Visual Studio Tools for Applications 2015 Windows Phone SDK 8.0 - ENU 00322-40000-00000-AA915 Windows Phone SDK 8.0 - ENU Visual Basic 2015 00322-40000-00000-AA915 Microsoft Visual Basic 2015 Visual C# 2015 00322-40000-00000-AA915 Microsoft Visual C# 2015 Visual C++ 2015 00322-40000-00000-AA915 Microsoft Visual C++ 2015 Visual F# 2015 00322-40000-00000-AA915 Microsoft Visual F# 2015 Add New File 3.4 The fastest and easiest way to add new files to any project - including files that start with a dot ASP.NET and Web Tools 2015 (RC1 Update 1) 14.1.20203.0 ASP.NET and Web Tools 2015 (RC1 Update 1) ASP.NET Web Frameworks and Tools 2012.2 4.1.41102.0 For additional information, visit http://go.microsoft.com/fwlink/?LinkID=309563 ASP.NET Web Frameworks and Tools 2013 5.2.40314.0 For additional information, visit http://www.asp.net/ Azure App Service Tools v2.9 14.0.20316.0 Azure App Service Tools v2.9 Azure Data Lake Node 1.0 This package contains the Data Lake integration nodes for Server Explorer. Azure Data Lake Tools for Visual Studio 2.0.6000.0 Microsoft Azure Data Lake Tools for Visual Studio Bundler & Minifier 2.1.258 Adds support for bundling and minifying JavaScript, CSS and HTML files in any project. Clang with Microsoft CodeGen 14.0.25317 Clang with Microsoft CodeGen Common Azure Tools 1.8 Provides common services for use by Azure Mobile Services and Microsoft Azure Tools. DataFactoryProject 1.0 Microsoft Data Factory Package [Truncated] An editor extension that checks the spelling of comments, strings, and plain text as you type or interactively with tool windows. https://GitHub.com/EWSoftware/VSSpellChecker Visual Studio Tools for Apache Cordova Update 10 Visual Studio Tools for Apache Cordova Visual Studio Tools for Universal Windows Apps 14.0.25420.01 The Visual Studio Tools for Universal Windows apps allow you to build a single universal app experience that can reach every device running Windows 10: phone, tablet, PC, and more. It includes the Microsoft Windows 10 Software Development Kit. Workflow Manager Tools 1.0 1.0 This package contains the necessary Visual Studio integration components for Workflow Manager. Xamarin 4.1.0.530 (2e39740) Visual Studio extension to enable development for Xamarin.iOS and Xamarin.Android. 
Xamarin.Android 6.1.0.71 (4e27558) Visual Studio extension to enable development for Xamarin.Android. Xamarin.iOS 9.8.0.323 (39ebb77) Visual Studio extension to enable development for Xamarin.iOS. <issue_comment>username_0: Temporary workaround Modify the file `C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v14.0\TypeScript\Microsoft.TypeScript.targets` by changing line 92 from this `<PreferredUILang Condition="'$(BuildingInsideVisualStudio)' == 'true' and '$(PreferredUILang)' == ''">$([System.Globalization.CultureInfo]::CurrentUICulture.Name)</PreferredUILang>` to this `<PreferredUILang Condition="'$(BuildingInsideVisualStudio)' == 'true' and '$(PreferredUILang)' == ''"></PreferredUILang>` <issue_comment>username_1: For what it is worth it, I had the same issue with VS2015 Update 2, running on an English Windows 10 with NL language installed for typing. <issue_comment>username_2: this is the same as https://github.com/Microsoft/TypeScript/issues/8130. the issue has been fixed in TypeScript 2.0 and later.<issue_closed>
<issue_start><issue_comment>Title: cli: Support editing app metadata username_0: I should be able to manage app metadata via the CLI, for example:
```
$ flynn meta
KEY  VAL
foo  bar
$ flynn meta set foo=baz bar=qux
$ flynn meta
KEY  VAL
foo  baz
bar  qux
$ flynn meta unset foo
$ flynn meta
KEY  VAL
bar  qux
```
This will help with arbitrary tagging of resources (#1313).
<issue_comment>username_1: I'm picking this ticket up, since we need the result and it seems easy enough to accomplish. Ping here if you need to know status.
<issue_comment>username_2: Implemented in #1618.<issue_closed>
<issue_start><issue_comment>Title: add module list modulename to list commands in a module username_0: How about adding this feature for module commands: `module list helloworld`
```
127.0.0.1:6379> module list helloworld
 1) "hello.push.call"
 2) "hello.zsumrange"
 3) "hello.push.call2"
 4) "hello.list.splice.auto"
 5) "hello.more.expire"
 6) "hello.simple"
 7) "hello.toggle.case"
 8) "hello.hcopy"
 9) "hello.push.native"
10) "hello.list.sum.len"
11) "hello.rand.array"
12) "hello.lexrange"
13) "hello.repl1"
14) "hello.repl2"
15) "hello.list.splice"
127.0.0.1:6379> module list helloworld2
(error) ERR no such module with that name
```
<issue_comment>username_0: ping @antirez
<issue_comment>username_1: Thanks, this looks useful. @username_3?
<issue_comment>username_2: I'm down to accept this.
<issue_comment>username_3: It may be useful, even if just for introspection. If we do that though, it should not be an additional argument to `MODULE LIST`, but rather a new sub-command for MODULE, like `MODULE COMMANDS <module>` (it's not a great idea to add an optional argument that completely changes a command's purpose, like we do with `COMMAND` with no arguments vs `COMMAND INFO <cmd>`). Anyway, maybe such a feature actually falls into the role of the COMMAND command. And coincidentally we do have an intention to soon extend the COMMAND command quite a lot. It's probably gonna involve deprecating the plain use of COMMAND with no arguments (it will keep producing the old format), and adding a COMMAND LIST sub-command that will include a lot of metadata on each command (including everything that's now in redis.io's commands.json and help.h), so that metadata can easily include the module name. If we want a separate sub-command that generates a list of commands per topic, maybe that COMMAND LIST should have a filter argument, or maybe we want a `COMMAND MODULE <module>` sub-command? @username_5 FYI.
<issue_comment>username_4: vote for `MODULE COMMANDS <module>`
<issue_comment>username_3: @username_5 please see if there's a nicer way to add some category filtering to COMMANDS LIST in #9504; if not, let's change this one to `MODULE COMMANDS <module>` and merge it.
<issue_comment>username_5: @username_3 what about pattern matching in `COMMANDS INFO`? examples:
```
COMMANDS INFO hello.*
COMMANDS INFO X*
COMMANDS INFO *
```
it might be a bit slower but we can have a designated path for the common case `*` so that it'll return the entire list
<issue_comment>username_5: or
```
COMMANDS LIST [FILTERBY (MODULE <module-name>|ACLCAT <cat>|GROUP <group>|PATTERN <pattern>)]
```
<issue_comment>username_3: ack! I vote for the last one. see https://github.com/redis/redis/pull/9504#discussion_r710232677
<issue_start><issue_comment>Title: Update auto_save.py username_0: Prevent call to save if auto complete is visible. Note This change causes the save not to get called when auto complete popup is visible. So any file changes made while auto complete is visible is not saved. This is a solution for the issue discussed in [https://github.com/username_1/auto-save/issues/21] and [https://github.com/username_1/auto-save/issues/7] <issue_comment>username_1: @username_0 Thanks for the PR. I've been following along the conversations. Would this change be breaking for people without build 3103? <issue_comment>username_0: Also do mind this is not a good solution. As I stated in issue #21, this change causes the save not to get called when auto complete popup is visible. So any file changes made while auto complete is visible is not saved. Therefore to mitigate that, if possible in the plugin listen to auto complete close event (if there is something like that) and then at the event check if file has changes to be saved. If so call save. But overall it seems like all this is unnecessary complexity as @scholer has mentioned in issue #21. @username_1 this is a must-have plugin for Sublime Text. Maybe you can try to contact Sublime devs to get an insight to this annoying problem? For example there might be a better way to save file contents without using "save" command which is the reason to hide the auto complete in the first place. <issue_comment>username_1: @username_0 I don't think this can be merged as-is since it'd be breaking for people without build 3103. I don't know any of the Sublime devs or what the best way is to contact them. <issue_comment>username_0: What about their technical support forum [https://forum.sublimetext.com/c/technical-support]? <issue_comment>username_2: Hey @username_0 , I was also concerned with save being called when the autocomplete window was open, so I added some functionality to auto-save that James merged in the [previous pull request](https://github.com/username_1/auto-save/pull/26). Calling save on the current file can cause other problems as well, like deleting tabs before you start writing code in an indented block if you have `trim_trailing_white_space_on_save` enabled in your ST prefs. My approach to the problem was to add a mode so that __auto-save__ saves to a backup file in the same directory, as opposed to saving to the currently open file. The comments in the pull request explain it better than I've explained it here. <issue_comment>username_0: Hi @username_2 correct me if I'm wrong, but it seems like your change does not update the working file. The issue is some users want to see the update on their working file. (e.g. sass watching, web development .etc) <issue_comment>username_2: Hey @username_0 , that's correct, if you're working on `<file>.html`, then this mode will auto-save to `<file>.autosave.html`. If you're using something like `gulp-server-livereload`, you would just watch `<file>.autosave.html` instead of `<file>.html`. This is how I use auto-save for getting a live view of HTML and Markdown files that I edit. I [wrote a post](https://github.com/username_2/posts/blob/master/Code/Sublime%20Text/auto-save.md) about this that explains it in more detail... <issue_comment>username_3: I added a [pull request](https://github.com/username_1/auto-save/pull/37), that fixes the problem of the file not being saved after the pop-up closes by delaying the save repeatedly untill it closes.
<issue_start><issue_comment>Title: 🛑 donate.memo.ru is down username_0: In [`cff80fa`](https://github.com/username_0/uptime/commit/cff80fa912107245f606cb8f3e83cc1ce86b362d ), donate.memo.ru (https://donate.memo.ru) was **down**: - HTTP code: 403 - Response time: 324 ms <issue_comment>username_0: **Resolved:** donate.memo.ru is back up in [`b391c01`](https://github.com/username_0/uptime/commit/b391c018a0802369b6a0c20ad3a3c886dbd5bef8 ).<issue_closed>
<issue_start><issue_comment>Title: Fix rounding problem with 90 deg rotation username_0: Rounding errors seem to occur randomly, hence hard to test against. <issue_comment>username_0: Small fix, is already in library [in some other place](https://github.com/craftyjs/Crafty/blob/ed3c58bc899716a762cbb89f29e3ee919a077522/src/spatial/2d.js#L361-L363), merging.
<issue_start><issue_comment>Title: Fix for Issue #974. Double clicking on Mocha tests opens external editor. username_0: Issue #974 ##### Bug Double clicking in Mocha test causes JavaScript file to open in external editor. ##### Fix Jumping to line: 0, column: 0 seems to cause this. Changed default to line: 1, column: 1. <issue_comment>username_1: Thanks for reporting this issue and finding a fix so quickly. The change looks good overall. One nitpicky question, should it be `line 1, col 0` instead `line 1, col 1`? I don't think the current solution will cause any problem in most cases, but make sure to test opening a blank file this way to see if that does the correct thing as well. <issue_comment>username_0: Good question! It looks like 'NodejsTestInfo.SourceColumn' where it gets stored isn't actually used anywhere. Visual Studio uses 1 based for lines and columns. The v8 debugger and TypeScript package on NPM uses 0 based for both. I guess we can assume it should be consistent. Maybe code on the JavaScript side should work with 0 based lines/columns and only code on the C#/Visual Studio side should work with 1 based? Otherwise I can see this issue creeping in again and again when new frameworks are added. <issue_comment>username_0: This PR is obsolete if you choose to accept PR #982 instead (which would be my preference). <issue_comment>username_1: Merged in #982
<issue_start><issue_comment>Title: Drop special handling of bookmarks username_0: Bookmarks are now called "favorites", and can be accessed like other items, too. Therefore, they can be configured as "starting points" like any other collection. Except for some peculiarities, like the `members` field, and `collection:fav-user` queries only returning non-collection items. These have to be figured out, still.<issue_closed>
<issue_start><issue_comment>Title: 🛑 La Especial is down username_0: In [`c3aed6f`](https://github.com/erpcya/status/commit/c3aed6f0b02ee63be8429d1cb02137e9bd05a10b ), La Especial ($URL_16) was **down**: - HTTP code: 0 - Response time: 0 ms <issue_comment>username_0: **Resolved:** La Especial is back up in [`645e290`](https://github.com/erpcya/status/commit/645e2906ac79f10e0bc6e6a30f018fe9e1e1efcf ).<issue_closed>
<issue_start><issue_comment>Title: ActiveAdmin Filters For UUID username_0: I have implemented UUID functionality on the user model using the link below: http://rny.io/rails/postgresql/2013/07/27/use-uuids-in-rails-4-with-postgresql.html

Now my current schema looks like this:
```ruby
create_table "users", force: :cascade do |t|
  t.string "email"
  t.string "first_name"
  t.string "last_name"
  t.uuid "user_uid", default: "uuid_generate_v4()"
end
```
I tried to use the below two lines to add a filter on the ActiveAdmin user index page:
```ruby
filter :user_uid, as: :uuid, label: "USER UUID"
filter :user_uid, as: :string, label: "USER UUID"
```
But I am getting the following error:
```
Started GET "/admin/users?utf8=%E2%9C%93&q%5Buser_uid_cont%5D=b216d8af-fbd5-401f-886e-3615b8f57cee&commit=Filter&order=id_desc" for 127.0.0.1 at 2016-04-04 23:08:03 +0530
Processing by Admin::UsersController#index as HTML
  Parameters: {"utf8"=>"✓", "q"=>{"user_uid_cont"=>"b216d8af-fbd5-401f-886e-3615b8f57cee"}, "commit"=>"Filter", "order"=>"id_desc"}
  AdminUser Load (0.3ms) SELECT "admin_users".* FROM "admin_users" WHERE "admin_users"."id" = $1 ORDER BY "admin_users"."id" ASC LIMIT 1 [["id", 1]]
  (1.0ms) SELECT COUNT(count_column) FROM (SELECT 1 AS count_column FROM "users" WHERE ("users"."user_uid" ILIKE NULL) LIMIT 30 OFFSET 0) subquery_for_count
PG::UndefinedFunction: ERROR: operator does not exist: uuid ~~* unknown
LINE 1: ...unt_column FROM "users" WHERE ("users"."user_uid" ILIKE NULL...
                                                             ^
HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.
: SELECT COUNT(count_column) FROM (SELECT 1 AS count_column FROM "users" WHERE ("users"."user_uid" ILIKE NULL) LIMIT 30 OFFSET 0) subquery_for_count
  Rendered /home/chetuiwk1457/.rvm/gems/ruby-2.2.3@amburcloud/bundler/gems/activeadmin-81537db36107/app/views/active_admin/resource/index.html.arb (165.8ms)
Completed 500 Internal Server Error in 176ms (ActiveRecord: 3.2ms)
```
## Any suggestions for using ActiveAdmin filters with UUID?
<issue_comment>username_1: That's not surprising -- Postgres doesn't implement `ILIKE` for UUID data. You can get around this by specifying that you want an equality filter instead of the default text filter:
```ruby
filter :user_uid_eq
```
<issue_comment>username_0: This fixed my issue, I am going to close this ticket. Thank you...!<issue_closed>
<issue_comment>username_2: I ran into errors with this solution on Rails `5` and ActiveAdmin `1.0.0.pre4`, but using this filter worked for me: `filter :id_eq, label: "ID"`
<issue_start><issue_comment>Title: Improve notification of missing TMX string username_0: <!-- Reviewable:start --> This change is [<img src="https://reviewable.io/review_button.svg" height="34" align="absmiddle" alt="Reviewable"/>](https://reviewable.io/reviews/bloombooks/bloomdesktop/1172) <!-- Reviewable:end --> <issue_comment>username_1: Not really an error, per se. --- *Comments from [Reviewable](https://reviewable.io:443/reviews/bloombooks/bloomdesktop/1172)* <!-- Sent from Reviewable.io --> <issue_comment>username_0: </details> Hmm, to me it is. But I chose that message just for its effect, to get the developer's attention. --- *Comments from [Reviewable](https://reviewable.io:443/reviews/bloombooks/bloomdesktop/1172)* <!-- Sent from Reviewable.io --> <issue_comment>username_1: </details> Yeah, that's what I figured (you wanted to get the dev's attention). --- *Comments from [Reviewable](https://reviewable.io:443/reviews/bloombooks/bloomdesktop/1172)* <!-- Sent from Reviewable.io --> <issue_comment>username_1: <img class="emoji" title=":lgtm:" alt=":lgtm:" align="absmiddle" src="https://reviewable.io/lgtm.png" height="20" width="61"/> --- Review status: 0 of 1 files reviewed at latest revision, all discussions resolved. --- *Comments from [Reviewable](https://reviewable.io:443/reviews/bloombooks/bloomdesktop/1172#-:-KQLuGY21RDdj-MVxf71:bnfp4nl)* <!-- Sent from Reviewable.io --> <issue_comment>username_1: Reviewed 1 of 1 files at r2. Review status: all files reviewed at latest revision, all discussions resolved. --- *Comments from [Reviewable](https://reviewable.io:443/reviews/bloombooks/bloomdesktop/1172)* <!-- Sent from Reviewable.io -->
<issue_start><issue_comment>Title: Depend on 'indentation-parsec' instead of 'indentation' username_0: <issue_comment>username_1: Can you also update stack.yaml and .travis.yml to not include my hack? <issue_comment>username_0: Done. I don't use stack, so I hope I did that correctly (in particular, not sure if I need to also tell it to pull `indentation-core` or if it can figure that out on its own) <issue_comment>username_1: I don't think you need to stack about indentation-core but if Travis says it's good, I'll merge.
<issue_start><issue_comment>Title: Bump cluster autoscaler to 0.3.0-beta3 username_0: cc: @username_1 @fgrzadkowski @jszczepkowski <!-- Reviewable:start --> --- This change is [<img src="https://reviewable.kubernetes.io/review_button.svg" height="34" align="absmiddle" alt="Reviewable"/>](https://reviewable.kubernetes.io/reviews/kubernetes/kubernetes/31430) <!-- Reviewable:end --> <issue_comment>username_1: LGTM
<issue_start><issue_comment>Title: Adding accessors for setting either pt or e in correctors username_0: For future compatibility (and for backwards compatibility with users code) it is useful to set both the E and pt for jet corrections. This has no underlying affect but will allow for possible future developments, and will help users to migrate their code correctly.
<issue_start><issue_comment>Title: Change 'Export Data - (c) Krajee' title username_0: I feel like the '(c) Krajee' and 'krajee yii2-export plugin' in the browser windows' titles and exported file descriptions are kind of overkill for an open source BSD3 license plugin. <issue_comment>username_1: I do not think it is any issue. You can change them to what you want by changing the respective properties for php excel.<issue_closed>
<issue_start><issue_comment>Title: Flutter app triggering anti-virus checkup on Windows username_0: The Windows .exe file of my Flutter developed application triggers an anti-virus checkup by Avast when it's started. The file comes up clean, but it's not a very user-friendly experience, since I'm planning a public release. Perhaps I'm barking at the wrong tree, and it's entierly Avast's fault... I'm not a windows developer so I'm kinda in the dark here. What could be the cause and is there anything at all I can do to prevent it? <issue_comment>username_1: Hi @username_0, Thanks for filing the issue. Does this happen with a newly created flutter app from `stable 2.10.3`? Also, does Avast provide any additional info about what it finds malicious about the app? Also, I see that there is a similar issue being tracked for this case https://github.com/flutter/flutter/issues/95167 <issue_comment>username_0: Thanks for responding, @username_1. Yes, it happens with the stable 2.10.3 version. Here's what the Avast log says about it: ``` 3/24/2022 1:42:11 PM Autosandbox candidate: D:\gq_win\build\windows\runner\Release\gq_win.exe [Source: ] [Opened by: D:\gq_win\build\windows\runner\Release\gq_win.exe] [Reason: 0x00020000] --> Result: Sandboxing (no custody) --> Instrumentation: Instrumentation inside sandbox requested ``` <issue_comment>username_1: Thanks for the info labeling this issue for further insights from the team. cc: @username_2 <issue_comment>username_2: Questions about how Avast works and what causes it to trigger for any particular application would need to be directed to its developers.<issue_closed>
<issue_start><issue_comment>Title: Extraction fix username_0: When the arguments are passed by parse_arguments, no information is keeped. But only string literals should be extracted. I've a patch for markey (PR#3, https://github.com/EnTeQuAk/markey/pull/3) to provide a new function named parse_arguments_with_type, so that type information can be examined by caller. As this PR depends on PR#3 for markey, it should not be merged before the PR#3 for markey. <issue_comment>username_0: @username_1 @nedbat This is my new patch that depends on the PR#3 for markey. As long as the PR#3 is accepted, this PR can work properly. <issue_comment>username_1: FYI this is related to bugs uncovered in https://github.com/edx/edx-platform/pull/10416
<issue_start><issue_comment>Title: Discussion: Use public mirrors, or unify as one service username_0: So i'm at a crossroads here. This ticket relates to #7 and #68. One thing that would be nice is to have folks mirror data for vapor. There would be an `update.php` script, and when I update, I could hit that script to update all the public mirrors. Then the clients could choose the best mirror, allow users to select mirrors, or even just choose mirrors at randomly. On the other hand, it would be nice to get some metrics, and short of hitting a centralized server everytime a download is initialized, these two options seem rather mutually exclusive. As mentioned in #44, it would also be nice to have some statistics which might be nice to include in the latter option. Perhaps we could do some brainstorming here, and get some ideas on how to solve both of these problems, or at least why we should pick one over the other. <issue_comment>username_1: I think a central server and one unified service would be the best way to go. This way username_0 would be able to have more control and if paid games were ever added maybe make a donate to LOVE fee and maintenance fee be there. <issue_comment>username_2: Well I would prefer using itch.io, but that is my personal preference. I think it already exists, has everything that is needed and has a wide user base. The only problem being the games that are not in there... <issue_comment>username_1: A main server to upload games too has many benefits. The most current one I just thought of is post processing. If we wanted to automatically add certain things to games we would be able to. For example compat layers. Sense I don't think compat layers are going to go away any time soon. <issue_comment>username_2: Please DONT! <issue_comment>username_1: Or any other features if they may arise. Another example testing the games ourselves. <issue_comment>username_1: itch.io allows any games to be uploaded if that same process happens with Vapor then Vapor will soon be full of crap. <issue_comment>username_2: Why wouldnt itch.io allows us to do that? After all we have the game.json file, and a Vapor API (hipothetical) <issue_comment>username_1: Having itch.io as an extra source will be cool. But having own set up would be nice too. If we ever need anything itch.io won't give then we can just do it ourselves. So developing our own system along side itch.io will help us if that time ever comes. <issue_comment>username_2: Uhm what else would we need? downloads, paying, users, our own api... Plus it is not like we cant use our own stuff when needed, we can still use our server, but itch.io handles most of the stuff, and leaves the rest of the neat things to us <issue_comment>username_1: Flexibility is a good thing to have. It would suck if at one point we do need something but can't do it because it would require a lot of new services we never made. <issue_comment>username_2: What I mean is that itch.io doesnt replace those services, those services can still exist and coexist with itch.io, using itch.io as the BIG base server is just the best way to go, since the basic stuff is already there. <issue_comment>username_1: Well itch.io would have to provide us with a private api I would assume. <issue_comment>username_1: For example our vapor site will upload to both it and itch.io rather than the other way around. Because people may mistake vapor as a client for all games and mess it up by adding non Löve games. Unless itch.io checks that for us. 
<issue_comment>username_2: Well I think itch.io should check, and Vapor should approve before it appears in Vapor <issue_comment>username_3: There's already an API for viewing game files and downloading them. Would be glad to provide any other support you need. https://github.com/username_3/itchio-app/blob/master/itchioapi.cpp <issue_comment>username_4: @username_1, well, I think that uploading directly in vapor would not be a feature. Rather, one uploads to itch.io, and there could be a tiny setting that takes a .love file and allows it to be accessed through vapor. Of course, this could be implemented vapor-side, after uploaded to itch.io, you fill in your game details to the vapor website. The vapor website will use the api to check if it's a love game and if it is, automatically add it to the vapor catalogue. I think you're just making things too complicated ;) <issue_comment>username_4: @username_1, well, I think that uploading directly in vapor would not be a feature. It would go something like this: * upload to itch.io * on the vapor authoring website, you copy the itch link * vapor website does checks to see if a .love file is available (if multiple love files, ask user) * add the itch game ID to the vapor catalogue I think you're just making things too complicated ;) <issue_comment>username_2: Well actually that is rather fast, you just give your itch.io game id and vapor verifies internally then it's done... sounds good to me <issue_comment>username_0: _contextual note: vapor2 will have a vapor-ui (front-end) and a vapor-core (library)_ My plan for vapor2 is to do the following: 1) Provide to users a default set of `source`s available when distributing vapor release. (The initial release would have `vapor.love2d.org` source enabled by default) 2) Provide a drupal site that would allow users to upload games for the vapor.love2d.org source. 3) Use said drupal site to also maintain framework information for the vapor.love2d.org source. 4) Allow the users to add&|enable other sources via `vapor-ui`, like a processed itch.io api to match the defined `source` structure) This way, we can control the curators (e.g. the vapor-dev team, and itch.io by default) but then others can offer to curate games and framework packages as third parties. This might open some doors to other communities who have similar executable package patterns like LÖVE. (e.g. <executable> <game file>) @username_3 - When vapor2 gets closer to release, how would you feel about detecting if a file ends in a `.love` and asking users on itch.io if they would like to submit to itch.io's vapor source list, and then asking a few more details? <issue_comment>username_4: Doesn't the drupal site make itch.io obsolete for hosting? Or am I mistaking this for a standalone website (rather than a website that in the backend that deals with itch.io) <issue_comment>username_1: Yay, we reached a verdict. <issue_comment>username_1: @username_4 having other sources can allow users to get other games. For example if the official source bans adult content then someone can upload to another source with less restrictions. That's the only benefit I see. <issue_comment>username_1: Not saying I want adult content. Just an example. <issue_comment>username_4: Oh, multiple sources is definitely a cool option. I was thinking that there would be only one way. This would definitely be a great option for those that don't want to go on itch.io and set up a full page for their game, but would just like it to be in the vapor catalogue. 
<issue_comment>username_0: @username_4 One could argue that using itch.io makes a drupal site obsolete for hosting. The idea is by providing different sources, we can allow users to choose different curation systems. I can see this being useful in a few ways: * People like/don't like how vapor curates. * People like/don't like how easy it is to get games on itch.io. * A developer would like a source to test with locally. * A developer would like to provide a UI that they can use to distribute their own game collections with * A developer would like to have a stable and a testing version available to users. vapor2 is going to be all about having your cake, and eating it too. <issue_comment>username_3: @username_0 I'd most likely rather have that information collected and part of itch.io's database. You should come up with a list of the additional stuff you would need so I can see. Avoiding an addition submission step would be nice. <issue_comment>username_0: @username_3 The "itch.io vapor source list" would be fully controlled by you and your servers! All of the metadata associated with the content will be on your servers for your "source" so to speak. <issue_comment>username_1: But some people may not want their games available on 3rd party clients or sites so I think an extra submission step would be cool. Or at least a checkbox. <issue_comment>username_4: @username_1 he means that instead of having to also go to vapor to add it, you press a button on itch.io :clap: :cake: :+1: <issue_comment>username_3: @username_0 Okay. Instead of a "Submit your game to vapor" page, I think a better experience would be a "I'm a love2d game checkbox" and then a tooltip about how it will be available on vapor. But that depends on the metadata you'll need. <issue_comment>username_0: @username_3 Something along those lines should be perfect. When I get the preliminaries done, I will make a ticket about it, and ping you in it! <issue_comment>username_1: @username_4 I brought vapor back to life so your welcome lol. <issue_comment>username_3: Great, my plans are to add detail pages for engines and tools on itch.io, and they would double as community hub pages. So love2d, aseprite, etc. would have a page on itch.io and you can browse the games that use the respective tool. Additionally I'll have sub flags for specifying exact versions. From the game edit page you'll add the tools you've used like you would add a tag. <issue_comment>username_1: Can Löve get front page and be the first community page? Drive people to Löve lol <issue_comment>username_4: @username_3 Sweet, although maybe a "I'm a love2d game checkbox and I want it on vapor" checkbox, leaving the option to actually publish on vapor to them? <issue_comment>username_1: Well tbh that might show favoritism he can't really do that for everyone who requests. Maybe and "Extras" drop down would be cool. <issue_comment>username_1: That's way he can just add whoever to the list <issue_comment>username_4: Hmm....? I'm just saying that itch.io detects it's a love game, and gives the option to add it to vapor. Not sure what you're on about @bobbyjoness <issue_comment>username_1: Idk lol. If for example multiple 3rd party clients want to have access to itch.io as a .love source. It would be convenient for him to just have an extras drop down. Idk. Itch.io isn't what I'm working on. #vapor-dev #teamVapor lol <issue_comment>username_4: Well vapor is the official love2d client and other clients have access to the api. 
<issue_comment>username_1: Its not official <issue_comment>username_1: Unless I missed a forum post somewhere. If it was official it would have its own sub forum. <issue_comment>username_4: Please stop double posting, [and this](http://vapor.love2d.org) <issue_comment>username_1: Sorry I'm on mobile so its difficult to edit my posts. <issue_comment>username_2: It is official, just not mainstream yet. On the topic, I think that what @username_0 proposed is fine. About itch.io, I think that if you are gonna specify framework and LÖVE will be an option, you could expand a checkbox that says "Submit game to Vapor", the problem is as @username_1 said that this shows favoritism and maybe other clients will want to have their own checkbox in there (say GameMaker, Unity or something like that) and have access to your data. About supporting other executable packages with patterns like love ([executable] [game file]) I think that is nice but will need some alerts to tell the user the executable is not LÖVE so it is not tested nor verified by Vapor, it may contains virus and all that stuff, plus legal stuff (I guess this same stuff applies to custom LÖVE builds too). Luckily my compat layer will fit perfectly in this pattern so no problems with that (will be able to reduce the number of executables with Cube whenever you need standard LÖVE builds) Well sorry for the long comment! <issue_comment>username_0: @username_2 Might make sense to add that warning on the UI when someone adds a new source. Good idea. <issue_comment>username_2: :+1: <issue_comment>username_0: I think the vapor1.x outline properly takes care of the concerns in this issue!<issue_closed>
<issue_start><issue_comment>Title: XML Error on Server Crash username_0: When our XMPP server crashed, the bot hooked up to it failed to reconnect when it was restarted. Looking at it, it seems it had an exception due to the XML (see screenshot). This seems to be a Pontarius-specific issue, rather than one related to my code. ![untitled](https://cloud.githubusercontent.com/assets/12365025/7626070/1daa58a6-f9c7-11e4-849c-8f83b66df790.png) <issue_comment>username_1: Thank you for creating this issue! This indeed looks like a bug in Pontarius XMPP. I'm quite busy these days, so I'm not quite sure when/if I could take a look at this. If you would like to investigate the issue yourself, pull requests are of course always welcome. <issue_comment>username_2: I reproduced this and am looking into it <issue_comment>username_2: Should be fixed<issue_closed>
<issue_start><issue_comment>Title: Workshop creation/update/delete username_0: <issue_comment>username_1: @username_0 I can take it up. Let me know if its fine. One more thing CBV vs FBV, which one is preferred . I have checked login, its using FBV. Pls confirm. <issue_comment>username_0: CBV would be good . Login page was temp it gone change soon. @username_1 please assign to your self <issue_comment>username_1: @username_0 I don't have the permission to assign. <issue_comment>username_1: @username_0 . I was just checking the settings files and saw it includes rest_framework library. So, I am creating api using django rest framework for this task. Or you want me to implement it as normally django CBV . <issue_comment>username_0: @username_1 API we will do it in Phase2 <issue_comment>username_1: Ok, thanks <issue_comment>username_1: Fixed and merged Commit# 5732162ef0008a3b88ac72d9be89c94dbd6bd4a7 <issue_comment>username_1: @username_0 pls close this issue<issue_closed>
<issue_start><issue_comment>Title: Removal of IE8, minor grammar and wording changes username_0: All the changes are just suggestions ;-) <issue_comment>username_0: Nothing major is this PR. Overall the site is looking cool! I'll have to update the RiotGear CSS now! <issue_comment>username_1: Not sure why there is a conflict. I'd like to just hit the button and see the changes on the site immediately. Any chance to offer me that pleasure? <issue_comment>username_0: I think I was making these changes whilst you were going live with the site ;-) Will re issue this PR..
<issue_start><issue_comment>Title: Log error username_0: Because it's useful. <issue_comment>username_1: why? <issue_comment>username_2: I'm not sure about this either. Also, did you check for `console` for IE awesome-ness compatibility? <issue_comment>username_0: We are catching the error and reporting it to Sentry, but the error doesn't appear in the console. You see an error occured because Raven logs "captured error", but it doesn't give you the original stack trace. All errors should be logged. If we're catching them to report to Sentry, we should most definitely be logging them alongside that. <issue_comment>username_0: Very good point, thanks! <issue_comment>username_1: And if you really need it, consider a custom logger, something along the lines of [`report-errors`](https://github.com/guardian/frontend/blob/master/facia-tool/public/js/utils/report-errors.js). You don't want to repeat that ugly bit of code every time. Moreover if `console` is not defined you'll still have a `ReferenceError` exception in IE. Better check that `window.console` is defined. Having said that I'm still :-1: You only look at the console during development, so you're shipping completely useless code for the user. <issue_comment>username_0: Done! I agree it could be abstracted so we don't have to remember to protect ourselves each time. <issue_comment>username_0: It's important to remember that uncaught exceptions are automatically logged by the browser. In this case, we are catching the error only to report it to Sentry with extra metadata (`feature: 'weather'`) (if uncaught, it would be automatically reported thanks to `window.onerror`). Just because we're doing that doesn't mean we want to lose the error logging, hence why I've added it back here. <issue_comment>username_1: Did you merge by accident? <issue_comment>username_0: @username_1 Did you want to reply to my comments above? It wasn't an accident! I was being impatient. <issue_comment>username_1: Nothing changed Nobody gave a +1 The build on TeamCity failed. I'm quite surprised you merged already. <issue_comment>username_0: Acknowledged, I broke the rules. Does it make sense why I added the logging?
<issue_start><issue_comment>Title: Update GettingStarted.md username_0: Adding the command to install `ts-jest` since it is an implied install here and may lead to the end user only running the `@types/jest` command which will not allow the tests to run. <!-- Thanks for submitting a pull request! Please provide enough information so that others can review your pull request. The two fields below are mandatory. --> <!-- Please remember to update CHANGELOG.md at the root of the project if you have not done so. --> ## Summary <!-- Explain the **motivation** for making this change. What existing problem does the pull request solve? --> ## Test plan <!-- Demonstrate the code is solid. Example: The exact commands you ran and their output, screenshots / videos if the pull request changes UI. --> <issue_comment>username_0: as requested @SimenB - let me know if you need anything else!
<issue_start><issue_comment>Title: [Event Request] Page 46 "Sales Order Subform" - New publisher OnBeforeNoOnAfterValidate username_0: Hello, we need a new Event publisher in procedure NoOnAfterValidate(). Can you please check that? Thank you! procedure NoOnAfterValidate() begin OnBeforeNoOnAfterValidate(Rec, xRec); <================ InsertExtendedText(false); if (Type = Type::"Charge (Item)") and ("No." <> xRec."No.") and (xRec."No." <> '') then CurrPage.SaveRecord(); OnNoOnAfterValidateOnBeforeSaveAndAutoAsmToOrder(); SaveAndAutoAsmToOrder(); OnNoOnAfterValidateOnAfterSaveAndAutoAsmToOrder(Rec); if Reserve = Reserve::Always then begin CurrPage.SaveRecord(); if ("Outstanding Qty. (Base)" <> 0) and ("No." <> xRec."No.") then begin AutoReserve(); CurrPage.Update(false); end; end; OnAfterNoOnAfterValidate(Rec, xRec); end; [IntegrationEvent(TRUE, false)] local procedure OnBeforeNoOnAfterValidate(var SalesLine: Record "Sales Line"; xSalesLine: Record "Sales Line") begin end;
<issue_start><issue_comment>Title: [WIP] SparseVector support + tests username_0: Addresses #6479 Definitely not ready yet, but there's been some chatter, so I figured I'd push what I've been working on. I'm kind of adding to it here and there (since there's a lot to do), but if anyone else wants to help push this along, I'd appreciate the help. Also a question, I'm not sure I quite understand the use of the two type parameters for `SparseMatrixCSC{Tv, Ti}`. `Tv` is obvious, but what does `Ti` really do as opposed to just using a consistent `Int32` or `Int64` vector. Is it really a space conservation option? I'm sure there's a good reason that I'm not aware of. <issue_comment>username_1: @username_0 Do you think we can get this in place for 0.4? <issue_comment>username_1: Marking this 0.4 for now. <issue_comment>username_0: Yeah I'll start tackling this again. Definitely good to have for 0.4 <issue_comment>username_2: Sorry being a bit negative so late in this discussion, but I think we should consider this one in relation to the discussion in #4774 and maybe also #10064. A partial conclusion from the first discussion is that as long as `n<3` then there is really no need for a vectors since a one columns or row matrix would suffice. To my knowledge the CSC format doesn't generalize to larger dimensions, so I think we should think carefully if a `SparseVector` makes thing simpler or more complicated. The only argument for a `SparseVector` I could find in #6479 was to have similar behavior for columns indexing of a `SparseMatrixCSC` and other matrix types. I think we should come up with more arguments for a `SparseVector`. Indexing is not really a key feature of CSC matrices and #10064 also questions if `SparseMatrixCSC` should be considered a `AbstractMatrix`. I think it would be more natural to get sparse vector functionality through a COO type. <issue_comment>username_3: If we have and treat dense vectors as separate things from dense 1-column matrices, we should do the same for sparse. If we don't, there is a slight overhead in the sparse case since a 1-column SparseMatrixCSC needs to store 3 extra integers that a sparse Vector doesn't. Probably not a big deal. But conflating 1-column matrices with vectors is harmful and buggy if we only do it for sparse. We could transition to the Matlab convention of doing all vectors as 1-column matrices even in the dense case, but I don't think that was the direction #4774 or #10064 were going, was it? That would also have significant consequences for broadcast and elsewhere. <issue_comment>username_1: I do think that irrespective of the storage, it is confusing for users if sparse and dense cases behave differently, when it comes to vectors. I suspect there will be other use cases for sparse vectors, and avoiding the extra loop over colptr will help with performance. I think the eventual best solution is to have a 1-vector as a special case of COO, but it is probably ok to do a special case implementation until we get there. <issue_comment>username_2: Think that #8416 is mainly a problem with the broadcasting behavior of `.*` for sparse matrices. The point is that by definition CSC cannot be similar to our dense arrays because it doesn't generalize. Adding a `SparseVector` type requires work to ensure interoperability and it doesn't really add any functionality. <issue_comment>username_3: CSC doesn't generalize beyond dimension 2, but that's not a very good reason for dimensions 1 and 2 to be inconsistent between sparse and dense. 
We need to make a decision, globally, whether or not 1-d vectors are the same thing as 1-column matrices. If they aren't, there needs to be a way to tell them apart, so the existing implementation of sparse vectors is broken and it would be good to replace it. Or remove the current `sparsevec` function - which people do use, and have reported issues due to its implementation being inconsistent with our handling of dense vectors. <issue_comment>username_0: I (obviously) think this is worth the effort. My use cases came from wanting broader support in the Distances.jl and Clustering.jl packages for sparse, but running into design issues because of the lack of a 1st class sparse vector type. I think the general feeling is that this is good, so I'll pick it back up and push forward, but if there are serious objections, it'd be nice to decide on a green light/red light before I go further. <issue_comment>username_1: I personally feel it is worth seriously exploring, and if we can do it as part of COO, that would be even better. It will need changes across the sparse code, which will be a bit painful, but not terrible. <issue_comment>username_2: I'm still not sure this is the right solution here, but I'm also a democrat and it appears that I'm the only one against, so please go ahead. I'm just wondering about the design here. If this is a first step towards COO then the storage should probably be something like an index vector `Vector{NTuple{1,Ti}}` and a vector of values `Vector{Tv}`. <issue_comment>username_4: +1 against. I think it should be in a package first, I feel like its not a good idea to be adding code to Base without it having the tires kicked. <issue_comment>username_5: Please let's have it in a package first. The sparse matrix code is only just starting to feel less half-baked. <issue_comment>username_6: Can we make getting reliable sparse matrix support a GSoC project? <issue_comment>username_1: @username_2 I originally had the same view about not having sparse vectors. However, it does get annoying when dense and sparse codes behave differently. For now, we should probably remove the `sparsevec` stuff, and have a pure 2d implementation in base, which we should go ahead and make fully robust and reliable. Then, we would have a COO package, with all operations on COO implemented. A `SparseVector` type can be bolted later as a special case into base when all this is ready. <issue_comment>username_1: @username_6 Good idea on a GSoC project for sparse. <issue_comment>username_1: BTW, I think while N-d COO should be a package, it wouldn't make sense to try a SparseVector implementation, like in this PR, in a package. <issue_comment>username_0: Yes, I was going to mention that as well @username_1. The thought of doing this specific implementation in a package would be hairy managing all the imports/exports and trying to reuse code. <issue_comment>username_3: +1. General COO sparse is a bigger idea and would be better for a package initially. But sparse vectors is something we pretend to support in Base right now, let's just fix it since it's not that big or terribly complicated to do right. If the code gets superseded as a special case of an awesome N-D COO implementation at some point, great. That's a ways off though and the implementation here could still be useful and an improvement on what we have now in the meantime. <issue_comment>username_1: What @username_0 started here can be easily taken to completion. The larger issues we will discuss and tackle elsewhere. 
<issue_comment>username_7: +100 for a sparse vector in Julia base. From an application standpoint, sparse vectors have been widely useful in AI-related fields, e.g. stochastic optimization, machine learning, document analysis, etc. The lack of sparse vector support makes it particularly difficult to develop such applications. As a stop gap, I pull a package [SparseVectors.jl](https://github.com/username_7/SparseVectors.jl). However, I strongly agree that we need *sparse vectors* in Julia base, and allow getting views of columns of ``SparseMatrixCSC`` as sparse vectors (or sparse vector views). <issue_comment>username_1: Why not just bring in that SparseVectors package into Base? <issue_comment>username_7: It seems to me that people are still debating whether we should have this in a package or in the Base. If the consensus has been reached, we can proceed with this PR. Also, that package can work with Julia 0.3. <issue_comment>username_7: Note that even if this PR is merged, the package is still useful, as we want to have many machine learning packages to work with Julia 0.3, and the *SparseVectors.jl* package can provide such support. That being said, if this PR is merged, the package can be rewritten as ```julia if VERSION < xxxx # the version that the PR is merged include("x.jl") include("xx.jl) end ``` <issue_comment>username_1: I think there is reasonable consensus that we should do this. Do they already operate with sparse matrices (like matvecs and such)? Perhaps some things like views may still need to stay in the package, until we have a coherent framework in Base, which also works for sparse matrices. <issue_comment>username_1: @username_7 IIUC, parts of your package would replace this PR. Or, would you merge some of the stuff here, and parts of your package? <issue_comment>username_7: If people think that it is good to have this PR, I can go ahead to fix issues in this PR, add tests, and make it mergeable. <issue_comment>username_0: @username_7, yes, please go ahead with a PR for your SparseVectors.jl package. This has unfortunately been a couple notches down on my TODO since I first needed the functionality and I haven't gotten around to really finishing it. <issue_comment>username_7: I will make a new PR later. <issue_comment>username_5: If we're going to move any packages into Base I think it makes sense to insist that they have 100% test coverage first. I know Base itself isn't up to that standard yet, but we are actively working towards it. <issue_comment>username_8: Is there really a need to rush this into base? Now that we have a usable package (thanks @username_7!), what's the harm of leaving it there for a bit so it can mature and develop according to people's needs? With sparse data structures, the general rule seems to be that everyone wants something different. For example, in JuMP, we use indexed vectors which use dense linear storage together with a list of nonzero indices. These feature O(1) getindex and setindex, which is useful for us. On the other hand, @username_7's sparse vector uses packed storage and sorted indices. Someone else might want to use packed storage and unsorted indices. Given that it's very hard to change a name that's exported from ``Base``, it seems possibly premature for this particular sparse vector implementation to have the name ``SparseVector``. At least sparse matrices include the data structure in the name. 
<issue_comment>username_3: The problem is that we're currently faking sparse vectors in Base via 1-column CSC matrices, and we need to stop doing that because it's buggy and wrong. <issue_comment>username_3: Given that Dahua's new package is quite good already, and being outside of Base does allow it to experiment and add features considerably more quickly, it might actually be best to remove `sparsevec` from base at this time. It's been said that we shouldn't remove any more code until 0.5-dev which I largely agree with, but Miles makes a good point too. <issue_comment>username_7: I am ok with either way, leaving this stuff in the package or moving it to Base. If we decide to have ``SparseVector`` in a package for a while, then we should definitely remove ``sparsevec`` and ``vec(::SparseMatrixCSC)``, etc. As @username_3 said, these functions are supposed to return a *vector*, that is, an instance of ``AbstractVector``, instead they return a matrix. These functions can be provided in *SparseVectors.jl* after they are removed from the Base. <issue_comment>username_3: Could they be added before deleting them from base? Perhaps not exported from the package to avoid name conflicts. <issue_comment>username_7: Sounds like a good idea. <issue_comment>username_1: This PR is not about adding a new package to base, but SparseVector functionality that has been developed separately in a package. Doing this now, will help avoid a lot of pain going forward - otherwise 0.4 will have the fake sparse vectors too. <issue_comment>username_1: @username_8 We could call this `SparseVectorCompressed` or something else so that the generic name is not taken. But, the general design of compressed and sorted indices is in line with the current sparse matrix data structure. This is exactly what this PR has been discussing for a really long time, and we finally have an implementation! I really would like to get this into 0.4, since as @username_3 pointed out, not doing anything means continuing to do what we are doing currently. @username_5 I do agree about 100% test coverage. <issue_comment>username_3: Given that we have `SparseMatrixCSC` in base and that's not going anywhere before 0.4, we should make a decision to either deprecate or fix the broken sparse vectors that we have now. We should make that decision and do one or the other relatively soon. Fixing them requires introducing a 1-d sparse vector type. There are many possible representations for a 1-d sparse vector type that someone might want to use, but for the methods related to taking a slice or view from a column of a `SparseMatrixCSC`, there's a pretty obvous natural choice. It doesn't have to be called exactly `SparseVector`, and it might not even need to be exported. (and I apparently typed this at the same time as @username_1 made almost exactly the same point) <issue_comment>username_7: #11324 makes sure that we will make a decision before 0.4.
<issue_start><issue_comment>Title: fix: temporarily handle httpchecksum trait the same as httpchecksumrequired username_0: ## Issue \# (none) ## Description of changes Temporarily adds support for handling `httpChecksum` the same as `httpChecksumRequired`. This enables consuming the latest model changes from S3 without fully implementing flexible checksums. Full support will come in [aws-sdk-kotlin#557](https://github.com/awslabs/aws-sdk-kotlin/issues/557). Companion PR: [aws-sdk-kotlin#558](https://github.com/awslabs/aws-sdk-kotlin/pull/558) By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
<issue_start><issue_comment>Title: Callback validator consumes string class name via ::class resolution username_0: Added ability to inject string class name into callback validator constructor. <issue_comment>username_0: Im not happy that `setCallback` brings more magic. However anonymous functions cannot be serialized (and pulled from merged application config), this PR brings possibility to bypass serialization restriction. I hope that reason for this PR is more obvious. <issue_comment>username_0: ping @weierophinney <issue_comment>username_0: Pinging @username_1 and @weierophinney for review. <issue_comment>username_1: As mentioned above, I don't like this auto-instantiating approach. Most callbacks require some sort of DB connection, lookup system, etc, so they will unlikely fit this particular validator. Generally 👎 from me, sorry. <issue_comment>username_2: Anonymous functions can be injected via `Module::getValidatorConfig()` in zend-modulemanager based projects, besides your specific use case would be better with static `['Class', 'method']` callable anyway. :-1: from me as well, sorry
<issue_start><issue_comment>Title: Equals for NfaNodeIdSet always returns false username_0: Overrided Equals for NfaNodeIdSet in FSA/GraphBasedFsa.fs always returns false. let n = yr.Length xr.Length = n && (let rec go i = (i < n) && xr.[i] = yr.[i] && go (i+1) go 0) Even if `xr` and `yr` are equal, `(i < n)` eventually will become false making `go i` also false. Possible fix: let n = yr.Length xr.Length = n && (let rec go i = (i >= n) || xr.[i] = yr.[i] && go (i+1) go 0) Similar to https://github.com/YaccConstructor/YC.FST/issues/7 <issue_comment>username_1: Certainly a bug. Can you think of a test case that would currently break? Do we even use Equals? <issue_comment>username_2: The code looks likes it's been fixed https://github.com/fsprojects/FsLexYacc/blob/06b47609f04b66ad553f7361c5a0e8aa6dc33924/src/FsLex.Core/fslexast.fs#L307 Should this issue be closed? <issue_comment>username_3: Thanks!<issue_closed>
<issue_start><issue_comment>Title: Traffic username_0: Hi, is there any way to get the total traffic handled by Ion? I want to show my user how much of his/her data traffic my app has consumed.<issue_closed> <issue_comment>username_1: check out stethomiddleware to see how to intercept requests.
<issue_start><issue_comment>Title: Load Settings from file username_0: The settings need to be loaded from a file, which will be specified from the command line. The config file will be XML (to match the actual generated files). It will support having a single "job" with one set of settings, as well as having multiple "jobs" with different settings for each. Then the config file can be used to run all jobs, or only certain ones. <issue_comment>username_0: Config file is now loading<issue_closed>
<issue_start><issue_comment>Title: [WIP] Fix handling of visibility modifiers username_0: Fixes both #793 and #792. <issue_comment>username_1: Mein lieber Scholli! Looking forward to reviewing this one! <issue_comment>username_0: Ready for review! <issue_comment>username_1: Already penciled this bad mother in for my train ride to work tomorrow morning ;) <issue_comment>username_0: Enjoy! <issue_comment>username_1: Ok, that took me longer than expected. My train ride wasnt enough, I'm still reviewing, I'll finish this the next days, promised. <issue_comment>username_1: @username_0 I reviewed approximately 68% of this PR. I'll stop now to give you some time to actually respond to all my gazillion comments ;) <issue_comment>username_0: @username_1 awesome, thanks. I'll revisit the code sometime this week I think. <issue_comment>username_0: @username_1 I've fixed and/or commented on most of your points. Care to continue the review? <issue_comment>username_1: I will! Just gimme a day or two, I'm firing from all cylinders right now! <issue_comment>username_1: Well, let me say that again, great job! I did review it again and left a couple of minor comments, nothing big. From my POV we can get this ready to merge - let's address my last comments and then squash this one! <issue_comment>username_1: Merging this one, highly awesome pull request that addresses a lot issues!
<issue_start><issue_comment>Title: Fix lib install locations for multilib environments username_0: When installing libs or header files CMake variables should be used rather than using hard-coded paths. This can cause issues with multilib environments for example. <issue_comment>username_1: This looks like the advantage is that it lets the user set where they want to install the libs and include files separately. Alternatively, a user can override the CMAKE_INSTALL_PREFIX variable to place the libs and include files somewhere other than the default. Would this accomplish what you're thinking? <issue_comment>username_0: These variables are standard CMake variables which if not already populated will be populated by including GNUInstallDirs (https://cmake.org/cmake/help/v3.0/module/GNUInstallDirs.html). I forgot to add the include but have updated the pull request. I have put in a catch as a precaution for older CMake versions in case CMAKE_INSTALL_LIBDIR is not populated but I don't think this is necessary for 2.8.11 and above. CMAKE_INSTALL_PREFIX can still be used with this implementation. <issue_comment>username_1: @username_0 can you make this PR against develop instead of master? <issue_comment>username_0: Going to close this one and open a new PR against develop rather than creating a merge or rebase mess :)
<issue_start><issue_comment>Title: common.StringContains() is exactly equal compare, but used for "contains". username_0: For example: https://github.com/username_1/gopsutil/blob/master/host/host_linux.go#L340 In Linux: $ grep kvm /proc/modules kvm_intel 143109 6 - Live 0x0000000000000000 kvm 451552 1 kvm_intel, Live 0x0000000000000000 in the following code to test contains would be fail: contents, err := common.ReadLines("/proc/modules") common.StringContains(contents, "kvm") The common.StringContains() is : // Check the target string slice containes src or not func StringContains(target []string, src string) bool { for _, t := range target { if strings.TrimSpace(t) == src { return true } } return false } This is a common func in gopsutil, but it has a confused name. "String" can not display a "string list" mean, and "Contains" can not display a "contain this sub str" mean. <issue_comment>username_1: Hmm, I am not so good at English. Could you tell me what kind of Function name is better for the purpose? <issue_comment>username_0: You are too modest. :-) a poor example: package main import ( "fmt" "strings" ) // Check the target string slice containes src or not func StringsHas(target []string, src string) bool { for _, t := range target { if strings.TrimSpace(t) == src { return true } } return false } // Does src in any string of the target string slice func StringsFind(target []string, src string) bool { for _, t := range target { if strings.Index(t, src) >= 0 { return true } } return false } func main() { v := []string{ "hello, c", "hello, unix", "hello, gnu", "hello, linux", "hello, go", } test_data := []string{ "hello, c", "unix", "Java", } for _, s := range test_data { fmt.Printf("StringsHas(\"%s\", v) = %t\n", s, StringsHas(v, s)) fmt.Printf("StringsFind(\"%s\", v) = %t\n", s, StringsFind(v, s)) } } <issue_comment>username_1: Thank you for the suggestion! golang has the method called [strings.Contains](http://golang.org/pkg/strings/#Contains) to check strings has the substr or not. If I change your `StringsFind` to use `strings.Contains`, changing the name to `StringsContains` is acceptable? Sorry for annoying you. But I can not understand well such a nuances. Really. <issue_comment>username_0: Yes, it's good. I'm miss the strings.Contains . <issue_comment>username_1: I commit the change at b5ffc220a671e8ed5eb2c0136b24524f708ea361. If I have miss understanding, please re-open this issue. I am very appriciate to have discussion with you. Thank you.<issue_closed> <issue_comment>username_0: It's a nice job, thanks for your share.
<issue_start><issue_comment>Title: Fix learn.json in TodoMVC example username_0: I hit this while setting up my own "Relay TodoMVC with routing" example. You need to set up `data-framework` and the framework key on `learn.json` to get the sidebar to show up (otherwise `learn.json` doesn't do anything). The `source_path` key from the `learn.json` template also doesn't do anything, and you need to drop in the `templates` from their example as well. The `"url": ""` bit in the example is just to keep the "demo" link from showing up, since as far as I'm aware this isn't running publicly on the internet anywhere. <issue_comment>username_0: BTW, per https://github.com/facebook/relay/blob/v0.2.1/examples/todo/js/components/TodoListFooter.js#L20 I do also have a working TodoMVC example that includes routing. However, it depends on React Router, so I don't know if it'd be in scope here. It's also set up more as a demonstration of routing than of an optimal Relay TodoMVC implementation. <issue_comment>username_1: Very nice! Thanks for fixing this, and I am excited for your React Router implementation of the TodoMVC. For posterity, here is what gets rendered: <img width="1095" alt="screen shot 2015-09-09 at 10 23 49 am" src="https://cloud.githubusercontent.com/assets/55161/9769125/0d6da0d8-56dd-11e5-88a5-8ac18d5e23f1.png"> <issue_comment>username_0: @username_1 The React Router based implementation is here - https://github.com/username_0/relay-todomvc. I'd be happy to PR it if you think it fits. Aside from the caveat above with how the schema is set up, it has a few other external dependencies and uses webpack a bit differently from the existing examples, though.
<issue_start><issue_comment>Title: The “config” directory should be renamed to avoid a tab completion collision with “content” username_0: Yeah, I’m completely serious about this. I’m used to tab completion in a shell *working*—that the first three letters of a directory name will almost always be unique in a directory. But now I have “con<Tab>” uncertain about whether it should expand to “config” or “content”. “content” makes sense as a name, and everyone uses it; I don’t think it should be changed. But “config” could just as easily be “settings”, for in many such systems the two terms are synonymous; then there would be no collision in tab completion.
<issue_start><issue_comment>Title: Add schema validation methods and specs username_0: Replication requires that the schemas on the global and regional databases are *exactly* the same. This includes the ordering of columns. https://github.com/ManageIQ/manageiq/pull/8337 addressed the issue of old migration timestamps causing migrations to run out of order in upgraded environments. This PR will address any other external change that could cause an unexpected schema difference. This will also allow replication to do some validation against the local and remote databases before exposing itself to the unrecoverable error that would result from attempting to sync databases with differing structure (https://bugzilla.redhat.com/show_bug.cgi?id=1331114) - Added `db/schema.yml` which encodes the expected database format. - Added `EvmDatabase.check_schema` which determines if the structure of a database matches the `schema.yml` file - Added `EvmDatabase.write_expected_schema` which writes the `db/schema.yml` file as a representation of a database - Added rake tasks to run both new methods. - Added an additional replication spec test that fails if `EvmDatabase.check_schema` fails. - This will notify us if the schema changes at all and will require `schema.yml` to be regenerated when creating migrations. - This also would have caught the change made to the ordering of the columns which make up a polymorphic reference here https://github.com/rails/rails/commit/d26704a15f88d384dd282425daa832affdb5f8c1 in the PR that upgraded the gem version. @username_1 @username_2 @chessbyte please review @bdunne this should address https://github.com/ManageIQ/manageiq/pull/8337#issuecomment-215781218 <issue_comment>username_1: This looks good 👍 I'm +1 on the changes suggested by @bdunne and @username_2. <issue_comment>username_0: @username_2 Do we want a talk article about this and https://github.com/ManageIQ/manageiq/pull/8337 before this is merged? <issue_comment>username_0: Added a talk article about this change http://talk.manageiq.org/t/new-schema-specs-for-new-replication/1404 <issue_comment>username_0: @jrafanie you think this is good to go? <issue_comment>username_2: I'm not sure if this is darga/yes or darga/no. @username_0 ? #8337 was darga/no, but I'm not sure. <issue_comment>username_0: This should be darga/yes. I'm going to use the `schema.yml` file to fail fast when configuring replication and also for the tool to fix the tables that are out of order.
<issue_start><issue_comment>Title: scylla compat username_0: - some flexibility for CQL field names - some resiliancy for CQL errors that the datastax driver doesn't do a good job of catching (an error occurs, the connection closes, and the datastax client driver lets you check out the bad connection from the pool again without checking) - allow instantiation of multiple hikari pools (this was so I could do some parallel testing, but this is just a good feature) - add some useful debug logging <!-- Reviewable:start --> --- This change is [<img src="https://reviewable.io/review_button.svg" height="34" align="absmiddle" alt="Reviewable"/>](https://reviewable.io/reviews/palantir/atlasdb/771) <!-- Reviewable:end --> <issue_comment>username_0: @tmgordeeva your change seems good, I have tested it. <issue_comment>username_0: I found from testing that Tanya's new batching thing does have a bug that people wouldn't hit in practice, which is that batches with extra conditionals (read: put unless exist version of put) can't have the batch span across partition key token boundaries. This wouldn't happen from people inserting one transaction commit at a time (the only use of PUE currently) <issue_comment>username_1: @username_0 do we want to actually merge anything from this PR, given that making "AtlasDB + scylla" a thing is far in the distance? <issue_comment>username_2: @username_1 Per internal thread yes. <issue_comment>username_3: Additional context is this PR contains a bunch of CQLKVS improvements which we want irrespective of Scylla. We'd like to move to a Cassandra 3.X/3.0.X version and also a supported client (CQL) before investing in anything scylla specific. <issue_comment>username_0: I can split off the two or three scylla-specific parts of the change into another commit, if it pleases the court. <issue_comment>username_1: If the change in CKVSTest was Scylla-only, then this is also probably Scylla-only. --- *Comments from [Reviewable](https://reviewable.io:443/reviews/palantir/atlasdb/771)* <!-- Sent from Reviewable.io --> <issue_comment>username_0: </details> I'm going to keep this one of Tanya's Java6isms, I don't think it will look any less weird in java8 stream land given those two extras. --- *Comments from [Reviewable](https://reviewable.io:443/reviews/palantir/atlasdb/771)* <!-- Sent from Reviewable.io --> <issue_comment>username_1: </details> Agreed. --- *Comments from [Reviewable](https://reviewable.io:443/reviews/palantir/atlasdb/771)* <!-- Sent from Reviewable.io --> <issue_comment>username_1: </details> // don't care enough about this one to block it. I assume the second exception can be thrown by ``new HikariDataSource``. --- *Comments from [Reviewable](https://reviewable.io:443/reviews/palantir/atlasdb/771)* <!-- Sent from Reviewable.io --> <issue_comment>username_1: @username_0 this is still waiting on a few things. My only blockers are: 1. the question about a potential bug in CQLKeyValueService (line 367) 2. the ordering of statements in CassandraKeyValueService.init() - to be changed to avoid side-effects Circle is blocking it because the build is red. Probably checkstyle. <issue_comment>username_4: Reviewed 6 of 17 files at r1, 2 of 8 files at r4, 10 of 10 files at r5. Review status: all files reviewed at latest revision, 5 unresolved discussions. --- *Comments from [Reviewable](https://reviewable.io:443/reviews/palantir/atlasdb/771)* <!-- Sent from Reviewable.io -->
<issue_start><issue_comment>Title: Groups and Locations username_0: Currently there is no way to inspect or conveniently build a light target for groups or locations. <issue_comment>username_0: See https://github.com/username_0/LIFXHTTPKit/tree/groups-locations Deciding if it's worth using a stubbing library to start writing some more unit tests. <issue_comment>username_0: Done!<issue_closed>
<issue_start><issue_comment>Title: Terminal cursor position goes to last line when creating/destroying splits username_0: - Neovim version: 0.1.2 - Operating system: OSX - Terminal emulator: Iterm2 ### Actual behaviour In term normal mode, cursor position moves to the last line when adding/deleting a split. ### Expected behaviour The cursor position should be stay same. ### Steps to reproduce - open nvim - :vs | te (create a terminal split) - press enter to generate lines to fill terminal screen - enter normal mode then go to first line of terminal split - close the other split <issue_comment>username_1: I can confirm this. NVIM 0.1.3-dev Build type: RelWithDebInfo Compilation: /usr/bin/x86_64-linux-gnu-gcc -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wconversion -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1 -O2 -g -DDISABLE_LOG -Wall -Wextra -pedantic -Wno-unused-parameter -Wstrict-prototypes -std=gnu99 -Wvla -fstack-protector --param ssp-buffer-size=4 -DINCLUDE_GENERATED_DECLARATIONS -DHAVE_CONFIG_H -D_GNU_SOURCE -I/build/neovim-2G4J_c/neovim-0.1.2ubuntu1+git201602171545+2283+17~ubuntu14.04.1/build/config -I/build/neovim-2G4J_c/neovim-0.1.2ubuntu1+git201602171545+2283+17~ubuntu14.04.1/src -I/build/neovim-2G4J_c/neovim-0.1.2ubuntu1+git201602171545+2283+17~ubuntu14.04.1/.deps/usr/include -I/build/neovim-2G4J_c/neovim-0.1.2ubuntu1+git201602171545+2283+17~ubuntu14.04.1/.deps/usr/include -I/usr/include/luajit-2.0 -I/build/neovim-2G4J_c/neovim-0.1.2ubuntu1+git201602171545+2283+17~ubuntu14.04.1/.deps/usr/include -I/build/neovim-2G4J_c/neovim-0.1.2ubuntu1+git201602171545+2283+17~ubuntu14.04.1/.deps/usr/include -I/build/neovim-2G4J_c/neovim-0.1.2ubuntu1+git201602171545+2283+17~ubuntu14.04.1/.deps/usr/include -I/build/neovim-2G4J_c/neovim-0.1.2ubuntu1+git201602171545+2283+17~ubuntu14.04.1/.deps/usr/include -I/usr/include -I/build/neovim-2G4J_c/neovim-0.1.2ubuntu1+git201602171545+2283+17~ubuntu14.04.1/build/src/nvim/auto -I/build/neovim-2G4J_c/neovim-0.1.2ubuntu1+git201602171545+2283+17~ubuntu14.04.1/build/include Compiled by buildd@lgw01-16 <issue_comment>username_2: Cannot reproduce on current HEAD 7e6980a . Maybe related: https://github.com/neovim/neovim/pull/6185<issue_closed>
<issue_start><issue_comment>Title: Exception rendering swagger format (2.0.3) username_0: I just converted to 2.x. Maybe I have something misconfigured? ` File "/home/gwhite/.virtualenvs/inteleca-server/lib/python3.5/site-packages/django/core/handlers/base.py", line 174, in get_response response = self.process_exception_by_middleware(e, request) File "/home/gwhite/.virtualenvs/inteleca-server/lib/python3.5/site-packages/django/core/handlers/base.py", line 172, in get_response response = response.render() File "/home/gwhite/.virtualenvs/inteleca-server/lib/python3.5/site-packages/django/template/response.py", line 160, in render self.content = self.rendered_content File "/home/gwhite/.virtualenvs/inteleca-server/lib/python3.5/site-packages/rest_framework/response.py", line 70, in rendered_content ret = renderer.render(self.data, media_type, context) File "/home/gwhite/.virtualenvs/inteleca-server/lib/python3.5/site-packages/rest_framework_swagger/renderers.py", line 16, in render data = self.get_openapi_specification(data) File "/home/gwhite/.virtualenvs/inteleca-server/lib/python3.5/site-packages/rest_framework_swagger/renderers.py", line 29, in get_openapi_specification return json.loads(codec.dump(data)) File "/home/gwhite/.virtualenvs/inteleca-server/lib/python3.5/site-packages/openapi_codec/__init__.py", line 34, in dump data = generate_swagger_object(document) File "/home/gwhite/.virtualenvs/inteleca-server/lib/python3.5/site-packages/openapi_codec/encode.py", line 8, in generate_swagger_object parsed_url = urlparse.urlparse(document.url) AttributeError: 'ReturnList' object has no attribute 'url' ` <issue_comment>username_1: @username_0 I'm not sure. It looks like something threw an error in the schema generator before it got to the OpenAPI codec / renderer class. Will need to investigate. <issue_comment>username_2: similar issue here: Request Method: GET Request URL: http://172.28.128.3:8080/api/docs/?format=json Django Version: 1.9.8 Exception Type: AttributeError Exception Value: 'dict' object has no attribute 'url' Exception Location: /opt/virtualenv/django/lib/python3.4/site-packages/openapi_codec/encode.py in generate_swagger_object, line 8 Python Executable: /opt/virtualenv/django/bin/python3 Python Version: 3.4.2 Python Path: ['/vagrant_zeta', '/opt/virtualenv/django/src/pefile', '/opt/virtualenv/django/lib/python3.4', '/opt/virtualenv/django/lib/python3.4/plat-x86_64-linux-gnu', '/opt/virtualenv/django/lib/python3.4/lib-dynload', '/usr/lib/python3.4', '/usr/lib/python3.4/plat-x86_64-linux-gnu', '/opt/virtualenv/django/lib/python3.4/site-packages'] Server time: Wed, 3 Aug 2016 11:21:16 +0000 <issue_comment>username_3: Hi @username_2 could you share the code in urls.py? <issue_comment>username_4: Duplicated of #486<issue_closed>
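For context, the wiring that django-rest-swagger 2.0.x expects looks roughly like the sketch below: the `OpenAPIRenderer` needs a `coreapi` Document produced by a schema generator, not the output of an ordinary list view (a `ReturnList` or plain dict), which is what the `AttributeError` above suggests it was handed. Exact imports and decorators can differ between versions, so treat this as an illustration rather than the project's documented answer.

```python
from rest_framework import response, schemas
from rest_framework.decorators import api_view, renderer_classes
from rest_framework_swagger.renderers import OpenAPIRenderer, SwaggerUIRenderer

@api_view()
@renderer_classes([SwaggerUIRenderer, OpenAPIRenderer])
def schema_view(request):
    # The renderers consume a coreapi.Document; SchemaGenerator builds one
    # from the registered API views instead of serializing model data.
    generator = schemas.SchemaGenerator(title='My API')
    return response.Response(generator.get_schema(request=request))
```

Routing a view like this at the docs URL (for example `/api/docs/` in the second report) feeds the renderer a proper schema document rather than view output.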
<issue_start><issue_comment>Title: fixes support for gzip username_0: Hello! The original gzip code wasn't working properly due to `extname` being returned with `undefined` prefixed to the string. To simplify things and take advantage of the new build pipeline, I made an assumption that gzip support should automatically be enabled only if [ember-cli-deploy-gzip](https://github.com/ember-cli-deploy/ember-cli-deploy-gzip) is part of the pipeline. That said, I used the new deployment context to check to see what files were gzipped, if any, and then apply the proper `Content-Encoding` to Azure Blob Storage. Let me know what you think. Cheers. <issue_comment>username_0: Thanks for the quick merge. When do you expect a new release to be pushed to NPM? <issue_comment>username_1: Already done - do you need me to release a new version of `ember-cli-deploy-azure` too? <issue_comment>username_0: Ah, I didn't see it show up in the Github releases as it isn't tagged in the repo. Hence, my question. As for a new release of `ember-cli-deploy-azure`, yes, I suppose it would make sense to update. Is that something you can take care of or do you need me to submit a PR? I'm only using the blob service or else I would have already done it. <issue_comment>username_1: Yep, also released `ember-cli-deploy-azure`
<issue_start><issue_comment>Title: Individual visit spectra should be in a Barycentric frame username_0: See: Troup et al. paper on substellar companions <issue_comment>username_0: This is about whether the `apVisit.xxx.fits` files are Barycentric, not about the measurements in `allVisit.fits` (those are Barycentric). <issue_comment>username_1: uh, issue naming issues @username_0 ?
<issue_start><issue_comment>Title: Background preview not updating when deleting the node username_0: Dynamo 0.9.1.4062, Windows 7, Revit 2015 Created a circle, deleted the node but circle remained in the background. <issue_comment>username_1: @username_0 - I've seen some issues related to geometry not cleaning up while using CodeBlockNodes to create geometry.If delete those CBN after making some changes to the script Dynamo is not removing the geometry from view.This is tracked in MAGN-9434. I would like to know if the issue you are seeing is related to CBN or the Geometry nodes is we can track it seperately. <issue_comment>username_0: I didn't use code block nodes but circle.ByPointAndRadius Regards Paolo Serra BIM Technical Consultant Autodesk Consulting AEC - EMEA MOBILE +39 344 138 6032 -------- Messaggio originale -------- <issue_comment>username_1: @username_0 - we have fixed the issue in PR - https://github.com/DynamoDS/Dynamo/pull/6157. Which is merged into daily build - Sorry its disturbing you. If you are still able to see this in 1.0 please let me know! <issue_comment>username_2: @username_1 I'm closing this issue because it's fixed :) @username_0 Please re-open if you're still having issues.<issue_closed>
<issue_start><issue_comment>Title: bsunanda:Run2-hcx70 Backport PR 13491(scenario with new HCAL TP) username_0: Additional scenario allowing new HCAL Trigger Primitive keeping 2015 layout of HCAL readout channels <issue_comment>username_0: @username_1 @username_3 please take a look at this PR for 8_0_X. If possible it be included in 8_0_1. <issue_comment>username_1: +1 <issue_comment>username_2: @username_3 may we get it in 800p3? <issue_comment>username_1: +1 <issue_comment>username_3: +1
<issue_start><issue_comment>Title: Documentation? username_0: Is there a canonical source for Drone's documentation? The README links to http://readme.drone.io/usage/overview/ (which in turn links to http://drone.readthedocs.org/en/latest/ which is dead) and https://github.com/drone/drone/blob/v0.2.1/README.md#builds. My googling also turned up http://docs.drone.io/ which doesn't seem up to date or is for an unreleased version of Drone (at least, the screenshots look a lot different from my installation). So... where should I go to find Drone documentation?<issue_closed> <issue_comment>username_1: readme.drone.io is the closest we have for the 0.3-alpha release, however, the state of documentation is poor and something that is being addressed in the 0.4-beta release. I recommend using the forum if you have questions about how something works gitter.im/drone/drone
<issue_start><issue_comment>Title: not handling "?" in url username_0: ![screen shot 2016-06-14 at 11 39 51 am](https://cloud.githubusercontent.com/assets/8654692/16028679/ededbeae-3224-11e6-8c60-28af46e72822.png)<issue_closed> <issue_comment>username_1: Thanks for reporting. It was being too smart at shell parsing, and assumed the `?` was part of a glob. It now ignores globs
<issue_start><issue_comment>Title: [WIP] Fix for not working ctrl/cmd + s on PSM date/slug username_0: closes #4556 - extracts PSM specific save method - refactoring to the was post is saved <issue_comment>username_1: Hey @username_0, this has been languishing for a bit - I'm going to close it to keep things clean around here, would be more than happy to reopen it if you come back to it :)
<issue_start><issue_comment>Title: Question: Preloading username_0: When I use .downloadOnly() to preload an image into disk cache, and while it's still downloading it, If I try to load it into an ImageView normally, will Glide detect that it was already preloading it, and continue to do so, or will it start a new download for the same url? I was searching in the wiki but I couldn't find it. <issue_comment>username_1: There's an internal mechanism to attach requests to existing ones. Download-only requests are normal requests. I think you'll need to specify `.diskCacheStrategy(SOURCE)` (if not automatic, I can't remember) on the preload and `ALL` on the normal load to pick up the "preloaded" image. Best is to try anyway to make sure all is well. You can enable logging (see debug wiki) to see what's going on under the hood, or replace the default DataFetcher with your own which logs each HTTP request. <issue_comment>username_0: Didn't need `.diskCacheStrategy()` on the `.downloadOnly()`, used `DiskCacheStrategy.SOURCE` on the loading part and it worked perfectly. Thanks!<issue_closed> <issue_comment>username_1: Yep, `GenericTranscodeRequest.getDownloadOnlyRequest` does `SOURCE` for you. So you just needed to tell Glide to use what was downloaded before. `diskCacheStrategy` controls both reads and writes to those two caches.
<issue_start><issue_comment>Title: np.dtype doesn't represent datetime64 with timezone username_0: Code to reproduce (using Pandas): from dateutil.parser import parse import numpy as np import pandas as pd df = pd.DataFrame([parse("Mar 10, 2016 11:20 PM EDT")], columns=['ts']) df.dtypes['ts'] == np.dtype('datetime64[ns, tzlocal()]') The last line above fails with TypeError: Invalid datetime unit in metadata string "[ns, tzlocal()]" <issue_comment>username_1: NumPy has never supported datetime64 with timezones -- pandas uses its own dtype system, as described in the issue above.<issue_closed>
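A short sketch of the same check written against pandas' own dtype machinery instead of `np.dtype`, assuming a reasonably recent pandas (where `DatetimeTZDtype` is exposed at the top level and `pd.api.types` helpers exist):

```python
import pandas as pd

df = pd.DataFrame(
    {"ts": pd.to_datetime(["2016-03-10 23:20:00"]).tz_localize("US/Eastern")}
)

# numpy rejects 'datetime64[ns, <tz>]' unit strings outright, but the
# equivalent comparison works against pandas' timezone-aware extension dtype:
print(df["ts"].dtype == pd.DatetimeTZDtype(tz="US/Eastern"))   # True
print(pd.api.types.is_datetime64tz_dtype(df["ts"]))            # True
```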
<issue_start><issue_comment>Title: CSS-based paths for L.Icon.Default. #4604, #3696, #4579 username_0: Takes some inspiration from #3559 (thanks,@sheppard!). This: * Allows overriding the retina URLs for `L.Icon.Default` (#3696) * Allows replacing the autodetected path for `L.Icon.Default` images (#4604) * The CSS method from #3559 *should* make IE8 compute paths properly (#4579) * Adds unit tests for the default images (just checking image size, which means default icon images are loaded) * Removes the phantomJS hack in the unit tests (by adding `leaflet.css` to the Karma conf), as the CSS method enables phantomJS to autodetect the path!! <issue_comment>username_1: This is very clean, which makes me happy. Really awesome work. The only potential issue I could see: @sheppard used an actual image for the test, would `background-image: url(images/);` be too dependent on browser behavior? Are there any browsers that would choke on this or generate a warning in the console? <issue_comment>username_2: I like this! If all support browsers are fine with this, me too. The only minor thing I'd change is to still support the global `L.Icon.Default.imagePath` for compatibility — e.g. if it is set, use it instead of autocomputing. <issue_comment>username_0: @username_2 Just tried it in IE8, it just needed a minor tweak. Will have a look at backwards compatibility of `L.Icon.Default.imagePath`. @username_1 In the browsers I've tested this, the CSS rule doesn't even make a network request for the image, it just sets the string in the style property. The exception is PhantomJS, which sets the style, makes a (failing) network request, and still works as expected. <issue_comment>username_1: @username_0 I will give your code a spin. I just tried creating a simple test page (on IIS) where a DIV had the `.leaflet-default-icon-path` class, and I got a 403 error when the `images\` folder exists and a 404 when it doesn't. I tried it by adding the DIV dynamically as well. But like I said, I will need to try it just as you have it. <issue_comment>username_0: It seems no errors are shown when this is used in a map, but shows errors when the unit tests are run. If this is an issue, we could replace `images/` with `images/marker-icon.png` in the CSS, and make the regexp a tiny bit more complicated. <issue_comment>username_3: I made an attempt to address the comments from @username_2, @sheppard and @hyperknot so that `options` are not overwritten, but otherwise functionally equivalent. Feel free to revert this if you feel there's some issue with this apprach. <issue_comment>username_0: @username_3 Looks good to me. I'll try to rebase to get rid of the merge conflict.
<issue_start><issue_comment>Title: Update version numbers for listed technologies in README.md username_0: The version numbers in install/download_tech have outpaced the documentation. Help them catch up. <issue_comment>username_0: I have a fix ready for this, I just need a branch that I have permission to push to. <issue_comment>username_1: @username_0 You can fork the repo, create a new branch, push changes on that repo and then you can make a pull request. Let us know if you have further questions.
<issue_start><issue_comment>Title: Added parallel collation for fms and fixed a small bug in '-d' implementation username_0: Paul's collation jobs were taking a long time, he was saving at high temporal resolution and doing ensembles, and my high temporal resolution 0.1 deg sims were taking 12+ hours to collate, so this is a fix of sorts. I think it would be better to submit multiple collate jobs, but that would have been a bigger change, so I settled for using multiprocessing to submit multiple jobs within one collate job. This adds config option collate_ncpus, to specify the number of cpus to be requested for the collate PBS submission. The drawback is that multiprocessor jobs can't be submitted to the copy queue on raijin, so normal or express must be specified. Care must be taken to also request enough memory for multiple collate jobs to run simultaneously. There was a bug in the '-d' directory path implementation, that it didn't work at all when invoked in a directory that wasn't the laboratory, which sort of defeats the purpose. In any case a minimal config.yaml must be supplied so that payu knows the model type. If the job name can't be determined payu-collate falls back to the name the PBS job after the directory path being collated. I had an issue with a trailing slash on the PBS job name interacting poorly with PBS. I sprinkled a few os.path.normpath calls around, but in the end the only one that matters is in collate_cmd.py. I left the others in as I didn't think they could do any harm, and might be beneficial. <issue_comment>username_1: Thanks, this looks interesting. Do we need to do a new release ASAP? <issue_comment>username_0: I've installed this as payu/dev version under /projects/v45 so Paul can access it. I'm not sure there is anyone else who needs any of these features just yet. It might be a good idea to test it for a while first in that case. <issue_comment>username_1: ok. I am doing multidim support for f90nml so I'd rather get that working before an update. <issue_comment>username_0: Yup. Might sneak a few more features in too ... :) <issue_comment>username_0: I stuffed up and introduced a bug, so I've fixed that and added a couple of other things, but I realised that by default, as it is currently written, when running payu-collate it will run in parallel and use as many CPUs as are available. This could be considered a feature or a bug, depending on who pays your salary. Should I change this behaviour? <issue_comment>username_1: "as many CPUs as are available" as in as many requested for the PBS job? As many on the node? Or raijin? I don't see the first as a problem. If you request 16 then you should use 16. The other cases could be a problem though. <issue_comment>username_0: Yeah, I realised that payu-run is still faithful to the ncpus setting in config.yaml, so payu-collate should be too. I'll change that before I commit & pull. <issue_comment>username_1: "ncpus" might be the problem.. maybe a collate_ncpus... (or maybe a new collate subcategory) <issue_comment>username_0: There is already a collate_ncpus config option, I just wasn't doing anything with it when payu-collate was called. I will honour that setting. As I said, makes it consistent, which is a virtue. <issue_comment>username_0: I don't think we can have a new collate subcategory without break previous scripts can we? <issue_comment>username_1: ah.. well you just check for both for backwards compatibility.. but its not really necessary. Dont worry about it
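As an illustration of the "several collations inside one PBS job" approach described above, here is a rough standard-library sketch (Python 3); the `mppnccombine` command, its flags, and the file names are placeholders rather than payu's actual invocation:

```python
import multiprocessing
import subprocess

def collate_one(path):
    # Placeholder collation command; the real call and flags live in payu.
    return subprocess.call(["mppnccombine", "-64", path])

def collate_all(paths, collate_ncpus):
    # Run up to collate_ncpus collations at once within the single job,
    # mirroring the collate_ncpus config option discussed above.
    with multiprocessing.Pool(processes=collate_ncpus) as pool:
        return pool.map(collate_one, paths)

if __name__ == "__main__":
    print(collate_all(["ocean_daily.nc.0000", "ocean_daily.nc.0004"], collate_ncpus=2))
```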
<issue_start><issue_comment>Title: TroubleShooting username_0: I need help with making a wiki page for common issues and how to fix them; here is what is done so far: [[Troubleshooting]]. If you can, please post some common errors you have run into while using MultiMC (like the Java virtual machine one), along with how to solve them. <issue_comment>username_1: One of the most common issues comes from people trying to use mods for a different Minecraft version than they are meant for. <issue_comment>username_2: Also often occurring: putting Forge/LiteLoader mods in Jarmods (usability issue) <issue_comment>username_0: Added both of these. <issue_comment>username_0: Going to close this for now; it can be reopened if any other common things start appearing (other than the ones listed here). If only GitHub had a feature that made you read a wiki page before reporting, imagine the benefits.<issue_closed>
<issue_start><issue_comment>Title: mmap over 4gb? username_0: I came across your blog post about mmapping files over 2 gb. http://nyeggen.com/post/2014-05-18-memory-mapping-%3E2gb-of-data-in-java/ I tried it, and I get EXCEPTION_ACCESS_VIOLATION when I try to access a file position over 4gb. Java frames: (J=compiled Java code, j=interpreted, Vv=VM code) j sun.misc.Unsafe.copyMemory(Ljava/lang/Object;JLjava/lang/Object;JJ)V+0 Have you actually gotten this to work with large files? (I'm using a 64 bit JVM on Windows.) <issue_comment>username_1: I have used it on 64 bit Linux & OS X successfully. Googling that error I see it might be related to UAC (seems to be a recurring issue in Minecraft?), but unfortunately I can't debug on Windows. https://java.com/en/download/help/exception_access.xml <issue_comment>username_0: Yup, confirmed that it does work fine on Ubuntu. So, it's probably a limitation of Oracle's implementation of the mmap function on Windows. Probably nothing you can do. Closing the issue...<issue_closed>
<issue_start><issue_comment>Title: Avoid unnecessary locks in ROOT function GetBaseClassOffset() username_0: The ROOT function GetBaseClassOffset() does locking that seriously hurts threading efficiency, especially in analysis jobs using pat::. In the vast majority of the cases (in fact in all cases so far observed), the actual offset to the base class object from the derived class object was 0. This PR replaces GetBaseClassOffset() with a call to a function of a class template that returns 0 by default. In cases where the actual offset is non-zero, the class can be specialized to return the proper value. In cases where the true offset is non-zero, and the class has not been properly specialzed, a segfault should result, which will be noticed. <issue_comment>username_0: @davidlange6 The changes to analysis code consist entirely of adding two missing std includes that were made visible by removing unnecessary includes in the framework. So, this PR should not be held up by a late analysis signature. <issue_comment>username_1: +1 <issue_comment>username_1: @smuzaffar is this test still running?
<issue_start><issue_comment>Title: Map Pack Support username_0: Now that we have a new map available (#85), there should be a simple way to add/remove map packs. Ideally it would be nice to be able to switch between "map packs" via the config file (that way the user does not need to download 2 different versions of the skin; they can just change a config parameter). Currently I'm viewing the each map pack to be a folder with the following folder structure: ``` - vanilla-ets2 +- /tiles +- map.js - promods-rusmap +- /tiles +- map.js ..... and so forth ``` Then in the config the user will enter the folder name of the map they want to use (i.e. `vanilla-ets2` or `promods-rusmap`). <issue_comment>username_1: This is a good idea, especially since I also want to generate a map for ATS (I still need to do some editing to make that work though). One thing to take into consideration: Do you want to let that config parameter point to a folder or to a config file that refers to folders? The second one may be a bit more work, but in the long run it may be more flexible. However, it could also make it more complex than it needs to be. <issue_comment>username_0: My plan was to introduce two new config parameters: `ets2MapFolder` and `atsMapFolder`. The value for the parameter would be the folder where the `tiles` folder and `map.js` file would be (looking at the folder structure in my original post). As long as each map pack follows the correct requirements (tiles in the `tiles` folder, creating a `map.js` file implementing the necessary functions, etc), this approach should be okay. <issue_comment>username_0: Well, I started work on this, but there seems to be some magic limit on the length of a URL on the telemetry server. Until I hear back from Funbit at Funbit/ets2-telemetry-server#79, this is essentially blocked. <issue_comment>username_0: Funbit changed the subdirectory limit from 5 to 9, which should solve the majority of our issues.<issue_closed>
<issue_start><issue_comment>Title: Feature/pep8 username_0: Here's another sweep through the code for pep8iness. Mostly whitespace fixes, as well as a few stylistic changes/modernization. A couple of missing imports too. I did not include long line fixing for tests/. I will include that in a separate PR. <issue_comment>username_1: looks awesome, thanks man. all tests passing
<issue_start><issue_comment>Title: mako <%include: TemplateLookupException (cache problem?) username_0: A comment in `/usr/share/pyshared/salt/utils/mako.py` states: ``` If URL is an absolute path then it's treated as if it has been prefixed with salt://. ``` This doesn't work, but prefixing with salt:// does. `<%include file="/machine/ns/master/gen_serial.mako"/>` gives a TemplateLookupException; `<%include file="salt://machine/ns/master/gen_serial.mako"/>` works.
<issue_start><issue_comment>Title: Specify padding for base64URL encoded values username_0: We may want to specify the padding for base64URL encoding in https://interledger.org/five-bells-condition/spec.html. From [rfc4648](https://tools.ietf.org/html/rfc4648) : ``` In some circumstances, the use of padding ("=") in base-encoded data is not required or used. In the general case, when assumptions about the size of transported data cannot be made, padding is required to yield correct decoded data. Implementations MUST include appropriate pad characters at the end of encoded data unless the specification referring to this document explicitly states otherwise. ``` It looks like some implementation (like the one in this directory) drops padding. Implementations in other languages may not (for instance in Java and Haskell), creating inconsistencies. <issue_comment>username_1: Good point! Our decision is to use no padding. If padding appears, the condition should be considered invalid.
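A minimal standard-library sketch of what that decision implies for an implementation; the function names are illustrative and not taken from the five-bells-condition code:

```python
import base64

def b64url_encode(data: bytes) -> str:
    # Drop the trailing '=' padding on output.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def b64url_decode(text: str) -> bytes:
    # Padded input is treated as invalid; re-pad internally only for decoding.
    if "=" in text:
        raise ValueError("padding is not allowed in base64url values")
    return base64.urlsafe_b64decode(text + "=" * (-len(text) % 4))

print(b64url_encode(b"\xfb\xef"))                       # '--8' (no '=' emitted)
assert b64url_decode(b64url_encode(b"\xfb\xef")) == b"\xfb\xef"
```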
<issue_start><issue_comment>Title: Get Response body KW is not reading the json object which has got array. username_0: Hi, I am trying to read the response body below using the Get Response Body KW. { "addresses": [ { "id": "490", "city": "city93111", "country": "country93111", "house": "house93111", "state": "state93111", "street": "street93111", "zip": 93111, "firstname": "singh", "lastname": "singh", "defaultAddress": 1, "phone": null, "gender": null, "landmark": null, "area": null }, { "id": "491", "city": "city93111", "country": "country93111", "house": "house93111", "state": "state93111", "street": "street93111", "zip": 93111, "firstname": "singh", "lastname": "singh", "defaultAddress": 0, "phone": null, "gender": null, "landmark": null, "area": null } ] } But the response it is actually reading is: ${Response} = {"addresses":[]} Can you please help me with how to get the content inside the list? Quick response is very much appreciated. <issue_comment>username_1: Could you provide a little more context? What is your actual RF code, what is your server's HTTP response?
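Independently of the Robot Framework keyword involved, once the full body shown above is actually returned, pulling values out of the `addresses` array is plain JSON navigation. A generic Python illustration (not RF syntax) is below; if the body really comes back as `{"addresses":[]}`, the problem is upstream of this parsing step, which is why the server response was requested.

```python
import json

body = '{"addresses": [{"id": "490", "city": "city93111"}, {"id": "491", "city": "city93111"}]}'
data = json.loads(body)

print(len(data["addresses"]))                  # 2
print(data["addresses"][0]["city"])            # city93111
print([a["id"] for a in data["addresses"]])    # ['490', '491']
```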
<issue_start><issue_comment>Title: Add ability to disable or filter object embed - admin / settings username_0: Object embed is a source of bots-net styled Denial of Service attacks for chat based systems For the admin / settings page, it would be nice to have: 1. setting to disable embed altogether 2. setting to replace iFramely oembed with simple-embed (we may not have access to item 3 with iFramely) 3. setting to configure [access, deny] filter rules for "after embed transforms", "before render", URLs item 3 is likely a pre-requisite for corporate installs <issue_comment>username_1: https://github.com/RocketChat/Rocket.Chat/pull/230 <issue_comment>username_1: Depends on #222<issue_closed>
<issue_start><issue_comment>Title: Capstone-window pypi package is broken username_0: (Test) C:\Users\mic>dir Test\Lib\site-packages\users\mic\test\Lib\site-packages\capstone\capstone.dll Volume in drive C has no label. Volume Serial Number is 46F6-6EB2 Directory of C:\Users\mic\Test\Lib\site-packages\users\mic\test\Lib\site-packages\capstone 02/11/2016 02:44 AM 2,599,424 capstone.dll 1 File(s) 2,599,424 bytes 0 Dir(s) 2,817,589,248 bytes free (Test) C:\Users\mic>dir Test\Lib\site-packages\capstone\capstone.dll Volume in drive C has no label. Volume Serial Number is 46F6-6EB2 Directory of C:\Users\mic\Test\Lib\site-packages\capstone File Not Found (Test) C:\Users\mic>dir Test\Lib\site-packages\capstone\ Volume in drive C has no label. Volume Serial Number is 46F6-6EB2 Directory of C:\Users\mic\Test\Lib\site-packages\capstone 02/11/2016 02:44 AM <DIR> . 02/11/2016 02:44 AM <DIR> .. 02/11/2016 02:44 AM 1,882 arm.py 02/11/2016 02:44 AM 4,017 arm.pyc 02/11/2016 02:44 AM 1,923 arm64.py 02/11/2016 02:44 AM 4,605 arm64.pyc ``` It seems to be putting the dll in the wrong place. It seems to be appending the absolute path of where it wants to put it (in site-packages) to the end of the site-packages. I think the problem is that the setup.py is using data_files. Searching the internet it appears that data_files is not the correct thing to use because you can not reliably predict where the file will end up. Since you need to ensure the file ends up in the capstone package we should probably use package_data. Additionally distutils is deprecated and the current advice it to use setuptools. I changed the setup.py a bit: ``` --- setup.py 2015-10-23 22:54:26.443463855 +0200 +++ /tmp/setup.py 2016-02-11 12:15:18.324714375 +0100 @@ -9,8 +9,8 @@ from distutils import log from distutils import dir_util from distutils.command.build_clib import build_clib -from distutils.command.sdist import sdist -from distutils.core import setup +from setuptools.command.sdist import sdist +from setuptools import setup from distutils.sysconfig import get_python_lib # prebuilt libraries for Windows - for sdist @@ -55,6 +55,7 @@ dir_util.copy_tree("../../arch", "src/arch/") dir_util.copy_tree("../../include", "src/include/") + dir_util.copy_tree("../../msvc/headers", "src/msvc/headers") src.extend(glob.glob("../../*.[ch]")) src.extend(glob.glob("../../*.mk")) @@ -77,6 +78,10 @@ """Reshuffle files for distribution.""" def run(self): + try: + os.unlink("capstone/capstone.dll") + except: pass + # if prebuilt libraries are existent, then do not copy source if os.path.exists(PATH_LIB64) and os.path.exists(PATH_LIB32): return sdist.run(self) @@ -108,10 +113,10 @@ if SYSTEM in ("win32", "cygwin"): # if Windows prebuilt library is available, then include it if is_64bits and os.path.exists(PATH_LIB64): - SETUP_DATA_FILES.append(PATH_LIB64) + shutil.copy(PATH_LIB64, "capstone") return elif os.path.exists(PATH_LIB32): - SETUP_DATA_FILES.append(PATH_LIB32) + shutil.copy(PATH_LIB32, "capstone") [Truncated] packages=['capstone'], @@ -186,6 +190,8 @@ sources=dummy_src() ), )], - - data_files=[(SITE_PACKAGES, SETUP_DATA_FILES)], + zip_safe=False, + package_data = { + "capstone": ["*.dll"], + } ) ``` This does a couple of things: * It uses setuptools instead of distutils. This makes it possible to build wheels. * It copies the right dll into the capstone directory. It will then be included as a package_data dependency when you install it. * For sdist it removes the dll so it does not end up in the source. 
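Condensed from the diff above, the core of the packaging change is to ship the DLL as package data so it lands inside the `capstone` package directory rather than an unpredictable `data_files` location. A stripped-down sketch follows (placeholder version number, not the project's full setup.py):

```python
from setuptools import setup

setup(
    name="capstone",
    version="0.0.0",  # placeholder
    packages=["capstone"],
    # capstone.dll is copied into capstone/ before packaging, so package_data
    # installs it next to __init__.py where ctypes can find it.
    package_data={"capstone": ["*.dll"]},
    zip_safe=False,
)
```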
Finally I would like to ask why the need for a whole new package - `capstone-windows`? I understand that we want to distribute binary packages on windows but why not just upload binary wheels to the main capstone pypi package? This is the supported way of shipping binary distributions for users, giving them the option of building if they want to. Shipping capstone in this way makes users of the package have to twiddle their dependencies depending on the OS (e.g. in our project we need to add capstone-windows on windows and capstone on other OSs). <issue_comment>username_1: the reason i put out a separate package for Windows is because the binary size of Windows is quite big (it must include compiled DLL). perhaps having setup.py to download DLLs separately when it installs is a better approach? <issue_comment>username_0: There is no need to do that - pip will fetch the wheel if it needs to and compiler the source if it doesnt. The wheels are separate packages per OS (so you have one wheel for x86 one for amd64) and pip will get the right one. <issue_comment>username_1: But compile from source on Windows is a big issue. This is why our capstone-windows package was introduced with precompiled DLL inside. <issue_comment>username_0: Will you be uploading windows binary wheels for the main capstone package as well? This will help our dependency management because we can just ignore capstone-windows package and use capstone everywhere. <issue_comment>username_1: the most important reason why we have `capstone-windows` package is that it includes precompiled DLLs, so those on Windows without compiler (MSVC) can still install Capstone. meanwhile package `capstone` is for those who want to compile from source on any machine - including Windows with MSVC readily available. now if we want to combine these 2 packages into 1 only, the question is: how can `setup.py` figures out when users want to compile from source, and when they just want to install precompiled binaries? <issue_comment>username_0: In case you are not familiar with wheels: <issue_comment>username_0: In case you are not familiar with wheels: http://pythonwheels.com/ They are binary packages for specific OS versions. If you publish wheels users will just get the binary package without needing to compile it. But if they want to they can always compile it too. It is the very same setup.py used in all cases. <issue_comment>username_1: no i am not familiar with wheels. with PR #589, can you please explain the steps users need to do compile from source, and the steps they need to do to install from binaries (without having to compile)? <issue_comment>username_0: Sure - users just need to do: ``` pip install capstone ``` If there are wheels available for their OS it will just install those - it takes a second or two. If there is no wheel available for their OS it will download the source package and try to build it. At this time, if they dont have compilers, it will fail but otherwise it should just go ahead and compile as normal. As a packager, one needs to do this: ``` python setup.py bdist_wheel ``` to make wheels on the windows system with compilers etc. The produced wheel goes in `dist` you upload it with `twine upload dist/*whl`. <issue_comment>username_1: does this limit the choice for users? for example, in the case there are wheels (Windows), "pip" just install the binaries. but what if users want to compile from source instead? 
currently we dont have this problem because we purposely separate binary & source packages ("capstone-windows" & "capstone" PyPi). so users can decide what to do (either install from binaries, or compile from source), rather than let "pip" decides that for themselves. do i miss something here? <issue_comment>username_0: If you want to compiler from source: ``` pip install --no-use-wheel ``` Will force a recompile. <issue_comment>username_1: great, this answers my concern. so depending on the environment of the install machine, the right wheels will be downloaded separately by "pip", and OSX will not download Windows DLLs, correct? will look closer at your PR, thanks! <issue_comment>username_0: Yeah you can see this here for example: https://pypi.python.org/pypi?:action=files&name=pytsk3&version=20160207.post1 This package has one OSX version and two windows versions (32 and 64 bits). All other OSs are built from source as normal. PyPi currently have a policy that no wheels are hosted for Linux - I guess there are just too many flavors of Linux to support and compiling on Linux is much easier.<issue_closed> <issue_comment>username_2: Good day, I have just recently tried to install capstone on windows and I get the same error: ``` raise ImportError("ERROR: fail to load the dynamic library.") ImportError: ERROR: fail to load the dynamic library.``` I have downloaded https://pypi.python.org/pypi/capstone-windows and executed "setup.py install"; I have even tried installing it using pip (pip.exe install capstone // pip.exe install capstone-windows); I have downloaded the binaries (windows core engine and Python module for Windows - Binaries ) from http://www.capstone-engine.org/download.html But I still can't get it to work on my windows 7 (64 or 32 bit). Did someone end up finding a solution for this? Thank <issue_comment>username_1: we have Python modules prebuilt at http://www.capstone-engine.org/download.html, you can install those .MSI files. <issue_comment>username_2: Hi username_1 Yes, I installed those .msi files (64 bit and tried 32 bit) but I still get the same error <issue_comment>username_1: something is wrong with your Python path. can you overwrite your `__init__.py` (that was installed from .MSI file) with https://github.com/username_1/capstone/blob/master/bindings/python/capstone/__init__.py, then comment out a debug line https://github.com/username_1/capstone/blob/master/bindings/python/capstone/__init__.py#L218, then try again & report the output here? <issue_comment>username_1: you can see that it tries to load capstone.dll from those places, but failed. now can you find exactly where capstone.dll is installed in your machine? <issue_comment>username_2: Yes, it is installed in "C:\Python27\Lib\site-packages\capstone\lib" directory aswell as the C:\Python27\Lib\site-packages\capstone directory. I just don't get why it can't load them. <issue_comment>username_1: i compiled capstone.dll using VS2015. my best guess is that you dont have the right version of MSVCRT. you may try to download and install MSVCRT from https://www.microsoft.com/en-sg/download/details.aspx?id=48145 <issue_comment>username_2: I had already installed MSVCRT from the link you mentioned. But I still had no luck. I will try compile it with VS as per https://github.com/username_1/capstone/blob/master/COMPILE_MSVC.TXT and see how it goes and let you know. 
Thanks <issue_comment>username_3: I had the same issue and the MSVCRT Redistributable for VS2015 fixed it for me <issue_comment>username_0: Just FYI - I made this pypi package which should build by itself on windows without anything special installed. Just install the usual VC for python compilers (if you dont have them pip will print the URL to get them from). https://github.com/rekall-innovations/rekall-capstone ``` pip install rekall-capstone ```
<issue_start><issue_comment>Title: Avoiding re-render with functions in props username_0: When the app has a single state (e.g, Redux), updating the state causes the top-level component to redraw. Components that take a function in props consider the props changed, if the function is created in the ancestor's `render` method. For example: ``` Button( Button.Props( s"Fetch", Callback { props.dispatch(FetchDataAction) })) ``` I worked around this by storing the callback in a `val` outside `render`. However, with Redux, for example, that requires annoying boilerplate: Redux `dispatch` method is made available in the props, so the binding needs to be done in `connect`, and for every action (and for every argument set in case the action takes arguments). ``` class Actions(dispatch: Redux.ScalaDispatchFunction) { val fetchData = Callback { dispatch(FetchDataAction) } val completeTodo = (index: Int) => Callback { dispatch(CompleteTodoAction(index)) } ... } ... Button( Button.Props( s"Fetch", props.actions.fetchData)) ... binding magic in connect ``` Is there a better way to avoid a function causing the re-render? I needed to define `shouldComponentUpdate` explicitly for each component: ``` .shouldComponentUpdate { f => f.$.props != f.nextProps } ``` I have a feeling that I should not need to do that, but that it should be the default behavior. <issue_comment>username_0: Ok, I found https://github.com/japgolly/scalajs-react/blob/master/doc/PERFORMANCE.md, which I expect will answer my questions.<issue_closed>
<issue_start><issue_comment>Title: Smooth out post flight / MAVLink FTP usability quirks username_0: @DonLakeFlyer I'm not sure if that's fun for your out of town time, but one of the challenges our flight testers have is to pull logs off the SD card. There are two main aspects to it: 1) UI: The file transfer dialog is not quite accessible and probably needs a default location so users can find it. 2) Transfer time: Because each data packet has to be acked from the GCS, the latency between send and receive means most of the time both systems spend time idling. If you would implement the "multiple packets in flight" scheme you discussed earlier that would probably increase throughput dramatically. Thanks! <issue_comment>username_0: Closing as stale.<issue_closed>
<issue_start><issue_comment>Title: Add functional style syntax (prefix notation support) username_0: For identifiers that are not defined, treat them as method calls. Example: ```ruby lc("STRING") == "STRING".lc; # when the function "lc" is not defined by the user ```<issue_closed> <issue_comment>username_0: Added here: https://github.com/username_0/sidef/commit/19d18a9504546a15faf3b0edd865f09b7989a61f Examples: https://github.com/username_0/sidef/blob/master/scripts/functional_style.sf
<issue_start><issue_comment>Title: Offline: Make it possible to store (and load) maps on more than just the first of externalFilesDirs() username_0: I just browsed through the code and it looks like currently it is only possible to download regions to the externalFilesDir (i.e. the first result of getExternalFilesDirs(), which means that right now it's not possible to store an OfflineRegion on an external sdcard). Would it be possible to allow creation of an OfflineRegion and providing a preferred/mandatory storage location?<issue_closed> <issue_comment>username_1: No, sorry. The offline implementation depends on all offline regions being placed into the same database file on disk.
<issue_start><issue_comment>Title: Odd behaviour when using a virtual foreach as the sole child of a template username_0: Say I want to use templates to insert `td`s:

    <table data-bind="foreach:$data">
      <tr>
        <td data-bind="text:template_name"></td>
        <!-- ko template: { name: template_name, data: template_data } --><!-- /ko -->
      </tr>
    </table>

What works is having a template insert just one `td`:

    <script type="text/html" id="a-single-cell">
      <td data-bind="text:$data"></td>
    </script>

Or having a template insert multiple `td`s, if it uses a nested template:

    <script type="text/html" id="this-works">
      <!-- ko foreach: $data -->
        <!-- ko template: { name: "a-single-cell", data: $data } --><!-- /ko -->
      <!-- /ko -->
    </script>

But, what doesn't work is if the template tries to define all the `td`s itself:

    <script type="text/html" id="this-breaks">
      <!-- ko foreach: $data -->
        <td data-bind="text:$data">curious</td>
      <!-- /ko -->
    </script>

The `td` elements are never inserted - though oddly enough their original text is. Unless, that is, the template is hardcoded to also insert another `td` first. That works:

    <script type="text/html" id="this-also-works">
      <td>and curiouser</td>
      <!-- ko foreach: $data -->
        <td data-bind="text:$data">no?</td>
      <!-- /ko -->
    </script>

For completeness, the javascript:

    var cells = [ "A", "B", "C" ];
    ko.applyBindings([
      { template_name: "a-single-cell", template_data: "I'm the first row"},
      { template_name: "this-works", template_data: cells },
      { template_name: "this-breaks", template_data: cells },
      { template_name: "this-also-works", template_data: cells },
    ])

[Here's a jsfiddle of the complete bug using Knockout-3.4.0](http://jsfiddle.net/twbjv8e1/2/).
<issue_comment>username_1: Thanks @username_0 – Is it this problem: https://github.com/knockout/knockout/issues/1759#issuecomment-91534874 ?
<issue_comment>username_0: Nope, the behaviour's the same when I [manually include the `<tbody>` tag](http://jsfiddle.net/twbjv8e1/3/). And I forgot to mention, this is Firefox-42.0 (though I've seen it in earlier ones).
<issue_comment>username_1: Got it. That is weird. And broken. :o) Thanks for the detailed report. Will have a look, hopefully shortly!
<issue_comment>username_0: And I'm also seeing the same behaviour in Chrome-47.0.2526.73. Complete html for testing:

    <!doctype html>
    <html lang="en">
    <head>
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
      <title>Knockout bug</title>
      <style>
        table { border-collapse: collapse }
        td { border: 1px solid black }
      </style>
    </head>
    <body>

      <!-- a template can insert one td -->
      <script type="text/html" id="a-single-cell">
        <td data-bind="text:$data"></td>
      </script>

      <!-- a template can insert multiple tds...
      if it's using a nested template-->
      <script type="text/html" id="this-works">
        <!-- ko foreach: $data -->
          <!-- ko template: { name: "a-single-cell", data: $data } --><!-- /ko -->
        <!-- /ko -->
      </script>

      <!-- but if it tries to insert multiple tds directly, it fails -->
      <script type="text/html" id="this-breaks">
        <!-- ko foreach: $data -->
          <td data-bind="text:$data">curious</td>
        <!-- /ko -->
      </script>

      <!-- unless there's at least one hardwired td first -->
      <script type="text/html" id="this-also-works">
        <td>and curiouser</td>
        <!-- ko foreach: $data -->
          <td data-bind="text:$data">no?</td>
        <!-- /ko -->
      </script>

      <table><tbody data-bind="foreach:$data">
        <tr>
          <td data-bind="text:template_name"></td>
          <!-- ko template: { name: template_name, data: template_data } --><!-- /ko -->
        </tr>
      </tbody></table>

      <script type="text/javascript" src="http://knockoutjs.com/downloads/knockout-3.4.0.js"></script>
      <script type="text/javascript">
        var cells = [ "A", "B", "C" ];
        ko.applyBindings([
          { template_name: "a-single-cell", template_data: "I'm the first row"},
          { template_name: "this-works", template_data: cells },
          { template_name: "this-breaks", template_data: cells },
          { template_name: "this-also-works", template_data: cells },
        ])
      </script>
    </body>
    </html>

<issue_comment>username_2: `simpleHtmlParse` doesn't strip initial comments and so doesn't see the `<td>`. Another workaround would be to include jQuery so that Knockout uses its parsing code. Also if you only need to target modern browsers, you can use `<template>` instead of `<script>` for the templates.
<issue_start><issue_comment>Title: Fix recommended mime-types require username_0: `mime/types/columnar` wasn’t added until 2.6.1.
<issue_comment>username_1: Whoops
<issue_comment>username_1: thanks!
<issue_comment>username_0: Actually, this should probably be tweaked to be `>= 2.6.1, < 4` so that `derailed_benchmarks` guidelines don’t lock people to an obsolete version of mime-types (and mime-types 3 loads columnar by default). To be fair, this wasn’t released until November (a month after this PR was filed), but the recommendation should be updated.
<issue_start><issue_comment>Title: 404 error when accessing a shared table username_0: **Steps**

From a brand new organisation, with two new users (A -the owner-, and B):
* Create a new table in user B. Add some rows randomly. Share it with read permissions to user A
* Log into user A, go to the datasets dashboard and try to access the table

**Result**

The table shows fine and you're able to edit it:

![image](https://cloud.githubusercontent.com/assets/1730320/8088070/1b8412c2-0f9f-11e5-984e-8267527039eb.png)

The following error appears:

*GET https://orgtestded02.cartodb-staging.com/u/userorgtestded02/api/v1_1/tables/untitled_table 404 (Not Found)J.support.ajax.J.ajaxTransport.send @ cartodb.uncompressed.js:27J.extend.ajax @ cartodb.uncompressed.js:27a.sync @ cartodb.uncompressed.js:1872Backbone.sync @ app.js:85f.extend.fetch @ cartodb.uncompressed.js:857cdb.decorators.elder.b @ cartodb.uncompressed.js:25856cdb.decorators.elder.e.elder @ cartodb.uncompressed.js:25873cdb.core.Model.Backbone.Model.extend.fetch @ cartodb.uncompressed.js:26400cdb.decorators.elder.b @ cartodb.uncompressed.js:25856cdb.decorators.elder.e.elder @ cartodb.uncompressed.js:25873cdb.decorators.elder.b @ cartodb.uncompressed.js:25859cdb.decorators.elder.e.elder @ cartodb.uncompressed.js:25873cdb.admin.CartoDBTableMetadata.cdb.ui.common.TableProperties.extend.fetch @ table.js:561(anonymous function) @ cartodb_layer.js:44a.Events.trigger @ cartodb.uncompressed.js:676f.extend.change @ cartodb.uncompressed.js:986f.extend.set @ cartodb.uncompressed.js:827f.extend.fetch.b.success @ cartodb.uncompressed.js:853J.Callbacks.l @ cartodb.uncompressed.js:25J.Callbacks.m.fireWith @ cartodb.uncompressed.js:25d @ cartodb.uncompressed.js:27J.support.ajax.J.ajaxTransport.send.d @ cartodb.uncompressed.js:27*

*userorgtest4ded02.untitled_table:1 Uncaught SyntaxError: Unexpected end of input*

Failing request in detail:

*Request URL:https://orgtestded02.cartodb-staging.com/u/userorgtestded02/api/v1_1/tables/untitled_table Request Method:GET Status Code:404 Not Found*

**More info**

@juanignaciosl has deactivated the flag `active_record_layers_endpoint` which seemed to be the issue.

cc @username_1<issue_closed>
<issue_start><issue_comment>Title: API: Index.map should return Index rather than array username_0: ``` python
df2=pd.DataFrame([[0,1],[2,3]],columns=['c1','c2'],index=['i1','i2'])
df2.index.name='index'
df2.to_html()
```
``` html
<table border="1" class="dataframe">\n <thead>\n <tr style="text-align: right;">\n <th></th>\n <th>c1</th>\n <th>c2</th>\n </tr>\n <tr>\n <th>index</th>\n <th></th>\n <th></th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>i1</th>\n <td>0</td>\n <td>1</td>\n </tr>\n <tr>\n <th>i2</th>\n <td>2</td>\n <td>3</td>\n </tr>\n </tbody>\n</table>
```
``` python
df2.index = df2.index.map(lambda x: x.upper())
df2.to_html()
```
``` html
<table border="1" class="dataframe">\n <thead>\n <tr style="text-align: right;">\n <th></th>\n <th>c1</th>\n <th>c2</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th>I1</th>\n <td>0</td>\n <td>1</td>\n </tr>\n <tr>\n <th>I2</th>\n <td>2</td>\n <td>3</td>\n </tr>\n </tbody>\n</table>
```
I think `map` should accept `inplace`

```
INSTALLED VERSIONS
------------------
commit: None
python: 2.7.11.final.0
python-bits: 64
OS: Windows
OS-release: 7
machine: AMD64
processor: Intel64 Family 6 Model 60 Stepping 3, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None
pandas: 0.18.0
nose: 1.3.7
pip: 8.1.1
setuptools: 18.2
Cython: None
numpy: 1.10.4
scipy: 0.17.0
statsmodels: None
xarray: None
IPython: 4.1.2
sphinx: None
patsy: None
dateutil: 2.5.1
pytz: 2016.2
blosc: None
bottleneck: 1.0.0
tables: None
numexpr: 2.5
matplotlib: 1.5.1
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
httplib2: None
apiclient: None
sqlalchemy: 1.0.12
pymysql: None
psycopg2: None
jinja2: 2.8
boto: None
```<issue_closed>
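Editor's note: a minimal sketch of the behaviour reported in the thread above, plus a workaround. This is not code from the thread; it assumes pandas 0.18-era semantics, where `Index.map` returns a plain NumPy array and therefore drops the index `name`, which is why the "index" header row vanishes from `to_html()`.

```python
import pandas as pd

# Reproduce the reporter's frame: a named index shows up as an
# extra header row in to_html().
df2 = pd.DataFrame([[0, 1], [2, 3]], columns=['c1', 'c2'], index=['i1', 'i2'])
df2.index.name = 'index'

# On pandas 0.18, Index.map returns a numpy.ndarray, so assigning the
# result back replaces the named Index with an unnamed one.
mapped = df2.index.map(lambda x: x.upper())

# Workaround (an assumption, not the reporter's code): wrap the mapped
# values back into an Index and restore the name explicitly.
df2.index = pd.Index(mapped, name='index')

print(df2.to_html())  # the "index" header row is preserved
```

Later pandas releases changed `Index.map` to return an `Index`, which addresses the underlying complaint without needing an `inplace` flag.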
<issue_start><issue_comment>Title: Update Geoserve to load GeoNames from FTP site username_0: Update the install process to load from the FTP site (instead of the GeoNames site). This way all servers will have the same copy of the GeoNames dataset.

There are four files that are loaded during the Geoserve install process; these files need to be transferred from the GeoNames site (http://download.geonames.org/export/dump/) to the FTP server (ftp://hazards.cr.usgs.gov/web/hazdev-geoserve-ws/):

- cities1000.zip
- US.zip
- admin1CodesASCII.txt
- countryInfo.txt<issue_closed>
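Editor's note: a rough, hedged illustration of the download step described in the issue above. This is not the project's actual install script; the file layout on the FTP mirror is assumed from the paths listed in the issue.

```python
import urllib.request

# FTP mirror named in the issue; the GeoNames originals live at
# http://download.geonames.org/export/dump/
FTP_BASE = "ftp://hazards.cr.usgs.gov/web/hazdev-geoserve-ws/"

# The four files loaded during the Geoserve install process.
FILES = [
    "cities1000.zip",
    "US.zip",
    "admin1CodesASCII.txt",
    "countryInfo.txt",
]

for name in FILES:
    # urllib supports ftp:// URLs, so each file is copied into the
    # current directory for the rest of the install to consume.
    print("downloading", name)
    urllib.request.urlretrieve(FTP_BASE + name, name)
```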

Dataset Card for "issues_content_500k"

More Information needed
