The full dataset viewer is not available. Only showing a preview of the rows.
The dataset generation failed because of a cast error
Error code: `DatasetGenerationCastError`

Exception: `DatasetGenerationCastError`

Message: An error occurred while generating the dataset. All the data files must have the same columns, but at some point there are 3 new columns ({'target', 'trigger', ' consequences'}) and 3 missing columns ({'comment_id', 'tbdf', 'comment_body'}). This happened while the csv dataset builder was generating data using hf://datasets/davanstrien/incivility-dataset/data/annotated_issue_level.csv (at revision e265d04ceed8dbd3cba8c6a64950522effc682a2). Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).

Traceback:

```
Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
id: int64
issue_id: int64
trigger: string
target: string
consequences: string
-- schema metadata --
pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 827
to
{'id': Value(dtype='int64', id=None), 'issue_id': Value(dtype='int64', id=None), 'comment_id': Value(dtype='int64', id=None), 'tbdf': Value(dtype='string', id=None), 'comment_body': Value(dtype='string', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1321, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 935, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 3 new columns ({'target', 'trigger', ' consequences'}) and 3 missing columns ({'comment_id', 'tbdf', 'comment_body'}). This happened while the csv dataset builder was generating data using hf://datasets/davanstrien/incivility-dataset/data/annotated_issue_level.csv (at revision e265d04ceed8dbd3cba8c6a64950522effc682a2). Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
```
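The root cause is visible in the error's column sets: the issue-level CSV has a different header than the comment-level files, and one of its "new" columns is `' consequences'` with a leading space, meaning the CSV header itself contains a stray space. A minimal standard-library sketch of the same diagnosis (the file contents below are hypothetical stand-ins mirroring the two schemas named in the error, not the actual dataset files):

```python
import csv
import io

# Hypothetical stand-ins for two data files with mismatched headers:
# one uses the comment-level schema, the other the issue-level schema.
comment_level = "id,issue_id,comment_id,tbdf,comment_body\n1,6209234,6209234,,hello\n"
issue_level = "id,issue_id,trigger,target, consequences\n1,6209234,x,y,z\n"

def header(csv_text):
    """Return the column names from the first row of a CSV."""
    return next(csv.reader(io.StringIO(csv_text)))

reference = set(header(comment_level))
other = set(header(issue_level))

new_columns = other - reference      # columns only the issue-level file has
missing_columns = reference - other  # columns the issue-level file lacks

# The leading space in ' consequences' survives, exactly as in the viewer error.
print(sorted(new_columns))
print(sorted(missing_columns))
```

This reproduces the "3 new columns / 3 missing columns" report: the `csv` builder requires every data file in a configuration to share one schema, so it refuses to cast the second header onto the first.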
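The fix the error message itself recommends is to split the mismatched files into separate configurations in the dataset's README front matter. A sketch of what that could look like (the `comment_level` config name and its file glob are assumptions about the repository layout; only `annotated_issue_level.csv` is confirmed by the error above):

```yaml
configs:
  - config_name: comment_level
    data_files: "data/annotated_comment_level*.csv"  # hypothetical filename
  - config_name: issue_level
    data_files: "data/annotated_issue_level.csv"
```

Alternatively, a consumer can sidestep the mismatch locally by passing an explicit `data_files` argument to `datasets.load_dataset`, so that only files sharing one schema are read together.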
id (int64) | issue_id (int64) | comment_id (int64) | tbdf (string) | comment_body (string)
---|---|---|---|---|
1 | 6,209,234 | 6,209,234 | null | I noticed that Prepared Statements seem to be emulated client-side by escaping certain characters.
Any plans to fully support service-side Prepared Statements? This can be done via the binary protocol, but there's a slower SQL-based approach available for non-binary clients:
http://dev.mysql.com/doc/refman/5.5/en/sql-syntax-prepared-statements.html |
2 | 6,209,234 | 7,718,792 | Entitlement | Yes, prepared statements are on my todo list. I don't need them myself, so unfortunately they kind of linger at the bottom of the list unless somebody wants to sponsor some of my time to work on the feature.
That being said, the SQL based approach looks interesting as a stop-gap solution for the short term. |
3 | 6,209,234 | 7,722,530 | null | The only downside with the SQL-based approach is that you probably still end up needing to do client-side escaping. Still it does offer a little bit more structure, so it might still buy some protection. Depending on how you do it, it might also simplify the escaping part.
Unless I'm mistaken, you are already implementing the actual protocol at the lower levels of your driver. I wonder how much more you need at that level to finish?
http://dev.mysql.com/doc/internals/en/command-packet-details.html |
4 | 6,209,234 | 7,722,736 | null | Prepared statements use a range of additional packets that are currently not implemented by my driver:
- http://dev.mysql.com/doc/internals/en/prepared-statement-initialization-packet.html
- http://dev.mysql.com/doc/internals/en/parameter-packet.html
- http://dev.mysql.com/doc/internals/en/long-data-packet.html
- http://dev.mysql.com/doc/internals/en/execute-packet.html
I have not yet analyzed how much work it would be to implement them, but my gut feeling is ~5 days of work. |
5 | 6,209,234 | 7,744,377 | Impatience | How does https://github.com/sidorares/nodejs-mysql-native handle this? Any reason not to just borrow parts of the way it's done over there?
I'm still somewhat struggling with the number of different MySQL drivers for Node.JS. I think Node makes it way too fun to write network protocol code. :P Maybe in a year or so the community will have coalesced around one or two really solid libraries. |
6 | 6,209,234 | 7,750,467 | Impatience | > How does https://github.com/sidorares/nodejs-mysql-native handle this?
It seems to implement the parts of the protocol that are required for prepared statements.
> Any reason not to just borrow parts of the way it's done over there?
Yes, I didn't have the time to work on this yet. I'm also not in the business of copying code unless it's up to my personal coding standards. So even with good inspiration like this, it will still take me some time.
> Maybe in a year or so the community will have coalesced around one or two really solid libraries.
This library is solid. It just does not implement all features. |
7 | 6,209,234 | 10,515,691 | Bitter frustration | Couldn't we just prepare and execute statements using SQL instead of raw packets? |
8 | 6,209,234 | 10,531,343 | Mocking | @dresende the SQL method still winds up tampering with values to make them safe (escaping quotes, etc), whereas the protocol method explicitly separates query from values so tampering is not necessary. To be fair, as long as its impossible to smuggle a query in as a value, the driver is plenty secure enough. I suppose I'm just being a nitpicky ex-PHP developer who wants everything to be conceptually elegant. :P |
9 | 12,894,489 | 12,894,489 | null | I'm not sure if this is officially supported by Doctrine, since it is not documented, but it works - with an error notice.
For example, if I have documents A and B where document B has an embedded document, E, I can design the mapping as follows:
``` php
/** @Document */
class A {
/** @ReferenceMany(targetDocument="B", mappedBy="foo.bar") */
private $whatever = [];
}
/** @Document */
class B {
/** @EmbedMany(targetDocument="E") */
private $foo = [];
}
/** @EmbeddedDocument */
class E {
/** @ReferenceOne(targetDocument="A", inversedBy="whatever") */
private $bar;
}
```
However, reading A::$whatever generates the following notice:
> Notice: Undefined index: foo.bar in mongodb-odm/lib/Doctrine/ODM/MongoDB/Persisters/DocumentPersister.php on line 709
Despite this, the document collection is retrieved correctly. |
10 | 12,894,489 | 19,156,555 | null | This doesn't work if class `B` has a binary ID and `$bar` is a discriminator reference. I'm not sure what the culprit is here, but the symptomns are that `count(A::$whatever)` will never be > 1. That is, at most only one item is loaded into the collection on the field with an embedded document mapping. |
11 | 12,894,489 | 913,841,398 | null | This issue has been automatically marked as stale because it has not had any recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions. |
12 | 12,894,489 | 913,845,504 | Irony | Imagine thinking anyone cares about design flaws. |
13 | 12,894,489 | 914,030,250 | Irony | Imagine thinking anyone cares about Open Source being a joint effort. |
14 | 12,894,489 | 914,097,655 | Irony | Imagine thinking open source means other people work for you for free. |
15 | 13,258,430 | 13,258,430 | Vulgarity | If you set up some task this way:
before 'deploy', 'some_task', And in some_task you simply read the "latest_release" variable (in a puts line for instance), then:
Current results:
a) If it's the first deploy ever in capistrano (the www folder doesn't exist), capistrano breaks completely, it cannot deploy.
b) If it's not the first deploy, then it will make capistrano change its behaviour with regards to the "current" symlink, because it will point to the previous release instead of the last one (after the deploy happened).
Expected results:
a) It should work.
b) It should point current to the latest release.
This is a big fuckup IMHO. |
16 | 13,258,430 | 16,461,106 | null | > This is a big fuckup IMHO.
Profanity aside, you may have a point. I'll take a patch. |
17 | 13,258,430 | 16,461,191 | null | Why do you close the issue? |
18 | 13,258,430 | 16,461,276 | Bitter frustration | Because you don't offer a patch, and profanity really pisses me off. |
19 | 13,258,430 | 16,461,378 | Bitter frustration | Mutable programming pisses me off even more.
Anyway, the point of github issues is to have a bug tracker.
The contributions comes in the pull-requests tab. |
20 | 13,258,430 | 16,461,583 | null | Indeed, in short - if there's no release, you are doing something wrong if you are calling `latest_release`. It's not safe for use when it's a cold deploy. _It's that simple._
There are other variables which are safe for use throughout the deploy process.
Capistrano, the 2.0.x branch mainline is effectively on the verge of being deprecated, send a patch or things won't be fixed. And even if you do send a patch, changing the behaviour of core variable lookups is not something I approach lightly.
I'd rather invest that time into the v3 branch which will coincide with the Rails 4 release in the coming weeks, and have a stable version which has fewer insane design decisions behind it. |
21 | 13,258,430 | 16,461,645 | null | As for your second point, you may be onto something there, but it's too late to change it, it's unreasonable overhead to call back to the server to calculate that every time, the results are cached for a reason. If you populate the cache with bad data, then you can't really effectively clear it. I accept that it's confusing, but I won't change it. |
22 | 13,258,430 | 16,461,791 | Impatience | I understand that latest_release may not have the proper value when it's queried at the wrong time... but CHANGING BEHAVIOUR???? Sorry for the caps, but really finding out this bug has put me on my nerves.
In regards to your backwards-compatibility policy: fair enough, I understand if you prefer to accept only a patch to fix this in master rather than the 2.0.x branch, but:
a) I never proposed to fix it in a branch.
b) This bug is not fixed yet on any branch, so it should be reopened. Otherwise people confused about it will not be able to find it.
This is, my friend, how bug trackers work. |
23 | 13,258,430 | 16,462,166 | null | I won't fix it, and it's been the way it is for five years without anyone running into problems. I can't afford the time to test it, fix it and make sure it's safe before releasing it. That's what it boils down to. Those variables are a source of a lot of confusion, and they are fragile, and have different meanings wherever you call them, anyway.
I'd like to be able to fix it, your profane issue report aside; but the reality is, I simply can't. |
24 | 13,258,430 | 16,462,334 | Impatience | Then leave this issue open until the end of times. |
25 | 13,258,430 | 16,462,422 | null | Otherwise it would be the first time in my entire life in which I see a bug closed as WONTFIX.
I mean, I've seen feature requests closed as WONTFIX status. But bugs? It's like denying to recognize that there is a bug. |
26 | 13,258,430 | 16,462,602 | null | Just for the sake of people being able to find the issue (and be enlightened by the work-around: "not read a variable"), this issue should remain open. |
27 | 13,258,430 | 16,462,723 | null | Agreed. I've re-ragged with v2. |
28 | 13,258,430 | 27,621,011 | null | Why did you close this? People are getting here from this SO question: http://stackoverflow.com/questions/3141454/deploysymlink-on-capistrano-points-the-current-directory-to-previous-release/16043844#16043844 and if they find this issue closed, they might be misled into thinking that it is fixed. |
29 | 13,258,430 | 27,625,808 | null | @knocte did you reproduce it on v3? |
30 | 13,258,430 | 27,630,493 | null | Oh, when was that released? |
31 | 13,258,430 | 27,630,538 | null | See rubygems, early september
On 2 Nov 2013 20:30, "Andres G. Aragoneses" notifications@github.com
wrote:
> Oh, when was that released?
> > —
> Reply to this email directly or view it on GitHubhttps://github.com/capistrano/capistrano/issues/440#issuecomment-27630493
> . |
32 | 13,258,430 | 27,630,609 | null | Ok, then I have not tested that. And I'm afraid I will not be able to test it very soon.
Just one advice: if you want to close issue in order to request feedback from users about new versions, add a comment explaining why you are closing the issue.
Thanks |
33 | 13,258,430 | 27,642,117 | null | We have more than 100 obsolete issues to close, and past experience
suggests the OPs never reapond
On 2 Nov 2013 20:36, "Andres G. Aragoneses" notifications@github.com
wrote:
> Ok, then I have not tested that. And I'm afraid I will not be able to test
> it very soon.
> > Just one advice: if you want to close issue in order to request feedback
> from users about new versions, add a comment explaining why you are closing
> the issue.
> Thanks
> > —
> Reply to this email directly or view it on GitHubhttps://github.com/capistrano/capistrano/issues/440#issuecomment-27630609
> . |
34 | 13,258,430 | 27,737,223 | Irony | That doesn't respect the Robust principle ;) http://en.wikipedia.org/wiki/Robustness_principle |
35 | 13,258,430 | 286,312,317 | null | People are still upvoting my stackoverflow answer linked above, which seems to hint that this bug is still present in 3.x. |
36 | 13,258,430 | 286,354,139 | null | thanks for checking the so answer upvote rates @knocte |
37 | 13,258,430 | 378,936,727 | null | Running into this issue currently I think.
How does one call the current release path that is right now being built?
I have a task running at `after :updated, :build do` and I'm trying to change directories and then run composer install, but changing directories using release_path is giving me the previous release, not the one currently being built.
Is that this issue? If not I'm sorry but the stackoverflow question led me here and I'm going kind of crazy trying to figure out how to change directories and then run a command, without it doing it in the release that's about to be replaced. |
38 | 13,258,430 | 378,988,513 | null | This issue is more than five years old, and refers to a long obsolete version of Capistrano, please open a new issue. |
39 | 13,258,430 | 534,820,283 | null | Can someone link to this "new" issue? |
40 | 13,258,430 | 535,014,772 | null | This issue is very old and the discussion was not very productive, so I will lock it. If you are a Capistrano v3 user running into an error using the `latest_release` setting please open a new issue. |
41 | 13,258,430 | 535,014,941 | null | @batmanbury there is no "new" issue; please open one if you are running into this problem. |
42 | 22,814,728 | 22,814,728 | null | The only query / issue I have is the big lines it draws blue (or themed borders) around countries.
There is a method to remove them? |
43 | 22,814,728 | 28,680,545 | null | Pass the options parameter:
http://leafletjs.com/reference.html#polygon
http://leafletjs.com/reference.html#path |
44 | 22,814,728 | 364,323,894 | Impatience | It's a shame you couldn't provide an answer. Pointing to a documentation page that has no response to the original question isn't useful at all. Equally, it's pretty useless having something that draws a map with a big blue border around every country and no explanation as to how to remove it. |
45 | 22,814,728 | 364,366,530 | Mocking | @abrice This kind of passive-aggressiveness is disrespectful towards maintainers.
I suggest reading ["How To Ask Questions The Smart Way"](http://www.catb.org/esr/faqs/smart-questions.html) and ["How to Report Bugs Effectively"](http://www.chiark.greenend.org.uk/~sgtatham/bugs.html) so you can make good bug reports in the future. |
46 | 24,996,968 | 24,996,968 | null | Because most of the projects require Composer I have a hard time installing PHP application. This is because I don't have SSH access to my shared hosting. Sadly Composer makes PHP projects for only rich people who can buy hostings with SSH access. Can you please make Composer so that it does not require SSH. http://stackoverflow.com/questions/20894518/how-do-i-install-composer-on-a-shared-hosting |
47 | 24,996,968 | 31,499,832 | null | Write a PHP script that you can hit in your browser that executes the `composer.phar` on your server? |
48 | 24,996,968 | 31,500,589 | null | @milesj, any tutorial on it? |
49 | 24,996,968 | 31,501,732 | null | No tutorial really, just execute shell commands or something similar. http://php.net/shell_exec
`shell_exec('php /path/to/composer.phar install');`
Never really tried it however. |
50 | 24,996,968 | 31,510,203 | null | Unfortunately, `shell_exec` and similar functions are also disabled on most shared hosting that don't support ssh login.
The current solution is to run composer locally and upload the vendor dir with the rest of your files.
As a gift, you can use [composer as a service](http://composer.borreli.com/) by @pborreli to download a zip of all your dependencies. It must be possible to create a simple PHP page that upload your `composer.json`, download the zip and extract it. |
51 | 24,996,968 | 31,510,483 | null | @GromNaN there are plans to have it done this via a composer plugin or extension or configuration (via composer as a service future api), that hopefully soon :baby: |
52 | 24,996,968 | 31,510,661 | null | @cordoval The composer plugin is not enough as the cli of composer cannot be run on these environments. A webpage is necessary. But that will be easy. |
53 | 24,996,968 | 31,510,933 | null | If you don't have a CLI access on these systems, there is a high probably than other tools used by composer are not available either. In such case, the proper workflow would indeed be to run composer locally and to upload the code including the vendor folder, a suggested by @GromNaN |
54 | 24,996,968 | 31,521,494 | null | WordPress can download and modify files from internet. I'm sure it is possible to make a similar installer (dependency-manager) with user interface. |
55 | 24,996,968 | 31,524,346 | null | @ilhanyumer it is indeed possible to build an app performing the composer job, but composer itself relies on git, svn and hg (depending of the VCS being used) to install from source. And it is not always possible to install from an archive. Shared hostings don't offer them generally.
Thus, it relies on `proc_open` to call these tools, which is often disabled on these shared hostings |
56 | 24,996,968 | 60,229,012 | null | https://github.com/CurosMJ/NoConsoleComposer
This can sort this thing out, hopefully. |
57 | 24,996,968 | 60,236,353 | null | We have created a client for the Contao CMS to use composer directly from the CMS Backend.
https://github.com/contao-community-alliance/composer-client
The client can run composer in 3 modes:
- inline: directly within the web servers php process
- process: start a process with `proc_open` within the web servers php process
- detached: start and fork composer into background process |
58 | 24,996,968 | 92,323,369 | null | This isn't really an issue with composer and should be closed. > Sadly Composer makes PHP projects for only rich people who can buy hostings with SSH access.
That's entirely untrue as Digital Ocean sells VPS for $5/month and Linode sells them for $10/month. |
59 | 27,120,442 | 27,120,442 | Impatience | Hi, im running node-mysql latest on node-latest.
Somebody using the acunetix vulnerability scanner has triggered this error:
UNKNOWN COLUMN '$acunetix' IN WHERE CLAUSE.
The query: SELECT id, email FROM accounts WHERE username = ?
How is this possible? Its very dangerous to our application, please respond quickly. |
60 | 27,120,442 | 34,424,093 | null | The problem seems to be about params that are not strings. Although I'll continue to sanitize all my user inputs (to avoid username impersonation attacks like `admіn` posing as `admin`), I'd expect the query engine to convert any param to a string if it should have been one in the first place. If it already was, `String(param)` should be of low cost. |
61 | 27,120,442 | 34,425,569 | null | @thekiur @mk-pmb can you post code samples? |
62 | 27,120,442 | 34,425,679 | null | We can confirm that the problem is caused by passing objects to the query call.
The objects come from the express bodyParser middleware.
We were simply passing req.body.username as the parameter for that query.
The acunetic vulnerability tester injected an object there.
We are not sure on the severity of this issue, but its unexpected to say atleast.
As we experienced, this can crash a live application in production mode if you dont expect any db errors.
There is no code to show: its as simple as passing a req.body.something to the .query call of node-mysql when using express with the bodyparser middleware. Running the vulnerability scanner against https://gist.github.com/ssafejava/9a2d77704712a8769322 causes the exception to be thrown. |
63 | 27,120,442 | 34,441,093 | null | This is not an issue with escaping with this library; this library is properly escaping all values and column names. The security issue is just with the way you are combining express and this library, such that you were expecting to get a string from express, so you were only expecting the `?` to expand according to string rules.
`req.body` properties can be anything with `bodyParser` and as such you need to at least verify what you are using is a string before passing to your query. |
64 | 27,120,442 | 34,467,169 | null | I consider prepared statements as intended to mitigate lack of input validation in the params in general. Therefor, limiting it to the case where input has already been validated as being a string, in my opinion misses the point.
Yours, MK |
65 | 27,120,442 | 34,470,964 | null | These are not prepared statements, they are done client-side and have various rules for how `?` is replaced depending on the data type, which is documented. If you want to be sure you are using the string-based `?` replacement though the API, you have to give the API a string. If you don't want to validate at all, you can use the `String()` function:
`conn.query('SELECT * FROM user WHERE username = ?', [String(req.body.username)]')`
The _purpose_ if it doing stuff different for objects is to help people who want to easily use `SET`:
`conn.query('UPDATE user SET ? WHERE id = ?', [{username: 'bob', password: '1234'}, 43])`
Please see the "Different value types are escaped differently, here is how:" section in https://github.com/felixge/node-mysql#escaping-query-values |
66 | 27,120,442 | 34,478,411 | null | I see. Looks like an unlucky case of embrace and extend. I wish you had opted for something like `??` in that case. Probably too late to change the interface?
Edit: Not really embrace and extend, as you wrote they aren't prepared statements. Rather just a pitfall for people who learn from tutorials and conclude topical similarity from visual similarity.
Edit 2: I see, `??` is already used for column names. |
67 | 27,120,442 | 38,685,690 | Identity attacks/Name-Calling | I can't see how it's the type system's fault when programmers assume that a mechanism that looks like prepared statements will defuse any data they pass in. Let's at least blame it at the programmers for trusting visual similarity instead of reading the manual thoroughly. |
68 | 27,120,442 | 38,686,940 | null | @mk-pmb sure, though this module only has a small Readme, which has all the `?` stuff explained (https://github.com/felixge/node-mysql#escaping-query-values), so it's not even some weird hidden feature. Unfortunately if people on the Internet are writing tutorials about this module and giving incomplete or wrong information, it's hard for us to even try to police that. |
69 | 27,120,442 | 43,112,213 | null | @mk-pmb it's the programmers role to understand the libraries he/she is using at least to the extend they are documented before including them in any production environment. If the library isn't fully documented, that's on the creator, but since this is an open-source world you can't really blame somebody for dedicating their time towards creating something for free.
Inferring functionality from syntax is useful, but think rationally: if the `?` operator accepts strings, would it only accepts strings? What if it accepted other data types? Jumping to blind assumptions about a library is a recipe for disaster, and good security protocols still mandate data validation.
Libraries and languages that make it easier to start developing are extremely useful, but I fear it gives a novice developer a misplaced sense of confidence. It's easy to build a small application, and when it "just works" assume nothing could possibly go wrong. |
70 | 27,120,442 | 43,314,833 | null | > Jumping to blind assumptions about a library is a recipe for disaster, and good security protocols still mandate data validation.
I agree with that. And still, lots of people do it. So for all software that I manage, I'll try and have it be compatible with everyday flawed humans, in hopes to lessen the risk and impact of errors in software based on mine, written by fallible humans.
BOfH would ship a GNU/Linux distro where the default shell acts fully like bash, just that on every line starting with an uppercase letter, the meaning of `&&` and `||` is swapped. Might even document it properly. You'd read the manual and probably wouldn't use it. However, if the next day a toy drone crashes into your car because it's pilot didn't read the manual as thoroughly as you did, your expectations of how humans should act had much less impact than how they really do act. And I'd still partially blame that BOfH.
Update: Thanks for making it opt-in. |
71 | 27,120,442 | 43,332,022 | null | Please, this issue doesn't need any more comments. It is still open as a tracking issue for me. There are coming changes that will affects this module and even things like `express` which will make any kind of "shoot yourself in the foot" operations opt-in. As an example, for this module `?` really should strictly only result in a single entry in the SQL (i.e. numbers, strings, null, etc.). Anything over that should be opt-in (on the connection-level or one-off on the query level to reduce accidental exposure.
These are changes that are coming I listed, not speculation. Please just know that this issue is taken seriously. |
72 | 27,120,442 | 69,188,777 | null | Are there any circumstances where this would lead to an injection attack?
As far as I can work out so far this appears to only ever result in syntax errors. |
73 | 27,120,442 | 69,234,364 | null | @SystemParadox: I don't think so. The report seems to be badly explained and seems to be related to constructing SQL based on user input without any check.
Good usage:
``` js
db.query("SELECT * FROM users WHERE id = ?", [ +req.params.id ], next);
```
No harm on that, casting forces it to be a number. Even if it wasn't a number and the `+` was omitted, it's just fine (or else you would have problems when UPDATing columns with binary data - there's tests for that).
The problem here seems to be with something more like:
``` js
// BAD! BAD!
db.query("SELECT * FROM " + req.params.table + " WHERE ....", next);
``` |
74 | 27,120,442 | 69,249,939 | null | @SystemParadox yeah I just took a look at the formatting and escaping code. I don't see any way that passing unvalidated data to be interpolated into the query could result in an injection vulnerability. Without validation you can easily get a syntax error. |
75 | 30,381,958 | 30,381,958 | null | Could you please implement xBR shader or xBRZ filter or both in GSDX plugin. It would be very beneficial for both PS2 and PSX 2D and sprite-based games.
xBR and xBRZ are pixel art scaling algorithms ,they give best results in 2D/sprite based games with low resolution textures and games with pre-rendered backgrounds which dont upscale well with higher internal resolutions but they also give good results in 3D games. xBR/xBRZ are already used with good results in emulators like Retroarch , Higan, Desmume and PPSSPP.
Here is explanation:http://code.google.com/p/2dimagefilter/wiki/ImageScaling#xBR
http://www.vogons.org/viewtopic.php?t=34125
Here is newest xBR source code including hybrid variants: https://github.com/libretro/common-shaders
Source code for xBRZ is in source code of HqMAME : https://sourceforge.net/projects/hqmame/ ,http://sourceforge.net/projects/hqmame/files/xBRZ.zip/download, Spline36: http://code.google.com/p/remote-joy-lite-fix/source/browse/trunk/RemoteJoyLite_pc/spline36.psh
https://github.com/xbmc/xbmc/tree/master/xbmc/cores/VideoRenderers/VideoShaders
Here is comparison for 3D graphics:http://blog.metaclassofnil.com/?p=306
Here is official tutorial about xBR: http://www.libretro.com/forums/viewtopic.php?f=6&t=134
http://forum.zdoom.org/viewtopic.php?f=19&t=37373&sid=57269f5e32514a88a5d5252839c9ff6a&start=45
Some 2D graphics of old version of xBR: http://imgur.com/a/ZZiiH
I also found interesting algorithm Libdepixelize: http://bazaar.launchpad.net/~vinipsmaker/libdepixelize/trunk/revision/184
http://vinipsmaker.wordpress.com/tag/libdepixelize/
https://sourceforge.net/projects/inkscape/
There is also ''Ours'' but I cant find source code for it anywhere: http://research.microsoft.com/en-us/um/people/kopf/pixelart/supplementary/
But Libdepixelize and Ours both use Kopf-Lischinski algorithm so they should have similiar effects.
http://www.mediafire.com/download/22o6ahnchkbzhef/Shaders.rar
http://www.mediafire.com/download/86bo6bl66cnwv2j/chromaNEDI.rar
https://github.com/jpsdr/NNEDI3
http://forum.doom9.org/showthread.php?t=170727 |
76 | 30,381,958 | 38,958,157 | null | It would work pretty well as a texture scaler too.
PPSSPP already does this. |
77 | 30,381,958 | 39,003,420 | null | I think best option would be to use xBR in hardware mode and xBRZ can work in both software and hardware modes. |
78 | 30,381,958 | 39,018,418 | null | Instead of asking, try submitting a patch with your desired changes. |
79 | 30,381,958 | 39,020,604 | null | Unfortunately I am not programmer and I am not skilled enough to implement it myself. |
80 | 30,381,958 | 42,581,963 | null | This is something i would like to see in the future, the way its implemented in PPSSPP is great. Although really not very necessary for PS2 emulation just a nice extra. |
81 | 30,381,958 | 42,582,779 | null | It would be great for textures. |
82 | 30,381,958 | 42,586,983 | null | It wouldn't actually... since its design/function is for sprites, not textures. |
83 | 30,381,958 | 42,634,624 | null | Its possible to use it for textures as well. |
84 | 30,381,958 | 42,634,674 | null | PPSSPP is using xBRZ for textures and it looks very good. |
85 | 30,381,958 | 42,645,331 | null | yes but one more option that the team don't wan't to do is mipmap at hardware mode in gsdx but i think the mipmap are better than xBRZ |
86 | 30,381,958 | 42,647,630 | null | Mipmapping would be useful but is pretty basic xBRZ may be better. Techniques which are clearly better are Tessellation, Displacement Mapping and Parallax Occlusion Mapping. |
87 | 30,381,958 | 43,626,599 | Mocking | xBRZ makes textures and sprites ugly, I rather have Nearest. |
88 | 30,381,958 | 43,628,117 | Impatience | xBR/xBRZ looks ugly only in your subjective opinion. Nearest-neighbor is primitive technique which looks blurry and pixelated, |
89 | 30,381,958 | 43,635,582 | Bitter frustration | xBR/xBRZ are ugly because computers are not artists, they ruin the artwork of games. |
90 | 30,381,958 | 43,637,627 | Bitter frustration | I am not sure what you mean but not everyone wants to use emulators with ugly native graphics.
PCSX2 already has option to increase internal resolution but xBR/xBRZ would be very useful for 2D sprites, pre-rendered backgrounds and textures. |
91 | 30,381,958 | 43,638,741 | Impatience | yes that's true ratchet and clank games have this problem and are very ugly only works in software mode and it's freaking me out with a amd fx 8350 ¬¬ a points of fps of 20 or 30 |
92 | 30,381,958 | 43,639,134 | Bitter frustration | You mean Native graphics, most of which created professionally by artist then you want to ruin it with an over-exaggerated interpolation. |
93 | 30,381,958 | 43,639,820 | null | Ratchet & Clank was my first PS2 game I ever owned, i have being waiting years for it to be fixed in PCSX2 |
94 | 30,381,958 | 43,645,481 | null | Developers wanted games to look like they look in high resolution but were limited by underpowered hardware. |
95 | 30,381,958 | 43,650,840 | Insulting | Developers certainly not want to apply a silly image interpolation like xBR to ruin all their artwork |
96 | 30,381,958 | 43,650,986 | Impatience | Prove they dont. |
97 | 30,381,958 | 43,651,225 | Insulting | If you take a SNES Game like Super Mario World, the developers created all the artwork pixel by pixel.. xBR simply adds unwanted pixels everywhere, ruining the sharp-crisp sprites, it's a silly gimmicky interpolation . |
98 | 30,381,958 | 43,651,763 | Bitter frustration | Snes games have huge blocky pixels because hardware was primitive and not because developers intended it to look that way. xBR is advanced upscaling algorithm not ''silly gimmicky interpolation'' show little respect to shader/filter developers. |
99 | 30,381,958 | 43,652,703 | Mocking | xBR is silly gimmicky interpolation because computers are not artists, you need an Artist to recreate the sprites and textures. |
100 | 30,381,958 | 43,653,442 | Insulting | You are obsessed about computers not being artists but they dont need to be. Shader developers being artists is enough. Artists already created xBR/xBRZ. |
End of preview.
No dataset card yet.

Downloads last month: 2