question | answer | tag | question_id | score |
---|---|---|---|---|
I'm trying to split a string that can either be comma, space or semi-colon delimited. It could also contain a space or spaces after each delimiter. For example
22222,11111,23232
OR
22222, 11111, 23232
OR
22222; 11111; 23232
OR
22222 11111 23232
Any one of these would produce an array with three values ["22222","11111","23232"]
So far I have var values = Regex.Split("22222, 11111, 23232", @"[\\s,;]+") but this produces an array with the second and third values including the space(s) like so:
["22222"," 11111"," 23232"]
| You have two possibilities:
Regex.Split
String.Split
In this case, you want to split your string by specific delimiter characters. String.Split was created for this special purpose. This method will be faster than Regex.Split.
char[] delimiters = new [] { ',', ';', ' ' }; // List of your delimiters
var splittedArray = myString.Split(delimiters, StringSplitOptions.RemoveEmptyEntries);
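For the sample inputs this yields the expected three values (a quick sketch; myString is a stand-in for any of the strings above):
string myString = "22222, 11111; 23232";
char[] delimiters = new [] { ',', ';', ' ' };
string[] values = myString.Split(delimiters, StringSplitOptions.RemoveEmptyEntries);
// values => ["22222", "11111", "23232"]: RemoveEmptyEntries drops the empty
// strings that consecutive delimiters such as ", " would otherwise produce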
| Split | 14,689,044 | 21 |
I have this array:
$array = array(a, b, c, d, e, f, g);
I want to split it in two arrays depending if the index is even or odd, like this:
$odd = array(a, c, e, g);
$even = array(b, d, f);
Thanks in advance!
| One solution, using anonymous functions and array_walk:
$odd = array();
$even = array();
$both = array(&$even, &$odd);
array_walk($array, function($v, $k) use ($both) { $both[$k % 2][] = $v; });
This separates the items in just one pass over the array, but it's a bit on the "cleverish" side. It's not really any better than the classic, more verbose
$odd = array();
$even = array();
foreach ($array as $k => $v) {
if ($k % 2 == 0) {
$even[] = $v;
}
else {
$odd[] = $v;
}
}
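A quick check against the sample array (plain PHP, 0-based keys): in both versions above, keys 0, 2, 4, 6 land in $even, so
$even = array('a', 'c', 'e', 'g');
$odd = array('b', 'd', 'f');
If you want the question's labeling, where the 1st, 3rd, ... items count as "odd", swap the two references in $both (and the branches in the foreach).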
| Split | 12,405,264 | 21 |
How can I split strings separated by "/" inside a jsp page using JSTL?
I have a string in this format:
"23/11/2010"
Sometimes, the string could be like this:
"1/1/2010"
I need to do something in order to split the string in three different substrings:
"23", "11", "2010".
This is because I need to put each one of them inside three different text fields, like these:
<input type="text" value="23">/
<input type="text" value="11">/
<input type="text" value="2010">
I could not find any working example yet.
Thanks in advance!
| You can use the fn:split() function for this.
<%@taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<%@taglib prefix="fn" uri="http://java.sun.com/jsp/jstl/functions" %>
...
<c:set var="dateParts" value="${fn:split(dateString, '/')}" />
...
<input type="text" name="day" value="${dateParts[0]}" />
<input type="text" name="month" value="${dateParts[1]}" />
<input type="text" name="year" value="${dateParts[2]}" />
Be sure that the date format is validated beforehand :) It would be easier if it was a java.util.Date, not a java.lang.String. You could then use <fmt:formatDate> to format it to a reliable and fixed string format first. Otherwise you'd need to add checks on array length by fn:length() and to prevent potential XSS attack holes by fn:escapeXml().
Also important to note is that the function takes a regular expression as argument and not just a plain character sequence. So in case you'd like to split on characters which represent special characters in regex, then you'd need to escape them with backslashes. See also How to split a string in Java for general guidelines which also apply to fn:split().
| Split | 10,304,084 | 21 |
I'm trying to divide a string into words, removing spaces and punctuation marks.
I tried using the split() method, passing all the punctuation at once, but my results were incorrect:
>>> test='hello,how are you?I am fine,thank you. And you?'
>>> test.split(' ,.?')
['hello,how are you?I am fine,thank you. And you?']
I actually know how to do this with regexes already, but I'd like to figure out how to do it using split(). Please don't give me a regex solution.
| If you want to split a string based on multiple delimiters, as in your example, you're going to need to use the re module despite your bizarre objections, like this:
>>> re.split('[?.,]', test)
['hello', 'how are you', 'I am fine', 'thank you', ' And you', '']
It's possible to get a similar result using split, but you need to call split once for every character, and you need to iterate over the results of the previous split. This works but it's u-g-l-y:
>>> sum([z.split()
... for z in sum([y.split('?')
... for y in sum([x.split('.')
... for x in test.split(',')],[])], [])], [])
['hello', 'how', 'are', 'you', 'I', 'am', 'fine', 'thank', 'you', 'And', 'you']
This uses sum() to flatten the list returned by the previous iteration.
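A quick illustration of that flattening trick:
>>> sum([[1, 2], [3], [4, 5]], [])
[1, 2, 3, 4, 5]
Starting the sum from [] instead of 0 makes + mean list concatenation, so each sum() call collapses one level of nesting.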
| Split | 9,797,357 | 21 |
How can I split a string using [ as the delimiter?
String line = "blah, blah [ tweet, tweet";
if I do
line.split("[");
I get an error
Exception in thread "main" java.util.regex.PatternSyntaxException:
Unclosed character class near index 1 [
Any help?
The [ is a reserved character in regex; you need to escape it:
line.split("\\[");
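If you'd rather not hand-escape, java.util.regex.Pattern has a helper for this (a small sketch):
line.split(Pattern.quote("["));
Pattern.quote() wraps its argument in \Q...\E so every regex metacharacter is treated literally.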
| Split | 8,141,698 | 21 |
What is the right way to split a string into words ?
(string doesn't contain any spaces or punctuation marks)
For example: "stringintowords" -> "String Into Words"
Could you please advise what algorithm should be used here ?
! Update: For those who think this question is just out of curiosity: this algorithm could be used to camelcase domain names ("sportandfishing .com" -> "SportAndFishing .com"), and this algo is currently used by aboutus dot org to do this conversion dynamically.
| Let's assume that you have a function isWord(w), which checks if w is a word using a dictionary. Let's for simplicity also assume for now that you only want to know whether for some word w such a splitting is possible. This can be easily done with dynamic programming.
Let S[1..length(w)] be a table with Boolean entries. S[i] is true if the word w[1..i] can be split. Then set S[1] = isWord(w[1]) and for i=2 to length(w) calculate
S[i] = isWord(w[1..i]) or (S[j-1] and isWord(w[j..i]) for some j in {2..i}).
This takes O(length(w)^2) time, if dictionary queries are constant time. To actually find the splitting, just store the winning split in each S[i] that is set to true. This can also be adapted to enumerate all solution by storing all such splits.
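A compact sketch of that DP in Python (0-based indexing; is_word stands in for the dictionary check assumed above, here faked with a tiny set):
def split_words(w, is_word):
    n = len(w)
    # best[i] = a split of w[:i] into words, or None if impossible
    best = [None] * (n + 1)
    best[0] = []
    for i in range(1, n + 1):
        for j in range(i):
            if best[j] is not None and is_word(w[j:i]):
                best[i] = best[j] + [w[j:i]]
                break
    return best[n]

print(split_words("stringintowords", {"string", "into", "words"}.__contains__))
# ['string', 'into', 'words']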
| Split | 3,466,972 | 21 |
What I want is similar to this question. However, I want the directory that is split into a separate repo to remain a subdirectory in that repo:
I have this:
foo/
.git/
bar/
baz/
qux/
And I want to split it into two completely independent repositories:
foo/
.git/
bar/
baz/
quux/
.git/
qux/ # Note: still a subdirectory
How to do this in git?
I could use the method from this answer if there is some way to move all the new repo's contents into a subdirectory, throughout history.
| You could indeed use the subdirectory filter followed by an index filter to put the contents back into a subdirectory, but why bother, when you could just use the index filter by itself?
Here's an example from the man page:
git filter-branch --index-filter 'git rm --cached --ignore-unmatch filename' HEAD
This just removes one filename; what you want to do is remove everything but a given subdirectory. If you want to be cautious, you could explicitly list each path to remove, but if you want to just go all-in, you can just do something like this:
git filter-branch --index-filter 'git ls-tree -z --name-only --full-tree $GIT_COMMIT | grep -zv "^directory-to-keep$" | xargs -0 git rm --cached -r' -- --all
I expect there's probably a more elegant way; if anyone has something please suggest it!
A few notes on that command:
filter-branch internally sets GIT_COMMIT to the current commit SHA1
I wouldn't have expected --full-tree to be necessary, but apparently filter-branch runs the index-filter from the .git-rewrite/t directory instead of the top level of the repo.
grep is probably overkill, but I don't think it's a speed issue.
--all applies this to all refs; I figure you really do want that. (the -- separates it from the filter-branch options)
-z and -0 tell ls-tree, grep, and xargs to use NUL termination to handle spaces in filenames.
Edit, much later: Thomas helpfully suggested a way to remove the now-empty commits, but it's now out of date. Look at the edit history if you've got an old version of git, but with modern git, all you need to do is tack on this option:
--prune-empty
That'll remove all commits which are empty after the application of the index filter.
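Putting the pieces together (directory-to-keep is still a placeholder for your qux path):
git filter-branch --prune-empty --index-filter 'git ls-tree -z --name-only --full-tree $GIT_COMMIT | grep -zv "^directory-to-keep$" | xargs -0 git rm --cached -r' -- --all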
| Split | 2,797,191 | 21 |
While using Vim I'll sometimes want to look at a function definition or a struct definition, so I'll use C-] to jump to it. However, there are a few problems I run into. First off, I don't know how to jump back easily. It appears the previous file I was in closes and I'm now in the new one. Is there a way to jump back, or keep a stack of open files that I can pop back to or something?
Another thing I've noticed that when I have a change in the current file I need to save it because, like a mentioned a moment ago, my current file is being closed before the next one opens.
And sometimes I want to view my current code and my header at once. Is there a way open the tag definition in a split?
| Add
set hidden
to your vimrc. It'll allow you to switch files without saving them. I think this is one of the 'must have' options.
Use C-o to jump back to previous locations which were autosaved in a jumplist.
:h jumplist
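The split part of the question isn't covered above; in standard Vim, C-w ] opens the tag under the cursor in a new split (and :stag tagname does the same by name):
:h CTRL-W_]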
| Split | 1,728,311 | 21 |
I'm using Jinja2 template engine (+pelican).
I have a string saying "a 1", and I am looking for a way to split that string in two
by using the white-space as the delimiter.
So the end result I'm looking for is a variable which holds the two values in a form of an array. e.g. str[0] evaluates to "a" & str[1] evaluates to "1".
| Calling split on the string should do the trick:
"a 1".split()
| Split | 20,678,004 | 20 |
When trying to remove the suffix from a filename, I'm only left with the suffix, which is exactly not what I want.
What (how many things) am I doing wrong here:
let myTextureAtlas = SKTextureAtlas(named: "demoArt")
let filename = (myTextureAtlas.textureNames.first?.characters.split{$0 == "."}.map(String.init)[1].replacingOccurrences(of: "\'", with: ""))! as String
print(filename)
This prints png which is the most dull part of the whole thing.
| If by suffix you mean path extension, there is a method for this:
let filename = "demoArt.png"
let name = (filename as NSString).deletingPathExtension
// name - "demoArt"
| Split | 39,887,738 | 20 |
I have a list
['Tests run: 1', ' Failures: 0', ' Errors: 0']
I would like to convert it to a dictionary as
{'Tests run': 1, 'Failures': 0, 'Errors': 0}
How do I do it?
| Use:
a = ['Tests run: 1', ' Failures: 0', ' Errors: 0']
d = {}
for b in a:
i = b.split(': ')
d[i[0]] = i[1]
print d
returns:
{' Failures': '0', 'Tests run': '1', ' Errors': '0'}
If you want integers, change the assignment to:
d[i[0]] = int(i[1])
This will give:
{' Failures': 0, 'Tests run': 1, ' Errors': 0}
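The whole thing also fits in a comprehension, with the stray leading spaces stripped so the keys match the desired output exactly (a sketch):
d = {k.strip(): int(v) for k, v in (b.split(': ') for b in a)}
# {'Tests run': 1, 'Failures': 0, 'Errors': 0}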
| Split | 22,980,977 | 20 |
I would like to split a large file (10^6 rows) according to the value in the 6th column (about 10*10^3 unique values). However, I can't get it working because of the number of records. It should be easy but it's taking hours already and I'm not getting any further.
I've tried two options:
Option 1
awk '{print > $6".txt"}' input.file
awk: cannot open "Parent=mRNA:Solyc06g051570.2.1.txt" for output (Too many open files)
Option 2
awk '{print > $6; close($6)}' input.file
This doesn't cause an error but the files it creates contain only the last line corresponding to 'grouping' value $6
This is the beginning of my file; this sample doesn't cause an error, however, because it's so small:
exon 3688 4407 + ID=exon:Solyc06g005000.2.1.1 Parent=mRNA:Solyc06g005000.2.1
exon 4853 5604 + ID=exon:Solyc06g005000.2.1.2 Parent=mRNA:Solyc06g005000.2.1
exon 7663 7998 + ID=exon:Solyc06g005000.2.1.3 Parent=mRNA:Solyc06g005000.2.1
exon 9148 9408 + ID=exon:Solyc06g005010.1.1.1 Parent=mRNA:Solyc06g005010.1.1
exon 13310 13330 + ID=exon:Solyc06g005020.1.1.1 Parent=mRNA:Solyc06g005020.1.1
exon 13449 13532 + ID=exon:Solyc06g005020.1.1.2 Parent=mRNA:Solyc06g005020.1.1
exon 13711 13783 + ID=exon:Solyc06g005020.1.1.3 Parent=mRNA:Solyc06g005020.1.1
exon 14172 14236 + ID=exon:Solyc06g005020.1.1.4 Parent=mRNA:Solyc06g005020.1.1
exon 14717 14803 + ID=exon:Solyc06g005020.1.1.5 Parent=mRNA:Solyc06g005020.1.1
exon 14915 15016 + ID=exon:Solyc06g005020.1.1.6 Parent=mRNA:Solyc06g005020.1.1
exon 22106 22261 + ID=exon:Solyc06g005030.1.1.1 Parent=mRNA:Solyc06g005030.1.1
exon 23462 23749 - ID=exon:Solyc06g005040.1.1.1 Parent=mRNA:Solyc06g005040.1.1
exon 24702 24713 - ID=exon:Solyc06g005050.2.1.3 Parent=mRNA:Solyc06g005050.2.1
exon 24898 25402 - ID=exon:Solyc06g005050.2.1.2 Parent=mRNA:Solyc06g005050.2.1
exon 25728 25845 - ID=exon:Solyc06g005050.2.1.1 Parent=mRNA:Solyc06g005050.2.1
exon 36352 36835 + ID=exon:Solyc06g005060.2.1.1 Parent=mRNA:Solyc06g005060.2.1
exon 36916 38132 + ID=exon:Solyc06g005060.2.1.2 Parent=mRNA:Solyc06g005060.2.1
exon 57089 57096 + ID=exon:Solyc06g005070.1.1.1 Parent=mRNA:Solyc06g005070.1.1
exon 57329 58268 + ID=exon:Solyc06g005070.1.1.2 Parent=mRNA:Solyc06g005070.1.1
exon 59970 60505 - ID=exon:Solyc06g005080.2.1.24 Parent=mRNA:Solyc06g005080.2.1
exon 60667 60783 - ID=exon:Solyc06g005080.2.1.23 Parent=mRNA:Solyc06g005080.2.1
exon 63719 63880 - ID=exon:Solyc06g005080.2.1.22 Parent=mRNA:Solyc06g005080.2.1
exon 64143 64298 - ID=exon:Solyc06g005080.2.1.21 Parent=mRNA:Solyc06g005080.2.1
exon 66964 67191 - ID=exon:Solyc06g005080.2.1.20 Parent=mRNA:Solyc06g005080.2.1
exon 71371 71559 - ID=exon:Solyc06g005080.2.1.19 Parent=mRNA:Solyc06g005080.2.1
exon 73612 73717 - ID=exon:Solyc06g005080.2.1.18 Parent=mRNA:Solyc06g005080.2.1
exon 76764 76894 - ID=exon:Solyc06g005080.2.1.17 Parent=mRNA:Solyc06g005080.2.1
exon 77189 77251 - ID=exon:Solyc06g005080.2.1.16 Parent=mRNA:Solyc06g005080.2.1
exon 80044 80122 - ID=exon:Solyc06g005080.2.1.15 Parent=mRNA:Solyc06g005080.2.1
exon 80496 80638 - ID=exon:Solyc06g005080.2.1.14 Parent=mRNA:Solyc06g005080.2.1
Option 2: use ">>" instead of ">" to append. Because close($6) closes the output file after every line, reopening it with ">" truncates it each time, which is why only the last line per value survived.
awk '{print >> $6; close($6)}' input.file
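One caveat (an assumption about your workflow): ">>" also appends across runs, so clear the output files from a previous run before re-running, e.g.:
rm -f 'Parent='*   # hypothetical glob matching this sample's $6 values
awk '{print >> $6; close($6)}' input.file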
| Split | 16,635,396 | 20 |
When I enter some URLs in Google Chrome's omnibox, I see a message in it: "Press TAB to search in $URL". For example, there are some Russian sites, habrahabr.ru or yandex.ru. When you press TAB you'll be able to search in that site, not in your search engine.
How can I make my site capable of this? Maybe I need to write some special code in my site's pages?
| Chrome usually handles this through user preferences. (via chrome://settings/searchEngines)
However, if you'd like to implement this specifically for your users, you need to add a OSD (Open Search Description) to your site.
Making usage of Google Chrome's OmniBox [TAB] Feature for/on personal website?
You then add this XML file to the root of your site, and link to it in your <head> tag:
<link rel="search" type="application/opensearchdescription+xml" title="Stack Overflow" href="/opensearch.xml" />
Now, visitors to your page will automatically have your site's search information placed into Chrome's internal settings at chrome://settings/searchEngines.
OpenSearchDescription XML Format Example
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/" xmlns:moz="http://www.mozilla.org/2006/browser/search/">
<ShortName>Your website name (shorter = better)</ShortName>
<Description>
Description about your website search here
</Description>
<InputEncoding>UTF-8</InputEncoding>
<Image width="16" height="16" type="image/x-icon">your site favicon</Image>
<Url type="text/html" method="get" template="http://www.yoursite.com/search/?query={searchTerms}"/>
</OpenSearchDescription>
The important part is the <Url> item. {searchTerms} will be replaced with what the user searches for in the omnibar.
Here's a link to OpenSearch for more information.
| OpenSearch | 7,630,144 | 173 |
I am building an OpenSearch add-on for Firefox/IE, and the image needs to be Base64-encoded. How can I Base64-encode the favicon I have?
I am only familiar with PHP
| As far as I remember there is an xml element for the image data. You can use this website to encode a file (use the upload field). Then just copy and paste the data to the XML element.
You could also use PHP to do this like so:
<?php
$im = file_get_contents('filename.gif');
$imdata = base64_encode($im);
?>
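To produce the complete value the image element expects, prepend the data-URI header (building on the snippet above):
<?php echo 'data:image/x-icon;base64,' . $imdata; ?>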
Use Mozilla's guide for help on creating OpenSearch plugins. For example, the icon element is used like this:
<img width="16" height="16">data:image/x-icon;base64,imageData</>
Where imageData is your base64 data.
| OpenSearch | 35,879 | 65 |
I would like to search JIRA's "Quick Search" from Chrome's Omnibox. This is not the same as this Chrome Omnibox search string:
https://myserver/jira/browse/%s
That string will only open perfectly (not partially) matched JIRA IDs. A Quick Search will automatically open the issue that uniquely matches the search criteria--even if the search criteria is a partial match. For example, consider a system where there is only one JIRA issue that contains -77, and that JIRA issue is CLS-77. Using Quick Search (at the upper-right corner of the JIRA site) to search for "77" will open issue CLS-77 automatically. Performing the same search through Chrome Omnibox custom search string I listed earlier will not launch CLS-77 when searching for 77.
| In searching for the same answer, I discovered a partial solution. This solution requires that you use the keyword searching feature of the omnibox, so searching for "ISSUE-123" will not work but "jira ISSUE-123" will. It also supports quick-search text searching for example "jira some search string."
In chrome, follow these steps to configure:
Open chrome's settings and click Manage Search Engines in the
search section.
Scroll to the bottom and enter the following information:
Search engine name: JIRA (or some description about which instance of jira like "Apache JIRA")
Keyword: jira (or something simple to remember on the omnibox, could be the base URL of your jira instance)
URL: https://jira.example.com/jira/secure/QuickSearch.jspa?searchString=%s (obviously, replace the jira.example.com with the host name of your jira instance)
Click done
To use:
In the omnibox type jira (or your keyword you configured above) followed by a tab or space
Enter your quick-search term which could be an issue key, free-form text or project text
Examples of what to type in the OmniBox:
An issue key: jira WEB-123
Free-form text: jira Logo change
Project-specific search: jira WEB logo
| OpenSearch | 17,239,740 | 42 |
I am looking for a way to enable Google Chrome's "Tab to search" feature on my website, does anyone have experience with this?
Google did not supply sufficient information for me and I am guessing this community is faster.
Much appreciated
You have to serve an OpenSearch description XML and link to it in your <head> tag. See the spec here:
https://github.com/dewitt/opensearch
And a user friendly description here:
https://developer.mozilla.org/en-US/docs/Web/OpenSearch
| OpenSearch | 5,604,030 | 20 |
Having switched from Elasticsearch to Opensearch, my application now fails to run a simple query with:
"Text fields are not optimised for operations that require
per-document field data like aggregations and sorting, so these
operations are disabled by default. Please use a keyword field
instead. Alternatively, set fielddata=true on [status] in order to
load field data by uninverting the inverted index. Note that this can
use significant memory."
There's a question concerning the same error at Searchkick / Elasticsearch Error: Please use a keyword field instead. Alternatively, set fielddata=true on [name], but there the problem was only affecting tests and I only get the problem (so far) in development mode.
Here's the query being run:
::Record.search(q ? q : "*",
where: where_clause,
fields: fields,
match: :word_middle,
per_page: max_per_page(per_page) || 30,
page: page || 1,
order: sort_clause,
aggs: aggs,
misspellings: {below: 5}
If I take out aggs then the search is fine, but they're essential for the application. Removing :status from the list of aggregation fields causes the error to name the next field in the array as the problem. So, I presumably need to specify the correct type for each field used in aggregations. But how?
The Searchkick docs suggest this example under "Advanced Mapping" (https://github.com/ankane/searchkick):
class Product < ApplicationRecord
searchkick mappings: {
properties: {
name: {type: "keyword"}
}
}
end
So, I tried this:
# in models/Record.rb
mapping_properties = {}
aggregation_fields.each do |af|
mapping_properties[af] = { type: 'keyword' }
end
searchkick mappings: {
properties: mapping_properties
}
But, the same problem persists. I also tried something similar to that shown in the linked post, e.g.
mappings: {
properties: {
name: {
type: "text",
fielddata: true,
fields: {
keyword: {
type: "keyword"
}
}
}
}
}
...but similarly without luck.
Can anyone suggest how this might be fixed?
| The immediate issue was dealt with by changing all the fields used for aggregations, so rather than:
aggs = %w(field1 field2 field3 ...)
...in the above search query.
I used:
aggs = %w(field1.keyword field2.keyword field3.keyword ...)
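This works because, with dynamic mapping, Elasticsearch/Opensearch indexes strings as a text field plus a .keyword sub-field, and only the keyword variant supports sorting and aggregations. Building the list is a one-liner (a sketch):
aggs = %w(field1 field2 field3).map { |f| "#{f}.keyword" }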
| OpenSearch | 71,951,968 | 19 |
I think the title explains it all but I am going deeper into my question anyway:
How can I make use of the Chrome's Omnibox [TAB] feature for my website?
As many users requested that I implement that feature on the site, I researched the OpenSearchDescription format and got it working with the Firefox and IE7/IE8 search bars.
Yet the implementation didn't quite work for the Chrome Omnibox [TAB] feature.
Can you help me with that?
My OSD.xml code:
<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/"
xmlns:moz="http://www.mozilla.org/2006/browser/search/">
<ShortName>MySite</ShortName>
<Description>My Site</Description>
<InputEncoding>UTF-8</InputEncoding>
<Image width="16" height="16" type="image/x-icon">http://MySite.com/favicon.ico</Image>
<Url type="application/x-suggestions+json" method="GET"
template="http://ff.search.yahoo.com/gossip?output=fxjson&command={searchTerms}" />
<Url type="text/html" method="POST" template="http://MySite.com/query.php">
<Param name="sString" value="{searchTerms}"/>
</Url>
<Url type="application/x-suggestions+json" template="suggestionURL"/>
<moz:SearchForm>http://www.MySite.com</moz:SearchForm>
</OpenSearchDescription>
And this is the link to the osd file on my page:
<link rel="search" type="application/opensearchdescription+xml" title="MySite" href="/opensearch.xml" />
| I've compared what you have against the OpenSearchDescription on my own site and I cannot see why yours is not working. The only real difference is that you are using POST to search whereas I am using GET. According to this page, IE7 does not support POST requests, so it may be that other browsers also do not support POST.
The one on my site definitely works in IE8, Chrome 3.0 and FF 2.0+. Feel free to compare them yourself and see if you can spot a difference: opensearch.XML
| OpenSearch | 1,317,051 | 15 |
I have a multi-flavored, multi-build-typed android project and I want to integrate the NewRelic plugin. But I have to apply it only for one of the customers, thus only for one product flavor.
NewRelic uses instrumentation and the plugin would generate code in other flavors if I applied the plugin there, and that is not permitted for us.
So my question is: How can I use the apply plugin: something command in the gradle file to be applied to only one of my flavors?
| Use this code:
if (!getGradle().getStartParameter().getTaskRequests()
.toString().contains("Develop")){
apply plugin: 'com.google.gms.google-services'
}
getGradle().getStartParameter().getTaskRequests().toString() returns something like [DefaultTaskExecutionRequest{args=[:app:generateDevelopDebugSources],projectPath='null'}] so as stated in the comments Develop must start with an uppercase.
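Adapted to the question's case, the same pattern works for New Relic (an assumption that the customer's flavor is named, say, Customer; note the capitalization rule above):
if (getGradle().getStartParameter().getTaskRequests()
        .toString().contains("Customer")){
    apply plugin: 'newrelic'
}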
| New Relic | 31,379,795 | 76 |
I just started using New Relic RPM with my rails app, and one of the metrics they provide is "Throughput RPM". I have googled everywhere and thoroughly combed the New Relic docs, and I cannot find ANY written explanation of the RPM throughput metric.
Is it "requests per minute" or "requests per millisecond" or something else? ** combustion engines and revolutions per minute make this impossible to find answers about in Google.
What is throughput RPM? Is a good number higher or lower, what are some average benchmarks, etc?
I'd greatly appreciate an explanation of this metric, thanks!!
| The product name "RPM" stands for "Rails Performance Management" - which is an anachronism, now that we support Ruby, Java, PHP and .NET (stay tuned for other languages).
The suffix "rpm" stands for "Requests per Minute". Typically used to measure throughput, either for the whole application, or a specific Web Transaction (Controller Action in Rails).
Lew Cirne
Founder and CEO
New Relic
| New Relic | 5,252,561 | 72 |
We are using NewRelic to provide server-side application traces.
We have noticed that some of our applications consistently spend about 100ms in the method System.Web.Mvc.MvcHandler.BeginProcessRequest().
This happens before any custom controller code is called (which is logged separately, and not cumulatively) - it's not obvious why it would be spending so much time in this method.
What kinds of things will MVC do in this method? Could this simply be request queuing?
[EDIT:] As suspected - Scalayer's answer below was spot-on. We removed & optimized away all our session dependencies, and saw a massive increase in application scalability & stability
| What you might be seeing is commonly referred to as thread agility in .NET.
What you're probably seeing under the topical label (i.e. Application code in System.Web.HttpApplication.BeginRequest()) is a thread agility problem; in most cases the time you see here isn't necessarily code being executed but the web context waiting for the threads to be released back to it from a reader-writer lock.
The Application_BeginRequest() "pause" is one that is pretty pervasive in a ASP.NET web stack. In general when you see long load times in BeginRequest, you are dealing with ASP.NET thread agility and/or thread locks - especially when dealing with IO and session based operations. Not that it's a bad thing, this is just how .net makes sure your threads remain concurrent.
The time gap generally occurs between BeginRequest and PreRequestHandlerExecute. If the application is writing several things to session then ASP.NET will issue a reader-writer lock on HttpContext.Current.Session.
A good way to see whether you might be facing this is to check the thread IDs: if agility is an issue, the IDs will differ within a given request.
For instance. While debugging, perhaps you could add the following to your Global.asax.cs:
protected void Application_BeginRequest(Object sender, EventArgs e) {
Debug.WriteLine("BeginRequest_" + Thread.CurrentThread.ManagedThreadId.ToString());
}
Open up the debug output window (From Visual Studio: View >> Output, then select "Debug" from the "show output from" dropdown).
While debugging, hit a page where you have seen the long time. Then view the output log - if you see multiple id's then you might be suffering from this.
This is why you might see the delay sometimes but not other times, the application code might be using session a little differently or session or IO operations might be higher or lower from page to page.
If this is the case some things you can do to help speed things up depending on how session is used on the site or each given page.
For pages that do not modify session:
<% @Page EnableSessionState="ReadOnly" %>
For pages that do not use session state:
<% @Page EnableSessionState="False" %>
If the app does not use session (web.config):
<configuration>
<system.web>
<sessionState mode="Off" />
</system.web>
</configuration>
So let's take the following example:
User loads a page, then decides to go to another page before the first request is done loading ASP.NET will force a session lock causing the new page request load to wait until the first page request finishes. With ASP.NET MVC each action locks the user session for synchronization; causing the same issue.
All of the time it took for the lock to be released will be reported via New Relic, not to mention the cases where the user abandoned the session and the returning thread is looking for a user who no longer exists.
Incidentally the UpdatePanel control causes the same behavior -
http://msdn.microsoft.com/en-us/magazine/cc163413.aspx
What can be done:
This locking problem is one of the reasons Microsoft has the SessionStateUtility class -
http://msdn.microsoft.com/en-us/library/system.web.sessionstate.sessionstateutility.aspx
With it you can override the default behavior if you face this problem, as seen in this Redis implementation: https://github.com/angieslist/AL-Redis
There are many alternatives to the default state provider used by .NET-based websites. But generally, know that this transaction time indicates that threads are being locked, waiting for requests to the server to complete.
| New Relic | 17,064,380 | 59 |
I'm looking into using a performance and monitoring tool for my web application hosted on Azure.
I was wondering what the main differences are between Microsoft's Application Insights and New Relic?
Thanks.
| There are many feature differences between the two products and such comparisons are usually subjective in nature. The following key themes are noted by customers as particular strengths of App Insights, when compared to New Relic:
Developer-centric approach - SDK that rides with an app (as opposed to an agent installed aside of an app), provides better flexibility and control for developers; easier support for deployment, auto-scaling. See more here
Rich, open sourced SDKs – see here
Integrated with Visual Studio & Azure Developer Workflow
Single product to collect and correlate all 360 degree data, including integrated Usage Analytics (beyond RUM) and Log Search; powerful and intuitive multi-dimensional analysis with drill-through into raw data
Cloud friendlier pricing model
(Disclaimer: the answerer lists themselves as "Architect in Visual Studio Application Insights team".)
| New Relic | 31,147,968 | 56 |
Where should I call NewRelic.Api.Agent.NewRelic.IgnoreApdex() or NewRelic.Api.Agent.NewRelic.IgnoreTransaction() in my SignalR hubs to prevent long-running persistent connections from overshadowing my application monitoring logs?
| To continue with Micah's answer, here is the custom instrumentation file for ignoring all signalr calls.
Create it at C:\ProgramData\New Relic\.NET Agent\Extensions\IgnoreSignalR.xml
<?xml version="1.0" encoding="utf-8"?>
<extension xmlns="urn:newrelic-extension">
<instrumentation>
<!-- Optional for basic traces. -->
<tracerFactory name="NewRelic.Agent.Core.Tracer.Factories.IgnoreTransactionTracerFactory">
<match assemblyName="Microsoft.AspNet.SignalR.Core" className="Microsoft.AspNet.SignalR.PersistentConnection">
<exactMethodMatcher methodName="ProcessRequest"/>
</match>
</tracerFactory>
</instrumentation>
</extension>
Remember to do iisreset.
| New Relic | 13,490,473 | 29 |
This is very specific, but I will try to be brief:
We are running a Django app on Heroku. Three servers:
test (1 web, 1 celery dyno)
training (1 web, 1 celery dyno)
prod (2 web, 1 celery dyno).
We are using Gunicorn with gevents and 4 workers on each dyno.
We are experiencing sporadic high service times. Here is an example from Logentries:
High Response Time:
heroku router - - at=info
method=GET
path="/accounts/login/"
dyno=web.1
connect=1ms
service=6880ms
status=200
bytes=3562
I have been Googling this for weeks now. We are unable to reproduce at will but experience these alerts 0 to 5 times a day. Notable points:
Occurs on all three apps (all running similar code)
Occurs on different pages, including simple pages such as 404 and /admin
Occurs at random times
Occurs with varying throughput. One of our instances only drives 3 users/day. It is not related to sleeping dynos because we ping with New Relic and the issue can occur mid-session
Unable to reproduce at will. I have experienced this issue personally once. Clicking a page that normally executes in 500ms resulted in a 30 second delay and eventually an app error screen from Heroku's 30s timeout
High response times vary from 5000ms - 30000ms.
New Relic does not point to a specific issue. Here are the past few transactions and times:
RegexURLResolver.resolve 4,270ms
SessionMiddleware.process_request 2,750ms
Render login.html 1,230ms
WSGIHandler 1,390ms
The above are simple calls and do not normally take near that amount of time
What I have narrowed it down to:
This article on Gunicorn and slow clients
I have seen this issue happen with slow clients but also at our office where we have a fiber connection.
Gevent and async workers not playing nicely
We've switched to gunicorn sync workers and problem still persists.
Gunicorn worker timeout
It's possible that workers are somehow being kept-alive in a null state.
Insufficient workers / dynos
No indication of CPU/memory/db overutilization and New Relic doesn't display any indication of DB latency
Noisy Neighbors
Among my multiple emails with Heroku, the support rep has mentioned at least one of my long requests was due to a noisy neighbor, but was not convinced that was the issue.
Subdomain 301
The requests are coming through fine, but getting stuck randomly in the application.
Dynos restarting
If this were the case, many users would be affected. Also, I can see that our dynos have not restarted recently.
Heroku routing / service issue
It is possible that the Heroku service is less than advertised and this is simply a downside of using their service.
We have been having this issue for the past few months, but now that we are scaling it needs to be fixed. Any ideas would be much appreciated as I have exhausted nearly every SO or Google link.
| I have been in contact with the Heroku support team over the past 6 months. It has been a long period of narrowing down through trial/error, but we have identified the problem.
I eventually noticed these high response times corresponded with a sudden memory swap, and even though I was paying for a Standard Dyno (which was not idling), these memory swaps were taking place when my app had not received traffic recently. It was also clear by looking at the metrics charts that this was not a memory leak, because the memory would plateau off rather than keep climbing.
After many discussions with their support team, I was provided this explanation:
Essentially, what happens is some backend runtimes end up with a combination of applications that end up using enough memory that the runtime has to swap. When that happens, a random set of dyno containers on the runtime are forced to swap arbitrarily by small amounts (note that "random" here is likely containers with memory that hasn't been accessed recently but is still resident in memory). At the same time, the apps that are using large amounts of memory also end up swapping heavily, which causes more iowait on the runtime than normal.
We haven't changed how tightly we pack runtimes at all since this issue started becoming more apparent, so our current hypothesis is that the issue may be coming from customers moving from versions of Ruby prior to 2.1 to 2.1+. Ruby makes up for a huge percentage of the applications that run on our platform and Ruby 2.1 made changes to it's GC that trades memory usage for speed (essentially, it GCs less frequently to get speed gains). This results in a notable increase in memory usage for any application moving from older versions of Ruby. As such, the same number of Ruby apps that maintained a certain memory usage level before would now start requiring more memory usage.
That phenomenon combined with misbehaving applications that have resource abuse on the platform hit a tipping point that got us to the situation we see now where dynos that shouldn't be swapping are. We have a few avenues of attack we're looking into, but for now a lot of the above is still a little bit speculative. We do know for sure that some of this is being caused by resource abusive applications though and that's why moving to Performance-M or Performance-L dynos (which have dedicated backend runtimes) shouldn't exhibit the problem. The only memory usage on those dynos will be your application's. So, if there's swap it'll be because your application is causing it.
I am confident this is the issue I and others have been experiencing, as it is related to the architecture itself and not to any combination of language/framework/configs.
There doesn't seem to be a good solution other than
A) tough it out and wait, or
B) switch to one of their dedicated instances
I am aware of the crowd that says "This is why you should use AWS", but I find the benefits that Heroku offers to outweigh some occasional high response times and their pricing has gotten better over the years. If you are suffering from the same issue, the "best solution" will be your choice. I will update this answer when I hear anything more.
Good luck!
| New Relic | 29,088,113 | 26 |
We're in the process of improving performance of our Rails app hosted at Heroku (Rails 3.2.8 and Ruby 1.9.3). During this we've come across one alarming problem whose source seems to be extremely difficult to track. Let me quickly explain how we experience the problem and how we've tried to isolate it.
--
Since around June we've experienced weird lag behavior in Time to First Byte all over the site. The problems is obvious from using the site (sometimes the application doesn't respond for 10-20 seconds), and it's also present in waterfall analysis via webpagetest.org.
We're based in Denmark but get this result from any host.
To confirm the problem we've performed a benchmark test where we send 300 identical requests to a simple page and measured the response time.
If we send 300 requests to the front page the median response time is below 1 second, which is fairly good. What scares us is that 60 requests take more than double that time and 40 of those take more than 4 seconds. Some requests take as much as 16 seconds.
None of these slow requests show up in New Relic, which we use for performance monitoring. No request queuing shows up and the results are the same no matter how high we scale our web processes.
Still, we couldn't reject that the problem was caused by application code, so we tried another experiment where we responded to the request via rack middleware.
By placing this middleware (TestMiddleware) at the beginning of the rack stack, we returned a request before it even hit the application, ensuring that none of the following middleware or the rails app could cause the delay.
Middleware setup:
$ heroku run rake middleware
use Rack::Cache
use ActionDispatch::Static
use TestMiddleware
use Rack::Rewrite
use Rack::Lock
use Rack::Runtime
use Rack::MethodOverride
use ActionDispatch::RequestId
use Rails::Rack::Logger
use ActionDispatch::ShowExceptions
use ActionDispatch::DebugExceptions
use ActionDispatch::RemoteIp
use Rack::Sendfile
use ActionDispatch::Callbacks
use ActiveRecord::ConnectionAdapters::ConnectionManagement
use ActiveRecord::QueryCache
use ActionDispatch::Cookies
use ActionDispatch::Session::DalliStore
use ActionDispatch::Flash
use ActionDispatch::ParamsParser
use ActionDispatch::Head
use Rack::ConditionalGet
use Rack::ETag
use ActionDispatch::BestStandardsSupport
use NewRelic::Rack::BrowserMonitoring
use Rack::RailsExceptional
use OmniAuth::Builder
run AU::Application.routes
We then ran the same script to document response time and got pretty much the same result. The median response time was around 130ms (obviously faster because it doesn't hit the app), but still 60 requests took more than 400ms and 25 requests took more than 1 second. Again, with some requests as slow as 16 seconds.
One explanation could be related to slow hops on the network or DNS setup, but the results of traceroute look perfectly OK.
This result was confirmed from running the response script on another rails 3.2 and ruby 1.9.3 application hosted on Heroku - no weird behavior at all.
The DNS setup follows Heroku's recommendations.
--
We're confused to say the least. Could there be something fishy with Heroku's routing network?
Why the heck are we seeing this weird behavior? How do we get rid of it? And why can't we see it in New Relic?
It turned out that it was a kind of request queuing. Sometimes a web server was busy, and since Heroku just routes incoming requests randomly to any dyno, I could end up in a queue behind a dyno which was totally stuck due to e.g. database problems. The strange thing is that this was hardly noticeable in New Relic (it's a good idea to uncheck all other resources when viewing things in their charts; then the queuing suddenly appears).
EDIT 21/2 2013: It has turned out, that the reason why it wasn't hardly noticeable in Newrelic was, that it wasn't measured! http://rapgenius.com/Lemon-money-trees-rap-genius-response-to-heroku-lyrics
We find this very frustrating, and we ended up leaving Heroku in favor of dedicated servers. This gave us 20 times better performance at 1/10 of the cost. Additionally I must say that we are disappointed by Heroku, who at the time this happened denied that the slowness was due to their infrastructure even though we suspected it and highlighted it several times. We even got answers like this back:
Heroku 28/8 2012: "If you're not seeing request queueing or other slowness reported in New Relic, then this is likely not a server-side issue. Heroku's internal routing should take <1ms. None of our monitoring systems are indicating any routing problems currently."
Additionally we spoke to Newrelic who also seemed unaware of the issue, even though they according to them selfs has a very close work relationship with Heroku.
Newrelic 29/8 2012: "It looks like whatever is causing this is happening before the Ruby agent's visibility starts. The queue time that the agent records is from the time the request enters a dyno, so the slow down is occurring before then."
The bottom line was that we ended up spending hours and hours optimizing code that wasn't really the bottleneck. Additionally we ran with a too-high dyno scale in a desperate try to boost our performance, but the only thing that we really got from this was bigger receipts from both Heroku and Newrelic - NOT COOL. I'm glad that we changed.
PS. At that time there even was a bug that caused Newrelic Pro to be charged on ALL dynos even though we (according to Newrelic's own advice) had disabled the monitoring on our background worker processes. It took a lot of time and many emails before the mistake was admitted by both parties.
PPS. If you are not aware of the current ongoing discussion, then here is the link http://rapgenius.com/James-somers-herokus-ugly-secret-lyrics
EDIT 26/2 2013
Heroku has just announced in their newsletter, that Newrelic has released an update that apparently should cast some light on the situation at Heroku.
EDIT 8/4 2013
Heroku has just released an FAQ over the topic
| New Relic | 12,181,133 | 22 |
I've got gem 'newrelic_rpm' in my Gemfile as per Heroku's documentation. When I attempt to run git push heroku master I receive the following:
-----> Ruby/Rails app detected
-----> Installing dependencies using Bundler version 1.3.0.pre.5
Running: bundle install --without development:test --path vendor/bundle --binstubs vendor/bundle/bin --deployment
Fetching gem metadata from https://rubygems.org/........
Fetching gem metadata from https://rubygems.org/..
Could not find newrelic_rpm-3.5.6.46 in any of the sources
!
! Failed to install gems via Bundler.
!
! Heroku push rejected, failed to compile Ruby/rails app
To git@heroku.com:reponame.git
! [remote rejected] master -> master (pre-receive hook declined)
Any ideas on how to fix this? I've already tried bundle update as per this SO answer: https://stackoverflow.com/a/4576816/337903 to no avail.
EDIT: version 3.5.8.72 of the gem has been released (thanks, Chris).
It appears the Bundler Dependency API is having issues.
newrelic_rpm-3.5.6.46 was yanked on January 22, 2013, but is still being requested by the API.
Locking your gemfile to the current release will fix the issue in the meantime.
gem "newrelic_rpm", "~> 3.5.5.38"
| New Relic | 14,845,445 | 21 |
I don't even use new relic and I'm getting errors for them. It just happened all of the sudden.
I'm using the latest Android Studio build (0.61). Even my master branch has the same error. There are other projects on my machine that use new relic, but not this one. This project does not use new relic in any way, not so much as a wayward gradle dependency.
I've tried clearing out my gradle cache and re-downloading all the third party libs, didn't work.
StackTrace:
06-15 01:05:54.872 20117-20117/com.waxwings.happyhour.staging D/HappyHourApplication﹕ CREATE TABLE job_holder (_id integer primary key autoincrement , `priority` integer, `group_id` text, `run_count` integer, `base_job` byte, `created_ns` long, `delay_until_ns` long, `running_session_id` long, `requires_network` integer );
06-15 01:05:54.874 20117-20117/com.waxwings.happyhour.staging D/AndroidRuntime﹕ Shutting down VM
06-15 01:05:54.877 20117-20117/com.waxwings.happyhour.staging E/AndroidRuntime﹕ FATAL EXCEPTION: main
Process: com.waxwings.happyhour.staging, PID: 20117
java.lang.NoClassDefFoundError: Failed resolution of: Lcom/newrelic/agent/android/instrumentation/SQLiteInstrumentation;
at com.path.android.jobqueue.persistentQueue.sqlite.DbOpenHelper.onCreate(DbOpenHelper.java:42)
at android.database.sqlite.SQLiteOpenHelper.getDatabaseLocked(SQLiteOpenHelper.java:252)
at android.database.sqlite.SQLiteOpenHelper.getWritableDatabase(SQLiteOpenHelper.java:164)
at com.path.android.jobqueue.persistentQueue.sqlite.SqliteJobQueue.<init>(SqliteJobQueue.java:42)
at com.path.android.jobqueue.JobManager$DefaultQueueFactory.createPersistentQueue(JobManager.java:594)
at com.path.android.jobqueue.JobManager.<init>(JobManager.java:77)
at com.waxwings.happyhour.HappyHourApplication.configureJobManager(HappyHourApplication.java:84)
at com.waxwings.happyhour.HappyHourApplication.onCreate(HappyHourApplication.java:38)
at android.app.Instrumentation.callApplicationOnCreate(Instrumentation.java:1030)
at android.app.ActivityThread.handleBindApplication(ActivityThread.java:4425)
at android.app.ActivityThread.access$1500(ActivityThread.java:139)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1270)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:136)
at android.app.ActivityThread.main(ActivityThread.java:5102)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:779)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:595)
Caused by: java.lang.ClassNotFoundException: Didn't find class "com.newrelic.agent.android.instrumentation.SQLiteInstrumentation" on path: DexPathList[[zip file "/data/app/com.waxwings.happyhour.staging-1.apk"],nativeLibraryDirectories=[/data/app-lib/com.waxwings.happyhour.staging-1, /vendor/lib, /system/lib]]
at dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:56)
at java.lang.ClassLoader.loadClass(ClassLoader.java:511)
at java.lang.ClassLoader.loadClass(ClassLoader.java:469)
at com.path.android.jobqueue.persistentQueue.sqlite.DbOpenHelper.onCreate(DbOpenHelper.java:42)
at android.database.sqlite.SQLiteOpenHelper.getDatabaseLocked(SQLiteOpenHelper.java:252)
at android.database.sqlite.SQLiteOpenHelper.getWritableDatabase(SQLiteOpenHelper.java:164)
at com.path.android.jobqueue.persistentQueue.sqlite.SqliteJobQueue.<init>(SqliteJobQueue.java:42)
at com.path.android.jobqueue.JobManager$DefaultQueueFactory.createPersistentQueue(JobManager.java:594)
at com.path.android.jobqueue.JobManager.<init>(JobManager.java:77)
at com.waxwings.happyhour.HappyHourApplication.configureJobManager(HappyHourApplication.java:84)
at com.waxwings.happyhour.HappyHourApplication.onCreate(HappyHourApplication.java:38)
at android.app.Instrumentation.callApplicationOnCreate(Instrumentation.java:1030)
at android.app.ActivityThread.handleBindApplication(ActivityThread.java:4425)
at android.app.ActivityThread.access$1500(ActivityThread.java:139)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1270)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:136)
at android.app.ActivityThread.main(ActivityThread.java:5102)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:779)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:595)
Suppressed: java.lang.ClassNotFoundException: com.newrelic.agent.android.instrumentation.SQLiteInstrumentation
at java.lang.Class.classForName(Native Method)
at java.lang.BootClassLoader.findClass(ClassLoader.java:781)
at java.lang.BootClassLoader.loadClass(ClassLoader.java:841)
at java.lang.ClassLoader.loadClass(ClassLoader.java:504)
... 19 more
Caused by: java.lang.NoClassDefFoundError: Class "Lcom/newrelic/agent/android/instrumentation/SQLiteInstrumentation;" not found
... 23 more
build.gradle for module:
apply plugin: 'android'
apply plugin: 'newrelic'
android {
compileSdkVersion 19
buildToolsVersion "19.1.0"
defaultConfig {
minSdkVersion 19
targetSdkVersion 19
versionCode 1
versionName "1.0"
testInstrumentationRunner "com.google.android.apps.common.testing.testrunner.GoogleInstrumentationTestRunner"
}
buildTypes {
release {
runProguard true
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.txt'
}
}
useOldManifestMerger true
productFlavors {
staging {
applicationId "com.waxwings.happyhour.staging"
}
production {
applicationId "com.waxwings.happyhour"
}
}
packagingOptions {
exclude 'LICENSE.txt'
exclude 'META-INF/DEPENDENCIES'
exclude 'META-INF/LICENSE.txt'
exclude 'META-INF/LICENSE'
exclude 'META-INF/NOTICE'
}
}
dependencies {
compile 'com.path:android-priority-jobqueue:1.1.2'
compile "com.android.support:support-v4:19.1.0"
compile 'com.google.android.gms:play-services:4.4.52'
// compile fileTree(dir: 'libs', include: ['*.jar'])
compile files('libs/wearable-preview-support.jar')
compile group: 'com.squareup.okhttp', name: 'okhttp', version: '1.5.3'
compile group: 'com.squareup.picasso', name: 'picasso', version: '2.2.0'
compile 'com.jakewharton:butterknife:5.0.1'
compile 'com.squareup.retrofit:retrofit:1.5.1'
compile 'com.squareup:otto:+'
compile 'com.squareup.phrase:phrase:+'
compile 'com.newrelic.agent.android:android-agent:3.402.0'
// Mockito dependencies
androidTestCompile "org.mockito:mockito-core:1.9.5"
androidTestCompile files(
'libs/dexmaker-1.0.jar',
'libs/dexmaker-mockito-1.0.jar')
androidTestCompile ('com.squareup:fest-android:1.0.8'){
exclude group:'com.android.support', module: 'support-v4'
}
androidTestCompile 'com.squareup.spoon:spoon-client:1.1.1'
androidTestCompile('junit:junit:4.11') {
exclude module: 'hamcrest-core'
}
androidTestCompile('com.jakewharton.espresso:espresso:1.1-r3') {
exclude group: 'org.hamcrest:hamcrest-core:1.1'
exclude group: 'org.hamcrest:hamcrest-library:1.1'
exclude group: 'org.hamcrest', module: 'hamcrest-integration'
exclude group:'com.android.support', module: 'support-v4'
}
androidTestCompile ('com.jakewharton.espresso:espresso-support-v4:1.1-r3'){
exclude group:'com.android.support', module: 'support-v4'
}
}
build.gradle for project:
buildscript {
repositories {
mavenCentral()
maven {
url 'https://oss.sonatype.org/content/repositories/comnewrelic-1153'
}
maven {
url 'https://oss.sonatype.org/content/repositories/comnewrelic-1154'
}
}
dependencies {
classpath 'com.android.tools.build:gradle:0.11.+'
classpath 'com.newrelic.agent.android:agent-gradle-plugin:3.402.0'
}
}
allprojects {
repositories {
mavenCentral()
maven {
url 'https://oss.sonatype.org/content/repositories/comnewrelic-1153'
}
}
}
Edit
The project now imports New Relic in the build.gradle per a user's suggestion. This fixed the issue, but I'm still exploring why, as it doesn't seem like it should be necessary.
The class throwing the error is in Path's JobQueue lib. The library hasn't been updated in 4 months and my app has been running fine; this just started happening suddenly. The class that errors in the 3rd-party lib doesn't even use New Relic.
Edit 2
The priority job queue lib does not use New Relic. I have no clue why the stack trace says it does; it seems like a red herring. I've heard the New Relic SDK modifies the Android API and gives weird errors. But again, I don't use New Relic in my project. Is it possible using the NR SDK in another project somehow infected this one (maybe a bug in Android Studio)?
Edit 3
OK, the Priority Job Queue lib in the original stack trace is definitely a false flag. I went ahead and accessed my own Provider before the JobQueue had a chance to access its own (knowing this would force the creation of my own DB ahead of the JobQueue lib). My logic was that if Android Sqlite was being infected by New Relic then it would cause a similar error on my own OpenHelper; it did.
06-15 15:29:39.848 1368-1368/com.waxwings.happyhour.staging W/dalvikvm﹕ threadid=1: thread exiting with uncaught exception (group=0xa4d81b20)
06-15 15:29:39.848 1368-1368/com.waxwings.happyhour.staging E/AndroidRuntime﹕ FATAL EXCEPTION: main
Process: com.waxwings.happyhour.staging, PID: 1368
java.lang.NoClassDefFoundError: com.newrelic.agent.android.instrumentation.SQLiteInstrumentation
at com.waxwings.happyhour.services.HHOpenHelper.onCreate(HHOpenHelper.java:56)
at android.database.sqlite.SQLiteOpenHelper.getDatabaseLocked(SQLiteOpenHelper.java:252)
at android.database.sqlite.SQLiteOpenHelper.getReadableDatabase(SQLiteOpenHelper.java:188)
at com.waxwings.happyhour.services.HappyHourProvider.query(HappyHourProvider.java:121)
at android.content.ContentProvider.query(ContentProvider.java:857)
at android.content.ContentProvider$Transport.query(ContentProvider.java:200)
at android.content.ContentResolver.query(ContentResolver.java:461)
at android.content.ContentResolver.query(ContentResolver.java:404)
at com.waxwings.happyhour.HappyHourApplication.onCreate(HappyHourApplication.java:39)
at android.app.Instrumentation.callApplicationOnCreate(Instrumentation.java:1007)
at android.app.ActivityThread.handleBindApplication(ActivityThread.java:4344)
at android.app.ActivityThread.access$1500(ActivityThread.java:135)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1256)
at android.os.Handler.dispatchMessage(Handler.java:102)
at android.os.Looper.loop(Looper.java:136)
at android.app.ActivityThread.main(ActivityThread.java:5017)
at java.lang.reflect.Method.invokeNative(Native Method)
at java.lang.reflect.Method.invoke(Method.java:515)
at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:779)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:595)
at dalvik.system.NativeStart.main(Native Method)
Edit 4
I just ran the project on a machine that has never had a New Relic library installed in any project and it ran fine. I think this is pretty strong evidence that New Relic is doing something funny w/ their plugin/library.
Edit 5
I updated my gradle-wrapper.properties file to use the rc-1 distribution and that seems to have fixed the problem. I removed the New Relic dependencies from my build.gradle as one commenter suggested, and the app still runs fine.
distributionUrl=http\://services.gradle.org/distributions/gradle-1.12-rc-1-all.zip
My guess is that the New Relic library was being cached in the 1.12-all gradle distro and this was the common link causing other projects to error. I'm still not sure what makes the New Relic library different such that gradle thinks it should include it in other projects. Given that New Relic had an almost identical issue w/ their Eclipse plugin not too long ago, I think it's reasonable to assume there's something going on w/ their Maven distribution. I have a support ticket open with them and I will continue to investigate and report back.
The odd part is if I delete all the gradle distributions/caches/etc in ~/.gradle dir and try to use the 1.12-all distro it still throws that new relic error. The only way it will work is if I use the 1.12-rc-1-all gradle distro.
| ./gradlew --stop          # stop any running Gradle daemons
./gradlew cleanBuildCache  # clear the Android/Gradle build cache
./gradlew clean            # remove stale build outputs
works for me
| New Relic | 24,226,772 | 21 |
I am using Django and trying out New Relic. Is it possible to monitor the Django development server? I can only seem to find help on setting up New Relic with production servers.
Edit
'How to' for future reference:
(I used Django1.4)
Follow this:
https://newrelic.com/docs/python/python-agent-installation
As the last step of that guide (Integration with your Application) change your wsgi.py file to the following:
import newrelic.agent
newrelic.agent.initialize('/home/username/path/to/myproject/newrelic-1.9.0.21/newrelic.ini')
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
from django.core.wsgi import get_wsgi_application
application = get_wsgi_application()
application = newrelic.agent.wsgi_application()(application)
Now sign in to your account on the New Relic platform, make a few requests to your development server and see the changes on the platform. Grats, you made it!
| As of Django 1.4, the startproject command creates a wsgi file that the runserver command will use.
If you have an older Django project that does not have a wsgi file, you can create one as described in the Django docs, and set WSGI_APPLICATION in your settings.py file.
You should be able to set up new relic by modifying this wsgi file as described by the New Relic docs for Python agent integration.
| New Relic | 13,982,435 | 18 |
Anytime I try to build my project I keep getting this error:
Execution failed for task ':app:processReleaseGoogleServices'.
No matching client found for package name 'com.my.package'
I have made and remade the google-services.json and have used the app and the package com.my.package.
Here's my project build.gradle:
buildscript {
repositories {
...
}
dependencies {
classpath 'com.android.tools.build:gradle:2.0.0-beta6'
classpath 'com.github.JakeWharton:sdk-manager-plugin:220bf7a88a7072df3ed16dc8466fb144f2817070'
classpath 'com.neenbedankt.gradle.plugins:android-apt:1.8'
classpath 'io.fabric.tools:gradle:1.+'
classpath 'com.newrelic.agent.android:agent-gradle-plugin:4.265.0'
classpath 'com.google.gms:google-services:2.0.0-alpha9'
}
}
allprojects {
repositories {
...
}
}
// Define versions in a single place
ext {
supportLibraryVersion = '23.2.0'
playServicesVersion = '8.4.0'
}
Here's my app build.gradle:
apply plugin: 'android-sdk-manager'
apply plugin: 'com.android.application'
apply plugin: 'io.fabric'
apply plugin: 'newrelic'
apply plugin: 'com.neenbedankt.android-apt'
android {
packagingOptions {
exclude 'LICENSE.txt'
exclude 'META-INF/LICENSE'
exclude 'META-INF/LICENSE.txt'
exclude 'META-INF/NOTICE'
exclude 'META-INF/services/javax.annotation.processing.Processor'
}
dexOptions {
jumboMode true
}
lintOptions {
disable 'InvalidPackage'
abortOnError false
}
compileSdkVersion 23
buildToolsVersion '23.0.2'
defaultConfig {
applicationId "com.my.package"
minSdkVersion 15
targetSdkVersion 23
}
buildTypes {
debug {
applicationIdSuffix '.debug'
versionNameSuffix '-DEBUG'
...
}
staging {
applicationIdSuffix '.staging'
versionNameSuffix '-STAGING'
...
}
release {
...
}
}
}
dependencies {
compile "com.android.support:support-v4:$rootProject.supportLibraryVersion",
"com.android.support:support-annotations:$rootProject.supportLibraryVersion",
"com.android.support:percent:$rootProject.supportLibraryVersion",
"com.android.support:appcompat-v7:$rootProject.supportLibraryVersion",
"com.android.support:mediarouter-v7:$rootProject.supportLibraryVersion",
"com.google.android.gms:play-services-base:$rootProject.playServicesVersion",
"com.google.android.gms:play-services-cast:$rootProject.playServicesVersion",
"com.google.android.gms:play-services-gcm:$rootProject.playServicesVersion",
"com.google.android.gms:play-services-analytics:$rootProject.playServicesVersion",
...
'com.newrelic.agent.android:android-agent:4.265.0',
'com.android.support:multidex:1.0.0'
compile 'com.squareup.dagger:dagger:1.2.1'
}
apply plugin: 'com.google.gms.google-services'
I've followed the instructions here several times. I am also using my release configuration so there isn't any reason the applicationIdSuffix should be an issue. Also, com.my.package is just a stand-in for my package name. What can I do to resolve this issue?
| You need to provide a google-services.json for all flavors (release, development, etc.)
A single google-services.json can hold the data for all flavors. Go to the Google Developers Console and regenerate the google-services.json file.
Update
You can also create separate google-services.json files for flavors
https://developers.google.com/android/guides/google-services-plugin
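For reference, the plugin can also pick up per-variant copies of the file from matching source-set directories; a hypothetical layout for the build types above (verify the exact lookup rules for your plugin version against the linked docs):
app/google-services.json
app/src/debug/google-services.json
app/src/staging/google-services.json
app/src/release/google-services.json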
| New Relic | 35,876,643 | 18 |
I was diving into a really long request to one of my Rails applications using NewRelic and found a number of SQL queries that appear entirely foreign that are taking up a significant length of time. I've Google'd around but I've come up empty handed as to what they are, let alone whether I can prevent them from occurring.
SELECT COUNT(*) FROM pg_class c LEFT JOIN pg_namespace n ON n.oid = c.relnamespace WHERE c.relkind in (?, ?) AND c.relname = ? AND n.nspname = ANY (current_schemas(false))
…and…
SELECT a.attname, format_type(a.atttypid, a.atttypmod), pg_get_expr(d.adbin, d.adrelid), a.attnotnull, a.atttypid, a.atttypmod FROM pg_attribute a LEFT JOIN pg_attrdef d ON a.attrelid = d.adrelid AND a.attnum = d.adnum WHERE a.attrelid = ?::regclass AND a.attnum > ? AND NOT a.attisdropped ORDER BY a.attnum
…each occurred 7 times, taking 145ms and 135ms (respectively) total.
SELECT DISTINCT(attr.attname) FROM pg_attribute attr INNER JOIN pg_depend dep ON attr.attrelid = dep.refobjid AND attr.attnum = dep.refobjsubid INNER JOIN pg_constraint cons ON attr.attrelid = cons.conrelid AND attr.attnum = cons.conkey[?] WHERE cons.contype = ? AND dep.refobjid = ?::regclass
…was performed 2 times at a cost of 104ms, and…
SHOW search_path
…commanded 45ms in a single call.
My gut says these are related to the Postgres Rails adapter, but I don't understand what triggers them or what they're doing, or (more importantly) why they fired during a typical request.
I just checked out the logs more thoroughly and it looks like the Dyno this request ran on had been transitioned to "up" just a few seconds earlier, so it's likely this request was the first.
| The tables pg_class, pg_attribute, pg_depend etc all describe table, columns and dependencies in postgres. In Rails, model classes are defined by the tables, so Rails reads the tables and columns to figure out the attributes for each model.
In development mode it looks up these values every time the model is accessed, so if you've made a recent change, Rails knows about it. In production mode, Rails caches this, so you would see these queries much less frequently, and so it really isn't a concern.
| New Relic | 14,694,392 | 15 |
Does anyone have sample code for a ServiceStack Api that successfully reports transactions into NewRelic?
This doesn't appear to be trivial – it doesn't happen out of the box, and adding a RequestFilter that calls NewRelic.Api.Agent.NewRelic.SetTransactionName doesn't change that.
The server and apps seem to be configured correctly - the other apps (ASP WebForms and ASP MVC) on the same IIS server are reporting correctly. We are using IIS 7.5. Each app is in its own app pool and has a NewRelic.AppName in the web.config
| I work at New Relic.
In addition to what @mythz said, I wanted to confirm that you've got the app pool configured separately (so you're not seeing the transactions in one of your other monitored pools) - you have set up a Web.config file with a separate application name (NewRelic.AppName setting) for this pool, right?
Assuming yes, and that you are seeing at least an agent startup event in the logs for the agent from that pool, I can verify that we've heard the same from other customers, though it's been awhile since we last visited the issue. Better automatic support for ServiceStack is on the horizon but I don't have an ETA.
When working with one customer experiencing this issue, we found that ServiceStack was calling into our agent for some reason. The solution that worked for them was to override ServiceStack's implementation of AppHostBase.Release in the project and validate the object that is to be released if it is a New Relic instance. They overrode it in the web service project's AppHost (AppHost template is in your project at App_Start\AppHost.cs), like this:
public override void Release(object instance)
{
    // Skip anything coming from the New Relic agent's namespace; releasing those
    // instances through ServiceStack's container is what triggers the problem.
    if (instance.GetType().ToString().StartsWith("NewRelic.Agent", StringComparison.CurrentCultureIgnoreCase))
        return;
    // Everything else is released normally.
    base.Release(instance);
}
In case this doesn't lead to a solution, if you could open a support ticket at https://support.newrelic.com, we could work with you further to get these stats tracked or at least get you on the notification list when we improve servicestack support.
| New Relic | 20,288,194 | 15 |
How can I zoom out on the New Relic graph? Currently I have to close the browser panel and open New Relic again in a new panel. Is there a more comfortable way to zoom out?
| If you wish to change the time period displayed on the graph, you do that by selecting the 'time picker' in the upper right corner of the dashboard, right under the name of your account. By default it will show 'Last 30 minutes Ending now'. Click on that and a selector gadget will appear allowing you to change the time period displayed.
You can read more about this in the documentation:
https://docs.newrelic.com/docs/site/timepicker-setting-time-periods-to-view-data
| New Relic | 23,065,219 | 15 |
We picked up quite a high number of AJAX calls taking a significant amount of time in AcquireRequestState. In our travels we stumbled upon the session-locking gotcha in ASP.NET, so we implemented a custom session state handler (based on the link below).
After making the change and deploying it, we saw a very sharp drop in AcquireRequestState but it had been replaced with PreExecuteRequestHandler.
This morning it suddenly dawned on me that we had included OWIN which was probably the reason for the PreExecuteRequestHandler taking up so much time. I then proceeded to remove that and the moment I deployed the code, PreExecuteRequestHandler disappeared off of the list. Sadly, it has now been replaced with AcquireRequestState again at pretty much the exact same cost.
We do seem to be getting hit quite hard on AJAX calls that return partial views; AJAX calls returning primitive types or JSON objects seem largely unaffected despite higher throughput.
So this leaves me with 3 questions that I am absolutely stumped on and I presume the answer for one would lead us to the answer for the other 2.
1) Why did the cost move from AcquireRequestState to PreExecuteRequestHandler when OWIN was installed? Is something in OWIN marked as IRequireSessionState?
(It is my understanding that AcquireRequestState should have occurred earlier in the managed pipeline)
2) How do we obtain more information on what's actually going on inside that AcquireRequestState? Or is our time better spent returning JSON object and using that to render what we require on the UI?
3) We do see a couple of requests (very few though) that map to /{controller}/{action}/{id} in New Relic and are then completely stuck for the duration of the request in the methods mentioned above. This despite setting constraints on our routing to only route to controllers and actions we have in the project.
PS:
This does seem very similar to the following, we are also seeing this in New Relic: long delays in AcquireRequestState
Custom Session Module from :
I just discovered why all ASP.Net websites are slow, and I am trying to work out what to do about it
| For anyone trying to rule out the session problem we ultimately faced above, but who still needs to rely on session values (so you can't just disable the session on the controller level) have a look at the following sessionstate provider:
https://github.com/aspnet/AspNetSessionState
Specifically make sure you pay attention to the following application setting:
<add key="aspnet:AllowConcurrentRequestsPerSession" value="[bool]"/>
You will need to upgrade to .Net Framework 4.6.2 but in our case it's a small price to pay.
| New Relic | 42,300,438 | 15 |
Is there anything like New Relic for .Net apps?
| Sam, I'm happy to tell you that as of today, there is something that is very much like New Relic for .NET. It's New Relic for .NET. Same service, UI, pricing, etc. New agent that supports the .NET framework. Try it out for free: https://newrelic.com/docs/dotnet/new-relic-for-net.
| New Relic | 2,121,259 | 14 |
So my typical router log on the Cedar platform might look like
2012-03-22T18:26:34+00:00 heroku[router]: GET [my_url] dyno=web.9 queue=0 wait=0ms service=228ms status=302 bytes=212
2012-03-22T18:26:36+00:00 heroku[router]: GET [my_url] dyno=web.7 queue=0 wait=0ms service=23ms status=200 bytes=360
2012-03-22T18:26:45+00:00 heroku[router]: GET [my_url] dyno=web.30 queue=0 wait=0ms service=348ms status=201 bytes=1
I want to confirm my understanding of the terms queue, wait and service
My initial thoughts were that:
queue: The name of the queue if using background_job or resque
wait: how long the request waits in the router (Request Queueing in New Relic)
service: how long it actually takes your application to handle the request (not including queuing time)
But the wait in my logs is always 0ms, even when I have a significant backlog.
Are my definitions wrong?
|
Queue: The number of requests waiting to be processed by a dyno.
Wait: The length of time this request sat in the queue before being processed.
Service: The processing time of the request.
Your total response time will be wait + service; for the first log line above, that's 0ms + 228ms = 228ms.
| New Relic | 9,828,846 | 14 |
When I try to start my Rails server, I am getting the following error:
I am using ruby 1.9.2
=> Booting WEBrick
=> Rails 3.1.8 application starting in development on http://0.0.0.0:3000
=> Call with -d to detach
=> Ctrl-C to shutdown server
/Users/toptier/.rvm/gems/ruby-1.9.2-p320/gems/newrelic_rpm-3.4.2/lib/new_relic/agent/agent.rb:318:in `log_app_names': undefined method `join' for nil:NilClass (NoMethodError)
from /Users/toptier/.rvm/gems/ruby-1.9.2-p320/gems/newrelic_rpm-3.4.2/lib/new_relic/agent/agent.rb:439:in `start'
from /Users/toptier/.rvm/gems/ruby-1.9.2-p320/gems/newrelic_rpm-3.4.2/lib/new_relic/control/instance_methods.rb:95:in `start_agent'
from /Users/toptier/.rvm/gems/ruby-1.9.2-p320/gems/newrelic_rpm-3.4.2/lib/new_relic/control/instance_methods.rb:83:in `init_plugin'
from /Users/toptier/.rvm/gems/ruby-1.9.2-p320/gems/newrelic_rpm-3.4.2/lib/newrelic_rpm.rb:36:in `block in <class:Railtie>'
from /Users/toptier/.rvm/gems/ruby-1.9.2-p320/gems/railties-3.1.8/lib/rails/initializable.rb:30:in `instance_exec'
from /Users/toptier/.rvm/gems/ruby-1.9.2-p320/gems/railties-3.1.8/lib/rails/initializable.rb:30:in `run'
from /Users/toptier/.rvm/gems/ruby-1.9.2-p320/gems/railties-3.1.8/lib/rails/initializable.rb:55:in `block in run_initializers'
from /Users/toptier/.rvm/gems/ruby-1.9.2-p320/gems/railties-3.1.8/lib/rails/initializable.rb:54:in `each'
from /Users/toptier/.rvm/gems/ruby-1.9.2-p320/gems/railties-3.1.8/lib/rails/initializable.rb:54:in `run_initializers'
from /Users/toptier/.rvm/gems/ruby-1.9.2-p320/gems/railties-3.1.8/lib/rails/application.rb:96:in `initialize!'
from /Users/toptier/.rvm/gems/ruby-1.9.2-p320/gems/railties-3.1.8/lib/rails/railtie/configurable.rb:30:in `method_missing'
from /Users/toptier/Desktop/Proyectos/CursoIngles/config/environment.rb:5:in `<top (required)>'
from /Users/toptier/.rvm/gems/ruby-1.9.2-p320/gems/activesupport-3.1.8/lib/active_support/dependencies.rb:240:in `require'
from /Users/toptier/.rvm/gems/ruby-1.9.2-p320/gems/activesupport-3.1.8/lib/active_support/dependencies.rb:240:in `block in require'
It is using the following gem: newrelic_rpm (3.4.2).
If I comment out the newrelic line in my Gemfile, it works fine.
Any idea?
| I work at New Relic and we've tracked down the problem.
This happens when nil is explicitly set as the app name, which typically happens for local development of Heroku apps that pull their app name from ENV["NEW_RELIC_APP_NAME"]. Since this environment variable is not typically set on your local dev box, it comes into the agent's config as nil and crashes the local server. It doesn't affect deployed versions of the app where this variable is set.
Obviously the agent should handle this case gracefully, and we'll have a patch out in the next day or two. We just completed a major refactoring of the agent's configuration, and this edge case was missed in our internal testing.
etoleb gives a good workaround in the comment. We're very sorry for causing you this headache.
If you have any questions or concerns feel free to email me directly at sam@newrelic.com.
Thanks!
| New Relic | 12,334,340 | 12 |
Environment:
Ruby: 2.1.2
Rails: 4.1.4
Heroku
In our Rails app hosted on Heroku, there are times when requests take a long time to execute. It is just 1% of requests or less, but we cannot figure out why it is happening.
We have newrelic agent installed and it says that it is not request-queuing, it is the transaction itself who takes all that time to execute.
However, transaction trace shows this:
(most of the time this same request takes only 100ms to execute)
As far as I can tell, the time is being consumed before our controller gets invoked. It is consumed on
Rack::MethodOverride#call
and that is what we cannot understand.
Also, most of the time (or maybe always, we are not sure) this happens on POST requests that are sent by mobile devices. Could this have something to do with a slow connection? (Although the POST payload is very tiny.)
Has anyone experienced this? Any advice on how to keep exploring this issue is appreciated.
Thanks in advance!
| Since the Ruby agent began to instrument middleware in version 3.9.0.229, we've seen this question arise for some users. One possible cause of the longer timings is that Rack::MethodOverride needs to examine the request body on POST in order to determine whether the POST parameters contain a method override. It calls Rack::Request#POST, which ends up triggering a read that reads in the entire request body.
This may be why you see that more time than expected is being spent in this middleware. Looking more deeply into how the POST body relates to the time spent in the middleware might be a fruitful avenue for investigation.
| New Relic | 24,639,701 | 12 |
Recently we converted a Tomcat/Spring app to Spring Boot. Everything is working fine apart from New Relic. Is there a way to easily configure New Relic with a Spring Boot project? I don't want to hard-code the location of the New Relic agent jar and then run the Spring Boot project with that path.
edit: Spring boot project is with maven
| You can include the New Relic Maven dependency and use the maven-dependency-plugin to unpack it into your target/classes directory, which allows Maven to include it in the final jar. Then you have to add a Premain-Class attribute to the manifest file, and you can use your application jar as the -javaagent source. You can find details in my blog post
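A rough sketch of the unpack step (the agent's Maven coordinates and version property here are my assumptions; verify them against Maven Central and the blog post):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-dependency-plugin</artifactId>
  <executions>
    <execution>
      <id>unpack-newrelic-agent</id>
      <phase>generate-resources</phase>
      <goals><goal>unpack</goal></goals>
      <configuration>
        <artifactItems>
          <artifactItem>
            <!-- coordinates assumed; check Maven Central for the agent artifact -->
            <groupId>com.newrelic.agent.java</groupId>
            <artifactId>newrelic-agent</artifactId>
            <version>${newrelic.version}</version>
            <!-- unpack into target/classes so the agent classes land in the final jar -->
            <outputDirectory>${project.build.outputDirectory}</outputDirectory>
          </artifactItem>
        </artifactItems>
      </configuration>
    </execution>
  </executions>
</plugin>
The Premain-Class manifest entry then points at the agent's premain class; take the value from your agent jar's own MANIFEST.MF rather than guessing it.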
| New Relic | 26,901,959 | 12 |
I am trying to make a New Relic deployment API call as a Jenkins build step using the Groovy pipeline. I'm having trouble because of the use of both single and double quotes within the shell ('sh') command on the groovy script. Whenever I execute the following:
node {
//...
def json = '''\
{"deployment": {"revision": "v1","user": "me"}}'
'''
sh "curl -o /dev/null -s -X POST 'https://api.newrelic.com/v2/applications/[redacted]/deployments.json' \
-H 'X-Api-Key:[redacted]' \
-H 'Content-Type: application/json' \
-d '${json}'"
// ...
}
I get an error in Jenkins that says:
/var/lib/jenkins/jobs/[redacted]/workspace@tmp/durable-0f6c52ef/script.sh: line 2: unexpected EOF while looking for matching `''
| The 'json' variable contains a string that has an extra trailing single quote (').
When this is used in
-d '${json}'"
I suspect it will result in an extra (') in the data block. The data block will require the JSON be enclosed in single quotes so make certain those are included.
Not being a Groovy person (pun intended), you may have to play with escaping characters to ensure that the correct string is passed to the cURL command.
| New Relic | 41,497,385 | 12 |
I want to install New Relic on one of my open source rails applications (v 3.2.12). I don't want to have the license key in the repo. I'd like to load it with something like ENV.
By default that's loaded in the newrelic.yml file.
Where is that YAML file loaded? I guess I could manually merge it with a hash that loads the license from the ENV hash.
Any hints on how to do that?
| I use the Figaro gem to handle secret keys with ENV environment variables, similar to you. For New Relic, I have:
config/application.yml (.gitignored and not pushed to source control)
# ...
NEW_RELIC_LICENSE_KEY: {{MY_KEY}}
which is then referenced in config/newrelic.yml:
# ...
license_key: <%= ENV['NEW_RELIC_LICENSE_KEY'] %>
A file called config/application.example.yml gets pushed up to the source code repo with instructions to put your own license key in:
config/application.example.yml
# ...
NEW_RELIC_LICENSE_KEY: # put your license key here
Also see this StackOverflow Q&A for more details:
What should be removed from public source control in Ruby on Rails?
| New Relic | 14,864,743 | 11 |
I am seeing some requests coming through one of my sites that have the X-NewRelic-ID request header attached. It's always in the form of a header.
Does this identify a user or simply a unique request passing through one of their services?
Thanks
| This header is related to a product called NewRelic.
As mentioned in tutsplus:
NewRelic is a managed service (SaaS) that you “plug in” to your web app, which collects and aggregates performance metrics of your live web application.
This header is automatically added by that plugin. The plugin can also inject scripts into all pages (if the owner enables it) to monitor page usage in the client browser, gather data, and send it back to the server for monitoring and statistical analysis.
| New Relic | 18,924,327 | 11 |
I am running Ubuntu 12.04 with Nginx and the latest PHP. The story goes like this:
I tried to install the new relic PHP agent per the instructions for ubuntu:
wget -O - http://download.newrelic.com/548C16BF.gpg | sudo apt-key add -
sudo sh -c 'echo "deb http://apt.newrelic.com/debian/ newrelic non-free" > /etc/apt/sources.list.d/newrelic.list'
sudo apt-get update
sudo apt-get install newrelic-php5
sudo newrelic-install install
And it doesn't work. After everything the PHP agent simply can't start. I even whipped up a quick phpinfo.php page to see if the newrelic module was listed and it's not. So then I googled "New relic .deb" and came across this page: https://docs.newrelic.com/docs/server/server-monitor-installation-ubuntu-and-debian and followed the instructions. The install all goes through but the agent also doesn't start. I like to keep my servers clean so I decided "OK, since it doesn't work, until new relic support gets back to me and I can start from fresh I will remove the new relic stuff that was installed". So once again I followed the instructions on that link. The install seemed to work normally. However, if I execute the command "PHP" I get the following error:
root@MYHOSTNAME:/home# php
PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20121212
/newrelic.so' - /usr/lib/php5/20121212/newrelic.so: cannot open shared object file:
No such file or directory in Unknown on line 0
I made sure there is no reference to newrelic in my /etc/php/fpm/php.ini file and double checked to see if there was anything in that folder. Nothing.
So my question is: how do I get rid of the error? How do I make PHP stop trying to load that newrelic.so module? Is there any reference to it somewhere that I might be missing?
| Ok, I found the answer. I can't describe how grateful I am to @mike in the following post: Error In PHP5 ..Unable to load dynamic library. I ran $ grep -Hrv ";" /etc/php5 | grep -i "extension=" and it returned a large list of files and one of them was newrelic.ini in /etc/php5/cli/conf.d/ which to be honest with you I wasn't even aware was a php directory. So I ran sudo rm -rf /etc/php5/cli/conf.d/newrelic.ini and restarted nginx and php5-fpm, and problem solved :)
Thanks @WayneWhitty for the suggestions! I am also going to let newrelic know that they should fix that on their uninstall script.
| New Relic | 19,740,094 | 11 |
I am working on CakePHP 2.x. I found in New Relic's transaction trace summary that there are some APIs which take a lot of time (almost 20-30 sec) to execute, of which almost 90% of the time is spent in the Controller::invokeAction method.
Can somebody explain why so much time is spent in the invokeAction method?
| I agree with @ndm that the Controller::invokeAction() method encapsulates everything that is triggered by the controller action. Your method did not take much time to execute, but when it sends the resulting data to the client, the time it takes to finish unloading the data gets logged against this method. In New Relic's parlance this is uninstrumented time.
New Relic's transaction time includes this network time too, and it gets logged against the Controller::invokeAction() method because New Relic couldn't find any other action to blame for the untracked time.
According to the New Relic documentation:
the most frequent culprits are functions that send large blocks of data or large files to users. If the user is on a slow connection, sending small files (small images for example) could take a long time due to simple network latency. Since no internal or C extension functions are instrumented, the PHP agent has no one to "blame" the time spent on, and this appears in a transaction trace as uninstrumented time.
You can read more about it here https://docs.newrelic.com/docs/agents/php-agent/troubleshooting/uninstrumented-time-traces
If you still want to figure out what happened, or trace the uninstrumented time of your method, you can set newrelic.transaction_tracer.detail to 0 in New Relic's PHP agent. This will ensure maximum visibility.
You can read more about setting up New Relic's PHP custom instrumentation here : https://docs.newrelic.com/docs/agents/php-agent/features/php-custom-instrumentation
| New Relic | 48,169,441 | 11 |
Has anyone successfully deployed the New Relic addon to a PHP app running on the Heroku Cedar stack? I'm running a fairly high-traffic Facebook app on a few dynos and can't get it to work.
The best info I can find details a Python deployment: http://newrelic.com/docs/python/python-agent-and-heroku
Thanks!
| Heroku has just recently rolled out support for PHP with Cedar and we at New Relic don't know anything more than you do. We'll be talking with Heroku ASAP to get some docs developed, which will certainly be on New Relic's knowledge base, and I'll report back here as well.
Edited to add:
Sorry for the long delay in me checking back in. Unfortunately this is still not possible in a well-supported way, reason being that our php agent requires a standalone daemon to be running in addition to the dyno that is serving your content. While you can find terrible hacks to get you into the space where you could fire up the daemon temporarily, it's not sustainable and won't port to the next dyno that spins up. This means that we can't support you running the agent in this environment.
Edited to add:
As @aaron-heusser mentioned, support is finally official as of a month or so ago: https://github.com/heroku/heroku-buildpack-php
Note: I work at New Relic.
| New Relic | 8,092,070 | 10 |
I'm using the newrelic_rpm developer mode locally in a rails 3.2 app. This is working fine.
When I install ruby-prof and click "start profiling" in the newrelic local dashboard and go back to my app, every page in my app gives "undefined method `pop' for #.
The top few lines of the traceback:
newrelic_rpm (3.6.4.122) lib/new_relic/agent/instrumentation/controller_instrumentation.rb:421:in `ensure in perform_action_with_newrelic_profile'
newrelic_rpm (3.6.4.122) lib/new_relic/agent/instrumentation/controller_instrumentation.rb:421:in `perform_action_with_newrelic_profile'
newrelic_rpm (3.6.4.122) lib/new_relic/agent/instrumentation/controller_instrumentation.rb:305:in `perform_action_with_newrelic_trace'
newrelic_rpm (3.6.4.122) lib/new_relic/agent/instrumentation/rails3/action_controller.rb:37:in `process_action'
actionpack (3.2.13) lib/abstract_controller/base.rb:121:in `process'
Any ideas how to work out what's going wrong?
| Dropped back to version 3.5.8.72 and it worked again. Just update your Gemfile with gem "newrelic_rpm", "3.5.8.72". I've logged an issue with them on it.
| New Relic | 17,195,319 | 10 |
There's no doubt NewRelic is taking the world by storm, with many successful deployments.
But what are the cons of using it in production?
The PHP monitoring agent works as a .so extension. If I understand correctly, it connects to a separate aggregation service, which filters the data and pushes it into the NewRelic cloud.
This simply means that it works transparently under the hood. However, is this actually true?
Any monitoring, profiling or api service adds some overhead to the entire stack.
The extension itself is 0.6 MB, which is added to each PHP process; this isn't much, so my concern is rather CPU and IO.
The image shows CPU utilization on production EC2 t1.micro instances with the NewRelic agent (top blue line) and without the agent (other lines).
What does NewRelic really do that causes the additional overhead?
What are other negative sides when using it?
| Your mileage may vary based on the settings, your particular site's code base, etc...
The additional overhead you're seeing is less about the memory used and more about the tracing and profiling of your PHP code, gathering analytic data on it, as well as DB request profiling. Basically, some additional overhead is hooked into every PHP function call. You'd see similar overhead if you left Xdebug or ZendDebugger running on a machine or profiling. Any module will use some resources; ones that hook in deep for profiling can be the costliest, but I've seen New Relic has config settings to dial back how intensively it profiles, so you might be able to lighten its hit more than, say, Xdebug's.
All that being said, with the newrelic shared PHP module loaded with the default setup and config from their site, my company's website's overall server response latency went up about 15-20% across the board when we turned it on for all our production machines. I'm only talking about the time it takes for php-fpm to generate an initial response. Our site is http://www.nara.me. The newrelic-daemon and newrelic-sysmon services were running as well, but I doubt they have any impact on response time.
Don't get me wrong, I love New Relic, but the performance hit in my specific situation doesn't make me want to keep the PHP module running on all our live load-balanced machines. We'll probably keep it running on one machine all the time. We do plan to keep the sysmon stuff going 100% and keep the module disabled in case we need it for troubleshooting.
My advice is this:
Wrap any calls to New Relic functions in if(function_exists($function_name)) blocks so your code can run without error if the New Relic module isn't loaded (see the sketch after this list)
If you have multiple identical servers behind a load balancer sharing the same code, only enable the PHP module on one image to save performance. You can keep the sysmon stuff running if you use New Relic for this.
If you have just one server, only enable the shared PHP module when you need it (when you're actually profiling your code or MySQL), unless a 10-20% performance hit isn't a problem.
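For instance, a minimal sketch of the guard from the first item (newrelic_add_custom_parameter is a real agent API function; the parameter name and value here are made up):
if (function_exists('newrelic_add_custom_parameter')) {
    // Only runs when the New Relic module is actually loaded.
    newrelic_add_custom_parameter('user_id', $userId);
}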
One other thing to remember if your main source of info is the New Relic website: they get paid by the number of machines you're monitoring, so don't expect them to convince you not to use it on anything less than 100% of your machines, even if it's not needed. I think one of their FAQs or blogs states that you should basically expect some performance impact, but if you use it as intended and fix the issues you see from it, you should recoup the latency lost. I agree, but I think once you fix the issues, limit the exposure to the smallest needed number of servers.
| New Relic | 22,702,056 | 10 |
I am developing an application that relies on stock market information. For the moment I use the Yahoo Finance CSV API. Unfortunately OpenTick stopped its service, and the Google Finance API will shut down soon, too.
I have a list of stock symbols I am interested in; I download a CSV and parse it. I do not need "live" or "legit" data, as I want to test how my application handles a high-frequency stock event stream. Ideally it should contain at least several hundred thousand quotes; the more the better (up to a point).
Data should be somewhat similar to this site, but I'd need way more data. My only concern is that it must contain the typical stock symbols; the data itself doesn't need to be too detailed (date/time, high, low, EOD, volume would do).
Does anyone know where I could get historic data (like an enormous CSV) which I could feed into my application?
Paying for that data is not an option. It would be great if somebody could share their experience/knowledge of where to get such data. I know about xignite, NxCore etc., but as this is an academic project, it must be free-to-use data.
If I am too optimistic and there basically is no free source, I'll have to "randomize" stock quotes, but that is the last option. The big advantage of a static historic data set is that I can really compare the performance of my application against the same input data.
I've already searched StackOverflow, but most threads are quite old and refer to solutions that no longer exist, as in this question.
EDIT:
The site EODdata.com, mentioned in a related question, gets close, but unfortunately their data is not free; at least the prices seem reasonable.
Another related SO question can be found here.
| On the site of Southwest Cyberport one can download some historic stock market data sets.
I've downloaded S&P 500 historic data as "daily update" and got approx. 11 MB of uncompressed txt files. Each file is 25 KB and can easily be concatenated into one big single file.
The format is CSV and corresponds to:
Date, Company, Open, High, Low, Close, Volume
Small sample can be found below:
20080306,A,30.51,30.7,30.1,30.14,21131
20080306,AA,38.85,39.28,38.26,38.37,112800
20080306,AAPL,124.9,127.5,120.81,120.93,526320
20080306,ABC,41.24,41.26,40.26,40.26,13738
20080306,ABI,34.18,34.21,33.59,33.63,21597
20080306,ABK,7.99,8.5,7.25,7.42,195953
20080306,ABT,52.83,52.98,52.05,52.09,60385
20080306,ACAS,34.75,34.86,32.57,32.65,27887
....
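As a quick illustration, a minimal C# sketch (my own, not from the data provider) that parses rows in this format; the column order and the file name are assumptions based on the sample above:
using System;
using System.Globalization;
using System.IO;

class QuoteReader
{
    static void Main()
    {
        // Date, Company, Open, High, Low, Close, Volume (order taken from the sample rows)
        foreach (var line in File.ReadLines("sp500_daily.csv")) // hypothetical file name
        {
            var f = line.Split(',');
            if (f.Length < 7) continue; // skip malformed or trailing lines
            DateTime date = DateTime.ParseExact(f[0], "yyyyMMdd", CultureInfo.InvariantCulture);
            string symbol = f[1];
            decimal close = decimal.Parse(f[5], CultureInfo.InvariantCulture);
            long volume = long.Parse(f[6]);
            Console.WriteLine($"{date:yyyy-MM-dd} {symbol} close={close} volume={volume}");
        }
    }
}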
| DataSet | 11,645,541 | 14 |
I am trying to run inference on tinyYOLO-V2 with INT8 weights and activations. I can convert the weights to INT8 with TFLiteConverter. For INT8 activations, I have to provide a representative dataset to estimate the scaling factors. My method of creating such a dataset seems wrong.
What is the correct procedure?
def rep_data_gen():
a = []
for i in range(160):
inst = anns[i]
file_name = inst['filename']
img = cv2.imread(img_dir + file_name)
img = cv2.resize(img, (NORM_H, NORM_W))
img = img / 255.0
img = img.astype('float32')
a.append(img)
a = np.array(a)
print(a.shape) # a is np array of 160 3D images
img = tf.data.Dataset.from_tensor_slices(a).batch(1)
for i in img.take(BATCH_SIZE):
print(i)
yield [i]
# https://www.tensorflow.org/lite/performance/post_training_quantization
converter = tf.lite.TFLiteConverter.from_keras_model_file("./yolo.h5")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = [tf.int8]
converter.inference_output_type = [tf.int8]
converter.representative_dataset=rep_data_gen
tflite_quant_model = converter.convert()
ValueError: Cannot set tensor: Got tensor of type STRING but expected type FLOAT32 for input 27, name: input_1
| I used your code for reading in a dataset and found the error:
img = img.astype('float32') should be
img = img.astype(np.float32)
Hope this helps
| DataSet | 57,877,959 | 14 |
I am having to convert an ASP classic system to C#
I have a stored procedure that can return up to 7 recordsets (depending on the parameters passed in).
I need to know how I can simply return all the recordsets as individual DataTables so that I can loop through whatever is there, skipping to the next DataTable when I get to the end of it without having to run multiple SQL statements and use multiple adapter.Fill statements to add each table into a DataSet.
In classic it was a simple Do While Not objRS.EOF loop, with an objRS.NextRecordset() call when I got to the end of the loop to move to the next statement.
Is there anything I can use that doesn't require a total rewrite of the current back end code?
Each recordset has a different number of columns and rows. They are unrelated to each other. We return multiple recordsets from Stored Proc's to reduce traffic.
Examples would be nice.
Thanks
| SqlConnection con = new SqlConnection("YourConnection String");
SqlCommand cmd = new SqlCommand("name of your Stored Procedure", con);
cmd.CommandType = CommandType.StoredProcedure;
//cmd.Parameters.AddWithValue("@SuperID", id); // if you have parameters
SqlDataAdapter da = new SqlDataAdapter(cmd);
DataSet ds = new DataSet();
// Fill opens and closes the connection itself; each SELECT in the
// stored procedure becomes one DataTable in the DataSet.
da.Fill(ds);
con.Close();
After this you can take advantage of different (7) recordsets using
ds.Tables[0]
ds.Tables[1]
ds.Tables[2]
ds.Tables[3]
ds.Tables[4]
ds.Tables[5]
ds.Tables[6]
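To then loop through whatever came back, the rough C# equivalent of the classic objRS.NextRecordset() pattern is:
foreach (DataTable table in ds.Tables)
{
    foreach (DataRow row in table.Rows)
    {
        // Each table has its own columns; access them by index or name as needed.
        object firstColumn = row[0];
    }
}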
| DataSet | 18,510,901 | 14 |
I've been stuck with this problem for a few hours and can't seem to figure it out, so I'm asking here :)
Alright, I've got this function:
private void XmlDump()
{
XDocument doc = new XDocument(new XDeclaration("1.0", "utf-8", "yes"));
XElement rootElement = new XElement("dump");
rootElement.Add(TableToX("Support"));
string connectionString = ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;
SqlConnection con = new SqlConnection(connectionString);
string sql = "select * from support";
SqlDataAdapter da = new SqlDataAdapter(sql, con);
DataSet ds = new DataSet("Test");
da.Fill(ds, "support");
// Convert dataset to XML here
var docresult = // Converted XML
Response.Write(docResult);
Response.ContentType = "text/xml; charset=utf-8";
Response.AddHeader("Content-Disposition", "attachment; filename=test.xml");
Response.End();
}
I've been trying all kinds of different things but I keep getting errors, so I've left the "convert DataSet to XML" part blank.
And another thing, this query contains columns with special characters.
| You can use ds.WriteXml, but that requires a Stream or writer to put the output into. If you want the output in a string, try this extension method:
public static class Extensions
{
public static string ToXml(this DataSet ds)
{
using (var memoryStream = new MemoryStream())
{
using (TextWriter streamWriter = new StreamWriter(memoryStream))
{
var xmlSerializer = new XmlSerializer(typeof(DataSet));
xmlSerializer.Serialize(streamWriter, ds);
streamWriter.Flush(); // defensive: make sure buffered output has reached the MemoryStream
return Encoding.UTF8.GetString(memoryStream.ToArray());
}
}
}
}
USAGE:
var xmlString = ds.ToXml();
// OR
Response.Write(ds.ToXml());
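As an aside, if you only need the raw data XML, DataSet also has a built-in shortcut (note the output differs slightly from the serializer's, since GetXml omits the schema):
string xmlString = ds.GetXml(); // data only; ds.GetXmlSchema() returns the schema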
| DataSet | 8,384,014 | 14 |
How can I search rows in a datatable for a row with Col1="MyValue"
I'm thinking something like
Assert.IsTrue(dataSet.Tables[0].Rows.
FindAll(x => x.Col1 == "MyValue" ).Count == 1);
But of course that doesn't work!
| You can use LINQ to DataSets to do this:
Assert.IsTrue(dataSet.Tables[0].AsEnumerable().Where(
r => ((string) r["Col1"]) == "MyValue").Count() == 1);
Note, you can also do this without the call to Assert:
dataSet.Tables[0].AsEnumerable().Where(
r => ((string) r["Col1"]) == "MyValue").Single();
If the number of rows does not equal one (hence, the call to Single), then an exception will be thrown, and that unhandled exception should fail your test case. Personally, I like the latter, as it has a clearer semantic meaning.
The above can be further whittled down to:
dataSet.Tables[0].AsEnumerable().Single(
r => ((string) r["Col1"]) == "MyValue");
Additionally, you can take advantage of the Field method on the DataRowExtensions class to simplify type-safe access to the field (as well as providing the extra benefit of converting DBNull to null counterparts in .NET):
dataSet.Tables[0].AsEnumerable().Single(
r => r.Field<string>("Col1") == "MyValue");
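Once you have the matching row, pulling other values out follows the same pattern (the "Col2" column below is made up for illustration):
DataRow match = dataSet.Tables[0].AsEnumerable().Single(
    r => r.Field<string>("Col1") == "MyValue");
string other = match.Field<string>("Col2"); // hypothetical second column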
| DataSet | 3,459,595 | 14 |
I assume I have to do this via a DataSet, but it doesn't like my syntax.
I have an XMLDocument called "XmlDocument xmlAPDP".
I want it in a DataTable called "DataTable dtAPDP".
I also have a DataSet called "DataSet dsAPDP".
If I do dsAPDP.ReadXML(xmlAPDP), it doesn't like that because ReadXML wants a string, I assume a filename?
| No hacks required:
XmlDocument xmlAPDP = new XmlDocument();
// ... load or build the document ...
XmlNodeReader xmlReader = new XmlNodeReader(xmlAPDP);
DataSet dataSet = new DataSet();
// ...
dataSet.ReadXml(xmlReader);
XmlDocument is an XmlNode, and XmlNodeReader is a XmlReader, which ReadXml accepts.
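Put together as a self-contained sketch (the sample XML below is made up to mirror a small two-table dump):
using System;
using System.Data;
using System.Xml;

class Demo
{
    static void Main()
    {
        var xmlAPDP = new XmlDocument();
        xmlAPDP.LoadXml("<dump><tablea cola=\"1\" colb=\"2\" /><tableb colc=\"3\" cold=\"4\" /></dump>");

        var dsAPDP = new DataSet();
        using (var reader = new XmlNodeReader(xmlAPDP))
            dsAPDP.ReadXml(reader); // infers one DataTable per element name

        Console.WriteLine(dsAPDP.Tables["tablea"].Rows[0]["cola"]); // prints 1
    }
}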
| DataSet | 836,806 | 14 |
I'd like to be able to open a TDataSet asynchronously in its own thread so that the main VCL thread can continue until that's done, and then have the main VCL thread read from that TDataSet afterwards. I've done some experimenting and have gotten into some very weird situations, so I'm wondering if anyone has done this before.
I've seen some sample apps where a TDataSet is created in a separate thread, it's opened and then data is read from it, but that's all done in the separate thread. I'm wondering if it's safe to read from the TDataSet from the main VCL thread after the other thread opens the data source.
I'm doing Win32 programming in Delphi 7, using TmySQLQuery from DAC for MySQL as my TDataSet descendant.
| Provided you only want to use the dataset in its own thread, you can just use synchronize to communicate with the main thread for any VCL/UI update, like with any other component.
Or, better, you can implement communication between the mainthread and worker threads with your own messaging system.
check Hallvard's solution for threading here:
http://hallvards.blogspot.com/2008/03/tdm6-knitting-your-own-threads.html
or this other one:
http://dn.codegear.com/article/22411
for some explanation on synchronize and its inefficiencies:
http://www.eonclash.com/Tutorials/Multithreading/MartinHarvey1.1/Ch3.html
| DataSet | 78,475 | 14 |
Is anyone aware of a script/class (preferably in PHP) that would parse a given MySQL table's structure and then fill it with x number of rows of random test data based on the field types?
I have never seen or heard of something like this and thought I would check before writing one myself.
| What you are after would be a data generator.
There is one available here which I had bookmarked, but I haven't gotten around to trying it yet.
| DataSet | 19,162 | 14 |
How to create excel file with multiple sheets from DataSet using C#?
I have successfully created an Excel file with a single sheet, but I am not able to do that for multiple sheets.
| Here is a simple C# class that programmatically creates an Excel WorkBook, adds two sheets to it, and then populates both sheets. Finally, it saves the WorkBook to a file in the application root directory so that you can inspect the results...
public class Tyburn1
{
object missing = Type.Missing;
public Tyburn1()
{
Excel.Application oXL = new Excel.Application();
oXL.Visible = false;
Excel.Workbook oWB = oXL.Workbooks.Add(missing);
Excel.Worksheet oSheet = oWB.ActiveSheet as Excel.Worksheet;
oSheet.Name = "The first sheet";
oSheet.Cells[1, 1] = "Something";
Excel.Worksheet oSheet2 = oWB.Sheets.Add(missing, missing, 1, missing)
as Excel.Worksheet;
oSheet2.Name = "The second sheet";
oSheet2.Cells[1, 1] = "Something completely different";
string fileName = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location)
+ "\\SoSample.xlsx";
oWB.SaveAs(fileName, Excel.XlFileFormat.xlOpenXMLWorkbook,
missing, missing, missing, missing,
Excel.XlSaveAsAccessMode.xlNoChange,
missing, missing, missing, missing, missing);
oWB.Close(missing, missing, missing);
oXL.UserControl = true;
oXL.Quit();
}
}
To do this, you would need to add a reference to Microsoft.Office.Interop.Excel to your project (you may have done this already since you are creating one sheet).
The statement that adds the second sheet is...
Excel.Worksheet oSheet2 = oWB.Sheets.Add(missing, missing, 1, missing)
as Excel.Worksheet;
the '1' argument specifies a single sheet, and it can be more if you want to add several sheets at once.
Final note: the statement oXL.Visible = false; tells Excel to start in silent mode.
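To go from a DataSet to one sheet per table, the same Interop calls can be looped; an untested sketch assuming ds is your populated DataSet and missing is the Type.Missing field from the class above:
foreach (DataTable dt in ds.Tables)
{
    Excel.Worksheet sheet = oWB.Sheets.Add(missing, missing, 1, missing) as Excel.Worksheet;
    sheet.Name = dt.TableName; // sheet names must be unique and at most 31 characters
    for (int c = 0; c < dt.Columns.Count; c++)
        sheet.Cells[1, c + 1] = dt.Columns[c].ColumnName; // header row
    for (int r = 0; r < dt.Rows.Count; r++)
        for (int c = 0; c < dt.Columns.Count; c++)
            sheet.Cells[r + 2, c + 1] = dt.Rows[r][c].ToString(); // DBNull becomes an empty cell
}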
| DataSet | 8,156,616 | 13 |
I'm loading large datasets and then caching them for reference throughout my code. The code looks something like this:
val conversations = sqlContext.read
.format("com.databricks.spark.redshift")
.option("url", jdbcUrl)
.option("tempdir", tempDir)
.option("forward_spark_s3_credentials","true")
.option("query", "SELECT * FROM my_table "+
"WHERE date <= '2017-06-03' "+
"AND date >= '2017-03-06' ")
.load()
.cache()
If I leave off the cache, the code executes quickly because Datasets are evaluated lazily. But if I put on the cache(), the block takes a long time to run.
From the online Spark UI's Event Timeline, it appears that the SQL table is being transmitted to the worker nodes and then cached on the worker nodes.
Why is cache executing immediately? The source code appears to only mark it for caching when the data is computed:
The source code for Dataset calls through to this code in CacheManager.scala when cache or persist is called:
/**
* Caches the data produced by the logical representation of the given [[Dataset]].
* Unlike `RDD.cache()`, the default storage level is set to be `MEMORY_AND_DISK` because
* recomputing the in-memory columnar representation of the underlying table is expensive.
*/
def cacheQuery(
query: Dataset[_],
tableName: Option[String] = None,
storageLevel: StorageLevel = MEMORY_AND_DISK): Unit = writeLock {
val planToCache = query.logicalPlan
if (lookupCachedData(planToCache).nonEmpty) {
logWarning("Asked to cache already cached data.")
} else {
val sparkSession = query.sparkSession
cachedData.add(CachedData(
planToCache,
InMemoryRelation(
sparkSession.sessionState.conf.useCompression,
sparkSession.sessionState.conf.columnBatchSize,
storageLevel,
sparkSession.sessionState.executePlan(planToCache).executedPlan,
tableName)))
}
}
Which only appears to mark for caching rather than actually caching the data. And I would expect caching to return immediately based on other answers on Stack Overflow as well.
Has anyone else seen caching happening immediately before an action is performed on the dataset? Why does this happen?
| cache is one of those operators that causes execution of a dataset. Spark will materialize that entire dataset to memory. If you invoke cache on an intermediate dataset that is quite big, this may take a long time.
What might be problematic is that the cached dataset is only stored in memory. When it no longer fits, partitions of the dataset get evicted and are re-calculated as needed (see https://spark.apache.org/docs/latest/rdd-programming-guide.html#rdd-persistence). With too little memory present, your program could spend a lot of time on re-calculations.
To speed things up with caching, you could give the application more memory, or you can try to use persist(MEMORY_AND_DISK) instead of cache.
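In code that's a one-line swap on the snippet from the question (same Scala API):
import org.apache.spark.storage.StorageLevel

val conversations = sqlContext.read
  .format("com.databricks.spark.redshift")
  // ... same options as above ...
  .load()
  .persist(StorageLevel.MEMORY_AND_DISK)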
| DataSet | 45,419,963 | 13 |
I have a report with multiple data-sets. Different fields from different data-sets are used in different locations of the report.
In one part of the report, I need to do a calculation using fields from two different data-sets. Is this possible within an expression?
Can I somehow reference the data-set the field is in, in the expression?
For example, I'd like to do something like this:
=Fields.Dataset1.Field / Fields.Dataset2.Field
| You can achieve that by specifying the scope of your fields, like this:
=First(Fields!fieldName_A.Value, "Dataset1") / First(Fields!fieldName_B.Value, "Dataset2")
Assuming A is 10 and B is 2, and both are numeric, the result will be 5 when the report renders.
When you are in the expression builder, choose the Category 'Datasets', highlight your desired dataset under 'Item:', and then double-click the desired field under 'Value:'; it will appear in your expression string with the scope added.
Using the same logic you can concatenate two fields like so:
=First(Fields!fieldName_A.Value, "Dataset1") & “ “ & First(Fields!fieldName_B.Value, "Dataset2")
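Aggregates take the same explicit scope; for example, to divide dataset totals instead of first-row values:
=Sum(Fields!fieldName_A.Value, "Dataset1") / Sum(Fields!fieldName_B.Value, "Dataset2")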
| DataSet | 9,676,149 | 13 |
I want to do an OCR benchmark for scanned text (typically any scan, i.e. A4). I was able to find some NEOCR datasets here, but NEOCR is not really what I want.
I would appreciate links to sources of free databases that have appropriate images and the actual texts (contained in the images) referenced.
I hope this thread will also be useful for other people searching for OCR datasets, since I didn't find any good reference to such sources.
Thanks!
| I've had good luck using university research data sets in a number of projects. These are often useful because the input and expected results need to be published to independently reproduce the study results. One example is the UNLV data set for the Fourth Annual Test of OCR Accuracy discussed more below.
Another approach is to start with a data set and create your own training set. It may also be worthwhile to work with Project Gutenberg which has transcribed 57,136 books. You could take the HTML version (with images) and print it out using a variety of transformations like fonts, rotation, etc. Then you could convert the images and scan them to compare against the text version. See an example further below.
1) Annual Tests of OCR Accuracy DOE and UNLV
The Department of Energy (DOE) and Information Science Research Institute (ISRI) of UNLV ran OCR tests for 5 years from 1992 to 1995. You can find the study descriptions for each year here:
Overview: http://www.expervision.com/testimonial-world-leading-and-champion-ocr/annual-test-of-ocr-accuracy-by-us-department-of-energy-doe-university-of-nevada-las-vegas-unlv
1.1) UNLV Tesseract OCR Test Data published in Fourth Annual Test of OCR Accuracy
The data for the fourth annual test using Tesseract is posted online. Since this was an OCR study, it may suit your purposes.
This data is now hosted as part of the ISRI of UNLV OCR Evaluation Tools project posted on Google Code:
Project: https://code.google.com/archive/p/isri-ocr-evaluation-tools/
Images and Ground Truth text and zone files for several thousand English and some Spanish pages that were used in the UNLV/ISRI annual tests of OCR accuracy between 1992 and 1996.
Source code of OCR evaluation tools used in the UNLV/ISRI annual tests of OCR Accuracy.
Publications of the Information Science Research Institute of UNLV applicable to OCR and text retrieval.
You can find information on this data set here:
Description: https://github.com/tesseract-ocr/tesseract/wiki/UNLV-Testing-of-Tesseract
Datasets: https://code.google.com/archive/p/isri-ocr-evaluation-tools/downloads
At the datasets link, you'll find a number of gzipped tarballs you can download. In each tarball is a number of directories with a set of files. Each document has 3 files:
.tif binary image file
.txt text file
.uzn zone file for describing the scanned image
Note: while posting, I noticed this data set was originally posted in a comment by @Stef above.
2) Project Gutenberg
Project Gutenberg has transcribed 57,136 free ebooks in the following formats:
HTML
EPUB (with images)
EPUB (no images)
Kindle (with images)
Kindle (no images)
Plain Text UTF-8
Here is an example: http://www.gutenberg.org/ebooks/766
You could create a test data set by doing the following:
Create test files:
Start with HTML, ePub, Kindle, or plain text versions
Render and transform using different fonts, rotation, background color, with and without images, etc.
Convert the rendering to the desired format, e.g. TIFF, PDF, etc.
Test:
Run generated images through OCR system
Compare with original plain text version
| DataSet | 41,181,742 | 13 |
I want to do a very simple thing: move some code in VS13 from one project in to another one and I'm facing the strange problem with datasets. For simplicity let's say that in my source project I have one dataset named MyDataSet which consists from 5 files: MyDataSet.cs, MyDataSet.Designer.cs, MyDataSet.xsc, MyDataSet.xsd, MyDataSet.xss.
Then I copy these files to my destination project folder using standard windows functionality and use Include in Project menu option in VS13. After that I see that one extra file was added: MyDataSet1.Designer.cs.
I compared the csproj files and they are different.
Source (only parts different from target are shown):
<Compile Include="MyDataSet.Designer.cs">
<AutoGen>True</AutoGen>
<DesignTime>True</DesignTime>
<DependentUpon>MyDataSet.xsd</DependentUpon>
</Compile>
<None Include="MyDataSet.xsd">
<SubType>Designer</SubType>
<Generator>MSDataSetGenerator</Generator>
<LastGenOutput>MyDataSet.Designer.cs</LastGenOutput>
</None>
Target (only part different from source are shown):
<Compile Include="MyDataSet.Designer.cs">
<DependentUpon>MyDataSet.cs</DependentUpon>
</Compile>
<Compile Include="MyDataSet1.Designer.cs">
<AutoGen>True</AutoGen>
<DesignTime>True</DesignTime>
<DependentUpon>MyDataSet.xsd</DependentUpon>
</Compile>
<None Include="MyDataSet.xsd">
<Generator>MSDataSetGenerator</Generator>
<LastGenOutput>MyDataSet1.Designer.cs</LastGenOutput>
<SubType>Designer</SubType>
</None>
Also, I noticed that in MyDataSet.cs and MyDataSet1.Designer.cs the namespaces were automatically changed to the correct ones.
I'm using ReSharper, and at first I thought it might be the cause, but I disabled ReSharper and the same behavior still happens.
Probably I can fix this by removing the newly created files and modifying the csproj files, but there are a lot of datasets that I need to copy and I really don't like that kind of work.
Does anyone have any ideas what can be a reason of such problem and how can it be solved?
| Move the dataset from within Visual Studio by right-clicking the dataset root node in Solution Explorer (usually the .xsd) and selecting Copy, and then right-click the destination project or project folder and select Paste. This should copy the files and correctly markup the csproj files.
| DataSet | 29,609,528 | 13 |
How can we easily import/export database data which dbunit could take in the following format?
<dataset>
<tablea cola="" colb="" />
<tableb colc="" cold="" />
</dataset>
I'd like to find a way to export the existing data from the database for my unit tests.
| Blue, this will let you export your data in the format you wanted.
public class DatabaseExportSample {
public static void main(String[] args) throws Exception {
// database connection
Class driverClass = Class.forName("org.hsqldb.jdbcDriver");
Connection jdbcConnection = DriverManager.getConnection(
"jdbc:hsqldb:sample", "sa", "");
IDatabaseConnection connection = new DatabaseConnection(jdbcConnection);
// partial database export
QueryDataSet partialDataSet = new QueryDataSet(connection);
partialDataSet.addTable("FOO", "SELECT * FROM TABLE WHERE COL='VALUE'");
partialDataSet.addTable("BAR");
FlatXmlDataSet.write(partialDataSet, new FileOutputStream("partial.xml"));
// full database export
IDataSet fullDataSet = connection.createDataSet();
FlatXmlDataSet.write(fullDataSet, new FileOutputStream("full.xml"));
// dependent tables database export: export table X and all tables that
// have a PK which is a FK on X, in the right order for insertion
String[] depTableNames =
TablesDependencyHelper.getAllDependentTables( connection, "X" );
IDataSet depDataSet = connection.createDataSet( depTableNames );
FlatXmlDataSet.write(depDataSet, new FileOutputStream("dependents.xml"));
}
}
| DataSet | 14,355,969 | 13 |
I want to get a specific row in an ASP.NET DataTable and move it to the first position in that DataTable, based on the value of a column column1. My DataTable dt1 is populated via a DB query, and the value to search for comes from another query against a different DB, so I don't know the value to search for at the time dt1 is selected.
// I use this variable to search into
// DataTable
string valueToSearch = "some value";
So I need to search for the value "some value" in column column1 of my DataTable and then move the entire row to the first position.
Thank you.
| We have to clone the row data first:
DataRow[] dr = dtable.Select("column1 = '" + valueToSearch + "'");
DataRow newRow = dtable.NewRow();
// We "clone" the row
newRow.ItemArray = dr[0].ItemArray;
// We remove the old row and insert the clone at the top
dtable.Rows.Remove(dr[0]);
dtable.Rows.InsertAt(newRow, 0);
| DataSet | 13,825,772 | 13 |
I want to create a 'fake' data field in a DataSet (not ClientDataSet):
the field should not be stored in the db
it's not a calculated field (the user should be allowed to enter input data)
the field has business logic meaning, so after the user updates its value it should update other fields (with the OnFieldChange event)
I know I can use a simple non-data-aware control, capture its OnChange event and perform the calculations there (or call a DataModule function, where the DataSet resides), but I think it's cleaner if I can reuse the dataset's automatic binding with DB-aware controls and dataset events.
Also, this way the only connection between the Form (Presentation) and the DataModule (Model) is the DataSet (less coupling).
PS: I'm using FIBPlus, but I think the solution (if any) will be at the VCL level.
Thanks!
| Have you tried using an InternalCalc field? Your data aware controls will let you edit an InternalCalc field's value, and the value is stored in the dataset.
If you create an InternalCalc field in the dataset (TClientDataSet, TQuery, etc.) at design time, it's almost exactly what you're asking for.
| DataSet | 7,997,932 | 13 |
I have the following code which connects to a database and stores the data into a dataset.
What I need to do now is get a single value from the data set (well actually its two the first row column 4 and 5)
OdbcConnection conn = new OdbcConnection();
conn.ConnectionString = ConfigurationManager.ConnectionStrings["ConnectionString2"].ConnectionString;
DataSet ds = new DataSet();
OdbcDataAdapter da = new OdbcDataAdapter("SELECT * FROM MTD_FIGURE_VIEW1", conn);
da.Fill(ds);
So, I need to get two specific items and store them into ints; the pseudocode would be
int var1 = ds.row1.column4
int var2 = ds.row1.column5
Any ideas on how I can do this?
Also, can some one shed a light on data tables too as this may be related to how I'm going about doing this.
| You can do it like this...
If you want to access using ColumnName
Int32 First = Convert.ToInt32(ds.Tables[0].Rows[0]["column4Name"].ToString());
Int32 Second = Convert.ToInt32(ds.Tables[0].Rows[0]["column5Name"].ToString());
OR, if you want to access using Index
Int32 First = Convert.ToInt32(ds.Tables[0].Rows[0][4].ToString());
Int32 Second = Convert.ToInt32(ds.Tables[0].Rows[0][5].ToString());
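If you prefer typed access without the string round-trip, the Field<T> extension method (from System.Data.DataSetExtensions) does the cast for you. A minimal sketch, assuming the columns really store integers:
// Typed access by column name or by index; assumes integer columns.
int first = ds.Tables[0].Rows[0].Field<int>("column4Name");
int second = ds.Tables[0].Rows[0].Field<int>(5);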
| DataSet | 6,346,458 | 13 |
I have a TClientDataSet, which is provided by a TTable’s dataset.
The dataset has two fields: postalcode (string, 5) and street (string, 20)
At runtime I want to display a third field (string, 20). The routine for this field gets the postalcode as a parameter and returns the city that belongs to this postalcode.
The problem is only about adding a calculated field to the already existing ones. Filling the data itself is not the problem.
I tried:
cds.SetProvider(Table1);
cds.FieldDefs.Add('city', ftString, 20);
cds.Open;
cds.Edit;
cds.FieldByName('city').AsString := 'Test'; // --> errormessage (field not found)
cds.Post;
cds is my clientdataset, Table1 is a paradox Table, but the problem is the same with other databases.
Thanks in advance
| If you want to add additional fields other than those that exist in the underlying data, you also need to add the existing fields manually. The dataset needs to be closed when you're adding fields, but you can get the necessary metadata with FieldDefs.Update if you don't want to track all field details manually. Basically something like this:
var
i: Integer;
Field: TField;
begin
cds.SetProvider(Table1);
// add existing fields
cds.FieldDefs.Update;
for i := 0 to cds.FieldDefs.Count - 1 do
cds.FieldDefs[i].CreateField(cds);
// add calculated field
Field := TStringField.Create(cds);
Field.FieldName := 'city';
Field.Calculated := True;
Field.DataSet := cds;
cds.Open;
end;
Also see this excellent article by Cary Jensen.
| DataSet | 4,934,103 | 13 |
The application's code and configuration files are maintained in a code repository. But sometimes, as a part of the project, I also have a some data (which in some cases can be >100MB, >1GB or so), which is stored in a database. Git does a nice job in handling the code and its changes, but how can the development team easily share the data?
It doesn't really fit in the code version control system, as it is mostly large binary files, and would make pulling updates a nightmare. But it does have to be synchronised with the repository, because some code revisions change the schema (ie migrations).
How do you handle such situations?
| We have the data and schema stored in xml and use liquibase to handle the updates to both the schema and the data. The advantage here is that you can diff the files to see what's going on, it plays nicely with any VCS and you can automate it.
Due to the size of your database this would mean a sizable "version 0" file. But, using the migration strategy, after that the updates should be manageable as they would only be deltas. You might be able to convert your existing migrations one-to-one to liquibase as well which might be nicer than a big-bang approach.
You can also leverage @belisarius' strategy if your deltas are very large so each developer doesn't have to apply the delta individually.
| DataSet | 3,362,917 | 13 |
I am really having a hard time here. I need to design a "Desktop app" that will use WCF as the communications channel. It's a multi-tiered application (the DB and application server are the same machine; the client goes through the internet cloud).
The application is a little more complex (in terms of SQL and code logic) than the usual LOB applications, but the concept is the same: read from the DB, update the DB, handle concurrency, etc. My problem is that now, with Entity Framework out in the open, I can't decide which way to proceed: should I use Entity Framework, DataSets or custom classes?
As I understand it, Entity Framework will create the object mapping of my DB tables ALONG WITH the CRUD scripts as well. That's all well and good for simple CRUD, but most of the time the "Select" is complex and requires custom SQL. I understand I can use stored procedures in EF (I don't like SPs, by the way; I don't know why, I like to code my SQL in the DAL by hand, I feel more secure and comfortable that way).
With DataSets, I will use my custom SQL and populate the dataset. With custom classes (objects for DB tables) I will populate those custom classes (collections and lists, etc.) from my custom SQL. I want to use EF, but I don't feel confident deploying an application whose SQL I have not written and can't see in the code. Am I missing something here?
Any help in this regard would be greatly appreciated.
Xeshu
| I would agree with Marc G. 100% - DataSets suck, especially in a WCF scenario (they add a lot of overhead for handling in-memory data manipulation) - don't use those. They're okay for beginners and two-tier desktop apps on a small scale maybe - but I wouldn't use them in a serious, professional app.
Basically, your question boils down to how do you transform your rows from the database into something you can remote across WCF. This means some form of mapping - either you do it yourself, using DataReaders and then shoving all the data into WCF [DataContract] classes - you can certainly do that, gives you the ultimate control, but it's also tedious, cumbersome, and error-prone.
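For illustration, a minimal sketch of that manual mapping approach (the Customer class and its columns are made-up names, not from the question):
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Runtime.Serialization;

[DataContract]
public class Customer
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

public static List<Customer> LoadCustomers(SqlConnection conn)
{
    // Map each row by hand into the WCF data contract.
    var result = new List<Customer>();
    using (var cmd = new SqlCommand("SELECT Id, Name FROM Customer", conn))
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            result.Add(new Customer
            {
                Id = reader.GetInt32(0),
                Name = reader.GetString(1)
            });
        }
    }
    return result;
}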
Or you let some ready-made ORM handle this grunt work for you - take your pick amongst Linq-to-SQL (great, easy-to-use, flexible, but SQL Server only), EF v4 (out by March 2010 - looks very promising, very flexible) or any other ORM, really - whatever suits your needs best.
Other serious competitors in the ORM space might include Subsonic 3.0 and NHibernate (amongst many many others).
So to sum up:
forget about Datasets
either you take 100% control and do the mapping between SQL and your objects yourself
you let some capable ORM handle that (Linq-to-SQL, EF v4, Subsonic, NHibernate et al) - which one really doesn't matter all that much, i.e. it's also a matter of personal preference and coding style
| DataSet | 1,679,064 | 13 |
What was (or would be) the reasoning behind creating TDataSource as an intermediary between data bound components and the actual underlying TDataSets, rather than having the components just connect directly to the TDataSets themselves?
This may seem like kind of a stupid question, but I am working on a broad set of "data viewer" components, which link to a common "data connector" component, etc; and in designing this set of components, I find myself referencing the structure of the classic Delphi "TDataSet -> TDataSource -> Data-bound-component" setup for guidance. In my component set, however, I keep wanting to essentially merge the functionality of the "TDataSource" and "TDataSet" equivalents into a single class. It got me wondering what the reasoning was behind separating them in the first place.
| It is all about decoupling and indirection.
And with TDataSource there are two kinds of them:
Decoupling the master detail relations (TDataSource is in the same module as the TDataSets that are being bound; the detail TDataSet references the master TDataSet by pointing its' MasterSource property to the TDataSource that points to the master TDataSet)
Decoupling the UI from the business layer (TDataSets are in a DataModule; TDataSource is on the Form/Frame which contains your UI controls, UI controls reference their DataSource property).
Since many components can point to the same DataSource, you can quickly switch which underlying TDataSet they use by just flipping one TDataSource.DataSet property.
| DataSet | 1,610,908 | 13 |
I need a list of common first names for people, like "Bill", "Gordon", "Jane", etc. Is there some free list of lots of known names, instead of me having to type them out? Something that I can easily parse with the programme to fill in an array for example?
I'm not worried about:
Knowing if a name is masculine or feminine (or both)
If the dataset has a whole pile of false positives
If there are names that aren't on it, obviously no dataset like this will be complete.
If there are 'duplicates', i.e. I don't care if the dataset lists "Bill" and "William" and "Billy" as different names. I'd rather have more data than less
I don't care about knowing the popularity the name
I know Wikipedia has a list of most popular given names, but that's all in an HTML page and mangled up with horrible wiki syntax. Is there a better way to get some sample data like this without having to screen scrape Wikipedia?
|
A CSV from the General Register Office of Scotland with all the forenames registered there in 2007.
Another large set of first names in CSV format and SQL format too (but they didn't say which DB dumped the SQL).
GitHub page with the top 1000 baby names from 1880 to 2009, already parsed into a CSV for you from the Social Security Administration.
CSV of baby names and meanings from a Princeton CS page.
That ought to be enough to get you started, I'd think.
| DataSet | 1,452,003 | 13 |
I am just learning C# through Visual Studio 2008.
I was wondering: what exactly is the correlation between databases, datasets and binding sources?
As well, what is the function of the table adapter?
| At a super high level:
Database -- stores raw data
DataSet -- a .NET object that can be used to read, insert, update and delete data in a database
BindingSource -- a .NET object that can be used for Data Binding for a control. The BindingSource could point to a DataSet, in which case the control would display and edit that data
TableAdapter -- Maps data from a database table into a DataSet
There is a lot more to all of these, and understanding the way ADO.NET is architected can take a bit of time. Good luck!
| DataSet | 598,669 | 13 |
I'm working on a database application in C#; when I hit the display button I get an error:
Error:
Cannot bind to the property or column LastName on the DataSource.
Parameter name: dataMember
Code:
private void Display_Click(object sender, EventArgs e)
{
Program.da2.SelectCommand = new SqlCommand("Select * From Customer", Program.cs);
Program.ds2.Clear();
Program.da2.Fill(Program.ds2);
customerDG.DataSource = Program.ds2.Tables[0];
Program.tblNamesBS2.DataSource = Program.ds.Tables[0];
customerfirstname.DataBindings.Add(new Binding("Text", Program.tblNamesBS2, "FirstName"));
customerlastname.DataBindings.Add(new Binding("Text", Program.tblNamesBS2, "LastName")); //Line Error occurs on.
}
I'm not sure what it means; can anyone help? If I comment out the last two lines it displays properly.
| You will also run into this error if you bind to a NULL object.
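So before adding the binding, it can help to verify that the data you are binding to actually exists. A sketch using the names from the question:
// Guard against binding to data that is not there.
if (Program.ds.Tables.Count > 0 &&
    Program.ds.Tables[0].Columns.Contains("LastName") &&
    Program.ds.Tables[0].Rows.Count > 0)
{
    customerlastname.DataBindings.Add(
        new Binding("Text", Program.tblNamesBS2, "LastName"));
}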
| DataSet | 11,645,551 | 12 |
I wanted to create a data set with a specific Mean and Std deviation.
Using np.random.normal() gives me an approximation. However, for what I want to test I need an exact mean and std deviation.
I have tried using a combination of norm.pdf and np.linspace, but the generated data set doesn't match up either (it could just be me misusing it, though).
It really doesn't matter whether the data set is random or not as long as I can set a specific Sample size, mean and Std deviation.
Help would be much appreciated
| The easiest would be to generate some zero-mean samples, with the desired standard deviation. Then subtract the sample mean from the samples so it is truly zero mean. Then scale the samples so that the standard deviation is spot on, and then add the desired mean.
Here is some example code:
import numpy as np
num_samples = 1000
desired_mean = 50.0
desired_std_dev = 10.0
samples = np.random.normal(loc=0.0, scale=desired_std_dev, size=num_samples)
actual_mean = np.mean(samples)
actual_std = np.std(samples)
print("Initial samples stats : mean = {:.4f} stdv = {:.4f}".format(actual_mean, actual_std))
zero_mean_samples = samples - (actual_mean)
zero_mean_mean = np.mean(zero_mean_samples)
zero_mean_std = np.std(zero_mean_samples)
print("True zero samples stats : mean = {:.4f} stdv = {:.4f}".format(zero_mean_mean, zero_mean_std))
scaled_samples = zero_mean_samples * (desired_std_dev/zero_mean_std)
scaled_mean = np.mean(scaled_samples)
scaled_std = np.std(scaled_samples)
print("Scaled samples stats : mean = {:.4f} stdv = {:.4f}".format(scaled_mean, scaled_std))
final_samples = scaled_samples + desired_mean
final_mean = np.mean(final_samples)
final_std = np.std(final_samples)
print("Final samples stats : mean = {:.4f} stdv = {:.4f}".format(final_mean, final_std))
Which produces output similar to this:
Initial samples stats : mean = 0.2946 stdv = 10.1609
True zero samples stats : mean = 0.0000 stdv = 10.1609
Scaled samples stats : mean = 0.0000 stdv = 10.0000
Final samples stats : mean = 50.0000 stdv = 10.0000
| DataSet | 51,515,423 | 12 |
I am searching for optimized datatypes for "observations-variables" tables in Matlab that can be accessed quickly and easily by columns (through variables) and by rows (through observations).
Here is сomparison of existing Matlab datatypes:
Matrix is very fast, hovewer, it has no built-in indexing labels/enumerations for its dimensions, and you can't always remember variable name by column index.
Table has very bad performance, especially when reading individual rows/columns in a for loop (I suppose it runs some slow convertion methods, and is designed to be more Excel-like).
Scalar structure (structure of column arrays) datatype - fast column-wise access to variables as vectors, but slow row-wise conversion to observations.
Nonscalar structure (array of structures) - fast row-wise access to observations as vectors, but slow column-wise conversion to variables.
I wonder if I can use some simpler and optimized version of Table data type, if I want just to combine row-number and column-variable indexing with only numerical variables -OR- any variable type.
Results of test script:
----
TEST1 - reading individual observations
Matrix: 0.072519 sec
Table: 18.014 sec
Array of structures: 0.49896 sec
Structure of arrays: 4.3865 sec
----
TEST2 - reading individual variables
Matrix: 0.0047834 sec
Table: 0.0017972 sec
Array of structures: 2.2715 sec
Structure of arrays: 0.0010529 sec
Test script:
Nobs = 1e5; % number of observations-rows
varNames = {'A','B','C','D','E','F','G','H','I','J','K','L','M','N','O'};
Nvar = numel(varNames); % number of variables-colums
M = randn(Nobs, Nvar); % matrix
T = array2table(M, 'VariableNames', varNames); % table
NS = struct; % nonscalar structure = array of structures
for i=1:Nobs
for v=1:Nvar
NS(i).(varNames{v}) = M(i,v);
end
end
SS = struct; % scalar structure = structure of arrays
for v=1:Nvar
SS.(varNames{v}) = M(:,v);
end
%% TEST 1 - reading individual observations (row-wise)
disp('----'); disp('TEST1 - reading individual observations');
tic; % matrix
for i=1:Nobs
x = M(i,:); end
disp(['Matrix: ', num2str(toc()), ' sec']);
tic; % table
for i=1:Nobs
x = T(i,:); end
disp(['Table: ', num2str(toc), ' sec']);
tic;% nonscalar structure = array of structures
for i=1:Nobs
x = NS(i); end
disp(['Array of structures: ', num2str(toc()), ' sec']);
tic;% scalar structure = structure of arrays
for i=1:Nobs
for v=1:Nvar
x.(varNames{v}) = SS.(varNames{v})(i);
end
end
disp(['Structure of arrays: ', num2str(toc()), ' sec']);
%% TEST 2 - reading individual variables (column-wise)
disp('----'); disp('TEST2 - reading individual variables');
tic; % matrix
for v=1:Nvar
x = M(:,v); end
disp(['Matrix: ', num2str(toc()), ' sec']);
tic; % table
for v=1:Nvar
x = T.(varNames{v}); end
disp(['Table: ', num2str(toc()), ' sec']);
tic; % nonscalar structure = array of structures
for v=1:Nvar
for i=1:Nobs
x(i,1) = NS(i).(varNames{v});
end
end
disp(['Array of structures: ', num2str(toc()), ' sec']);
tic; % scalar structure = structure of arrays
for v=1:Nvar
x = SS.(varNames{v}); end
disp(['Structure of arrays: ', num2str(toc()), ' sec']);
| I would use matrices, since they're the fastest and most straightforward to use, and then create a set of enumerated column labels to make indexing columns easier. Here are a few ways to do this:
Use a containers.Map object:
Given your variable names, and assuming they map in order from columns 1 through N, you can create a mapping like so:
varNames = {'A','B','C','D','E','F','G','H','I','J','K','L','M','N','O'};
col = containers.Map(varNames, 1:numel(varNames));
And now you can use the map to access columns of your data by variable name. For example, if you want to fetch the columns for variables A and C (i.e. the first and third) from a matrix data, you would do this:
subData = data(:, [col('A') col('C')]);
Use a struct:
You can create a structure with the variable names as its fields and the corresponding column indices as their values like so:
enumData = [varNames; num2cell(1:numel(varNames))];
col = struct(enumData{:});
And here's what col contains:
struct with fields:
A: 1
B: 2
C: 3
D: 4
E: 5
F: 6
G: 7
H: 8
I: 9
J: 10
K: 11
L: 12
M: 13
N: 14
O: 15
And you would access columns A and C like so:
subData = data(:, [col.A col.C]);
% ...or with dynamic field names...
subData = data(:, [col.('A') col.('C')]);
Make a bunch of variables:
You could just create a variable in your workspace for every column name and store the column indices in them. This will pollute your workspace with more variables, but gives you a terse way to access column data. Here's an easy way to do it, using the much-maligned eval:
enumData = [varNames; num2cell(1:numel(varNames))];
eval(sprintf('%s=%d;', enumData{:}));
And accessing columns A and C is as easy as:
subData = data(:, [A C]);
Use an enumeration class:
This is probably a good dose of overkill, but if you're going to use the same mapping of column labels and indices for many analyses you could create an enumeration class, save it somewhere on your MATLAB path, and never have to worry about defining your column enumerations again. For example, here's a ColVar class with 15 enumerated values:
classdef ColVar < double
enumeration
A (1)
B (2)
C (3)
D (4)
E (5)
F (6)
G (7)
H (8)
I (9)
J (10)
K (11)
L (12)
M (13)
N (14)
O (15)
end
end
And you would access columns A and C like so:
subData = data(:, [ColVar.A ColVar.C]);
| DataSet | 44,679,592 | 12 |
I'm working on this project based on TensorFlow.
I just want to train an OCR model with attention_ocr on my own dataset, but I don't know how to store my images and ground truth in the same format as the FSNS dataset.
Is anybody else working on this project, or does anyone know how to solve this problem?
| The data format for storing training/test is defined in the FSNS paper https://arxiv.org/pdf/1702.03970.pdf (Table 4).
To store tfrecord files with tf.Example protos you can use tf.python_io.TFRecordWriter. There is a nice tutorial, an existing answer on Stack Overflow and a short gist.
Assume you have a numpy ndarray img which has num_of_views images stored side-by-side (see Fig. 3 in the paper):
and a corresponding text in a variable text. You will need to define some function to convert a unicode string into a list of character ids padded to a fixed length and unpadded as well. For example:
char_ids_padded, char_ids_unpadded = encode_utf8_string(
text='abc',
charset={'a':0, 'b':1, 'c':2},
length=5,
null_char_id=3)
the result should be:
char_ids_padded = [0,1,2,3,3]
char_ids_unpadded = [0,1,2]
If you use functions _int64_feature and _bytes_feature defined in the gist you can create a FSNS compatible tf.Example proto using a following snippet:
char_ids_padded, char_ids_unpadded = encode_utf8_string(
text, charset, length, null_char_id)
example = tf.train.Example(features=tf.train.Features(
feature={
'image/format': _bytes_feature("PNG"),
'image/encoded': _bytes_feature(img.tostring()),
'image/class': _int64_feature(char_ids_padded),
'image/unpadded_class': _int64_feature(char_ids_unpadded),
'height': _int64_feature(img.shape[0]),
'width': _int64_feature(img.shape[1]),
'orig_width': _int64_feature(img.shape[1]/num_of_views),
'image/text': _bytes_feature(text)
}
))
| DataSet | 44,430,310 | 12 |
I am inexperienced with parsing XML files, and I am saving line graph data to an xml file, so I did a little bit of research. According to this article, out of all the ways to read an XML file, DataSet is the fastest. And it makes sense that I use DataSet since there could be a significant amount of data. Here's how my graph documents look:
<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<BreezyCalc>
<Graph Version="3.0" Mode="static">
<Range>
<X Min="-20" Max="20" />
<Y Min="-20" Max="20" />
</Range>
<Lines>
<Line Name="MyLine1" R="0" G="255" B="0">
<Point X="-17" Y="9" />
<Point X="7" Y="-5" />
<Point X="10" Y="4" />
<Point X="-6" Y="2" />
</Line>
<Line Name="MyLine2" R="255" G="0" B="0">
<Point X="-7" Y="3" />
<Point X="8" Y="-1" />
<Point X="-4" Y="-4" />
<Point X="-1" Y="6" />
</Line>
</Lines>
</Graph>
</BreezyCalc>
Since there could be a large number of points in these lines, I need to get the data as quickly and with as few resources as possible. If there is a faster approach than DataSet, please enlighten me. Otherwise, could someone show me how I would get my graph data using a DataSet as my XML parser?
| If you want to use a DataSet, it is very simple.
// Here your xml file
string xmlFile = "Data.xml";
DataSet dataSet = new DataSet();
dataSet.ReadXml(xmlFile, XmlReadMode.InferSchema);
// Then display informations to test
foreach (DataTable table in dataSet.Tables)
{
Console.WriteLine(table);
for (int i = 0; i < table.Columns.Count; ++i)
Console.Write("\t" + table.Columns[i].ColumnName.Substring(0, Math.Min(6, table.Columns[i].ColumnName.Length)));
Console.WriteLine();
foreach (var row in table.AsEnumerable())
{
for (int i = 0; i < table.Columns.Count; ++i)
{
Console.Write("\t" + row[i]);
}
Console.WriteLine();
}
}
If you want something faster, you can try XmlReader, which reads the XML one node at a time. But it is a bit more difficult to develop with.
You can see it here : http://msdn.microsoft.com/library/cc189056(v=vs.95).aspx
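For reference, a minimal XmlReader sketch for the file above (requires using System.Xml; the file name test.xml is an assumption):
using (XmlReader reader = XmlReader.Create("test.xml"))
{
    while (reader.Read())
    {
        if (reader.NodeType != XmlNodeType.Element)
            continue;
        if (reader.Name == "Line")
        {
            string name = reader.GetAttribute("Name");
            // start a new line here
        }
        else if (reader.Name == "Point")
        {
            int x = int.Parse(reader.GetAttribute("X"));
            int y = int.Parse(reader.GetAttribute("Y"));
            // add the point to the current line
        }
    }
}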
| DataSet | 14,412,186 | 12 |
I am doing a query to get Title and RespondBy from the tbl_message table, and I want to decrypt the Title before I databind to the repeater. How can I access the Title value before doing the databind?
string MysqlStatement = "SELECT Title, RespondBy FROM tbl_message WHERE tbl_message.MsgID = @MsgID";
using (DataServer server = new DataServer())
{
MySqlParameter[] param = new MySqlParameter[1];
param[0] = new MySqlParameter("@MsgID", MySqlDbType.Int32);
param[0].Value = MessageID;
command.Parameters.AddWithValue("@MsgID", MessageID);
ds = server.ExecuteQuery(CommandType.Text, MysqlStatement, param);
}
rptList.DataSource = ds;
rptList.DataBind();
<table style="width: 498px; color: #F5F5F5;">
<asp:Repeater ID="rptList" runat="server">
<HeaderTemplate>
</HeaderTemplate>
<ItemTemplate>
<tr>
<td width="15%">
<b>Subject</b>
</td>
<td width="60%">
<asp:Label ID="lbl_Subj" runat="server" Text='<%#Eval("Title")%>' />
</td>
</tr>
| You can get the Title from the DataSet as shown below; do this before
rptList.DataSource = ds;
rptList.DataBind();
The following code gets the Title from the dataset:
string title = ds.Tables[0].Rows[0]["Title"].ToString();
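Since the repeater binds the whole table, you can also decrypt every Title in place before binding. A sketch, where Decrypt stands for whatever decryption routine you use:
foreach (DataRow row in ds.Tables[0].Rows)
{
    // Decrypt is a placeholder for your own decryption routine.
    row["Title"] = Decrypt(row["Title"].ToString());
}
rptList.DataSource = ds;
rptList.DataBind();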
| DataSet | 7,765,548 | 12 |
I am looking for real-world applications where topological sorting is performed on large graphs.
Some fields where I imagine you could find such instances would be bioinformatics, dependency resolution, databases, hardware design, data warehousing... but I hope some of you may have encountered or heard of specific algorithms/projects/applications/datasets that require topsort.
Even if the data/project may not be publicly accessible any hints (and estimates on the order of magnitude of potential graph sizes) might be helpful.
| Here are some examples I've seen so far for Topological Sorting:
While scheduling task graphs in a distributed system, it is usually necessary to sort the tasks topologically and then assign them to resources. I am aware of task graphs containing more than 100,000 tasks to be sorted in topological order. See this in this context.
Once upon a time I was working on a Document Management System. Each document in this system has some kind of precedence constraint to a set of other documents, e.g. its content type or field referencing. The system should then be able to generate an ordering of the documents that preserves the topological order. As I recall, there were around 5,000,000 documents available two years ago!
In the field of social networking, there is a famous query to find the largest friendship distance in the network. This problem requires traversing the graph with a BFS approach, equal in cost to a topological sort. Consider the members of Facebook and find your answer.
If you need more real examples, do not hesitate to ask me. I have worked on lots of projects dealing with large graphs.
P.S. For large DAG datasets, you may take a look at the Stanford Large Network Dataset Collection and the Graphics@Illinois page.
| DataSet | 7,260,847 | 12 |
If I do something like:
DataSet ds = GetMyDataset();
try
{
string somevalue = (string)ds.Tables[0].Rows[0]["col1"];
}
catch
{
//maybe something was null
}
Is there a good way to check for null values without using the try/catch? It's just that I don't care if the value in "col1" is null, OR if "col1" didn't exist, OR if there were no rows returned, OR if the table doesn't exist!
Maybe I should care? :)
Maybe try/catch is the best way of approaching this but I just wondered if there was another way to do it?
Thanks!
| It is kind of strange not to care about the Table or the Column.
It is a much more normal practice to expect table[0].Rows.Count == 0 for instance.
And the best way to check for NULL values is with if(...) ... else ....
The worst way is to wait for Exceptions (in whatever way).
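For example, a sketch with the checks spelled out:
string somevalue = null;
if (ds != null
    && ds.Tables.Count > 0
    && ds.Tables[0].Columns.Contains("col1")
    && ds.Tables[0].Rows.Count > 0
    && !ds.Tables[0].Rows[0].IsNull("col1"))
{
    somevalue = ds.Tables[0].Rows[0]["col1"].ToString();
}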
| DataSet | 6,280,935 | 12 |
In R, I have used the write.foreign() function from the foreign library in order to write a data frame as a SAS data set.
write.foreign(df = test.df, datafile = 'test.sas7bdat', codefile = 'test.txt', package = "SAS")
The SAS data file is written, but when I try to open it in SAS Viewer 9.1 (Windows XP), I receive the following message - "SAS Data set file format is not supported".
Note: I am generally unfamiliar with SAS, so if an answer exists that would have been known by a regular SAS user, please excuse my ignorance.
| write.foreign with option package="SAS" actually writes out a comma-delimited text file and then creates a script file with SAS statements to read it in. You have to run SAS and submit the script to turn the text file into a SAS dataset. Your call should look more like
write.foreign(df=test.df, datafile="test.csv", codefile="test.sas", package="SAS")
Note the different extension. Also, write.foreign writes factor variables as numeric variables with a format controlling their appearance -- i.e., the R definition of a factor. If you just want the character representation, you'll have to convert the factors via as.character before exporting.
| DataSet | 5,476,826 | 12 |
I know this is a long shot, but does anyone know of a dataset of English words that has stress information by syllable? Something as simple as the following would be fantastic:
AARD vark
A ble
a BOUT
ac COUNT
AC id
ad DIC tion
ad VERT ise ment
...
| The closest thing I'm aware of is the CMU Pronouncing Dictionary. I don't think it explicitly marks the stressed syllable, but it should be a start.
| DataSet | 2,839,548 | 12 |
I have a data table and I want to perform a case-insensitive group by over a column of the data table (say Column1 of type string). I observed that LINQ to DataSet normally performs a case-sensitive comparison. For example, if Column1 has two string values "Test" and "test", after applying group by it returns two separate rows with the values "Test" and "test", instead of one.
The query is:
var countGroupQuery = from table in dataTable.AsEnumerable()
group table by table.Field<string>(Column1) into groupedTable
select new
{
value = groupedTable.Key,
count = groupedTable.Count()
};
Is there any method to perform a case-insensitive group by so that in the above example I get only one row with one value (either "Test" or "test")? ToUpper or ToLower would actually change the values to either upper case or lower case instead of using at least one of the input values, so I don't want to use this:
group table by table.Field<string>(Column1).ToUpper() into groupedTable
| You can't do this from a query expression, but you can do it with dot notation:
var query = dataTable.AsEnumerable()
                     .GroupBy(x => x.Field<string>(Column1),
                              StringComparer.InvariantCultureIgnoreCase)
                     .Select(groupedTable => new
                     {
                         value = groupedTable.Key,
                         count = groupedTable.Count()
                     });
You can even use a more complicated overload of GroupBy to do it in one call:
var query = dataTable.AsEnumerable()
                     .GroupBy(x => x.Field<string>(Column1),
                              (key, group) => new { value = key,
                                                    count = group.Count() },
                              StringComparer.InvariantCultureIgnoreCase);
Obviously that's using the invariant culture - you could also use the current culture or ordinal rules.
| DataSet | 1,490,988 | 12 |
I am using Linq to dataset to query a datatable. If i want to perform a group by on "Column1" on data table, I use following query
var groupQuery = from table in MyTable.AsEnumerable()
group table by table["Column1"] into groupedTable
select new
{
x = groupedTable.Key,
y = groupedTable.Count()
}
Now I want to perform group by on two columns "Coulmn1" and "Column2". Can anybody tell me the syntax or provide me a link explaining multiple group by on a data table??
Thanks
| You should create an anonymous type to do a group by multiple columns:
var groupQuery = from table in MyTable.AsEnumerable()
group table by new { column1 = table["Column1"], column2 = table["Column2"] }
into groupedTable
select new
{
x = groupedTable.Key, // Each Key contains column1 and column2
y = groupedTable.Count()
}
| DataSet | 1,225,710 | 12 |
I always seem to use Get when working with data (strongly typed or otherwise) from the database, and I have never really needed to use Fill, although I could just as easily use Fill instead of Get when pulling out and updating data.
Can anyone provide guidance as to the implications and gotchas of each method?
In what situations is it preferable to use one or the other?
Any performance implications?
Thanks in advance for the answers! I love this community!
| Using Fill can be great for debugging exceptions, because the DataTable passed into the method can be interrogated for more details. Get does not return anything in that situation, so there is nothing to inspect.
Tips:
DataTable.GetErrors() returns an array of DataRow instances that are in error
DataRow.RowError contains a description of the row error
DataRow.GetColumnsInError() returns an array of DataColumn instances in error
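A short sketch of how those members can be combined after a failed Fill:
foreach (DataRow row in table.GetErrors())
{
    Console.WriteLine(row.RowError);
    foreach (DataColumn col in row.GetColumnsInError())
    {
        Console.WriteLine("  column in error: " + col.ColumnName);
    }
}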
| DataSet | 172,436 | 12 |
I'm looking for a way to learn to be comfortable with large data sets. I'm a university student, so everything I do is of "nice" size and complexity. Working on a research project with a professor this semester, and I've had to visualize relationships between a somewhat large (in my experience) data set. It was a 15 MB CSV file.
I wrote most of my data wrangling in Python, visualized using GNUPlot.
Are there any accessible books or websites on the subject out there? Bonus points for using Python, more bonus points for a more "basic" visualization system than relying on gnuplot. Cairo or something, I suppose.
Looking for something that takes me from data mining, to processing, to visualization.
EDIT: I'm more looking for something that will teach me the "big ideas". I can write the code myself, but looking for techniques people use to deal with large data sets. I mean, my 15 MB is small enough where I can put everything I would ever need into memory and just start crunching. What do people do to visualize 5 GB data sets?
| I'd say the most basic skill is a good grounding in math and statistics. This can help you assess and pick from the variety of techniques for filtering data, and reducing its volume and dimensionality while keeping its integrity. The last thing you'd want to do is make something pretty that shows patterns or relationships which aren't really there.
Specialized math
To tackle some types of problems you'll need to learn some math to understand how particular algorithms work and what effect they'll have on your data. There are various algorithms for clustering data, dimensionality reduction, natural language processing, etc. You may never use many of these, depending on the type of data you wish to analyze, but there are abundant resources on the Internet (and Stack Exchange sites) should you need help.
For an introductory overview of data mining techniques, Witten's Data Mining is good. I have the 1st edition, and it explains concepts in plain language with a bit of math thrown in. I recommend it because it provides a good overview and it's not too expensive -- as you read more into the field you'll notice many of the books are quite expensive. The only drawback is a number of pages dedicated to using WEKA, an Java data mining package, which might not be too helpful as you're using Python (but is open source, so you may be able to glean some ideas from the source code. I also found Introduction to Machine Learning to provide a good overview, also reasonably priced, with a bit more math.
Tools
For creating visualizations of your own invention, on a single machine, I think the basics should get you started: Python, Numpy, Scipy, Matplotlib, and a good graphics library you have experience with, like PIL or Pycairo. With these you can crunch numbers, plot them on graphs, and pretty things up via custom drawing routines.
When you want to create moving, interactive visualizations, tools like the Java-based Processing library make this easy. There are even ways of writing Processing sketches in Python via Jython, in case you don't want to write Java.
There are many more tools out there, should you need them, like OpenCV (computer vision, machine learning), Orange (data mining, analysis, viz), and NLTK (natural language, text analysis).
Presentation principles and techniques
Books by folks in the field like Edward Tufte and references like Information Graphics can help you get a good overview of the ways of creating visualizations and presenting them effectively.
Resources to find Viz examples
Websites like Flowing Data, Infosthetics, Visual Complexity and Information is Beautiful show recent, interesting visualizations from across the web. You can also look through the many compiled lists of visualization sites out there on the Internet. Start with these as a seed and start navigating around; I'm sure you'll find a lot of useful sites and inspiring examples.
(This was originally going to be a comment, but grew too long)
| DataSet | 5,890,935 | 11 |
I am unable to download the original ImageNet dataset from their official website. However, I found out that pytorch has ImageNet as one of its torchvision datasets.
Q1. Is that the original ImageNet dataset?
Q2. How do I get the classes for the dataset like it’s being done in Cifar-10
classes = [‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’, ‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’]
| The torchvision.datasets.ImageNet is just a class which allows you to work with the ImageNet dataset. You have to download the dataset yourself (e.g. from http://image-net.org/download-images) and pass the path to it as the root argument to the ImageNet class object.
Note that the option to download it directly by passing the flag download=True is no longer possible:
if download is True:
msg = ("The dataset is no longer publicly accessible. You need to "
"download the archives externally and place them in the root "
"directory.")
raise RuntimeError(msg)
elif download is False:
msg = ("The use of the download flag is deprecated, since the dataset "
"is no longer publicly accessible.")
warnings.warn(msg, RuntimeWarning)
(source)
If you just need to get the class names and the corresponding indices without downloading the whole dataset (e.g. if you are using a pretrained model and want to map the predictions to labels), then you can download them e.g. from here or from this github gist.
| DataSet | 60,607,824 | 11 |
I'm a little confused about how the class StratifiedShuffleSplit of sklearn works.
The code below is from Géron's book "Hands On Machine Learning", chapter 2, where he does a stratified sampling.
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(housing, housing["income_cat"]):
strat_train_set = housing.loc[train_index]
strat_test_set = housing.loc[test_index]
In particular, what is being done in split.split?
Thanks!
| Since you did not provide a dataset, I use sklearn sample data to answer this question.
Prepare dataset
# generate data
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit
data = np.array([[1, 2], [3, 4], [1, 2], [3, 4], [1, 2], [3, 4]])
group_label = np.array([0, 0, 0, 1, 1, 1])
This generates a dataset data, which has 6 observations and 2 variables. group_label has 2 values, meaning group 0 and group 1. In this case, group 0 contains 3 samples, and so does group 1. In general, the group sizes do not need to be the same.
Create a StratifiedShuffleSplit object instance
sss = StratifiedShuffleSplit(n_splits=5, test_size=0.5, random_state=0)
sss.get_n_splits(data, group_label)
Out:
5
In this step, you create an instance of StratifiedShuffleSplit, and you tell the function how to split (at random_state = 0, split the data 5 times; each time 50% of the data will go to the test set). However, it only splits the data when you call it in the next step.
Call the instance, and split data.
# the instance is actually a generator
type(sss.split(data, group_label))
# split data
for train_index, test_index in sss.split(data, group_label):
    print("TRAIN:", train_index, "TEST:", test_index)
    X_train, X_test = data[train_index], data[test_index]
    y_train, y_test = group_label[train_index], group_label[test_index]
out:
TRAIN: [5 2 3] TEST: [4 1 0]
TRAIN: [5 1 4] TEST: [0 2 3]
TRAIN: [5 0 2] TEST: [4 3 1]
TRAIN: [4 1 0] TEST: [2 3 5]
TRAIN: [0 5 1] TEST: [3 4 2]
In this step, the splitter you defined in the last step will generate 5 splits of data, one by one. For instance, in the first split, the original data is shuffled and samples 5, 2, 3 are selected as the train set (this is also stratified sampling by group_label); in the second split, the data is shuffled again and samples 5, 1, 4 are selected as the train set; etc.
| DataSet | 59,674,072 | 11 |
I am trying to set permissions on BigQuery in order to have users being able to see and query tables on one dataset but being able to edit, create and delete tables on another dataset.
I'm not able to figure out how to do this "dataset-level segregation" on the Cloud Platform Console.
Ideal scenario would be:
Dataset1 - Permissions to see data and query tables
Dataset2 - Permissions to see, query, create, edit and delete tables.
Any ideas on how to do this?
| 2021 update:
The old UI (the original answer) has not been available for a long time, but the new UI (now called the regular BQ UI) now has this ability.
To change permissions on the new UI, it's a 3 step process:
First, you need to open the details of the dataset by clicking the contextual menu ⋮ on the dataset and selecting "Open" (clicking or double-clicking the dataset name will not open the details pane):
On the top bar of the details pane, you can open the ⁺👤 Sharing dropdown, and select "Permissions" to reveal the permissions sidebar:
On the open sidebar, click the ⁺👤 ADD PRINCIPAL button to open the contextual menu:
On the contextual menu, write the list of emails or Google groups that you want to grant access to, and select the right roles (roles/bigquery.dataViewer role for query permissions, roles/bigquery.dataEditor role for edit permissions):
2019 answer:
According to the docs, the permissions are set on a per-dataset basis, so what you want to accomplish is possible.
I can't see how to do that in the new interface (in https://console.cloud.google.com/bigquery), but it's quite easy to do in the classic UI (in https://bigquery.cloud.google.com) by opening the drop-down next to the dataset and clicking on "Share dataset":
This will open the sharing panel, where you can select "Can view" for running queries, or "Can edit" to modify the dataset.
In the docs there are additional options, like using the CLI or the API, but I think the simplest way is to use the web UI.
| DataSet | 54,517,521 | 11 |
What does the function load_iris() do?
Also, I don't understand what type of data it contains and where to find it.
iris = datasets.load_iris()
X = iris.data
target = iris.target
names = iris.target_names
Can somebody please explain in detail what this piece of code does?
Thanks in advance.
| load_iris is a function from sklearn. The link provides documentation: iris in your code will be a dictionary-like object. X and target will be numpy arrays (the iris data holds 150 samples with 4 features each, and target holds the class label 0, 1 or 2 for each sample), and names has the array of possible targets as text (rather than the numeric values in target).
| DataSet | 43,159,754 | 11 |
I am working on University Management System on which I am using a WCF service and in the service I am using DataTables and DataSets for getting data from database and database is sql server.
My questions are
Is using DataTables and Datasets "Good Practice" or "Bad Practice" ?
If it is bad, what is the alternative of DataTable/DataSet ?
If it is bad, what are the main reasons ?
| Returning data sets from web services is not typically considered a “good practice”. The issues have been documented thoroughly in the following links:
http://msdn.microsoft.com/en-us/magazine/cc163751.aspx
https://web.archive.org/web/20210125131938/https://www.4guysfromrolla.com/articles/051805-1.aspx
http://msdn.microsoft.com/en-us/magazine/cc188755.aspx
In summary, the biggest issues with returning DataSet objects from web services seem to involve serialization performance and non-.NET interoperability. In addition, the generic, polymorphic nature of the DataSet generally hides the data structure until runtime; as such, the WSDL definition does not provide a complete description of the method signature. As with any design decision, however, you need to weigh the costs vs the benefits and determine the best fit given your specific goals and constraints.
In terms of alternatives, you could consider using a generic collection (e.g. List<yourClassHere>) or maybe even consider some architecture revisions to permit the use of OData.
The following links provide some good background reference for returning entities via web services.
http://msdn.microsoft.com/en-us/library/orm-9780596520281-01-14.aspx
http://www.codeproject.com/Articles/127395/Implementing-a-WCF-Service-with-Entity-Framework
http://msdn.microsoft.com/en-us/data/hh237663.aspx
| DataSet | 25,874,224 | 11 |
I have created a webservice which returns two datasets (as the return type) as results.
Is it possible to combine the two dataset results into one so that I can display them in one DataList? I tried using an ArrayList but it returns nothing in the DataList.
GetDepartureFlightsDetails() and getDepartureFlights() both return dataset values.
Below is the method i use to retrieve the webservice results.
public ArrayList GetDepartureFlightsDetails(String departurecountry, String arrivalcountry, DateTime departuredate)
{
DLSA datalayerSA = new DLSA();
DLJS datalayerJW = new DLJS();
ArrayList array = new ArrayList();
array.Add(datalayerSA.GetDepartureFlightsDetails(departurecountry, arrivalcountry, departuredate));
array.Add(datalayerJW.getDepartureFlights(departurecountry, arrivalcountry, departuredate));
return array;
}
| You can use the DataSet.Merge method:
firstDataSet.Merge(secondDataSet);
Update:
public DataSet GetDepartureFlightsDetails(String departurecountry, String arrivalcountry, DateTime departuredate)
{
DLSA datalayerSA = new DLSA();
DLJS datalayerJW = new DLJS();
var firstDataSet = datalayerSA.GetDepartureFlightsDetails(departurecountry, arrivalcountry, departuredate);
var secondDataSet = datalayerJW.getDepartureFlights(departurecountry, arrivalcountry, departuredate);
firstDataSet.Merge(secondDataSet);
return firstDataSet;
}
| DataSet | 14,117,818 | 11 |
This is working for me just fine. With the if I check whether the dataset is empty or not; if so, I return a null value. But is this check of the dataset the right way, or should I do it some other way?
da2 = new SqlDataAdapter("SELECT project_id FROM project WHERE _small_project_id = '" + cb_small_project.SelectedValue + "' ORDER BY NEWID()", conn);
ds2 = new DataSet();
da2.Fill(ds2);
DataRow[] rowProject = dt2.Select();
if (ds2.Tables[0].Rows.Count == 0)
cmd.Parameters["@_project_id"].Value = guidNull;
else
cmd.Parameters["@_project_id"].Value = rowProject[0]["project_id"];
| In my opinion the 'right' way is to check both:
ds2.Tables.Count
ds2.Tables[0].Rows.Count
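Combined, using the names from the question, that looks like:
if (ds2 != null && ds2.Tables.Count > 0 && ds2.Tables[0].Rows.Count > 0)
    cmd.Parameters["@_project_id"].Value = rowProject[0]["project_id"];
else
    cmd.Parameters["@_project_id"].Value = guidNull;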
| DataSet | 9,172,976 | 11 |
I want to create a DataSet from code and set it as the data source for a Crystal Report.
I don't want to create a DataSet .xsd file in VS if I don't have to. Just pure code.
DataSet ds = new DataSet();
DataTable tbl = new DataTable();
DataColumn cln = new DataColumn();
// I fill row, columns, table and add it to ds object
...
Then when I need the report I use:
myReport.SetDataSource(ds);
The problem here is I don't know how to bind this to the report. How do I add fields?
I have text and binary data (an image).
| There is only one way out, as suggested by rosado. Explained in a little more detail:
1. Create an RPT file.
2. Create an XSD with the desired columns.
3. Drag and drop the columns onto the RPT. Format it as required.
4. Now create the connection and use an adapter to fill that dataset.
5. Filling your dataset will automatically fill the report columns.
Below is a sample code from one of my projects.
Invoice invoice = new Invoice(); // instance of my rpt file
var ds = new DsBilling(); // DsBilling is mine XSD
var table2 = ds.Vendor;
var adapter2 = new VendorTableAdapter();
adapter2.Fill(table2);
var table = ds.Bill;
var adapter = new BillTableAdapter();
string name = cboCustReport.Text;
int month = int.Parse(cboRptFromMonth.SelectedItem.ToString());
int year = int.Parse(cboReportFromYear.SelectedItem.ToString());
adapter.Fill(table, name,month,year);
ds.AcceptChanges();
invoice.SetDataSource(ds);
crystalReportViewer1.ReportSource = invoice;
crystalReportViewer1.RefreshReport();
| DataSet | 8,341,272 | 11 |
I am trying to get some data from an Access database via OleDb into a DataSet.
But the DataSet is empty after the Fill() method. The same statement works and returns 1 row when I run it manually in D*.
OleDbConnection connection =
new OleDbConnection("Provider=Microsoft.ACE.OLEDB.12.0;Data Source=Inventar.accdb");
DataSet1 DS = new DataSet1();
connection.Open();
OleDbDataAdapter DBAdapter = new OleDbDataAdapter(
@"SELECT tbl_Computer.*, tbl_Besitzer.*
FROM tbl_Computer
INNER JOIN tbl_Besitzer ON tbl_Computer.FK_Benutzer = tbl_Besitzer.ID
WHERE (((tbl_Besitzer.Vorname)='ma'));",
connection);
DBAdapter.Fill(DS);
Thanks in advance.
New working code:
DataSet ds = new DataSet();
OleDbDataAdapter DBAdapter = new OleDbDataAdapter();
OleDbConnection connection = new OleDbConnection("Provider=Microsoft.ACE.OLEDB.12.0;Data Source=Inventar.accdb");
string query = @"
SELECT tbl_Computer.*, tbl_Besitzer.*
FROM tbl_Computer
INNER JOIN tbl_Besitzer ON tbl_Computer.FK_Benutzer = tbl_Besitzer.ID
WHERE (((tbl_Besitzer.Vorname)='ma'));";
connection.Open();
using (OleDbCommand command = new OleDbCommand(query, connection))
using (OleDbDataAdapter adapter = new OleDbDataAdapter(command))
{
adapter.Fill(ds);
}
Dictionary<string, string> DictValues = new Dictionary<string, string>();
for (int i = 0; i <= ds.Tables[0].Rows[0].ItemArray.Length - 1; i++)
{
MessageBox.Show(ds.Tables[0].Rows[0].ItemArray[i] + " -- " + ds.Tables[0].Rows[0].Table.Columns[i]);
DictValues.Add(ds.Tables[0].Rows[0].Table.Columns[i].ToString(), ds.Tables[0].Rows[0].ItemArray[i].ToString());
}
Now the right code is posted above, with a Dictionary to access the data more comfortably.
I hope somebody finds help in this post. Thank you all for getting it to work!
| DataSet ds = new DataSet();
using (OleDbConnection connection = new OleDbConnection(connectionString))
using (OleDbCommand command = new OleDbCommand(query, connection))
using (OleDbDataAdapter adapter = new OleDbDataAdapter(command))
{
adapter.Fill(ds);
}
return ds;
| DataSet | 6,532,304 | 11 |
Is there any valid use case for DataSet and DataTable now that we have Entity Framework?
Should DataTable/DataSet be considered obsolete?
| When you know the data schema at compile time, then I'd say EF is all you need. However, there are situations where you're getting data from a service and you don't know what the schema/datatypes will be ahead of time. I think DataSet/DataTable would still be useful in that kind of scenario.
| DataSet | 5,547,926 | 11 |
For music data in audio format, there's The Million Song Dataset (http://labrosa.ee.columbia.edu/millionsong/), for example. Is there a similar one for music in symbolic form (that is, where the notes - not the sound - is stored)? Any format (like MIDI or MusicXML) would be fine.
| I'm not aware of a "standard" dataset. However, the places I know of for music scores in symbolic form are:
The Mutopia Project, a repository for free/libre music scores in Lilypond format. They standardise on Lilypond because it is a free/libre tool, it produces high-quality scores, and it’s possible to convert from many formats into Lilypond. They currently host over 1700 scores.
The aforementioned Gutenberg Sheet Music Project, an interesting one to watch. It hosts less than 100 scores now. However, it’s an offshoot of the tremendously successful Gutenburg Project for free ebooks (literature in plain text form), so they know how to run this sort of project. They have an excellent organised approach to content production.
MuseScore, a repository for music arrangements. They prefer MuseScore's own .mscz format, but support many others. [Added December 2019]
Wikifonia, a repository for lead sheets of songs. [As of December 2019, this site announces that it has closed.] A lead sheet is a simplified music score, perhaps enough to sing at a piano with friends, but not enough to publish a vocal score. They use MusicXML as their standard format. I estimate they have over 4000 scores. Interestingly, they have an arrangement to pay royalties for music they host. This is probably the best home for re-typeset scores of non-free/libre music. [This site was in operation in January 2012, when the answer was first written, but has ceased operation by December 2019, when this edit was made. Since the question is also old and closed, it's worth leaving this legacy entry in the answer.]
| DataSet | 5,384,695 | 11 |
I need to add additional fields to a TDataSet that don't exist in the underlying database but can be derived from existing fields. I can easily do this with calculated fields, and that works perfectly.
Now I want to edit these fields and write the changed data back. I can reverse the calculation to write the data back into the existing fields, but the DB controls just don't let me edit calculated fields.
Is there any approach that allows me to do this?
Update:
Ok, some more details about the background.
The dataset has a blob field, which is a TBytes representation. Some of the bytes are identified to contain information that can be represented in a convenient way with existing DB edit fields. Not all of the bytes are known, though, so the TBytes representation has to be kept as it is for processing through another application that knows about it. This app also modifies existing and inserts new records.
The TBytes of different records in the dataset often map to different fields representations, although setting a filter or range on the dataset will ensure that they have the same mapping.
As I said, extracting the known bytes and converting them into strings, dates, numbers and so on via calculated fields is no problem. Reconverting those values into the TBytes is also possible. The problem is making those extra fields editable, while keeping the dataset navigation intact.
If it helps: We have classes that do the bidirectional mapping, exposing the fields as published properties.
| The answer depends on a data access components you are using. I am using Anydac and it support fkInternalCalc fields, which may be as calculated as manually edited.
| DataSet | 5,351,564 | 11 |
I have about 500 HDF5 files each of about 1.5 GB.
Each of the files has the same exact structure, which is 7 compound (int,double,double) datasets and variable number of samples.
Now I want to concatenate all this files by concatenating each of the datasets so that at the end I have a single 750 GB file with my 7 datasets.
Currently I am running a h5py script which:
creates a HDF5 file with the right datasets of unlimited max
open in sequence all the files
check what is the number of samples (as it is variable)
resize the global file
append the data
This obviously takes many hours.
Would you have a suggestion for improving this?
I am working on a cluster, so I could use HDF5 in parallel, but I am not good enough in C programming to implement something myself, I would need a tool already written.
| I found that most of the time was spent in resizing the file, as I was resizing at each step, so I now first go through all my files and get their length (it is variable).
Then I create the global h5 file, setting the total length to the sum of all the files.
Only after this phase do I fill the h5 file with the data from all the small files.
| DataSet | 5,346,589 | 11 |
I have a C#-generated dataset. How can I change the connection string so I can use the dataset with another (identically structured yet differently populated) database? This has to occur at runtime, as I do not know the server or database name at compile time. I am using C# 2.0.
| You can modify a single instance of the table adapter.
_myAdapter.Connection.ConnectionString = connectionString;
| DataSet | 3,477,544 | 11 |
We have a pricing dataset that changes the contained values or the number of records. The number of added or removed records is small compared to the changes in values. The dataset usually has between 50 and 500 items with 8 properties.
We currently use AJAX to return a JSON structure that represents the dataset and update a webpage using this structure with the new values and where necessary removing or adding items.
We make the request with two hash values, one for the values and one for the records. These are MD5 hashes returned with the JSON structure, to be sent back with the following request. If there is a change to the hashes, we know we need a new JSON structure; otherwise only the hashes are returned, which saves bandwidth and eliminates unnecessary client-side processing.
As MD5 is normally used for cryptography, is it the best choice of hashing algorithm for simply detecting data changes?
In what alternative ways can we detect changes to the values, as well as added or removed items, and update the page DOM accordingly?
| MD5 is a reasonable algorithm for detecting changes to a set of data. However, if you're not concerned with the cryptographic properties and are very concerned with the performance of the algorithm, you could go with a simpler checksum-style algorithm that isn't designed to be cryptographically secure. (Though weaknesses in MD5 have been discovered in recent years, it is still designed to be cryptographically secure, and hence does more work than may be required for your scenario.)
However, if you're happy with the computational performance of MD5, I'd just stick with it.
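For completeness, a minimal hashing sketch — assuming a .NET back end, which the question doesn't actually state; json is whatever string your serializer produces:

using System;
using System.Security.Cryptography;
using System.Text;

static string HashJson(string json)
{
    // Hash the serialized dataset; two identical structures yield equal hashes.
    using (MD5 md5 = MD5.Create())
    {
        byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(json));
        return BitConverter.ToString(hash).Replace("-", "");
    }
}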
| DataSet | 756,407 | 11 |
I get a DataTable from a DataSet and then bind that DataTable to a DataGridView. Once the user edits the information in the DataGridView, how do I take those changes and put them back into the DataTable, so that I can then put it back into my DataSet?
I want to put a Save button on my DataGrid that, when pressed, actually saves the changes.
I don't know if I can get any more specific than that, because it is a fairly simple question.
Thanks in advance!
Let me know if you need me to elaborate more.
| If you are using data-binding to a DataGridView, then you are already updating the DataTable / DataSet. If you mean changes down to the database, then that is where adapters come into play.
Here's an example:
using System;
using System.Data;
using System.Linq;
using System.Windows.Forms;

static class Program
{
    [STAThread]
    static void Main()
    {
        Application.EnableVisualStyles();

        // Build an in-memory DataSet with one table and two typed columns.
        DataSet set = new DataSet();
        DataTable table = set.Tables.Add("MyTable");
        table.Columns.Add("Foo", typeof(int));
        table.Columns.Add("Bar", typeof(string));

        Button btn;
        using (Form form = new Form
        {
            Text = "DataGridView binding sample",
            Controls =
            {
                // Bind the grid to the DataSet: edits in the grid
                // flow straight into the DataTable.
                new DataGridView {
                    Dock = DockStyle.Fill,
                    DataMember = "MyTable",
                    DataSource = set
                },
                (btn = new Button {
                    Dock = DockStyle.Bottom,
                    Text = "Total"
                })
            }
        })
        {
            // Summing the column shows the grid's edits are already
            // in the underlying DataTable.
            btn.Click += delegate
            {
                form.Text = table.AsEnumerable().Sum(
                    row => row.Field<int>("Foo")).ToString();
            };
            Application.Run(form);
        }
    }
}
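If the DataTable was originally filled from a database, persisting the grid's edits is then a single Update call on a data adapter. A minimal sketch, assuming SQL Server; the connection string, SELECT statement, and table name are placeholders, not from the original answer:

using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlDataAdapter adapter = new SqlDataAdapter(
    "SELECT Foo, Bar FROM MyTable", conn))
using (SqlCommandBuilder builder = new SqlCommandBuilder(adapter))
{
    // The command builder generates INSERT/UPDATE/DELETE commands;
    // Update writes the DataTable's pending changes back to the database.
    adapter.Update(set, "MyTable");
}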
| DataSet | 520,051 | 10 |
torchtext.data.TabularDataset can be created from a TSV/JSON/CSV file and then used for building the vocabulary from GloVe, FastText, or any other embeddings. But my requirement is to create a torchtext.data.TabularDataset directly, either from a list or a dict.
Current implementation, reading TSV files:
self.RAW = data.RawField()
self.TEXT = data.Field(batch_first=True)
self.LABEL = data.Field(sequential=False, unk_token=None)

self.train, self.dev, self.test = data.TabularDataset.splits(
    path='.data/quora',
    train='train.tsv',
    validation='dev.tsv',
    test='test.tsv',
    format='tsv',
    fields=[('label', self.LABEL),
            ('q1', self.TEXT),
            ('q2', self.TEXT),
            ('id', self.RAW)])

self.TEXT.build_vocab(self.train, self.dev, self.test, vectors=GloVe(name='840B', dim=300))
self.LABEL.build_vocab(self.train)

sort_key = lambda x: data.interleave_keys(len(x.q1), len(x.q2))

self.train_iter, self.dev_iter, self.test_iter = \
    data.BucketIterator.splits((self.train, self.dev, self.test),
                               batch_sizes=[args.batch_size] * 3,
                               device=args.gpu,
                               sort_key=sort_key)
This is the current working code for reading data from a file. So, in order to create the dataset directly from a list/dict, I tried built-in functions like Example.fromdict or Example.fromlist, but then, on reaching the last for loop, it throws AttributeError: 'BucketIterator' object has no attribute 'q1'.
| This required me to write my own class inheriting from the Dataset class, with a few modifications to the torchtext.data.TabularDataset class:
from torchtext import data
from torchtext.data import Example

class TabularDataset_From_List(data.Dataset):

    def __init__(self, input_list, format, fields, skip_header=False, **kwargs):
        # Pick the Example factory that matches the input format.
        make_example = {
            'json': Example.fromJSON, 'dict': Example.fromdict,
            'tsv': Example.fromTSV, 'csv': Example.fromCSV}[format.lower()]

        # Build one Example per item of the in-memory list.
        examples = [make_example(item, fields) for item in input_list]

        # For dict-like formats the fields come in as a dict; flatten
        # them into the list form that data.Dataset expects.
        if make_example in (Example.fromdict, Example.fromJSON):
            fields, field_dict = [], fields
            for field in field_dict.values():
                if isinstance(field, list):
                    fields.extend(field)
                else:
                    fields.append(field)

        super(TabularDataset_From_List, self).__init__(examples, fields, **kwargs)

    @classmethod
    def splits(cls, path=None, root='.data', train=None, validation=None,
               test=None, **kwargs):
        if path is None:
            path = cls.download(root)
        # Here train/validation/test are the in-memory lists themselves,
        # not file names.
        train_data = None if train is None else cls(train, **kwargs)
        val_data = None if validation is None else cls(validation, **kwargs)
        test_data = None if test is None else cls(test, **kwargs)
        return tuple(d for d in (train_data, val_data, test_data)
                     if d is not None)
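A hedged usage sketch — the sample record and field mapping are illustrative, and it assumes the TEXT, LABEL, and RAW fields defined in the question (referenced here without the self. prefix):

# Hypothetical in-memory data; keys must match what `fields` maps from.
train_list = [{'label': '0', 'q1': 'hello world', 'q2': 'hi there', 'id': '1'}]

# For the 'dict' format, fields maps each input key to (attribute name, Field).
dict_fields = {'label': ('label', LABEL), 'q1': ('q1', TEXT),
               'q2': ('q2', TEXT), 'id': ('id', RAW)}

train, = TabularDataset_From_List.splits(path='', train=train_list,
                                         format='dict', fields=dict_fields)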
| DataSet | 53,046,583 | 10 |