Column schema:
  type            class label (1 class)
  id              string, length 7
  subreddit.id    string (1 value)
  subreddit.name  string (1 value)
  subreddit.nsfw  bool (1 class)
  created_utc     unknown (ISO 8601 timestamps in the rows below)
  permalink       string, length 61 to 109
  body            large string, length 0 to 9.98k
  sentiment       float32, range -1 to 1
  score           int32, range -65 to 195
comment
c27jf0m
2r97t
datasets
false
"2011-07-23T23:32:26Z"
https://old.reddit.com/r/datasets/comments/ixmht/does_anyone_have_access_to_a_dataset_of_clinical/c27jf0m/
Interesting! Thanks!
0.7339
1
comment
c27jd32
2r97t
datasets
false
"2011-07-23T23:21:52Z"
https://old.reddit.com/r/datasets/comments/ixmht/does_anyone_have_access_to_a_dataset_of_clinical/c27jd32/
It's quite common on the ward round for us to question the appropriateness of a certain drug for a patient. We can often remember that, say, the JUPITER trial investigated the effects of rosuvastatin, but was it all-cause mortality or cardiovascular death? Were patients with chronic renal failure excluded? If you Google search it, it can take some time, signal can be poor, and some sites just don't list the finer aspects of the trials. A local database would be great, but I expect it's a very ambitious request.
-0.7664
1
comment
c27jbi5
2r97t
datasets
false
"2011-07-23T23:13:54Z"
https://old.reddit.com/r/datasets/comments/ixmht/does_anyone_have_access_to_a_dataset_of_clinical/c27jbi5/
Out of pure curiosity, why would you need/want access to such a large database from a phone?
0
1
comment
c27il2c
2r97t
datasets
false
"2011-07-23T20:57:28Z"
https://old.reddit.com/r/datasets/comments/ixmht/does_anyone_have_access_to_a_dataset_of_clinical/c27il2c/
[Clinical Trials.gov](http://clinicaltrials.gov/)
0
2
comment
c27i3p9
2r97t
datasets
false
"2011-07-23T19:33:12Z"
https://old.reddit.com/r/datasets/comments/ixmht/does_anyone_have_access_to_a_dataset_of_clinical/c27i3p9/
That's something you'll have to compile yourself. Or get a research monkey! As a fellow healthcare worker, I need to get/train one myself!
0
1
comment
c27i0y9
2r97t
datasets
false
"2011-07-23T19:19:56Z"
https://old.reddit.com/r/datasets/comments/ixmht/does_anyone_have_access_to_a_dataset_of_clinical/c27i0y9/
Yes, it does, but I'd quite like something I could quickly index on my phone, such as a huge table with the drug in question, primary end point, number needed to treat etc., rather than google searching etc.
0.9098
1
comment
c27i0pl
2r97t
datasets
false
"2011-07-23T19:18:45Z"
https://old.reddit.com/r/datasets/comments/ixmht/does_anyone_have_access_to_a_dataset_of_clinical/c27i0pl/
Whatever medical centre you work for should have access to the Cochrane Library.
0
1
comment
c27gk7s
2r97t
datasets
false
"2011-07-23T14:52:34Z"
https://old.reddit.com/r/datasets/comments/ixmht/does_anyone_have_access_to_a_dataset_of_clinical/c27gk7s/
I don't, but if you find one you should post it!
0
1
comment
c24wpo8
2r97t
datasets
false
"2011-07-11T13:25:53Z"
https://old.reddit.com/r/datasets/comments/frk5y/looking_for_stock_market_data_15_minute_delayed/c24wpo8/
Here's a relatively unknown, undocumented Google stock API - http://www.google.com/ig/api?stock=MSFT
0
1
comment
c24qaxa
2r97t
datasets
false
"2011-07-10T13:48:23Z"
https://old.reddit.com/r/datasets/comments/ik9kq/thanks_rdatasets_you_helped_me_impress_my_boss_if/c24qaxa/
I found gigabyte twitter datasets.
null
1
comment
c24qafy
2r97t
datasets
false
"2011-07-10T13:43:38Z"
https://old.reddit.com/r/datasets/comments/ik9kq/thanks_rdatasets_you_helped_me_impress_my_boss_if/c24qafy/
Uh, I know how to use this program: http://gephi.org/. Maybe I will do a screencast today. Note my Java code is still in alpha-quality stages. I can point you to dynamic statistics that matter.
0.4019
2
comment
c24o47s
2r97t
datasets
false
"2011-07-10T03:05:34Z"
https://old.reddit.com/r/datasets/comments/ik9kq/thanks_rdatasets_you_helped_me_impress_my_boss_if/c24o47s/
Interesting! Can you explain further? Would be great if you could post a demo (screenshots or something) :)
0.8777
1
comment
c24hldq
2r97t
datasets
false
"2011-07-09T02:01:12Z"
https://old.reddit.com/r/datasets/comments/ik9kq/thanks_rdatasets_you_helped_me_impress_my_boss_if/c24hldq/
What did /r/datasets do to impress your boss?
0.4404
7
comment
c1zw5wu
2r97t
datasets
false
"2011-06-15T20:08:36Z"
https://old.reddit.com/r/datasets/comments/hwd72/im_looking_for_a_gpsannotated_roadmapgraph_of_any/c1zw5wu/
Now, I'm more convinced. http://www.reddit.com/r/openstreetmap/comments/i09l8/openstreetmap_data_in_study_of_multimodal_trip/
0.4576
1
comment
c1zbcws
2r97t
datasets
false
"2011-06-12T22:41:20Z"
https://old.reddit.com/r/datasets/comments/ftlgu/publicly_available_twitter_data/c1zbcws/
A dataset of around 400k+ tweets containing questions/answers, http://snippet.in/blog/what-do-tweeters-ask
0
0
comment
c1z4jft
2r97t
datasets
false
"2011-06-11T19:15:00Z"
https://old.reddit.com/r/datasets/comments/hwpu0/any_interest_in_a_data_dump_of_all_e3related_news/c1z4jft/
Sweet, thank you!
0.6996
1
comment
c1z4erf
2r97t
datasets
false
"2011-06-11T18:50:41Z"
https://old.reddit.com/r/datasets/comments/hwpu0/any_interest_in_a_data_dump_of_all_e3related_news/c1z4erf/
Orangered alert: I updated this thread with a link to the data dump.
-0.1027
2
comment
c1z4dt7
2r97t
datasets
false
"2011-06-11T18:45:50Z"
https://old.reddit.com/r/datasets/comments/hwpu0/any_interest_in_a_data_dump_of_all_e3related_news/c1z4dt7/
Orangered alert: I updated this thread with a link to the data dump.
-0.1027
2
comment
c1z280x
2r97t
datasets
false
"2011-06-11T07:20:40Z"
https://old.reddit.com/r/datasets/comments/hwpu0/any_interest_in_a_data_dump_of_all_e3related_news/c1z280x/
Yup bring it on...
0
1
comment
c1z272i
2r97t
datasets
false
"2011-06-11T07:11:48Z"
https://old.reddit.com/r/datasets/comments/hwpu0/any_interest_in_a_data_dump_of_all_e3related_news/c1z272i/
Sure, I'd love to look around at it, and judging by your upvotes, others do too :)
0.8591
1
comment
c1yx5bt
2r97t
datasets
false
"2011-06-10T16:04:12Z"
https://old.reddit.com/r/datasets/comments/hwd72/im_looking_for_a_gpsannotated_roadmapgraph_of_any/c1yx5bt/
Most states have quite a bit of this information public for public mapping. Search Google for GIS and your State/County.
0
1
comment
c1ywug7
2r97t
datasets
false
"2011-06-10T15:17:18Z"
https://old.reddit.com/r/datasets/comments/hwd72/im_looking_for_a_gpsannotated_roadmapgraph_of_any/c1ywug7/
Admittedly, jfasi would probably have to do some data entry to get this to work right, but it is a great starting point. I would hope that some city would have a detailed GIS database with speed limits, traffic lights, and so on for download. If you wanted a pretty high quality dataset with all of this I'd recommend NavTeq, but I'm also guessing that you don't have funding for the product. **EDIT**: After thinking about this a little more, I think North Carolina's DOT has a fairly good handle on GIS and has [quite a bit of data available for download](http://www.ncdot.org/it/gis/DataDistribution/DOTData/default.html). Note that I've never looked at these datasets.
0.9716
1
comment
c1ywj1l
2r97t
datasets
false
"2011-06-10T14:24:10Z"
https://old.reddit.com/r/datasets/comments/hwd72/im_looking_for_a_gpsannotated_roadmapgraph_of_any/c1ywj1l/
Allow me to introduce you to http://www.openstreetmap.org/.
0.2263
1
comment
c1ynhdd
2r97t
datasets
false
"2011-06-09T05:03:39Z"
https://old.reddit.com/r/datasets/comments/huvc9/200_million_tweets_from_13_million_users_543/c1ynhdd/
[deleted]
null
1
comment
c1ymdh6
2r97t
datasets
false
"2011-06-09T01:45:36Z"
https://old.reddit.com/r/datasets/comments/huvc9/200_million_tweets_from_13_million_users_543/c1ymdh6/
Anyone know if this data contains the date/time stamp of each tweet?
0
2
comment
c1yldr0
2r97t
datasets
false
"2011-06-08T22:51:08Z"
https://old.reddit.com/r/datasets/comments/huvc9/200_million_tweets_from_13_million_users_543/c1yldr0/
Thanks, but it's down... Twitter made them take it down. :( http://twitter.com/#!/calufa/status/78556903772393474 Although someone made a torrent for it: http://burnbit.com/torrent/170493/twitter_sql_bz2
-0.4926
7
comment
c1yk13k
2r97t
datasets
false
"2011-06-08T19:28:28Z"
https://old.reddit.com/r/datasets/comments/huvc9/200_million_tweets_from_13_million_users_543/c1yk13k/
Discussion on Hacker News: http://news.ycombinator.com/item?id=2633384 I'd suggest downloading this ASAP, as it's against Twitter's terms and likely to disappear before too long.
-0.2263
4
comment
c1yiwjo
2r97t
datasets
false
"2011-06-08T16:47:56Z"
https://old.reddit.com/r/datasets/comments/huqnk/free_sql_dump_with_200mm_tweets_from_13mm_users/c1yiwjo/
[deleted]
null
1
comment
c1ubcv5
2r97t
datasets
false
"2011-05-16T11:11:41Z"
https://old.reddit.com/r/datasets/comments/hcgkb/price_of_weed/c1ubcv5/
I found this interesting dataset but I forgot what it was and then..wait have you ever noticed how your hand is like..what was I saying?
0.7478
2
comment
c1tfkn5
2r97t
datasets
false
"2011-05-11T02:22:56Z"
https://old.reddit.com/r/datasets/comments/h8m72/library_hack_records_and_images_from_australian/c1tfkn5/
It's a little late for the mash-up contest which ends this month, but it's still a great collection of data sets. Also, many more at [data.gov.au](http://data.gov.au).
0.7684
1
comment
c1pryz2
2r97t
datasets
false
"2011-04-16T22:31:40Z"
https://old.reddit.com/r/datasets/comments/g0b1u/time_series_data_on_uk_household_indebtedness/c1pryz2/
There is data similar to this available for the U.S. through both the Federal Reserve Economic Database (FRED) and the Census. I know that you can get data on Metropolitan Statistical Areas (MSA's) from the Census.
0
1
comment
c1p7lv4
2r97t
datasets
false
"2011-04-13T15:54:19Z"
https://old.reddit.com/r/datasets/comments/febek/rdatasets_does_anyone_here_have_a_copy_of_the/c1p7lv4/
Have you checked with the Library of Congress? I understand that they keep the Twitter database, so they might have (somewhat) restricted access.
-0.3197
1
comment
c1on60q
2r97t
datasets
false
"2011-04-10T03:36:48Z"
https://old.reddit.com/r/datasets/comments/febek/rdatasets_does_anyone_here_have_a_copy_of_the/c1on60q/
[deleted]
null
1
comment
c1on4dh
2r97t
datasets
false
"2011-04-10T03:25:09Z"
https://old.reddit.com/r/datasets/comments/febek/rdatasets_does_anyone_here_have_a_copy_of_the/c1on4dh/
[deleted]
null
1
comment
c1omqcq
2r97t
datasets
false
"2011-04-10T01:43:26Z"
https://old.reddit.com/r/datasets/comments/febek/rdatasets_does_anyone_here_have_a_copy_of_the/c1omqcq/
[deleted]
null
2
comment
c1ojr20
2r97t
datasets
false
"2011-04-09T10:50:59Z"
https://old.reddit.com/r/datasets/comments/febek/rdatasets_does_anyone_here_have_a_copy_of_the/c1ojr20/
[deleted]
null
2
comment
c1oiiyb
2r97t
datasets
false
"2011-04-09T03:02:28Z"
https://old.reddit.com/r/datasets/comments/ftlgu/publicly_available_twitter_data/c1oiiyb/
Nope... None yet.. Some other projects came up and this one took a step back :). Have to restart this once the semester is done :).
0
1
comment
c1oihn2
2r97t
datasets
false
"2011-04-09T02:52:56Z"
https://old.reddit.com/r/datasets/comments/febek/rdatasets_does_anyone_here_have_a_copy_of_the/c1oihn2/
[deleted]
null
2
comment
c1oihj7
2r97t
datasets
false
"2011-04-09T02:52:06Z"
https://old.reddit.com/r/datasets/comments/ftlgu/publicly_available_twitter_data/c1oihj7/
Heya! Did you end up having any working code that can classify tweets as positive or negative? I am still having problems trying to get all these different tools to work together :s
-0.6476
1
comment
c1n02l5
2r97t
datasets
false
"2011-03-30T08:40:00Z"
https://old.reddit.com/r/datasets/comments/ge1nz/imdb_data_and_tools/c1n02l5/
Yeah, after I submitted this I went to look over IMDb's TOS. Now I'm scared to even go near this when making my website.
-0.1779
1
comment
c1mxzln
2r97t
datasets
false
"2011-03-30T00:08:51Z"
https://old.reddit.com/r/datasets/comments/ge1nz/imdb_data_and_tools/c1mxzln/
The data over at [freebase](http://www.freebase.com/view/film) probably has better terms. But I'm not sure how complete it is.
-0.1263
1
comment
c1mjqfp
2r97t
datasets
false
"2011-03-27T12:38:47Z"
https://old.reddit.com/r/datasets/comments/gcgif/is_there_any_legitimate_way_i_can_get_the_whois/c1mjqfp/
This is the only info I have, and it seems to be the Zonefiles only. http://stackoverflow.com/questions/307553/possible-to-download-entire-whois-database-list-of-registered-domains
0
1
comment
c1kyu6r
2r97t
datasets
false
"2011-03-16T13:57:09Z"
https://old.reddit.com/r/datasets/comments/g4ot9/free_hd_video_sequence_to_be_used_as_a_dataset/c1kyu6r/
Is this any good? You'll probably have to select certain clipped scenes from the longer segments they provide; there is plenty of footage of a newscaster against a static background. http://cc.aljazeera.net/content/about-repository - *The Al Jazeera Creative Common Repository hosts select broadcast quality footage that Al Jazeera has released under various Creative Commons licenses. Select Al Jazeera video footage – at this time footage of the War on Gaza - is available for free to be downloaded, shared, remixed, subtitled and eventually rebroadcasted by users and TV stations across the world with acknowledgement to Al Jazeera.*
0.891
2
comment
c1kxou1
2r97t
datasets
false
"2011-03-16T05:32:44Z"
https://old.reddit.com/r/datasets/comments/g4ot9/free_hd_video_sequence_to_be_used_as_a_dataset/c1kxou1/
[deleted]
null
1
comment
c1kwavh
2r97t
datasets
false
"2011-03-16T00:01:53Z"
https://old.reddit.com/r/datasets/comments/g4ot9/free_hd_video_sequence_to_be_used_as_a_dataset/c1kwavh/
I could record something for you in 720p, but not until tomorrow. What format do you need it in? Going to a camera shop is also a great choice.
0.7684
2
comment
c1kvpy9
2r97t
datasets
false
"2011-03-15T21:52:00Z"
https://old.reddit.com/r/datasets/comments/g4ot9/free_hd_video_sequence_to_be_used_as_a_dataset/c1kvpy9/
You could rent a Canon 5D MKII or a 7D and do your own high-res video, or you can go down to your local video shop and ask the owners to help you get some footage from one of their RED cameras; if you are publishing this, they won't mind the publicity at all.
0.4019
1
comment
c1kh4wj
2r97t
datasets
false
"2011-03-12T19:42:19Z"
https://old.reddit.com/r/datasets/comments/g23s2/100000_web_servers_located_in_japan_in_xml_format/c1kh4wj/
Thanks.
null
1
comment
c1kgzrq
2r97t
datasets
false
"2011-03-12T19:06:04Z"
https://old.reddit.com/r/datasets/comments/g23s2/100000_web_servers_located_in_japan_in_xml_format/c1kgzrq/
I created it to see how the earthquake/ tsunami affected the availability of web servers in Japan (my test at the time showed 86% were still online). That many banners can also be used to determine software usage stats for Japan's web servers or to create fingerprints in general.
0.3612
2
comment
c1kefl8
2r97t
datasets
false
"2011-03-12T02:13:14Z"
https://old.reddit.com/r/datasets/comments/g23s2/100000_web_servers_located_in_japan_in_xml_format/c1kefl8/
What can this be used for?
0
1
comment
c1jy9c3
2r97t
datasets
false
"2011-03-09T03:22:17Z"
https://old.reddit.com/r/datasets/comments/flk42/healthdatagov_launches_feb_15_a_consolidated/c1jy9c3/
Where the fuck was this when I was in grad school?! I regret that I've let my SAS skills fall away in the years since, but I'm slowly trying to pick up R in my spare time. I used to love doing freelance or personal data analysis projects when I had access to proprietary university info. I can't wait to get back to passable proficiency and dig into this. Thanks for the heads up!
0.9117
1
comment
c1jmc7r
2r97t
datasets
false
"2011-03-07T00:37:05Z"
https://old.reddit.com/r/datasets/comments/fezdi/twitter_data_set_9281007_tweets_across_135825/c1jmc7r/
Yes, you two look very happy together! :) *Cheers*
0.9239
2
comment
c1jlx5c
2r97t
datasets
false
"2011-03-06T22:40:45Z"
https://old.reddit.com/r/datasets/comments/fezdi/twitter_data_set_9281007_tweets_across_135825/c1jlx5c/
Upvoted for making her day.
0
2
comment
c1jlaht
2r97t
datasets
false
"2011-03-06T20:03:54Z"
https://old.reddit.com/r/datasets/comments/fezdi/twitter_data_set_9281007_tweets_across_135825/c1jlaht/
Your wife is rather pretty.
0.4939
3
comment
c1jl3y1
2r97t
datasets
false
"2011-03-06T19:19:08Z"
https://old.reddit.com/r/datasets/comments/fy9v5/high_resolution_pictures_of_the_earth_geotiffpng/c1jl3y1/
Is there anything like a Google Earth (Google Maps satellite) dump available anywhere? I'd guess it would be too huge unless I install some kind of huge RAID array in my basement. Does anyone have information about this?
0.5525
2
comment
c1jkvx0
2r97t
datasets
false
"2011-03-06T18:24:12Z"
https://old.reddit.com/r/datasets/comments/fyczy/scraperwiki_collection_of_scrapers_great_source/c1jkvx0/
Also, don't forget that it's a community project, too. If you know of some data that is exposed but imprisoned in its HTML jail, you *too* can set it free. I am hoping that tools such as the following Chrome extension will help more people jump in: http://mnmldave.github.com/scraper/
0.8352
3
comment
c1jkr12
2r97t
datasets
false
"2011-03-06T17:50:01Z"
https://old.reddit.com/r/datasets/comments/ftlgu/publicly_available_twitter_data/c1jkr12/
Don't know if you've seen this already, but there's a package in R called [twitteR](http://cran.r-project.org/web/packages/twitteR/index.html). It does some queries automatically to the Twitter API. I'm still trying to figure out how to have it download multiple queries/min and get a sizeable database.
0
1
comment
c1jkaup
2r97t
datasets
false
"2011-03-06T15:29:24Z"
https://old.reddit.com/r/datasets/comments/ftcqr/need_help_with_imdb_dataset/c1jkaup/
You can use IMDBpy http://imdbpy.sourceforge.net/ to read the files. It comes with a number of example programs, one of them can import the IMDB files into a mysql database. Once in a SQL database it is real easy to do whatever you want with the data.
0.5423
3
comment
c1jjghx
2r97t
datasets
false
"2011-03-06T07:46:27Z"
https://old.reddit.com/r/datasets/comments/fy9v5/high_resolution_pictures_of_the_earth_geotiffpng/c1jjghx/
Clicky:
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.A1.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.B1.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.C1.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.D1.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.E1.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.F1.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.G1.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.H1.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.A2.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.B2.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.C2.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.D2.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.E2.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.F2.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.G2.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.H2.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.A3.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.B3.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.C3.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.D3.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.E3.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.F3.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.G3.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.H3.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.A4.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.B4.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.C4.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.D4.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.E4.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.F4.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.G4.tif.gz
http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600.H4.tif.gz
Does anyone know how to merge large picture files?
0
5
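The tile names above follow a simple grid pattern (columns A-H, rows 1-4), so the full list can be generated rather than pasted by hand. A minimal sketch; the base URL is taken from the links above, the function name is illustrative:

```python
# The 32 TrueMarble tiles follow a fixed naming scheme: columns A-H, rows 1-4.
BASE = "http://ueod-globe.net/globe/TrueMarble_GeoTIFF/TrueMarble.250m.21600x21600"

def tile_urls():
    """Generate the full tile list instead of pasting each link by hand."""
    return [f"{BASE}.{col}{row}.tif.gz"
            for row in range(1, 5)
            for col in "ABCDEFGH"]
```

Each URL can then be handed to a resume-capable downloader such as `wget -c`; as for the closing question, GDAL's `gdal_merge.py` utility is the usual tool for stitching GeoTIFF tiles into one image.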
comment
c1jjcmv
2r97t
datasets
false
"2011-03-06T07:07:53Z"
https://old.reddit.com/r/datasets/comments/frk5y/looking_for_stock_market_data_15_minute_delayed/c1jjcmv/
Etrade has an API too, though I haven't used it yet. https://us.etrade.com/e/t/activetrading/api
0
1
comment
c1jj7z3
2r97t
datasets
false
"2011-03-06T06:26:09Z"
https://old.reddit.com/r/datasets/comments/frk5y/looking_for_stock_market_data_15_minute_delayed/c1jj7z3/
Correct me if I've misinterpreted, but: tl;dr: Robots.txt does not disallow, ergo, it should be all cool with the Google.
0.3182
1
comment
c1jg57z
2r97t
datasets
false
"2011-03-05T16:46:02Z"
https://old.reddit.com/r/datasets/comments/ftcqr/need_help_with_imdb_dataset/c1jg57z/
How did you go with the imdb / jmdb? I started mucking about with it in my spare time; I got the original C source (with some tweaking) to build on a Linux box (Ubuntu) and on a Windows 7 box (using Watcom C). The data files are largish but can be sliced and diced with Unix text-processing tools fairly readily (head, tail, ssed). I've only just started on the jmdb; so far I've got Postgres and the jmdb.jar talking to each other via the latest JDBC (Java database connection) and am halfway through loading and converting the text data from the .list imdb files into a Postgres db. If you're still mucking about, I can fill you in on what I did if you can describe what you're working with and how far you got.
0.1655
1
comment
c1iybj4
2r97t
datasets
false
"2011-03-02T05:52:19Z"
https://old.reddit.com/r/datasets/comments/ftlgu/publicly_available_twitter_data/c1iybj4/
Bringing it into R? Does R read SQL? I'm thinking of running a shell script that basically eats up the Twitter feed for the next few months and puts it into a raw file, then using a JSON-to-SQL converter to write it straight to an SQL database and then analyzing it out of there. while [ 1 ]; do curl http://stream.twitter.com/1/statuses/sample.json -s -u USER:PASS -Y 0 --retry 9999 --retry-max-time 0 >> /tmp/tweets; sleep 2s; done This script pulls in about 8000 tweets / 10 min for me in testing this morning.
0.3094
2
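A minimal sketch of the raw-file-to-SQL step described above, assuming the curl dump holds one JSON object per line; the `tweets` table name and choice of fields are illustrative:

```python
import json
import sqlite3

def load_tweets(raw_path, db_path):
    """Load a curl stream dump (one JSON tweet per line) into SQLite,
    skipping blank lines and records truncated by an interrupted stream."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS tweets (id TEXT PRIMARY KEY, text TEXT)")
    with open(raw_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                obj = json.loads(line)
            except ValueError:
                continue  # partial record: the stream was cut mid-line
            if "id_str" in obj and "text" in obj:
                conn.execute("INSERT OR IGNORE INTO tweets VALUES (?, ?)",
                             (obj["id_str"], obj["text"]))
    conn.commit()
    return conn
```

From there, R can pull the table back out through any of its SQL connectors, which answers the "does R read SQL?" question in the affirmative.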
comment
c1ixf4l
2r97t
datasets
false
"2011-03-02T02:35:05Z"
https://old.reddit.com/r/datasets/comments/ftlgu/publicly_available_twitter_data/c1ixf4l/
Yeah, it's not too bad. The 1% stream is available at: http://stream.twitter.com/1/statuses/sample.json It gives you a JSON pack which you just throw into a database. Keep refreshing it for new chunks of data. Not sure if there are rate-limiting guidelines you must follow, or if that sample stream is already limited. It's one line of code with MongoDB: curl http://stream.twitter.com/1/statuses/sample.json -u <user>:<pass> | mongoimport -c twitter_live As per [Eliot's page](http://eliothorowitz.com/post/459890033/streaming-twitter-into-mongodb) which explains it. Working with the data is another matter though lol.
0.6234
1
comment
c1ixewe
2r97t
datasets
false
"2011-03-02T02:33:40Z"
https://old.reddit.com/r/datasets/comments/ftlgu/publicly_available_twitter_data/c1ixewe/
I'm looking at very similar research. You are trying to learn Python to get data from Twitter yourself?
0
2
comment
c1ipvy9
2r97t
datasets
false
"2011-02-28T20:18:04Z"
https://old.reddit.com/r/datasets/comments/ftlgu/publicly_available_twitter_data/c1ipvy9/
Thanks! I did download the data and am looking at it now, trying to bring it into R. I am still simultaneously trying to learn Python so that I can write my own code (this step is not going so well). Do let me know if you get any data or anything. Am looking to fit the data to financial metrics and see if there is a causal relationship. What are you looking at?
0.8143
1
comment
c1impee
2r97t
datasets
false
"2011-02-28T04:03:16Z"
https://old.reddit.com/r/datasets/comments/ftcqr/need_help_with_imdb_dataset/c1impee/
Yes, some of them will, but most of the files are so large that Notepad will stop working when I try to open them... I think my best bet will be to go through the setup guide again and figure out how to get JMDB to work.
0.705
1
comment
c1ilkus
2r97t
datasets
false
"2011-02-27T23:31:43Z"
https://old.reddit.com/r/datasets/comments/ftlgu/publicly_available_twitter_data/c1ilkus/
Hi pzauteenk, I was the one that submitted that link to /r/datasets, download it while you can. We are in the same boat: I am also researching economic sentiment, and require at least a year's worth of data. I will keep you posted if I find/develop anything else though, as I am currently testing a pretty intense scraping script. Mine-your-own seems to be the way to go at this stage. As per rialtund's suggestion of contacting Stanford and Edinburgh, don't bother. I have already been in a bit of back-and-forth contact with them, and they are not budging. Twitter slammed down pretty hard there. [See my thread here where I ask if anyone has a copy of the data](http://www.reddit.com/r/datasets/comments/febek/rdatasets_does_anyone_here_have_a_copy_of_the/)... Keep in touch, please let me know if you find anything as well.
0.9124
2
comment
c1ik6nn
2r97t
datasets
false
"2011-02-27T17:44:56Z"
https://old.reddit.com/r/datasets/comments/ftlgu/publicly_available_twitter_data/c1ik6nn/
Unfortunately Twitter is shutting down much of this unless you want to pay through Gnip. We were allowing public downloads of anything you wanted through the 140kit.com but Twitter shut us down (even after many good emails with them and positive feedback) as per their new ToS implementation. Learning enough Ruby to interface with Twitter's API shouldn't take more than 20 hours of work at most.
0.8402
2
comment
c1iiz9p
2r97t
datasets
false
"2011-02-27T08:34:43Z"
https://old.reddit.com/r/datasets/comments/ftlgu/publicly_available_twitter_data/c1iiz9p/
Well http://140kit.com/ has some code on github where you could make your own dataset.
0.2732
2
comment
c1iiymy
2r97t
datasets
false
"2011-02-27T08:26:24Z"
https://old.reddit.com/r/datasets/comments/ftlgu/publicly_available_twitter_data/c1iiymy/
Unfortunately, there aren't any other bigger public twitter datasets. Stanford's and Edinburgh's datasets aren't available publicly anymore. If you contact them, they might make it available to you. It doesn't hurt to try. The Library of Congress has a database of every tweet made, but they haven't released it. Other than that, you're going to need to mine your own data.
0.0485
2
comment
c1iix5q
2r97t
datasets
false
"2011-02-27T08:07:51Z"
https://old.reddit.com/r/datasets/comments/ftlgu/publicly_available_twitter_data/c1iix5q/
Yes, I saw that. They provide data from Sept '09 to Jan '10. I was looking for at least a year's data, since I need to do some 'out of sample' prediction modeling for economic sentiment. I'm not very particular about Twitter and all... any popular blogging service would work... just that Twitter is one of the most popular ones.
0.8221
1
comment
c1iiwzh
2r97t
datasets
false
"2011-02-27T08:05:47Z"
https://old.reddit.com/r/datasets/comments/ftlgu/publicly_available_twitter_data/c1iiwzh/
[deleted]
null
1
comment
c1iiweo
2r97t
datasets
false
"2011-02-27T07:59:02Z"
https://old.reddit.com/r/datasets/comments/ftlgu/publicly_available_twitter_data/c1iiweo/
If you would look on the frontpage of r/datasets http://www.reddit.com/r/datasets/comments/fezdi/twitter_data_set_9281007_tweets_across_135825/ http://infolab.tamu.edu/resources/dataset/ ....
0
3
comment
c1ihp5c
2r97t
datasets
false
"2011-02-27T01:48:27Z"
https://old.reddit.com/r/datasets/comments/ftcqr/need_help_with_imdb_dataset/c1ihp5c/
It looks like you'll get the best results using the Unix tools in the ftp\tools directory to parse the .list files into a more database-like format. I had a quick look at a few .list files (e.g. composers) and the format is deceptive. The Unix tools appear to use a lexical scanner generated by flex to parse the raw files. EDIT: It appears *in theory* that the JMDB (Java Movie Database) route is probably easiest, as it will convert the IMDB text files into a straightforward MySQL or Postgres database. How far through the [setup guide](http://www.jmdb.de/content/doc/readme.txt) did you get?
0.9058
1
comment
c1ihgjn
2r97t
datasets
false
"2011-02-27T00:37:11Z"
https://old.reddit.com/r/datasets/comments/ftcqr/need_help_with_imdb_dataset/c1ihgjn/
They appear to be tab-delimited text files. Try loading them in Excel.
0.4588
1
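If Excel or Notepad chokes on the larger files, a streaming reader lets you preview them row by row instead. A minimal sketch assuming plain tab-delimited text; the latin-1 encoding is a guess for files of that era:

```python
import csv

def read_tab_delimited(path, limit=None):
    """Read up to `limit` rows from a tab-delimited text file, so huge
    files can be previewed without opening them whole in an editor."""
    rows = []
    with open(path, newline="", encoding="latin-1") as f:
        for i, row in enumerate(csv.reader(f, delimiter="\t")):
            if limit is not None and i >= limit:
                break
            rows.append(row)
    return rows
```

For example, `read_tab_delimited("actors.list", limit=20)` would show just the first 20 rows of an otherwise unmanageable file (the filename here is illustrative).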
comment
c1ih4im
2r97t
datasets
false
"2011-02-26T23:03:21Z"
https://old.reddit.com/r/datasets/comments/ftcqr/need_help_with_imdb_dataset/c1ih4im/
Have you tried opening it in notepad or something similar?
0
1
comment
c1i8rxb
2r97t
datasets
false
"2011-02-25T03:08:57Z"
https://old.reddit.com/r/datasets/comments/frk5y/looking_for_stock_market_data_15_minute_delayed/c1i8rxb/
[robots.txt](http://finance.google.com/robots.txt)
null
1
comment
c1i8ik9
2r97t
datasets
false
"2011-02-25T02:05:04Z"
https://old.reddit.com/r/datasets/comments/frk5y/looking_for_stock_market_data_15_minute_delayed/c1i8ik9/
If you're not necessarily looking to use the data in a web app, I like the [MSN Money Central](http://www.ilovefreesoftware.com/15/windows/productivity/finance/download-msn-moneycentral-stock-quotes-add-in-for-excel.html) add-in for Excel - it's great for setting up recurring queries and pulling the data into a spreadsheet format.
0.8625
2
comment
c1i4fx5
2r97t
datasets
false
"2011-02-24T08:43:33Z"
https://old.reddit.com/r/datasets/comments/frk5y/looking_for_stock_market_data_15_minute_delayed/c1i4fx5/
From what I can see in the page source, the content of the "related companies" and "recent quotes" sections is nicely formatted in JSON, but the price for the stock you are viewing is in a <span> with what seems to be a "random" id. I would be wary about constantly requesting their pages, though; it looks plausible that they have some sort of request rate limit, so that no one can build their own stock-prices API using their data.
-0.6209
1
comment
c1i4cbd
2r97t
datasets
false
"2011-02-24T07:59:57Z"
https://old.reddit.com/r/datasets/comments/fegmm/looking_for_continuously_updated_sports_datasets/c1i4cbd/
[Look into PitchFX. I don't know where it's available, but it's amazing data if you can get a hold of it.](http://en.wikipedia.org/wiki/Pitch_f/x) You may also like this recent book [Scorecasting](http://www.amazon.com/Scorecasting-Hidden-Influences-Behind-Sports/dp/0307591794), just released a few weeks ago. The two guys who wrote it have some great ideas and resources that they worked from, so you can refine what you're looking for and see where they found the data.
0.9442
2
comment
c1i3yos
2r97t
datasets
false
"2011-02-24T05:58:49Z"
https://old.reddit.com/r/datasets/comments/frk5y/looking_for_stock_market_data_15_minute_delayed/c1i3yos/
Back in 2008, I used to scrape Google Finance pages for their tick data. Though I'm not sure if it's still as easily available on the page with the graph flash applet but their resolution is much finer than other free sources including Yahoo Finance. 2 min intervals on high volume stocks. Try looking for it in the source for each Goog Finance page.
0.6877
4
comment
c1i3mnz
2r97t
datasets
false
"2011-02-24T04:38:22Z"
https://old.reddit.com/r/datasets/comments/frk5y/looking_for_stock_market_data_15_minute_delayed/c1i3mnz/
For a quick solution, try the YQL web service API from Yahoo! Quick tutorial for pulling stock quotes [here](http://jarloo.com/tutorials/get-yahoo-finance-api-data-via-yql/) It works similar to SQL: select * from yahoo.finance.stocks where symbol="aapl" or, for example, select * from yahoo.finance.historicaldata where symbol = "YHOO" and startDate = "2009-09-11" and endDate = "2010-03-10" > YQL works through a REST API. You pass the REST API your query and other optional parameters such as the format to return your query in: XML or JSON.
0.3802
7
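The comment above describes passing a YQL query plus a format parameter to a REST endpoint. A minimal Python sketch of building such a request URL (the endpoint follows the tutorial linked above; the service itself may no longer be available, and the helper name is my own):

```python
from urllib.parse import urlencode

# Public YQL endpoint as given in tutorials of the era; it may be defunct now.
YQL_ENDPOINT = "http://query.yahooapis.com/v1/public/yql"

def build_yql_url(query, fmt="json"):
    """Return the REST URL that submits a YQL query, with results
    requested in the given format ('json' or 'xml')."""
    params = {"q": query, "format": fmt}
    return YQL_ENDPOINT + "?" + urlencode(params)

# Example: request stock quote data for AAPL as JSON.
url = build_yql_url('select * from yahoo.finance.quotes where symbol = "AAPL"')
```

The returned URL can then be fetched with any HTTP client; urlencode handles escaping the spaces and quotes in the query.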
1comment
c1hxxlm
2r97t
datasets
false
"2011-02-23T04:09:18Z"
https://old.reddit.com/r/datasets/comments/fqjik/visualize_how_federal_income_taxes_are_spent_win/c1hxxlm/
For those of you looking for more examples, I like the "[Death and Taxes](http://www.wallstats.com/deathandtaxes/)" series by WallStats.
-0.2815
3
1comment
c1hi9lm
2r97t
datasets
false
"2011-02-20T02:15:55Z"
https://old.reddit.com/r/datasets/comments/fnofj/has_anyone_put_the_jeopardy_questions_into_a/c1hi9lm/
The format of the episodes on j-archive varies. J-archive isn't 100% comprehensive, as it's missing quite a few episodes from the early seasons. My scraper may have caught a corner case and not completed scraping. My unique ID isn't exactly unique either; I think I'm going to use an md5 sum of a string containing the category, dollar value, and episode number. I'll look into it.
-0.0618
1
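The comment above proposes deriving a unique ID from an md5 sum of the category, dollar value, and episode number. A minimal sketch of that idea in Python (function name and separator are my own assumptions):

```python
import hashlib

def clue_key(category, dollar_value, episode_number):
    """Derive a stable ID for a Jeopardy clue from the fields that
    together should identify it. A separator between fields avoids
    collisions like ('AB', 'C') vs ('A', 'BC')."""
    raw = "|".join([category, str(dollar_value), str(episode_number)])
    return hashlib.md5(raw.encode("utf-8")).hexdigest()
```

The same inputs always yield the same 32-character hex digest, so the key is reproducible across scraper runs.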
1comment
c1hfzd3
2r97t
datasets
false
"2011-02-19T14:39:59Z"
https://old.reddit.com/r/datasets/comments/fnofj/has_anyone_put_the_jeopardy_questions_into_a/c1hfzd3/
Perfect!! Thanks so much. I don't have time to peruse the dataset yet, but it's curious that ScraperWiki indicates some 11,000-odd records when there are over 200,000 Jeopardy questions. Maybe they throttle the scraper and it will load over the next few days. I haven't read any FAQs on the site though. I love Reddit, and I'm glad that we're getting some cool stuff in /r/datasets.
0.9471
2
1comment
c1hfi76
2r97t
datasets
false
"2011-02-19T09:09:40Z"
https://old.reddit.com/r/datasets/comments/fnofj/has_anyone_put_the_jeopardy_questions_into_a/c1hfi76/
http://scraperwiki.com/scrapers/jarchive_scraper/ Bam motherfucker!
null
3
1comment
c1hd9gz
2r97t
datasets
false
"2011-02-18T21:54:31Z"
https://old.reddit.com/r/datasets/comments/fnofj/has_anyone_put_the_jeopardy_questions_into_a/c1hd9gz/
I was planning on using Postgres, too. Would a dump be easier? I'm going to (try to) make a multi-player web-based Jeopardy game.
-0.4404
1
1comment
c1hcmeg
2r97t
datasets
false
"2011-02-18T19:48:11Z"
https://old.reddit.com/r/datasets/comments/fnofj/has_anyone_put_the_jeopardy_questions_into_a/c1hcmeg/
While IANAL, if the pages are in clear text on the public Internet (i.e. no registration required to view them), then I doubt "stealing" enters into the discussion. Even the so-called "site ToS" is a bit legally gray, IMHO, and absolutely a fantasy if the site visitor is not in the same jurisdiction as the site [author? server? see... filled with silliness].
-0.7691
1
1comment
c1hcla6
2r97t
datasets
false
"2011-02-18T19:42:17Z"
https://old.reddit.com/r/datasets/comments/fnofj/has_anyone_put_the_jeopardy_questions_into_a/c1hcla6/
Thanks for the suggestions. No, I haven't contacted j-archive, and while I can do that, I fear that people clam up if they think you are "stealing" their site (even though I'm aware that, as a fan site, this data doesn't belong to them). I hadn't heard of ScraperWiki. That's awesome; definitely gonna bookmark that site. I'll look at this tonight. Edit: liking ScraperWiki more the more I think about it, since it allows me not only to backfill but to create an ongoing feed. Thanks!
0.8805
2
1comment
c1hckhq
2r97t
datasets
false
"2011-02-18T19:37:52Z"
https://old.reddit.com/r/datasets/comments/fnofj/has_anyone_put_the_jeopardy_questions_into_a/c1hckhq/
It will go into Postgres, so CSV or some other flattened format works for me; I'd handle normalizing in the db. Thanks for your interest!
0.7345
1
1comment
c1hch62
2r97t
datasets
false
"2011-02-18T19:20:02Z"
https://old.reddit.com/r/datasets/comments/fnofj/has_anyone_put_the_jeopardy_questions_into_a/c1hch62/
I have two suggestions, hopefully in increasing order of helpfulness: 1. Have you written to j-archive.com to see if you could get a database dump (or compensate them for time spent)? I would think from their perspective it would save a ton on bandwidth costs, given that they wouldn't need to serve out the $whatever-kb HTML wrapper around the data. 2. Barring that, if you were not already aware, there is a community web scraping site (http://ScraperWiki.com/) which is almost designed for what you are proposing. There are two great things about that service: it allows community maintenance of the scraper, reducing the burden of keeping it fresh, and they *seem* to be using a key-value datastore behind the scenes, so persisting (and thus doing incremental scrapes) should be less problematic. For your consideration.
0.796
7
1comment
c1hcg1z
2r97t
datasets
false
"2011-02-18T19:14:17Z"
https://old.reddit.com/r/datasets/comments/fnofj/has_anyone_put_the_jeopardy_questions_into_a/c1hcg1z/
I wouldn't mind taking a stab at parsing the data on j-archive.com. What type of format would be most useful? XML, CSV, something totally different?
-0.2422
1
1comment
c1hc6dw
2r97t
datasets
false
"2011-02-18T18:23:07Z"
https://old.reddit.com/r/datasets/comments/fnofj/has_anyone_put_the_jeopardy_questions_into_a/c1hc6dw/
Nice try, Watson.
null
4
1comment
c1h9y3y
2r97t
datasets
false
"2011-02-18T05:21:08Z"
https://old.reddit.com/r/datasets/comments/fnofj/has_anyone_put_the_jeopardy_questions_into_a/c1h9y3y/
[This guy](http://www.slate.com/id/2284678/) has, maybe he'll give you his data?
0
3
1comment
c1h04nc
2r97t
datasets
false
"2011-02-16T14:17:51Z"
https://old.reddit.com/r/datasets/comments/flk42/healthdatagov_launches_feb_15_a_consolidated/c1h04nc/
So true. On Monday, [Todd Park](http://gov20.govfresh.com/hhs-cto-todd-park-on-healthdata-gov-text4baby-and-open-health-data/) said it was to launch Tuesday. UPDATE: the website is up.
0.4754
1
1comment
c1gzadc
2r97t
datasets
false
"2011-02-16T04:22:56Z"
https://old.reddit.com/r/datasets/comments/fikrq/creating_ngrams_from_the_twitter_dataset_posted/c1gzadc/
Heya! Fantastic work! Just checked back in here, and was very glad to see this in action! I am going down the list of n-grams, salivating over this data! *bookmarked*
0.8427
2
1comment
c1gyqy6
2r97t
datasets
false
"2011-02-16T02:21:12Z"
https://old.reddit.com/r/datasets/comments/flk42/healthdatagov_launches_feb_15_a_consolidated/c1gyqy6/
No sign of it yet(?)
-0.296
0
1comment
c1gwuta
2r97t
datasets
false
"2011-02-15T19:48:19Z"
https://old.reddit.com/r/datasets/comments/f81wt/remember_that_dataset_of_ufo_sightings_i_used_it/c1gwuta/
Hello, I think that this is an interesting data set. I am an undergraduate economics student and got a research grant to fund some research looking into this with some econometrics. I plan on running some regressions and will make sure to share my results when I have them. I was wondering if you had the "cleaned" dataset available and would be willing to share it with me to save me some time in cleaning it myself.
0.9201
-1
1comment
c1gani5
2r97t
datasets
false
"2011-02-10T22:55:23Z"
https://old.reddit.com/r/datasets/comments/fikrq/creating_ngrams_from_the_twitter_dataset_posted/c1gani5/
While I've done that in the past ( http://chir.ag/projects/preztags/ ), for this project I actually need them for the autosuggest feature.
0
1
1comment
c1gaeal
2r97t
datasets
false
"2011-02-10T21:58:27Z"
https://old.reddit.com/r/datasets/comments/fikrq/creating_ngrams_from_the_twitter_dataset_posted/c1gaeal/
Have you tried excluding common stopwords from the rankings and seeing how the results change? Cool work, by the way.
0.3182
3