<?xml version="1.0" encoding="UTF-8"?>
<!-- generator="wordpress/2.0.2" -->
<rss version="2.0" 
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	>

<channel>
	<title>Ruby, Rails, Web2.0</title>
	<link>http://www.rubyrailways.com</link>
	<description>Experiences with Ruby and Rails, Web2.0 and other development technologies</description>
	<pubDate>Thu, 18 Jan 2007 08:21:51 +0000</pubDate>
	<generator>http://wordpress.org/?v=2.0.2</generator>
	<language>en</language>
			<item>
		<title>Data extraction for Web 2.0: Screen scraping in Ruby/Rails</title>
		<link>http://www.rubyrailways.com/data-extraction-for-web-20-screen-scraping-in-rubyrails/</link>
		<comments>http://www.rubyrailways.com/data-extraction-for-web-20-screen-scraping-in-rubyrails/#comments</comments>
		<pubDate>Wed, 14 Jun 2006 07:21:06 +0000</pubDate>
		<dc:creator>peter</dc:creator>
		
	<dc:subject>Uncategorized</dc:subject>
		<guid isPermaLink="false">http://www.rubyrailways.com/data-extraction-for-web-20-screen-scraping-in-rubyrails/</guid>
	<description><![CDATA[Introduction

Despite the ongoing Web 2.0 buzz, the vast majority of Web pages 
are still very Web 1.0: they heavily mix presentation with content.
[1] This makes it hard or impossible for a computer to separate 
the wheat from the chaff: to sift out meaningful data from the rest of the elements 
used for [...]]]></description>
			<content:encoded><![CDATA[<h2>Introduction</h2>

<p>Despite the ongoing Web 2.0 buzz, the vast majority of Web pages 
are still very Web 1.0: <span id="1back">they heavily mix presentation with content.
<a href="#1">[1]</a></span> This makes it hard or impossible for a computer to separate 
the wheat from the chaff: to sift out meaningful data from the rest of the elements 
used for formatting, spacing, decoration or site navigation.</p>

<p>To remedy this problem, some sites provide access to their content
through APIs (typically via web services), but in practice this is still
limited to a few (big) sites, and some of those APIs are not even free or public.
In an ideal Web 2.0 world, where data sharing and site interoperability are among 
the basic principles, this should change soon(?) - but what should 
you do if you need the data NOW, and not in some likely-to-happen future?</p>

<h3>Manic Miner</h3>

<p>The solution is called screen/Web scraping or Web extraction - mining Web data 
by observing the page structure and extracting the relevant records. In some 
cases the task is even more complex than that: the data can be scattered over 
multiple pages, a GET/POST request may have to be triggered to reach the input page 
for the extraction, or authorization may be required to navigate to the page of 
interest. Ruby has solutions for these issues, too - we will take a look at them 
as well.</p>

<p>The extracted data can be used in any way you like - to create mashups 
(e.g. <a href="http://www.chicagocrime.org/">chicagocrime.org</a> by <a href="http://www.djangoproject.com/">Django</a> author 
<a href="http://www.holovaty.com/">Adrian Holovaty</a>), to remix and present the relevant data 
(e.g. <a href='http://www.rubystuff.com/'>rubystuff.com</a> by <a href="http://www.ruby-doc.org/">
ruby-doc.org</a> maintainer <a href="http://www.jamesbritt.com">James Britt</a>), to automate 
processes (for example, if you have several bank accounts, to compute the total amount of 
money you have without using your browser), to monitor/compare 
prices/items, to meta-search, to create a semantic web page out of a regular one - 
just to name a few. The number of possibilities is limited only by your 
imagination.</p>

<h2>Tools of the trade</h2>

<p>In this section we will check out the two main possibilities (string- and tree-based
wrappers) and take a look at HTree-, REXML-, RubyfulSoup- and WWW::Mechanize-based 
solutions.</p>

<h3>String wrappers</h3>

<p>The easiest (but in most cases inadequate) possibility is to view the 
HTML document as a string. In this case you can use regular expressions to
mine the relevant data. For example, if you would like to extract the names
of goods and their prices from a Web shop, and you know that they are
both in the same HTML element, like:</p>

<pre>
&lt;td&gt;Samsung SyncMasta 21''LCD     $750.00&lt;/td&gt;
</pre>

<p>you can extract this record from Ruby with this code snippet:</p>

<pre>
page.scan(/&lt;td&gt;(.*?)\s+(\$\d+\.\d{2})&lt;\/td&gt;/)
</pre>
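<p>To make this concrete, here is a tiny, self-contained sketch (the rows and prices are invented for illustration; note the non-greedy .*?, which keeps the padding spaces out of the captured name):</p>

```ruby
# Invented sample rows standing in for a Web shop's HTML.
html = <<-HTML
<td>Samsung SyncMasta 21''LCD     $750.00</td>
<td>Generic Flatscreen 19''LCD    $250.00</td>
HTML

# Each match yields a [name, price] pair; \s+ swallows the padding,
# and the non-greedy .*? keeps it out of the captured name.
records = html.scan(/<td>(.*?)\s+(\$\d+\.\d{2})<\/td>/)
records.each { |name, price| puts "#{name}: #{price}" }
```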

<p>Let&#8217;s see a real (although simple) example:</p>

<pre>
1 require 'open-uri'

2 url = "http://www.google.com/search?q=ruby"
3 open(url) {
4   |page| page_content = page.read()
5   links = page_content.scan(/&lt;a class=l.*?href=\"(.*?)\"/).flatten
6   links.each {|link| puts link}
7 }
</pre>

<p>The first and crucial part of creating the wrapper program was observing the
page source: we had to look for something that appears only in the result links.
In this case this was the presence of the &#8216;class&#8217; attribute with the value &#8216;l&#8217;. This
task is usually not this easy, but for illustration purposes it serves well.</p>

<p>This minimalistic example shows the basic concepts: how to load the
contents of a Web page into a string (line 4), and how to extract the result
links on a Google search result page (line 5). After execution, the program 
will list the first 10 links of a Google search query for the word &#8216;ruby&#8217; (line 6).</p>

<p>However, in practice you will mostly need to extract data that is not
contained in one contiguous string, but spread across multiple HTML tags, or divided
in a way where a string is not the proper structure to search. <span id="2back">In
this case it is better to view the HTML document as a tree.<a href="#2">[2]</a></span></p>

<h3>Tree wrappers</h3>

<p>The tree-based approach, although it enables more powerful techniques, 
has its problems, too: an HTML document can look very good in a browser, 
yet still be seriously malformed (unclosed/misused tags). It is a
non-trivial problem to parse such a document into a structured format
like XML, since XML parsers can work with well-formed documents only.</p>

<h4>HTree and REXML</h4>

<p>There is a solution (in most cases) for this problem, too:
it is called <a href='http://cvs.m17n.org/~akr/htree/'>HTree</a>. This handy package is able 
to tidy up malformed HTML input, turning it into XML - the recent version is 
capable of transforming the input into the nicest possible XML from our point of view: a REXML
Document. (<a href='http://www.germane-software.com/software/rexml/docs/tutorial.html'>
REXML</a> is Ruby&#8217;s standard XML/XPath processing library.)</p>

<p>After preprocessing the page content with HTree, you can unleash the 
full power of XPath, an XML document querying language 
highly suitable for web extraction.</p>
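<p>As a quick, self-contained taste of such XPath queries, here is REXML alone on an invented, already well-formed fragment - the kind of clean tree HTree would hand you after tidying up a real page:</p>

```ruby
require 'rexml/document'

# An invented, already well-formed fragment standing in for HTree's output.
xml = <<-XML
<div>
  <a class="l" href="http://example.com/first">first</a>
  <a class="x" href="http://example.com/other">other</a>
  <a class="l" href="http://example.com/second">second</a>
</div>
XML

doc = REXML::Document.new(xml)
# Select every <a> whose 'class' attribute equals 'l' and collect the hrefs.
hrefs = []
doc.root.each_element('//a[@class="l"]') do |elem|
  hrefs << elem.attribute('href').value
end
hrefs.each { |h| puts h }
```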

<p><span id="3back">Refer to <a href="#3">[3]</a> for the installation instructions of HTree.</span></p>

<p>Let&#8217;s revisit the previous Google example: </p>

<pre>
1 require 'open-uri'
2 require 'htree'
3 require 'rexml/document'

4 url = "http://www.google.com/search?q=ruby"
5 open(url) {
6  |page| page_content = page.read()
7  doc = HTree(page_content).to_rexml
8  doc.root.each_element('//a[@class="l"]') {
        |elem| puts elem.attribute('href').value }  
9 }
</pre>

<p>HTree is used in the 7th line only - it converts the HTML page (loaded into the page_content
variable on the previous line) into a REXML Document. The real magic happens 
in the 8th line. We select all the &lt;a&gt; tags which have an attribute &#8216;class&#8217; with the
value &#8216;l&#8217;, <span id="4back">then for each such element write out the &#8216;href&#8217; attribute. <a href="#4">[4]</a></span>
I think this approach is much more natural for querying an XML document than a regular expression. 
The only drawback is that you have to learn a new language, XPath, which is (mainly from 
version 2.0 on) quite difficult to master. However, just to get started you do not need to know
much of it, yet you gain lots of raw power compared to the possibilities offered by regular expressions.</p>

<h4>RubyfulSoup</h4>

<p><a href='http://www.crummy.com/software/RubyfulSoup/'>RubyfulSoup</a> is a very powerful Ruby 
screen-scraping package, which offers
possibilities similar to HTree + XPath. For people who are not handy with XML/XPath,
RubyfulSoup may be a wise compromise: it&#8217;s an all-in-one, effective HTML parsing 
and web scraping tool with Ruby-like syntax. Although its expressive power
lags behind XPath 2.0, it should be adequate in 90% of the cases. If your problem is in the 
remaining 10%, you probably don&#8217;t need to read this tutorial anyway <img src='http://www.rubyrailways.com/wp-includes/images/smilies/icon_wink.gif' alt=';-)' class='wp-smiley' />  
<span id="5back">Installation instructions can be found here: <a href="#5">[5]</a>.</span></p>

<p>The google example again:</p>

<pre>
1  require 'rubygems'
2  require 'rubyful_soup'
3  require 'open-uri'

4  url = "http://www.google.com/search?q=ruby"  
5  open(url) { 
6    |page| page_content = page.read()
7    soup = BeautifulSoup.new(page_content)
8    result = soup.find_all('a', :attrs => {'class' => 'l'}) 
9    result.each { |tag| puts tag['href'] }
10 }
</pre>

<p>As you can see, the difference between the HTree + REXML and RubyfulSoup examples is minimal - 
basically it is limited to differences in the querying syntax. On line 8, you look up all the
&lt;a&gt; tags with the specified attribute list (in this case a hash with a single pair, { &#8216;class&#8217; => &#8216;l&#8217; }).
The other syntactical difference is looking up the value of the &#8216;href&#8217; attribute on line 9.</p>

<p>I have found RubyfulSoup the ideal tool for screen scraping from a single page - however, web navigation
(GET/POST, authentication, following links) is impossible, or obscure at best, with 
this tool (which is perfectly OK, since it does not aim to provide this functionality). However, there 
is nothing to fear - the next package does exactly that.</p>

<h4>WWW::Mechanize</h4>

<p>As of today, the vast majority of data resides in the <i>deep Web</i> - databases that 
are accessible by querying through web forms. For example, if you would like to get information 
on flights from New York to Chicago, you will (hopefully) not search for it on Google - 
you go to the website of the Ruby Airlines instead, fill in the adequate fields and click on search. 
The information which appears is not available on a static page - it&#8217;s looked up on demand and 
generated on the fly - so until the very moment the web server generates it for you, it&#8217;s practically 
non-existent (i.e. it resides in the deep Web) and hence impossible to extract. At this point 
<a href='http://www.ntecs.de/blog/Blog/WWW-Mechanize.rdoc'>WWW::Mechanize</a> comes into play. 
<span id="6back">(See <a href="#6">[6]</a> for installation instructions.)</span></p>

<p>WWW::Mechanize belongs to the family of screen scraping products (along with http-access2 and Watir)
that are capable of driving a browser. Let&#8217;s apply the &#8216;Show, don&#8217;t tell&#8217; mantra - for everybody&#8217;s delight
and surprise, illustrated on our Google scenario: </p>

<pre>
require 'rubygems'
require 'mechanize'

agent = WWW::Mechanize.new
page = agent.get('http://www.google.com')

search_form = page.forms.with.name("f").first
search_form.fields.name("q").first.value = "ruby"
search_results = agent.submit(search_form)
search_results.links.each {
     |link| puts link.href if link.class_name == "l" }
</pre>

<p>I have to admit that I have been cheating with this one <img src='http://www.rubyrailways.com/wp-includes/images/smilies/icon_wink.gif' alt=';-)' class='wp-smiley' /> . I had to hack WWW::Mechanize to 
access a custom attribute (in this case &#8216;class&#8217;), because normally this is not available.
<span id="7back">See how I did it here: <a href="#7">[7]</a></span></p>

<p>This example illustrates a major difference between RubyfulSoup and Mechanize: in addition to screen scraping 
functionality, WWW::Mechanize is able to drive the web browser like a human user: it filled in the 
search form and clicked the &#8217;search&#8217; button, navigating to the result page, then performed screen scraping
on the results.</p>

<p>This example also pointed out that RubyfulSoup - although lacking navigation possibilities -
is much more powerful at screen scraping. For example, as of now, you cannot extract arbitrary (say &lt;p&gt;) 
tags with Mechanize, and as the example illustrated, attribute extraction is not possible either - not to 
mention more complex, XPath-like queries (e.g. the third &lt;td&gt; in the second &lt;tr&gt;), which are easy with 
RubyfulSoup/REXML. My recommendation is to combine these tools, as pointed out in the last section of this article.</p>

<h4>WATIR</h4>

<p>From the <a href="http://wtr.rubyforge.org/">WATIR</a> page:</p>

<p><i>WATIR stands for &#8220;Web Application Testing in Ruby&#8221;. Watir drives the Internet Explorer browser the same 
way people do. It clicks links, fills in forms, presses buttons. Watir also checks results, such as whether 
expected text appears on the page.</i></p>

<p>Unfortunately I have no experience with WATIR, since I am a Linux-only nerd, using Windows for occasional 
gaming but not for development, so I cannot tell anything about it firsthand; but judging from the
mailing list contributions, I think Watir is more mature and feature-rich than Mechanize. Definitely
check it out if you are running on Win32.</p>

<h2>The silver bullet</h2>

<p>For a complex scenario, usually an amalgam of the above tools can provide the ultimate solution:
the combination of WWW::Mechanize or WATIR (for automation of site navigation), RubyfulSoup (for serious screen 
scraping, where the above two are not enough) and HTree + REXML (for extreme cases where even RubyfulSoup
can&#8217;t help you).</p>

<p>I have been creating industrial-strength, robust and effective screen scraping solutions over the last five years 
of my career, and I can show you a handful of pages where even the most sophisticated solutions do not work (and
I am not talking about scraping with RubyfulSoup here, but about even more powerful solutions, like embedding 
Mozilla in your application and directly accessing the DOM). So the basic rule is: there is no 
spoon (err&#8230; silver bullet) - and I know from experience that the number of &#8216;hard-to-scrape&#8217; sites is rising 
(partially because of Web 2.0 stuff like AJAX, but also because some people would not like their sites to 
be extracted, and apply different anti-scraping masquerading techniques).</p>

<p>The described tools should be enough to get you started - beyond that, you will have to figure out how to 
drill down to your data on the concrete page of interest.</p>

<p>In the next installment of this series, I will create a mashup application using the introduced tools, from some
more interesting data than Google <img src='http://www.rubyrailways.com/wp-includes/images/smilies/icon_wink.gif' alt=';-)' class='wp-smiley' /> 
The results will be presented on a Ruby on Rails powered page, in a sortable AJAX table. 
<br/>
<br/>
<span style='font-size:smaller'>
<i>If you liked this article, be sure to <a href='http://www.digg.com/programming/Data_extraction_for_Web_2.0:_Screen_scraping_in_Ruby_Rails'>vote for it on digg here</a> so that others can read it, too!</i></span></p>

<div style='border-top: 1px solid black; padding-top:40px; margin-top:40px'>

<div id="1">
[1] There are a lot of other issues (social aspect, interoperability, design principles
etc.), but these fall outside the scope of the current topic. <a href="#1back">Back</a>
</div>

<p><br/></p>

<div id="2">
[2] However, if the problem can be tackled relatively easily with regular expressions, it&#8217;s usually good 
to use them, for several reasons: no additional packages are needed (even more important if you don&#8217;t have
install rights), you don&#8217;t have to rely on the HTML parser&#8217;s output, and if regular expressions suffice, they are
usually the easier way to go. <a href="#2back">Back</a>
</div>

<p><br/></p>

<div id="3">
[3] Install HTree:
<pre>
wget http://cvs.m17n.org/viewcvs/ruby/htree.tar.gz
tar -xzvf htree.tar.gz
sudo ruby install.rb
</pre>
(or download the archive from your browser instead of using wget) <a href="#3back">Back</a>
</div>

<p><br/></p>

<div id="4">
[4] There are plenty of other (possibly smarter) ways to do this, for example using 
each_element_with_attribute, or a different, more effective XPath expression - I have chosen 
this method to get as close to the regexp example as possible, so it is easy to observe 
the difference between the two approaches to the same solution. For a real REXML tutorial/documentation,
visit the <a href='http://www.germane-software.com/software/rexml/docs/tutorial.html'>REXML site</a>.
<a href="#4back">Back</a>
</div>

<p><br/></p>

<div id="5">
[5] The easiest way is to install rubyful_soup as a gem:
<pre>
sudo gem install rubyful_soup
</pre>
Since it was installed as a gem, don&#8217;t forget to require &#8216;rubygems&#8217; before requiring rubyful_soup.
<a href="#5back">Back</a>
</div>

<p><br/></p>

<div id="6">
[6] Install Mechanize as a gem:
<pre>
sudo gem install mechanize
</pre>
<a href="#6back">Back</a>
</div>

<p><br/></p>

<div id="7">
[7] I have added two lines to WWW::Mechanize source file page_elements.rb:

To the class definition:
<pre>
attr_reader :class_name
</pre>

Into the constructor:
<pre>
@class_name = node.attributes['class']
</pre>
</div>

</div>
]]></content:encoded>
			<wfw:commentRSS>http://www.rubyrailways.com/data-extraction-for-web-20-screen-scraping-in-rubyrails/feed/</wfw:commentRSS>
		</item>
	</channel>
</rss>

