<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>
  File: README
  
    &mdash; FSelector Documentation
  
</title>

  <link rel="stylesheet" href="css/style.css" type="text/css" media="screen" charset="utf-8" />

  <link rel="stylesheet" href="css/common.css" type="text/css" media="screen" charset="utf-8" />

<script type="text/javascript" charset="utf-8">
  relpath = '';
  if (relpath != '') relpath += '/';
</script>

  <script type="text/javascript" charset="utf-8" src="js/jquery.js"></script>

  <script type="text/javascript" charset="utf-8" src="js/app.js"></script>


  </head>
  <body>
    <script type="text/javascript" charset="utf-8">
      if (window.top.frames.main) document.body.className = 'frames';
    </script>
    
    <div id="header">
      <div id="menu">
  
    <a href="_index.html" title="Index">Index</a> &raquo; 
    <span class="title">File: README</span>
  
  
  <div class="noframes"><span class="title">(</span><a href="." target="_top">no frames</a><span class="title">)</span></div>
</div>

      <div id="search">
  
    <a id="class_list_link" href="#">Class List</a>
  
    <a id="method_list_link" href="#">Method List</a>
  
    <a id="file_list_link" href="#">File List</a>
  
</div>
      <div class="clear"></div>
    </div>
    
    <iframe id="search_frame"></iframe>
    
    <div id="content"><div id='filecontents'><h1>FSelector: a Ruby gem for feature selection</h1>

<p><strong>Home</strong>: <a href="https://rubygems.org/gems/fselector">https://rubygems.org/gems/fselector</a><br>
<strong>Source Code</strong>: <a href="https://github.com/need47/fselector">https://github.com/need47/fselector</a><br>
<strong>Documentation</strong>: <a href="http://rubydoc.info/gems/fselector/frames">http://rubydoc.info/gems/fselector/frames</a><br>
<strong>Publication</strong>: <a href="http://bioinformatics.oxfordjournals.org/content/28/21/2851">Bioinformatics, 2012, 28, 2851-2852</a><br>
<strong>Author</strong>: Tiejun Cheng<br>
<strong>Email</strong>: <a href="mailto:need47@gmail.com">need47@gmail.com</a><br>
<strong>Copyright</strong>: 2012<br>
<strong>License</strong>: MIT License<br>
<strong>Latest Version</strong>: 1.4.0<br>
<strong>Release Date</strong>: 2012-11-05</p>

<h2>Synopsis</h2>

<p>FSelector is a Ruby gem that integrates various feature 
selection algorithms and related utilities into a single 
package. You are welcome to contact me (<a href="mailto:need47@gmail.com">need47@gmail.com</a>) if you&#39;d like to 
contribute your own algorithms or report a bug. FSelector lets you 
perform feature selection with either a single algorithm or an 
ensemble of multiple algorithms, and supports related tasks including 
normalization and discretization of continuous data, as well as 
replacing missing feature values according to various criteria. FSelector takes a 
full-feature data set in CSV, LibSVM, or WEKA file format and 
outputs a reduced data set containing only the selected subset of features, which 
can then be used as input for machine learning software 
such as LibSVM and WEKA. Note that FSelector, as a collection of filter methods, 
does not implement any classifier such as support vector machines or 
random forest. See below for a list of FSelector&#39;s features, 
the <a href="file.ChangeLog.html" title="ChangeLog">ChangeLog</a> for updates, and <a href="file.HowToContribute.html" title="HowToContribute">HowToContribute</a> if you want 
to contribute.</p>

<h2>Feature List</h2>

<p><strong>1. supported input/output file types</strong></p>

<ul>
<li>csv</li>
<li>libsvm</li>
<li>weka ARFF</li>
<li>online data sets in one of the above three formats (read only)</li>
<li>random data (read only, for testing purposes)</li>
</ul>
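<p>Of these, LibSVM is the simplest format: each line holds a class label followed by sparse <code>index:value</code> pairs. As a quick gem-independent illustration of the format itself (the helper below is hypothetical, not part of FSelector's API), one sample can be serialized like this:</p>

```ruby
# Serialize one sample to the sparse LibSVM line format:
#   <label> <index>:<value> <index>:<value> ...
# Zero-valued features are conventionally omitted.
def to_libsvm_line(label, values)
  pairs = values.each_with_index
                .reject { |v, _| v.zero? }
                .map { |v, i| "#{i + 1}:#{v}" } # LibSVM indices are 1-based
  ([label] + pairs).join(' ')
end

puts to_libsvm_line(1, [0.5, 0.0, 2.0])  # => "1 1:0.5 3:2.0"
```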

<p><strong>2. available feature selection/ranking algorithms</strong></p>

<pre class="code ruby"><code>algorithm                        shortcut    algo_type  applicability  feature_type
--------------------------------------------------------------------------------------------------
Accuracy                         Acc         weighting  multi-class    discrete
AccuracyBalanced                 Acc2        weighting  multi-class    discrete
BiNormalSeparation               BNS         weighting  multi-class    discrete
CFS_d                            CFS_d       searching  multi-class    discrete
ChiSquaredTest                   CHI         weighting  multi-class    discrete
CorrelationCoefficient           CC          weighting  multi-class    discrete
DocumentFrequency                DF          weighting  multi-class    discrete
F1Measure                        F1          weighting  multi-class    discrete
FishersExactTest                 FET         weighting  multi-class    discrete
FastCorrelationBasedFilter       FCBF        searching  multi-class    discrete
GiniIndex                        GI          weighting  multi-class    discrete
GMean                            GM          weighting  multi-class    discrete
GSSCoefficient                   GSS         weighting  multi-class    discrete
InformationGain                  IG          weighting  multi-class    discrete
INTERACT                         INTERACT    searching  multi-class    discrete
JMeasure                         JM          weighting  multi-class    discrete
KLDivergence                     KLD         weighting  multi-class    discrete
MatthewsCorrelationCoefficient   MCC, PHI    weighting  multi-class    discrete
McNemarsTest                     MNT         weighting  multi-class    discrete
OddsRatio                        OR          weighting  multi-class    discrete
OddsRatioNumerator               ORN         weighting  multi-class    discrete
PhiCoefficient                   PHI         weighting  multi-class    discrete
Power                            Power       weighting  multi-class    discrete
Precision                        Precision   weighting  multi-class    discrete
ProbabilityRatio                 PR          weighting  multi-class    discrete
Recall                           Recall      weighting  multi-class    discrete
Relief_d                         Relief_d    weighting  two-class      discrete
ReliefF_d                        ReliefF_d   weighting  multi-class    discrete
Sensitivity                      SN, Recall  weighting  multi-class    discrete
Specificity                      SP          weighting  multi-class    discrete
SymmetricalUncertainty           SU          weighting  multi-class    discrete
BetweenWithinClassesSumOfSquare  BSS_WSS     weighting  multi-class    continuous
CFS_c                            CFS_c       searching  multi-class    continuous
FTest                            FT          weighting  multi-class    continuous
KS_CCBF                          KS_CCBF     searching  multi-class    continuous
KSTest                           KST         weighting  two-class      continuous
PMetric                          PM          weighting  two-class      continuous
Relief_c                         Relief_c    weighting  two-class      continuous
ReliefF_c                        ReliefF_c   weighting  multi-class    continuous
TScore                           TS          weighting  two-class      continuous
WilcoxonRankSum                  WRS         weighting  two-class      continuous
LasVegasFilter                   LVF         searching  multi-class    discrete, continuous, mixed
LasVegasIncremental              LVI         searching  multi-class    discrete, continuous, mixed
Random                           Rand        weighting  multi-class    discrete, continuous, mixed
RandomSubset                     RandS       searching  multi-class    discrete, continuous, mixed
</code></pre>
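<p>To make the weighting idea concrete: an algorithm such as InformationGain (IG) scores a discrete feature by how much knowing its value reduces uncertainty about the class, i.e. H(C) - H(C|f). A minimal gem-independent sketch of that arithmetic (not FSelector's actual implementation):</p>

```ruby
# Sketch of the InformationGain idea: IG(f) = H(C) - H(C|f),
# where C is the class label and f a discrete feature.
def entropy(labels)
  n = labels.size.to_f
  labels.tally.values.sum { |c| p = c / n; -p * Math.log2(p) }
end

def information_gain(labels, feature_values)
  n = labels.size.to_f
  h_cond = feature_values.zip(labels)
                         .group_by { |v, _| v }
                         .sum { |_, pairs| (pairs.size / n) * entropy(pairs.map(&:last)) }
  entropy(labels) - h_cond
end

labels  = [:a, :a, :b, :b]
feature = [1, 1, 2, 2]            # perfectly separates the classes
information_gain(labels, feature) # => 1.0, the full H(C) of a balanced two-class set
```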

<p><strong>note on the feature selection interface:</strong><br>
  there are two types of filter algorithms: filter_by_feature_weighting and filter_by_feature_searching</p>

<ul>
<li>for the former: use either <strong>select_feature_by_score!</strong> or <strong>select_feature_by_rank!</strong><br></li>
<li>for the latter: use <strong>select_feature!</strong></li>
</ul>
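<p>The difference between the two weighting-style interfaces can be sketched in plain Ruby on a hypothetical feature =&gt; score map (FSelector applies the same cutoff logic to the scores its weighting algorithms compute):</p>

```ruby
# Hypothetical feature => score map, as a weighting algorithm might produce.
scores = { 'f1' => 0.90, 'f2' => 0.05, 'f3' => 0.42, 'f4' => 0.008 }

# select_feature_by_score!('>0.01') keeps features whose score passes a cutoff:
by_score = scores.select { |_, s| s > 0.01 }.keys
# => ["f1", "f2", "f3"]

# select_feature_by_rank!('<=2') keeps the top-ranked features instead
# (rank 1 = highest score):
ranked  = scores.sort_by { |_, s| -s }.map(&:first)
by_rank = ranked.first(2)
# => ["f1", "f3"]
```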

<p><strong>3. feature selection approaches</strong></p>

<ul>
<li>by a single algorithm</li>
<li>by multiple algorithms in a tandem manner</li>
<li>by multiple algorithms in an ensemble manner (share the same feature selection interface as single algorithm)</li>
</ul>

<p><strong>4. available normalization and discretization algorithms for continuous features</strong></p>

<pre class="code ruby"><code>algorithm                         note
---------------------------------------------------------------------------------------
normalize_by_log!                 normalize by logarithmic transformation
normalize_by_min_max!             normalize by scaling into [min, max]
normalize_by_zscore!              normalize by converting into zscore
discretize_by_equal_width!        discretize by equal width among intervals
discretize_by_equal_frequency!    discretize by equal frequency among intervals
discretize_by_ChiMerge!           discretize by ChiMerge algorithm
discretize_by_Chi2!               discretize by Chi2 algorithm
discretize_by_MID!                discretize by Multi-Interval Discretization algorithm
discretize_by_TID!                discretize by Three-Interval Discretization algorithm
</code></pre>
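<p>The min-max and z-score normalizations and the equal-width discretization are simple enough to sketch in plain Ruby. FSelector's own methods operate in place on its internal data structure; this only illustrates the underlying arithmetic:</p>

```ruby
values = [2.0, 4.0, 6.0, 10.0]

# normalize_by_min_max!: scale into [min, max], here [0, 1]
lo, hi = values.min, values.max
min_max = values.map { |v| (v - lo) / (hi - lo) }
# => [0.0, 0.25, 0.5, 1.0]

# normalize_by_zscore!: subtract the mean, divide by the standard deviation
mean = values.sum / values.size
sd   = Math.sqrt(values.sum { |v| (v - mean)**2 } / values.size)
zscore = values.map { |v| (v - mean) / sd }

# discretize_by_equal_width!: cut [min, max] into k equal-width intervals
k = 2
width = (hi - lo) / k
bins = values.map { |v| [((v - lo) / width).floor, k - 1].min } # bin index 0..k-1
# => [0, 0, 1, 1]
```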

<p><strong>5. available algorithms for replacing missing feature values</strong></p>

<pre class="code ruby"><code>algorithm                         note                                   feature_type                     
---------------------------------------------------------------------------------------------
replace_by_fixed_value!           replace by a fixed value               discrete, continuous
replace_by_mean_value!            replace by mean feature value          continuous
replace_by_median_value!          replace by median feature value        continuous
replace_by_knn_value!             replace by weighted knn feature value  continuous
replace_by_most_seen_value!       replace by most seen feature value     discrete
</code></pre>
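<p>For instance, mean imputation, the idea behind replace_by_mean_value! for a continuous feature, can be sketched independently of the gem (the helper name below is hypothetical, with nil standing in for a missing value):</p>

```ruby
# Replace missing entries (nil) of a continuous feature by the feature's mean,
# computed over the observed values only.
def replace_missing_by_mean(values)
  seen = values.compact
  mean = seen.sum.to_f / seen.size
  values.map { |v| v.nil? ? mean : v }
end

replace_missing_by_mean([1.0, nil, 3.0, nil])  # => [1.0, 2.0, 3.0, 2.0]
```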

<h2>Installing</h2>

<p>To install FSelector, use the following command:</p>

<pre class="code ruby"><code>$ gem install fselector
</code></pre>

<p><strong>note:</strong> As of version 0.5.0, FSelector uses the RinRuby gem (<a href="http://rinruby.ddahl.org">http://rinruby.ddahl.org</a>) 
  as a seamless bridge to the statistical routines of the R package (<a href="http://www.r-project.org">http://www.r-project.org</a>), 
  which greatly expands the range of algorithms FSelector can include, especially those relying 
  on statistical tests. To this end, please install R beforehand. RinRuby itself is 
  installed automatically with FSelector by the above command.</p>

<h2>Usage</h2>

<p><strong>1. feature selection by a single algorithm</strong></p>

<pre class="code ruby"><code><span class='id identifier rubyid_require'>require</span> <span class='tstring'><span class='tstring_beg'>'</span><span class='tstring_content'>fselector</span><span class='tstring_end'>'</span></span>

<span class='comment'># use InformationGain (IG) as a feature selection algorithm
</span><span class='id identifier rubyid_r1'>r1</span> <span class='op'>=</span> <span class='const'>FSelector</span><span class='op'>::</span><span class='const'>IG</span><span class='period'>.</span><span class='id identifier rubyid_new'>new</span>

<span class='comment'># read from random data (or csv, libsvm, weka ARFF file)
</span><span class='comment'># no. of samples: 100
</span><span class='comment'># no. of classes: 2
</span><span class='comment'># no. of features: 15
</span><span class='comment'># no. of possible values for each feature: 3
</span><span class='comment'># allow missing values: true
</span><span class='id identifier rubyid_r1'>r1</span><span class='period'>.</span><span class='id identifier rubyid_data_from_random'>data_from_random</span><span class='lparen'>(</span><span class='int'>100</span><span class='comma'>,</span> <span class='int'>2</span><span class='comma'>,</span> <span class='int'>15</span><span class='comma'>,</span> <span class='int'>3</span><span class='comma'>,</span> <span class='kw'>true</span><span class='rparen'>)</span>

<span class='comment'># number of features before feature selection
</span><span class='id identifier rubyid_puts'>puts</span> <span class='tstring'><span class='tstring_beg'>&quot;</span><span class='tstring_content'>  # features (before): </span><span class='tstring_end'>&quot;</span></span><span class='op'>+</span> <span class='id identifier rubyid_r1'>r1</span><span class='period'>.</span><span class='id identifier rubyid_get_features'>get_features</span><span class='period'>.</span><span class='id identifier rubyid_size'>size</span><span class='period'>.</span><span class='id identifier rubyid_to_s'>to_s</span>

<span class='comment'># select the top-ranked features with scores &gt;0.01
</span><span class='id identifier rubyid_r1'>r1</span><span class='period'>.</span><span class='id identifier rubyid_select_feature_by_score!'>select_feature_by_score!</span><span class='lparen'>(</span><span class='tstring'><span class='tstring_beg'>'</span><span class='tstring_content'>&gt;0.01</span><span class='tstring_end'>'</span></span><span class='rparen'>)</span>

<span class='comment'># number of features after feature selection
</span><span class='id identifier rubyid_puts'>puts</span> <span class='tstring'><span class='tstring_beg'>&quot;</span><span class='tstring_content'>  # features (after): </span><span class='tstring_end'>&quot;</span></span><span class='op'>+</span> <span class='id identifier rubyid_r1'>r1</span><span class='period'>.</span><span class='id identifier rubyid_get_features'>get_features</span><span class='period'>.</span><span class='id identifier rubyid_size'>size</span><span class='period'>.</span><span class='id identifier rubyid_to_s'>to_s</span>

<span class='comment'># you can also use a second algorithm for further feature selection
</span><span class='comment'># e.g. use the ChiSquaredTest (CHI) with Yates' continuity correction
</span><span class='comment'># initialize from r1's data
</span><span class='id identifier rubyid_r2'>r2</span> <span class='op'>=</span> <span class='const'>FSelector</span><span class='op'>::</span><span class='const'>CHI</span><span class='period'>.</span><span class='id identifier rubyid_new'>new</span><span class='lparen'>(</span><span class='symbol'>:yates</span><span class='comma'>,</span> <span class='id identifier rubyid_r1'>r1</span><span class='period'>.</span><span class='id identifier rubyid_get_data'>get_data</span><span class='rparen'>)</span>

<span class='comment'># number of features before feature selection
</span><span class='id identifier rubyid_puts'>puts</span> <span class='tstring'><span class='tstring_beg'>&quot;</span><span class='tstring_content'>  # features (before): </span><span class='tstring_end'>&quot;</span></span><span class='op'>+</span> <span class='id identifier rubyid_r2'>r2</span><span class='period'>.</span><span class='id identifier rubyid_get_features'>get_features</span><span class='period'>.</span><span class='id identifier rubyid_size'>size</span><span class='period'>.</span><span class='id identifier rubyid_to_s'>to_s</span>

<span class='comment'># select the top-ranked 3 features
</span><span class='id identifier rubyid_r2'>r2</span><span class='period'>.</span><span class='id identifier rubyid_select_feature_by_rank!'>select_feature_by_rank!</span><span class='lparen'>(</span><span class='tstring'><span class='tstring_beg'>'</span><span class='tstring_content'>&lt;=3</span><span class='tstring_end'>'</span></span><span class='rparen'>)</span>

<span class='comment'># number of features after feature selection
</span><span class='id identifier rubyid_puts'>puts</span> <span class='tstring'><span class='tstring_beg'>&quot;</span><span class='tstring_content'>  # features (after): </span><span class='tstring_end'>&quot;</span></span><span class='op'>+</span> <span class='id identifier rubyid_r2'>r2</span><span class='period'>.</span><span class='id identifier rubyid_get_features'>get_features</span><span class='period'>.</span><span class='id identifier rubyid_size'>size</span><span class='period'>.</span><span class='id identifier rubyid_to_s'>to_s</span>

<span class='comment'># save data to standard output as a weka ARFF file (sparse format)
</span><span class='comment'># with selected features only
</span><span class='id identifier rubyid_r2'>r2</span><span class='period'>.</span><span class='id identifier rubyid_data_to_weka'>data_to_weka</span><span class='lparen'>(</span><span class='symbol'>:stdout</span><span class='comma'>,</span> <span class='symbol'>:sparse</span><span class='rparen'>)</span>
</code></pre>

<p><strong>2. feature selection by an ensemble of multiple feature selectors</strong></p>

<pre class="code ruby"><code><span class='id identifier rubyid_require'>require</span> <span class='tstring'><span class='tstring_beg'>'</span><span class='tstring_content'>fselector</span><span class='tstring_end'>'</span></span>

<span class='comment'># example 1
</span><span class='comment'>#
</span>

<span class='comment'># creating an ensemble of feature selectors by using 
</span><span class='comment'># a single feature selection algorithm (INTERACT) 
</span><span class='comment'># by instance perturbation (e.g. random sampling)
</span>
<span class='comment'># test for the type of feature subset selection algorithms
</span><span class='id identifier rubyid_r'>r</span> <span class='op'>=</span> <span class='const'>FSelector</span><span class='op'>::</span><span class='const'>INTERACT</span><span class='period'>.</span><span class='id identifier rubyid_new'>new</span><span class='lparen'>(</span><span class='float'>0.0001</span><span class='rparen'>)</span>

<span class='comment'># an ensemble of 40 feature selectors with 90% data by random sampling
</span><span class='id identifier rubyid_re'>re</span> <span class='op'>=</span> <span class='const'>FSelector</span><span class='op'>::</span><span class='const'>EnsembleSingle</span><span class='period'>.</span><span class='id identifier rubyid_new'>new</span><span class='lparen'>(</span><span class='id identifier rubyid_r'>r</span><span class='comma'>,</span> <span class='int'>40</span><span class='comma'>,</span> <span class='float'>0.90</span><span class='comma'>,</span> <span class='symbol'>:random_sampling</span><span class='rparen'>)</span>

<span class='comment'># read SPECT data set (under the test/ directory)
</span><span class='id identifier rubyid_re'>re</span><span class='period'>.</span><span class='id identifier rubyid_data_from_csv'>data_from_csv</span><span class='lparen'>(</span><span class='tstring'><span class='tstring_beg'>'</span><span class='tstring_content'>test/SPECT_train.csv</span><span class='tstring_end'>'</span></span><span class='rparen'>)</span>

<span class='comment'># number of features before feature selection
</span><span class='id identifier rubyid_puts'>puts</span> <span class='tstring'><span class='tstring_beg'>'</span><span class='tstring_content'>  # features (before): </span><span class='tstring_end'>'</span></span> <span class='op'>+</span> <span class='id identifier rubyid_re'>re</span><span class='period'>.</span><span class='id identifier rubyid_get_features'>get_features</span><span class='period'>.</span><span class='id identifier rubyid_size'>size</span><span class='period'>.</span><span class='id identifier rubyid_to_s'>to_s</span>

<span class='comment'># only features with above average count among ensemble are selected
</span><span class='id identifier rubyid_re'>re</span><span class='period'>.</span><span class='id identifier rubyid_select_feature!'>select_feature!</span>

<span class='comment'># number of features after feature selection
</span><span class='id identifier rubyid_puts'>puts</span> <span class='tstring'><span class='tstring_beg'>'</span><span class='tstring_content'>  # features (after): </span><span class='tstring_end'>'</span></span> <span class='op'>+</span> <span class='id identifier rubyid_re'>re</span><span class='period'>.</span><span class='id identifier rubyid_get_features'>get_features</span><span class='period'>.</span><span class='id identifier rubyid_size'>size</span><span class='period'>.</span><span class='id identifier rubyid_to_s'>to_s</span>


<span class='comment'># example 2
</span><span class='comment'>#
</span>

<span class='comment'># creating an ensemble of feature selectors by using 
</span><span class='comment'># two feature selection algorithms: InformationGain (IG) and Relief_d. 
</span><span class='comment'># note: can be 2+ algorithms, as long as they are of the same type, 
</span><span class='comment'># either filter_by_feature_weighting or filter_by_feature_searching
</span>
<span class='comment'># test for the type of feature weighting algorithms 
</span><span class='id identifier rubyid_r1'>r1</span> <span class='op'>=</span> <span class='const'>FSelector</span><span class='op'>::</span><span class='const'>IG</span><span class='period'>.</span><span class='id identifier rubyid_new'>new</span>
<span class='id identifier rubyid_r2'>r2</span> <span class='op'>=</span> <span class='const'>FSelector</span><span class='op'>::</span><span class='const'>Relief_d</span><span class='period'>.</span><span class='id identifier rubyid_new'>new</span><span class='lparen'>(</span><span class='int'>10</span><span class='rparen'>)</span>

<span class='comment'># an ensemble of two feature selectors
</span><span class='id identifier rubyid_re'>re</span> <span class='op'>=</span> <span class='const'>FSelector</span><span class='op'>::</span><span class='const'>EnsembleMultiple</span><span class='period'>.</span><span class='id identifier rubyid_new'>new</span><span class='lparen'>(</span><span class='id identifier rubyid_r1'>r1</span><span class='comma'>,</span> <span class='id identifier rubyid_r2'>r2</span><span class='rparen'>)</span>

<span class='comment'># read random discrete data (containing missing value)
</span><span class='id identifier rubyid_re'>re</span><span class='period'>.</span><span class='id identifier rubyid_data_from_random'>data_from_random</span><span class='lparen'>(</span><span class='int'>100</span><span class='comma'>,</span> <span class='int'>2</span><span class='comma'>,</span> <span class='int'>15</span><span class='comma'>,</span> <span class='int'>3</span><span class='comma'>,</span> <span class='kw'>true</span><span class='rparen'>)</span>

<span class='comment'># replace missing value because Relief_d 
</span><span class='comment'># does not allow missing value
</span><span class='id identifier rubyid_re'>re</span><span class='period'>.</span><span class='id identifier rubyid_replace_by_most_seen_value!'>replace_by_most_seen_value!</span>

<span class='comment'># number of features before feature selection
</span><span class='id identifier rubyid_puts'>puts</span> <span class='tstring'><span class='tstring_beg'>'</span><span class='tstring_content'>  # features (before): </span><span class='tstring_end'>'</span></span> <span class='op'>+</span> <span class='id identifier rubyid_re'>re</span><span class='period'>.</span><span class='id identifier rubyid_get_features'>get_features</span><span class='period'>.</span><span class='id identifier rubyid_size'>size</span><span class='period'>.</span><span class='id identifier rubyid_to_s'>to_s</span>

<span class='comment'># based on the max feature score (z-score standardized) among
</span><span class='comment'># an ensemble of feature selectors
</span><span class='id identifier rubyid_re'>re</span><span class='period'>.</span><span class='id identifier rubyid_ensemble_by_score'>ensemble_by_score</span><span class='lparen'>(</span><span class='symbol'>:by_max</span><span class='comma'>,</span> <span class='symbol'>:by_zscore</span><span class='rparen'>)</span>

<span class='comment'># select the top-ranked 3 features
</span><span class='id identifier rubyid_re'>re</span><span class='period'>.</span><span class='id identifier rubyid_select_feature_by_rank!'>select_feature_by_rank!</span><span class='lparen'>(</span><span class='tstring'><span class='tstring_beg'>'</span><span class='tstring_content'>&lt;=3</span><span class='tstring_end'>'</span></span><span class='rparen'>)</span>

<span class='comment'># number of features after feature selection
</span><span class='id identifier rubyid_puts'>puts</span> <span class='tstring'><span class='tstring_beg'>'</span><span class='tstring_content'>  # features (after): </span><span class='tstring_end'>'</span></span> <span class='op'>+</span> <span class='id identifier rubyid_re'>re</span><span class='period'>.</span><span class='id identifier rubyid_get_features'>get_features</span><span class='period'>.</span><span class='id identifier rubyid_size'>size</span><span class='period'>.</span><span class='id identifier rubyid_to_s'>to_s</span>
</code></pre>

<p><strong>3. feature selection after discretization</strong></p>

<pre class="code ruby"><code><span class='id identifier rubyid_require'>require</span> <span class='tstring'><span class='tstring_beg'>'</span><span class='tstring_content'>fselector</span><span class='tstring_end'>'</span></span>

<span class='comment'># the Information Gain (IG) algorithm requires data with discrete feature
</span><span class='id identifier rubyid_r'>r</span> <span class='op'>=</span> <span class='const'>FSelector</span><span class='op'>::</span><span class='const'>IG</span><span class='period'>.</span><span class='id identifier rubyid_new'>new</span>

<span class='comment'># but the Iris data set contains continuous features
</span><span class='id identifier rubyid_r'>r</span><span class='period'>.</span><span class='id identifier rubyid_data_from_url'>data_from_url</span><span class='lparen'>(</span><span class='tstring'><span class='tstring_beg'>'</span><span class='tstring_content'>http://repository.seasr.org/Datasets/UCI/arff/iris.arff</span><span class='tstring_end'>'</span></span><span class='comma'>,</span> <span class='symbol'>:weka</span><span class='rparen'>)</span>

<span class='comment'># let's first discretize it by ChiMerge algorithm at alpha=0.10
</span><span class='comment'># then perform feature selection as usual
</span><span class='id identifier rubyid_r'>r</span><span class='period'>.</span><span class='id identifier rubyid_discretize_by_ChiMerge!'>discretize_by_ChiMerge!</span><span class='lparen'>(</span><span class='float'>0.10</span><span class='rparen'>)</span>

<span class='comment'># number of features before feature selection
</span><span class='id identifier rubyid_puts'>puts</span> <span class='tstring'><span class='tstring_beg'>'</span><span class='tstring_content'>  # features (before): </span><span class='tstring_end'>'</span></span> <span class='op'>+</span> <span class='id identifier rubyid_r'>r</span><span class='period'>.</span><span class='id identifier rubyid_get_features'>get_features</span><span class='period'>.</span><span class='id identifier rubyid_size'>size</span><span class='period'>.</span><span class='id identifier rubyid_to_s'>to_s</span>

<span class='comment'># select the top-ranked feature
</span><span class='id identifier rubyid_r'>r</span><span class='period'>.</span><span class='id identifier rubyid_select_feature_by_rank!'>select_feature_by_rank!</span><span class='lparen'>(</span><span class='tstring'><span class='tstring_beg'>'</span><span class='tstring_content'>&lt;=1</span><span class='tstring_end'>'</span></span><span class='rparen'>)</span>

<span class='comment'># number of features after feature selection
</span><span class='id identifier rubyid_puts'>puts</span> <span class='tstring'><span class='tstring_beg'>'</span><span class='tstring_content'>  # features (after): </span><span class='tstring_end'>'</span></span> <span class='op'>+</span> <span class='id identifier rubyid_r'>r</span><span class='period'>.</span><span class='id identifier rubyid_get_features'>get_features</span><span class='period'>.</span><span class='id identifier rubyid_size'>size</span><span class='period'>.</span><span class='id identifier rubyid_to_s'>to_s</span>
</code></pre>

<p><strong>4. see more examples in the test_*.rb scripts under the test/ directory</strong></p>

<h2>How to contribute</h2>

<p>Check <a href="file.HowToContribute.html" title="HowToContribute">HowToContribute</a> to learn how to write your own feature selection algorithms and/or contribute to FSelector.</p>

<h2>Change Log</h2>

<p>A <a href="file.ChangeLog.html" title="ChangeLog">ChangeLog</a> is available from version 0.5.0 onward to reflect 
what&#39;s new and what&#39;s changed.</p>

<h2>Copyright</h2>

<p>FSelector &copy; 2012 by <a href="mailto:need47@gmail.com">Tiejun Cheng</a>.
FSelector is licensed under the MIT license. Please see the <a href="file.LICENSE.html" title="LICENSE">LICENSE</a> for
more information.</p>
</div></div>
    
    <div id="footer">
  Generated on Mon Nov  5 11:19:43 2012 by 
  <a href="http://yardoc.org" title="Yay! A Ruby Documentation Tool" target="_parent">yard</a>
  0.7.5 (ruby-1.9.3).
</div>

  </body>
</html>