package OddMuse::Database;

=head1 NAME

OddMuse::Database - Perl Module for OddMuse database classes

=head1 DESCRIPTION

This module provides a variety of methods used to manipulate and create
database entries for OddMuse.

=head1 DEPENDENCIES

L<MLDBM>, L<Time::Format>, L<Exporter>, L<OddMuse::Database::Logging>

=cut

use strict;
use warnings;

require Exporter;
our @ISA       = qw( Exporter );
our @EXPORT_OK = qw( $datafile $dataswap $debug );
our ($VERSION) = q$Revision: 30 $ =~ /(\d+)/;

use OddMuse::Database::Logging;
use MLDBM qw( DB_File Storable );

=head1 GLOBAL VARIABLES

=over 2

=item * $datafile - Where the configuration data lives

=item * $dataswap - Where to store temporary database while building

=item * $debug - Sets the text output verbosity when compiling

Please note that setting debug to higher values can increase the amount of time
it takes to finish building the database.  For example, on my Wiki, which
currently holds about 500 largish pages:

  * debug 1 - 17 seconds
  * debug 2 - 30 seconds
  * debug 3 - 34 seconds
  * debug 4 - 20 seconds ( loglevel 4 does not log during processing )

=item * %RecordCache - Caches Individual Records to speed up fetch requests

=back
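
For example, to trim build time you can import C<$debug> and lower the
verbosity before compiling the database (a minimal sketch using the
C<@EXPORT_OK> list above):

  use OddMuse::Database qw( $debug );
  $debug = 1;    # Least verbose; see the timing notes above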

=cut

our $datafile = $OddMuse::DataDir . '/metadata.db'; 
our $dataswap = '/tmp/metadata.db';              
our $debug = 1;     
our %RecordCache; 

=head1 SYNOPSIS

To connect to the datastore manually, try this.

  use OddMuse::Database;
  my $datastore = OddMuse::Database->new;
  
To create or modify data for a single record on the datastore, try something
like this.

  use OddMuse::Database;
  my $record = OddMuse::Database->newrecord;
  $record->parse ( $name, 
                   $OddMuse::Database::PageCache{$name},
                   $Page{username},
                   $Page{ts},
                   $Page{originalAuthor},
                   $Page{created}
                 );

To retrieve data from the datastore, try:

  use OddMuse::Database;
  my $record = OddMuse::Database->fetch( 'HomePage' );

  while ( my ( $source, $value ) = each %{ $record } ) { 
      if ( $source =~ /title/ ) {
          print "$value";
      }
  }

=head1 METHODS

=over 2

=item * $object->new();

Creates a new connection to the database for information retrieval.  Use this
function if you need to search through the database.  It is far faster to
simply tie to the database store than it is to make repeated $object->fetch
requests.

A complex data structure, keyed by WikiName, is returned in the following format.

 WikiName = (
   wikiname       => 'OfficialPageName', 
   redirects      => 'WikiWord', 
   title          => 'Title of Page', 
   subtitle       => 'A longer description',
   recentauthor   => 'UserName', 
   recentedit     => 1159367500,  
   originalauthor => 'UserName', 
   originaledit   => 1046861160, 
   cluster        => 'ClusterFoo',
   pagesize       => 1534, 
   backlinks      => [ 'NeatPage', 'HomePage' ],
   tags           => [ 'Foo', 'Bar' ],
   first          => 'A summary ...', 
   words          => 'Foo Bar Cat Blah Gloop ...'   
 )

The hash key is the WikiName.  Title and Subtitle refer to the "SmartTitle" and
"Smart SubTitle".  Recent Author and Original Author are usernames.  RecentEdit
and OriginalEdit are dates in epoch format.  Pagesize is in bytes.  Backlinks
and Tags are arrays embedded in the hash.  First and Words are simply long
strings of text.
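
Since new() hands back the entire tied hash, a search is just a loop over its
keys.  A minimal sketch (the search term is purely illustrative):

  use OddMuse::Database;

  my $datastore = OddMuse::Database->new;
  for my $wikiname ( keys %{ $datastore } ) {
      my $record = $datastore->{ $wikiname };
      next unless $record->{ 'words' } and $record->{ 'words' } =~ /wiki/i;
      print "$wikiname: $record->{'title'}\n";
  }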

=cut

sub new {
    my $class = shift;
    my %self;    # A named hash, not an anonymous ref: tie needs the hash itself

    tie %self, 'MLDBM', $OddMuse::Database::datafile
      or die "Cannot open file $OddMuse::Database::datafile $!\n";

    return \%self;    # Unblessed on purpose: callers use it as a plain tied hashref
} ## end sub new

=item * $object->newrecord();

Creates a new record in memory compatible with the database.  When you're done
manipulating its values, you'll probably want to associate this record with
the actual data store.  A simple example:

 tie my %database, 'MLDBM', $datafile
   or die "Cannot open file $datafile $!\n";
 my $record = OddMuse::Database->newrecord;

 ... do something to $record ...

 $database{ $name } = $record;
 untie %database;

=cut

sub newrecord {
    my $class = shift;

    my $self = { wikiname       => undef,
                 redirects      => undef,
                 title          => undef,
                 subtitle       => undef,
                 recentauthor   => undef,
                 recentedit     => undef,
                 originalauthor => undef,
                 originaledit   => undef,
                 cluster        => undef,
                 pagesize       => undef,
                 backlinks      => undef,
                 tags           => undef,
                 first          => undef,
                 words          => undef
               };

    bless( $self, $class );
    return $self;
} ## end sub newrecord

=item * $object->fetch( name );

Connects to the data store via $object->new and fetches a single page record.
The hash structure returned is nearly identical to newrecord, except that it
is one hash dimension smaller (the outer WikiName key is not needed).  So to
fetch a page's title attribute, you might do something like:

  my $record = OddMuse::Database->fetch( 'HomePage' );
  print $record->{ 'title' };

Do not use the fetch method in a loop to retrieve data about multiple pages in
the database, as each and every fetch request must tie to and then tear down a
connection to the datastore (i.e., it will be slow).  If you need to mine data
about more than one page, I suggest using $object->new and extracting the data
manually.
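
For example, to pull one attribute from several known pages, tie once via
new() and read from the returned hash (the page names here are illustrative):

  use OddMuse::Database;

  my $datastore = OddMuse::Database->new;
  for my $name ( 'HomePage', 'SiteMap' ) {
      my $record = $datastore->{ $name } or next;
      print "$name last edited at $record->{'recentedit'}\n";
  }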

=cut

# Grab a single record from the database
sub fetch {
    my ( $class, $name ) = @_;
    my ( $wikiname,     $redirects,  $title,          $subtitle,
         $recentauthor, $recentedit, $originalauthor, $originaledit,
         $cluster,      $pagesize,   @backlinks,      @tags,
         $first,        $words
       );
    # Use cached copy if possible; check before tying to the disk store
    if ( $RecordCache{ $name } ) {
        print "<p><strong>Cache used on $name</strong></p>" if $debug;
        return $RecordCache{ $name };
    }

    my $ondisk = $class->new();    # Connect to disk store

    # Search for and decouple values from reference.
    for my $source ( keys %{ $ondisk } ) {
        if ( $source eq $name ) {    # Exact match is safer than a regex here
            $wikiname       = $ondisk->{ $source }{ 'wikiname' };
            $redirects      = $ondisk->{ $source }{ 'redirects' };
            $title          = $ondisk->{ $source }{ 'title' };
            $subtitle       = $ondisk->{ $source }{ 'subtitle' };
            $recentauthor   = $ondisk->{ $source }{ 'recentauthor' };
            $recentedit     = $ondisk->{ $source }{ 'recentedit' };
            $originalauthor = $ondisk->{ $source }{ 'originalauthor' };
            $originaledit   = $ondisk->{ $source }{ 'originaledit' };
            $cluster        = $ondisk->{ $source }{ 'cluster' };
            $pagesize       = $ondisk->{ $source }{ 'pagesize' };
            @backlinks      = $ondisk->{ $source }{ 'backlinks' };
            @tags           = $ondisk->{ $source }{ 'tags' };
            $first          = $ondisk->{ $source }{ 'first' };
            $words          = $ondisk->{ $source }{ 'words' };
        } ## end if ( $source eq $name )
    } ## end for my $source ( keys %...

    # Now build the record.
    my $self = {    # Anonymous hash
                 wikiname       => $wikiname,
                 redirects      => $redirects,
                 title          => $title,
                 subtitle       => $subtitle,
                 recentauthor   => $recentauthor,
                 recentedit     => $recentedit,
                 originalauthor => $originalauthor,
                 originaledit   => $originaledit,
                 cluster        => $cluster,
                 pagesize       => $pagesize,
                 backlinks      => [@backlinks],
                 tags           => [@tags],
                 first          => $first,
                 words          => $words
               };

    $RecordCache{ $name } = $self;    # Cache the fetched object
    bless $self, $class;
    return $self;
} ## end sub fetch

=item * $object->fetch_backlinks( name );

Returns a hash of hashes with data suitable for rendering a list of backlinks.
The returned hash format looks like:

 WikiName = (
    wikiname       => 'WikiName',
    title          => 'Page Title'
 )

If there is no title, the values of wikiname and title will be identical.  This
way the list of backlinks can be displayed in more human-friendly terms than
simply a CamelCase name.
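
To render such a list, iterate over the returned hash reference.  A minimal
sketch:

  use OddMuse::Database;

  my $backlinks = OddMuse::Database->fetch_backlinks( 'HomePage' );
  for my $name ( sort keys %{ $backlinks } ) {
      print "$backlinks->{$name}{'title'} ($name)\n";
  }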

=cut

sub fetch_backlinks {
    my ( $class, $name ) = @_;
    my $record = $class->fetch( $name );

    my %self;                         # Not an anonymous hash

    while ( my ( $source, $value ) = each %{ $record } )
    {                                 # There is only 1 record in a fetch
        if ( $source =~ /backlinks/ ) {

            foreach my $linkarray ( @{ $value } ) {
                foreach my $backlink ( @{ $linkarray } ) {

                    my $linkrecord = $class->fetch( $backlink );

                    my $linkinfo = $self{ $backlink };
                    $linkinfo->{ 'wikiname' } = $linkrecord->{ 'wikiname' };
                    if ( $linkrecord->{ 'title' } ) {    # Does a title exist?
                        $linkinfo->{ 'title' } = $linkrecord->{ 'title' };
                    } else {    # If not, its title is WikiName
                        $linkinfo->{ 'title' } = $linkrecord->{ 'wikiname' };
                    }
                    $self{ $backlink } = $linkinfo;
                } ## end foreach my $backlink ( @{ $linkarray...
            } ## end foreach my $linkarray ( @{ ...
        } ## end if ( $source =~ /backlinks/)
    } ## end while ( my ( $source, $value...
    return \%self;
} ## end sub fetch_backlinks

=item * $object->parse( name, text, lastuser, lastedit, origuser, origedit );

Creates a single new entry in the database, where name is the PageName, text is
the entirety of the page text (typically C<$Page{text}>), lastuser is the last
username to edit the page, lastedit is the last edited timestamp (epoch),
origuser is the original page creator username, and origedit is the original
page creation timestamp.  See the above data structures for an idea of what it
will return.

This function is normally only useful when creating or modifying the data
store.
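
Putting it together, a full build pass over the page cache might look like the
sketch below (C<%OddMuse::Database::PageCache> and C<%Page> are assumed to be
populated as in the SYNOPSIS):

  use OddMuse::Database;

  tie my %database, 'MLDBM', $OddMuse::Database::dataswap
    or die "Cannot open file $OddMuse::Database::dataswap $!\n";

  for my $name ( keys %OddMuse::Database::PageCache ) {
      my $record = OddMuse::Database->newrecord;
      $record->parse( $name,
                      $OddMuse::Database::PageCache{$name},
                      $Page{username},
                      $Page{ts},
                      $Page{originalAuthor},
                      $Page{created}
                    );
      $database{ $name } = $record;
  }
  untie %database;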

=cut

sub parse {

    my $self           = shift;    # the reference
    my $wikiname       = shift;    # WikiName
    my $rawtext        = shift;    # Text to process
    my $recentauthor   = shift;    # passed as argument
    my $recentedit     = shift;    # passed as argument
    my $originalauthor = shift;    # passed as argument
    my $originaledit   = shift;    # passed as argument

    # Page Name and Size
    $self->{ 'wikiname' } = $wikiname;
    log2( "NAME:\t$wikiname:" );
    $self->{ 'pagesize' } = int( length( $rawtext ) );
    log2( " $self->{'pagesize'} bytes\n" );

    # Page Cluster
    $self->{ 'cluster' } = _extractCluster( $rawtext );
    log2( "\tStoring cluster: $self->{'cluster'}\n" ) if $self->{ 'cluster' };

    # Is page a redirect?
    $self->{ 'redirects' } = _extractRedirects( $rawtext );
    log2( "\tStoring redirect: $self->{'redirects'}\n" )
      if $self->{ 'redirects' };

    # Page #TITLE: (if any)
    $self->{ 'title' } = _extractTitle( $rawtext );
    log2( "\tStoring title: $self->{'title'}\n" ) if $self->{ 'title' };

    # Page #SUBTITLE: (if any)
    $self->{ 'subtitle' } = _extractSubTitle( $rawtext );
    log2( "\tStoring subtitle: $self->{'subtitle'}\n" )
      if $self->{ 'subtitle' };

    # Last Change data
    log2( "\tStoring Author and edit data.\n" );
    $self->{ 'recentauthor' } = $recentauthor;
    log3( "\t\tMost Recent Author: $self->{'recentauthor'}\n" );
    $self->{ 'recentedit' } = $recentedit;
    log3( "\t\tMost Recent Edit: $self->{'recentedit'}\n" );
    $self->{ 'originalauthor' } = $originalauthor;
    log3( "\t\tOriginal Author: $self->{'originalauthor'}\n" );
    $self->{ 'originaledit' } = $originaledit;
    log3( "\t\tOriginal Edit: $self->{'originaledit'}\n" );

    # Tags
    $self->{ 'tags' } = [ _extractTags( $rawtext ) ];
    log2( "\tStoring tags: " )   if @{ $self->{ 'tags' } };
    log3( "@{$self->{'tags'}}" ) if @{ $self->{ 'tags' } };
    log2( "\n" )                 if @{ $self->{ 'tags' } };

    # Backlinks
    $self->{ 'backlinks' } = [ _extractBacklinks( $wikiname ) ];
    log2( "\tStoring backlinks: " )
      if @{ $self->{ 'backlinks' } };
    log3( "@{$self->{'backlinks'}}" )
      if @{ $self->{ 'backlinks' } };
    log2( "\n" ) if @{ $self->{ 'backlinks' } };

    # Search Summary Paragraph
    $self->{ 'first' } = _extractFirst( $rawtext );
    log2( "\tStoring Search Summary Paragraph" ) if $self->{ 'first' };
    log3( " $self->{'first'}" )                  if $self->{ 'first' };
    log2( "\n" )                                 if $self->{ 'first' };

    # Indexable data
    $self->{ 'words' } = _extractWords( $rawtext );
    log2( "\tStoring Page Content for Searching.\n" ) if $self->{ 'words' };

    return $self;
} ## end sub parse

=back

=head2 PRIVATE FUNCTIONS

=over 2

=item * _extractRedirects

=cut

sub _extractRedirects {
    if ( (     $OddMuse::FreeLinks
           and $_[0] =~ /^\#REDIRECT\s+\[\[$OddMuse::FreeLinkPattern\]\]/
         )
         or (     $OddMuse::WikiLinks
              and $_[0] =~ /^\#REDIRECT\s+$OddMuse::LinkPattern/ )
       )
    {
        return $1;
    } ## end if ( ( $OddMuse::FreeLinks...
} ## end sub _extractRedirects

=item * _extractTitle

=cut

sub _extractTitle {
    $_[0] =~ m/\#TITLE[ \t]+(.*?)\s*\n+/;
    return $1;
}

=item * _extractSubTitle

=cut

sub _extractSubTitle {
    $_[0] =~ m/\#SUBTITLE[ \t]+(.*?)\s*\n+/;
    return $1;
}

=item * _extractCluster

=cut

sub _extractCluster {
    if ( ( $OddMuse::WikiLinks and $_[0] =~ /^($OddMuse::LinkPattern)/cgo )
         or (     $OddMuse::FreeLinks
              and $_[0] =~ /^(\[\[$OddMuse::FreeLinkPattern\]\])/cgo )
       )
    {
        return $1;
    }
} ## end sub _extractCluster

=item * _extractTags

=cut

sub _extractTags {
    return ( $_[0] =~ m/\[\[tag:$OddMuse::FreeLinkPattern\]\]/g,
             $_[0] =~ m/\[\[tag:$OddMuse::FreeLinkPattern\|([^]|]+)\]\]/g );
}

=item * _extractBacklinks

This function is very CPU and memory intensive, as it searches the entire page
cache for backlinks.  Also, extending C<$LinkPattern> can REALLY slow things
down.  You've been warned.  Here are the results of a test on my PC.

=over 2

=item * ([A-Z]+[a-z\x80-\xff]+[A-Z][A-Za-z\x80-\xff]*)$QDelim

17 Seconds

=item * (([A-Z]+[A-Z]+[a-z\x80-\xff]*| (plus above)

152 seconds.  That's almost 900% slower!

=back

=cut

sub _extractBacklinks {
    my ( @unique, %seen );

    while ( my ( $source, $pagetext ) = each %OddMuse::PageCache ) {
        my @links = $pagetext =~ /$OddMuse::LinkPattern/g;
        foreach my $link ( @links ) {
            if ( $link eq $_[0] ) {    # Exact name match
                push( @unique, $source )
                  unless ( ( $seen{ $source }++ ) or ( $_[0] eq $source ) );
            }
        }
    } ## end while ( my ( $source, $pagetext...
    return @unique;
} ## end sub _extractBacklinks

=item * _extractWords

=cut

sub _extractWords {
    $_[0] =~ s/\#TITLE[ \t]+(.*?)\s*\n+//g;
    $_[0] =~ s/\#SUBTITLE[ \t]+(.*?)\s*\n+//g;
    $_[0] =~ s/\[\[$OddMuse::FreeLinkPattern\]\]//g;
    $_[0] =~ s/\n/\ /g;
    $_[0] =~ s/\ tag://g;
    return $_[0];
} ## end sub _extractWords

=item * _extractFirst

Possibly the most complex of these functions.  It performs quite a bit of
filtering at different stages of it's execution.  I'm always tweaking this
function to provide more succinct search summaries.

=cut

sub _extractFirst {
    my @paragraphs = ( $_[0] =~ m/(.*)\n\n/cog );
    my ( @filtered, @result, $wordcount, $temp );
    foreach my $para ( @paragraphs ) {
        unless (
            ( $para =~ m/^[\#\*]/ )    or    # Drop lists and MetaData
            ( $para =~ /^=.*/ )        or    # Drop Headers
            ( $para =~ /^<toc>/ )      or    # No Toc
            ( $para =~ /\[\[image.*/ ) or    # No image urls
            (  (  $OddMuse::WikiLinks && $para =~ /^($OddMuse::LinkPattern)/cgo
               )
               or                            # No Clusters
               (     $OddMuse::FreeLinks
                  && $para =~ /^(\[\[$OddMuse::FreeLinkPattern\]\])/cgo
               )                             # No Clusters
            )
          )
        {
            push( @filtered, $para );
        } ## end unless ( ( $para =~ m/^[\#|\*].*/...
    } ## end foreach my $para ( @paragraphs)
    return unless @filtered;    # Nothing usable survived the filters
    my @words = split( /\s/, $filtered[0] );
    foreach my $word ( @words ) {
        if ( $wordcount < 30 ) {
            $word =~ s/[\[\]]//g;
            $word =~ s/$OddMuse::UrlPattern//g;
            $word =~ s/$OddMuse::FullUrlPattern//g;
            push( @result, $word );
            $wordcount++;
        }
    } ## end foreach my $word ( @words )
    $temp = pop( @result );
    $temp =~ s/\.+$//;    # Strip only a trailing period
    push( @result, $temp );
    push( @result, '...' );
    my $output = join( ' ', @result );
    return $output;
} ## end sub _extractFirst

1;
__END__

=back

=head1 BUGS AND LIMITATIONS

No bugs have been reported.

Please report any bugs or feature requests to C<cmauch@gmail.com>

=head1 AUTHOR

Charles Mauch <cmauch@gmail.com>

=head1 LICENSE

Copyright (c) 2006 Charles Mauch

This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; either version 2 of the License, or (at your option) any later
version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.  See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with
this program; if not, write to the Free Software Foundation, Inc., 51 Franklin
Street, Fifth Floor, Boston, MA  02110-1301, USA.

=head1 SEE ALSO

perl(1).

=cut

# $Id: Database.pm 30 2006-09-29 06:19:20Z cmauch $
