# The Lore of Auk
# Tim Menzies
# September 12, 2012

"""
Tips
====

These tips are arranged in a specific order. At
first, I will list general tips that apply to AWK
and to many other languages as well (use version
control and configuration management).

AWK is a stable, cross-platform computer language
named for its authors Alfred Aho, Peter Weinberger
and Brian Kernighan.  In the distribution notes for
the language, they write "AWK is a convenient and
expressive programming language that can be applied
to a wide variety of computing and data-manipulation
tasks".

More modern fans of AWK include Arnold Robbins and
Nelson Beebe, who write in their book _Classic Shell
Scripting_: "We like it. A lot. The
simplicity and power of AWK often make it just the
right tool for the job."


There are many uses for the following system, so how
you use it depends on your own background and goals.

Brian Kernighan describes the language's history as
follows: "Al Aho, Peter Weinberger and I created Awk
[awkbook] in 1977; around 1981 I became the de facto
owner and maintainer of the source code, a position
that I still hold. The language is so small and
simple that it remains a widely used tool for data
manipulation and analysis and for basic scripting,
though there are now many other scripting languages
to choose from. There are multiple implementations,
of which GNU's Gawk is the most widely used, and
there is a POSIX standard.

"The language itself is small, and our
implementation [source] reflects that. The first
version was about 3000 lines of C, Yacc and Lex;
today, it is about 6200 lines of C and Yacc, Lex
having been dropped for reasons to be discussed
later. The code is highly portable; it compiles
without ifdefs and without change on most Unix and
Linux systems and on Windows and Mac OS X. The
language itself is stable; although there is always
pressure to add features, the purveyors of various
implementations have taken a hard line against most
expansion. This limits the scope of Awk's
application but simplifies life for everyone."


Some aspects of Awk are themselves almost complete languages, for instance regular expressions, substitution with sub and gsub, and expression evaluation. 
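For instance, here is a small, self-invented illustration of `gsub`, which replaces every match of a regular expression in the current line and returns the number of substitutions made:

```shell
# Replace every "at" with "og"; gsub returns the number of changes.
echo "the cat sat on the mat" |
  awk '{ n = gsub(/at/, "og"); print n, $0 }'
# → 3 the cog sog on the mog
```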

Awk was originally meant for programs only a line or
two long, often composed at the command-line prompt;
its creators were surprised when people began
creating larger programs.

Traditionally, AWK has been a tool for solo
programmers building small scripts. AUK was
developed in response to a pressing need within the
AWK community: that community has no standards for
packaging, testing, documenting, and sharing code. I
became acutely aware of those needs while working on
the web site `http://awk.info`.
In 2008, Arnold Robbins (the long-time maintainer of
GNU AWK) asked for help in running an AWK
information web site.  I volunteered and for a year
had much fun finding and archiving fantastic
examples of the creativity of AWK programmers.  AWK
programmers are passionate about their favorite
language, which they use to build fascinating tools
such as:

+ Darius Bacon's AwkLisp project that implements McCarthy's LISP eval
  function. This distribution includes a full implementation, in LISP,
  of the famous ELIZA program, which is parsed and executed in AwkLisp.
+ Object-oriented tools such as Axel Schreiner's
  AWK pre-processor for ANSI-C and Jim Hart's pure oo-in-AWK system called AWK++. 
+ Jon Bentley's m1 macro pre-processor. This is a great example
  of just how much can be achieved with very little AWK.
+ Henry Spencer's Amazing Awk Formatter. This is
  code with an interesting history. For a period of
  time, licensing issues meant that programmers lost
  access to some UNIX formatting tools. To fix this,
  Spencer recreated all those tools in AWK.
+ AWK extensions are being built all the time
  including JAWK (AWK in the JAVA virtual machine) and
  XGAWK (XML parsers, combined with AWK). GNU AWK,
  also known as GAWK, was recently updated with new
  interactive debuggers and methods for communicating
  to remote processors via IPv6.

The simplicity of AWK makes it a compelling tool for many tasks.

I am a huge fan of AWK or, to be more specific, GAWK
(the GNU version of AWK). That is not to say that I
am unaware of the benefits of other languages:

+  I teach third-year undergraduate programming languages to 
   computer scientists.
+  I have programmed professionally in Prolog, LISP, CLIPS and Smalltalk.
+  For my own academic work, I have built large and elaborate programs in 
   Bash, Lua, Ruby and Python.  
+  Also, I have  dabbled some in Haskell, CoffeeScript and Scheme.  

For the reader interested in the above languages, I
recommend Bratko's excellent Prolog textbook; the
LISP texts of Peter Seibel, Peter Norvig, and Paul
Graham; _Learn You a Haskell for Great Good!_;
Robbins and Beebe's _Classic Shell Scripting_;
_Structure and Interpretation of Computer Programs_;
and _Seven Languages in Seven Weeks_.


I am well aware of the benefits of monads,
iterators, error handling, inheritance, garbage
collection and all the other advanced programming
features that are absent in AWK.

Yet I keep coming back to AWK (or, more
specifically, to the GNU version of AWK called
GAWK). Most of my application work is in data
mining, and GAWK serves that work well.

Nevertheless, while AWK is a powerful language, its
community is not as strong as that of other
languages. Python programmers, for example, can find
extensive support for their work at
`stackoverflow.com`; by comparison, the AWK
community suffers from a lack of tool and package
support. AUK is my response to that gap, and it
offers different things to different readers:

+ For experienced programmers, AUK offers a method for
  documenting, testing, packaging and sharing code.
  This is important since there is no established method
  for such sharing in the AWK community.
+ If you are a novice programmer, AUK is a set of
  productivity tricks that took me several years to learn, but are
  quick and simple to apply. Bon appétit!
+ If you are a novice UNIX programmer, AUK is a case study
  in how to assemble interesting functionality by combining
  together the standard UNIX tools.


Good GAWK
=========

_Because easy is not wrong._ -- Ronald Loui

Gawk is a mature language, first implemented in the
1970s and now available on most platforms.  It can
be installed from the standard package managers for
OS X and LINUX.  Also, the WINDOWS version of Gawk
does not use the registry; i.e., after a few quick
downloads, any user can run the code.  

A tool from
the golden age of Unix, Gawk is sometimes called
_primitive_. It is more accurate to call it 
_elemental_, so tightly focused is it on what it does
best: quickly converting "this" into "that".
Gawk supports smart automatic variable type
handling; simplified file handling; and regular
expressions and pattern processing.  It also has
simple and powerful string handling and processing
functions, plus associative arrays implemented as
hash tables (where strings are accepted as array
indices).
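As a small illustration of those associative arrays (the data here is invented), a word-frequency counter needs just two lines of Awk:

```shell
# Count word occurrences using an associative array indexed by strings.
printf 'tin tin tin\ncan can\n' |
  awk '{ for(i=1; i<=NF; i++) n[$i]++ }
       END { for(w in n) print w, n[w] }' |
  sort   # "for(w in n)" order is arbitrary, so sort for display
```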

As an example of a Gawk program, here is
a Naive Bayes classifier written in that language.
This
classifier reads a CSV file, where the class label
is found in the last column (located via the
variable _NF_, which is short for "number of fields").
To understand the following code, note
that

     $i, $NF

refers to the i-th cell and the last cell
(respectively) of the current line. Also, Gawk
supports pattern-oriented programming; e.g.

    Pass==1 {train()}

In the above, `train` is called for all lines of
input when `Pass==1`. Finally, the somewhat arcane
idiom

    if (++Seen[i,$i] == 1)

counts how often a symbol has been seen in a certain
column. Due to the use of the prefix increment (the
`++`), this condition is true only the first time we
see that particular symbol.

    #naive bayes classifier in gawk
    #usage:  gawk -F, -f nbc.awk Pass=1 train.csv Pass=2 test.csv
    Pass==1 {train()}
    Pass==2 {print $NF"|"classify()} #show expected & guessed class
    
    function train(    i,h) { 
       Total++;
       h=$NF;     # the hypothesis is in the last column
       H[h]++;    # remember how often we have seen "h"
       for(i=1;i<=NF;i++) {
         if ($i=="?") continue;  # skip unknown values
         Freq[h,i,$i]++
         if (++Seen[i,$i]==1) 
               Attr[i]++}  # remember unique values
    }
    function classify(         i,temp,what,like,h) {  
       like = -100000;         # smaller than any log
       for(h in H) {           # for every hypothesis, do...  
         temp=log(H[h]/Total); # logs stop numeric errors
         for(i=1;i<NF;i++) {  
           if ( $i=="?" ) continue; # skip unknown values
           temp += log((Freq[h,i,$i]+1)/(H[h]+Attr[i])) } # smooth over column i
         if ( temp >= like ) { #  better hypothesis
            like = temp
            what=h}}
       return what;
    }
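To check the classifier, save the code above as `nbc.awk` and give it a small data set. The weather table below is invented for this sketch; the classifier prints one `expected|guessed` pair per test row.

```shell
cd "$(mktemp -d)"
# The classifier from above (with Laplace smoothing over column i's values).
cat > nbc.awk <<'EOF'
Pass==1 {train()}
Pass==2 {print $NF"|"classify()}
function train(    i,h) {
   Total++; h=$NF; H[h]++
   for(i=1;i<=NF;i++) {
     if ($i=="?") continue
     Freq[h,i,$i]++
     if (++Seen[i,$i]==1) Attr[i]++ }
}
function classify(    i,temp,what,like,h) {
   like = -100000
   for(h in H) {
     temp = log(H[h]/Total)
     for(i=1;i<NF;i++) {
       if ($i=="?") continue
       temp += log((Freq[h,i,$i]+1)/(H[h]+Attr[i])) }
     if (temp >= like) { like=temp; what=h }}
   return what
}
EOF
printf 'sunny,hot,no\nsunny,mild,no\nrain,cool,yes\nrain,mild,yes\n' > train.csv
printf 'sunny,hot,no\n' > test.csv
awk -F, -f nbc.awk Pass=1 train.csv Pass=2 test.csv
# → no|no
```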

Gawk is not necessarily better than Ruby or Perl or
Javascript or Python or Scala or Matlab or "R" or
Bash script or Lisp or CoffeeScript or Prolog or
(insert your favorite language here).  But Gawk's
syntax is fairly standard (sort of like ``C'') so it
code is understandable to a very large audience.
The more advanced features of newer languages
_might_ be better for particular tasks.  But to
ensure a larger audience, it is probably wise not to
use advanced programming constructs (e.g. closures
and functional programming), no matter how tempting
they appear.
Hence, I use Gawk for teaching beginners about
data mining.
For example, here is my introductory example
on how to build a data miner.  Using the above
Naive Bayes classifier code, I explain
the basics of classifiers and Gawk.
Students can then perform tasks such as:

+   Modify the code so that there is no separate train/test data. 
    Instead, make it an incremental learner.
    Hint: call `classify`, then `train`, on every 
    line of input. Note that the order is important: 
    always `classify` before `train`ing so that the 
    results are always on unseen data.
+   Implement an anomaly detector. Hints: 1) make all 
    training examples get the same class; 2) report a test 
    instance as anomalous if its likelihood is less than _1/a_ 
    times the mean likelihood seen during training (the variable _a_
    needs tuning, but _a=2_ is often useful).

Interpreted Gawk programs run slower than programs
in compiled languages like C or C++.  On the other
hand, in most cases it is much faster and easier to
write a short Gawk script than to implement the same
code in C or C++.
For an example of that brevity: we once spent a few
days coding and testing random binary search trees
in JavaScript. After all that work, I dumped that
code in favor of a simple one-line Gawk script:

    at[column][value][row] = $class

This creates three nested arrays that store which
class values were found in the rows holding a
particular value for a particular column.
Because Gawk uses associative arrays, this one-liner
lets us quickly find (e.g.) all the rows holding a
particular value: 

    for(column in at) 
      for(value in at[column]) 
        for(row in at[column][value]) 
           print row

The one-liner `at[column][value][row]` is not fully
equivalent to our 400 lines of JavaScript;
e.g. unlike our random binary search trees, the Gawk
code does not accumulate least-recently accessed
items in the leaves.  However, it is very hard to
justify the overhead of building, testing and
maintaining the Javascript version given the extreme
simplicity of the Gawk code.

Gawk is easy to learn since the language avoids some
of the more confusing aspects of other languages
like "C". 
For example, Gawk has no pre-processor; no
pointers or addresses; no structures or unions; no
closures; and only uses a few dozen simple library
functions. As an aside, we note that extensive
libraries can be a blessing or a curse.  I once
watched two programmers debate Python vs Gawk.  One
of the programmers really liked Python since, she
said, it supports and uses more code libraries than
Gawk. The other programmer liked Gawk, for exactly
the same reason.


As for my own work, a nice example of the power of
Gawk is TINYTIM, a web-site content management
system. In the great spam attack of 2009, Google
stopped indexing my PHP WordPress sites since they
kept getting hacked. WordPress is a powerful system,
but a full install of that web content management
system, plus its standard extensions, becomes
hundreds of files spread over dozens of nested
directories. This makes it hard to find and remove
the hacked files. Something had to be done.

Enter Gawk. After surveying the way I used
WordPress, I realized that all I was ever really
doing was serving the same template page with some
different local content.  TINYTIM implemented this
core functionality via a GAWK CGI script that took
as its only parameter the name of the page to serve.
TINYTIM combined the named file with a template,
then printed the result to the screen with the
appropriate three-line CGI header.
The whole system was a ten line CGI bash script plus
80 lines of GAWK, plus another 80 lines for the
template file, plus thousands of lines for the
individual page files, plus assorted image files.
Since I was security obsessed, the serving machine
only hosted the CGI bash script while everything
else was held in a GoogleCode subversion repository.
Each time a new page was required, the bash script
ran `wget` to download the latest version of the
template, the GAWK file, and the file for the named
page.  Typically, this was less than 500 lines of
ASCII, which downloaded in less than 0.3 seconds
(depending on network conditions).  After that, it
was nearly instantaneous to run GAWK to fill in the
template and start serving the page.  In fact, the
actual page load time was mostly determined by how
many images it pulled from GoogleCode. 

In practice, the resulting pages loaded much faster
than WordPress. Better yet, if the site was hacked,
then the next time anyone loaded any page, the whole
site was reset from the GoogleCode side.

TINYTIM found its way onto the REDDIT social
networking site in Sept 2010.  As a result, in two
days, 1100 visitors surfed to the TINYTIM download
page, leaving comments like:

+ Great to hear about cool stuff done with  little languages, in this case 
  a classic little language that's been long eclipsed by its 800 ton 
  mutant offspring, Perl. (The above is not meant as disrespect to 
  800 ton mutants.)
+ Cool. Nice to see someone with an appreciation of simple tools.

As seen in this last comment, Gawk has a reputation
for simplicity; the simplicity and uniformity of the
language are often mentioned by its proponents.
For example, Ronald Loui prefers
using Gawk to teach AI to computer science students.
He writes:

"In the puny language, GAWK, which Aho, Weinberger,
and Kernighan thought not much more important than
grep or sed, I find lessons in AI's trends, AI's
history, and the foundations of AI. What I have
found not only surprising but also hopeful, is that
when I have approached the AI people who still enjoy
programming, some of them are not the least bit
surprised... There are some similarities between
GAWK and LISP that are illuminating. Both provided a
powerful uniform data structure (the associative
array implemented as a hash table for GAWK and the
S-expression, or list of lists, for LISP). Both were
well-supported in their environments (GAWK being a
child of UNIX, and LISP being the heart of lisp
machines). Both have trivial syntax and find their
power in the programmer's willingness to use the
simple blocks to build a complex approach."


Uninstallable
=============

The first rule of a software system is that it
should be safe to try since it has no side-effects.
That is, the code should be removable.  Accordingly,
the following system lives in three directories:

+  A working directory, which I keep in 
   `~/svns/auk`;
+  An install directory, `$HOME/opt/auk`, 
   where it places software;
+  A scratch directory, `~/tmp`, where it 
   writes temporaries.

So, to uninstall this system:

    rm $HOME/tmp/*
    rm -rf $HOME/svns/auk $HOME/opt/auk

Sharable
========

The second rule of a software system is that
programmers can access the code. I strongly
recommend that programmers store their code in one
of the many excellent (and free) on-line code
repositories such as GitHub, SourceForge, or
GoogleCode. 

One advantage of these on-line tools is that they
offer an issue-tracking system. By using such issue
trackers, the code store can become the focus of a
community of programmers extending functionality
while also finding and fixing bugs.




Tip 2: Use Configuration Management
===================================

When working in a LINUX-style environment,
programming generates many derived or temporary
files. For example, when I work with AWK, I generate
files such as those listed below. Configuration
management is the process of keeping these files out
of the repository, while auto-generating them when
required.

```
~/.cabal/bin/pandoc
~/opt/auk/
     bin/pgawk
     doc/
       X.epub
       X.md
       X.html
       X.pdf
       X.tex
     lib/
       X.awk
~/tmp
    awkprof_X.out
    awkvars_X.out
    h.sed
    X.aux
    X.bbl
    X.log
    X.toc
```

My configuration management tool is MAKE. MAKE lets
a programmer define _targets_ that are built from
their _dependencies_ using a set of _commands_:

```
target : dependent1 dependent2 ...
   command1
   command2
   ...
```
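As a tiny, self-contained illustration of that pattern (the file names here are invented, not part of AUK):

```shell
cd "$(mktemp -d)"
# One pattern rule: any .upper target is built from the matching .txt dependent.
printf '%%.upper : %%.txt\n\ttr a-z A-Z < $< > $@\n' > Makefile
echo hello > greet.txt
make greet.upper   # runs: tr a-z A-Z < greet.txt > greet.upper
cat greet.upper    # → HELLO
make greet.upper   # second call is a no-op: the target is up to date
```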

For example, my MAKE rules look for certain
executables and, if they are absent, download and
install them from the web (thus creating the
`pandoc` and `pgawk` commands shown above).
Also, my build system auto-generates documentation
from the comments in the source code of my files.
That is, if there is a source file _X.awk_,
then my build system auto-creates:

+ _X.md_  : Source code documented using John
            Gruber's Markdown syntax [^1]. Markdown is an
            ultra-lightweight markup language that is useful for
            rapidly generating documentation.
+ _X.epub_: .epub reader files, built from the Markdown.
+ _X.html_: HTML files generated from the Markdown file.
+ _X.pdf_ : .pdf files, also generated from the Markdown file. 
+ _X.tex_ : AUK uses an intermediary .tex file to control
            the conversion from the .md Markdown files to the
            .pdf file. 


[^1]: http://daringfireball.net/projects/markdown/

To make the .md, .epub, .html, .pdf, and .tex
files, this build system uses
John MacFarlane's PANDOC document format conversion tool [^2].  
For example, consider
rule1, shown below.  Rule1 uses the PANDOC _command_
to build an html _target_ from a Markdown
_dependent_. Note the following conventions: `%` is
a wildcard that denotes any file; `$@` is the target
and `$<` is the dependent.

[^2]: http://johnmacfarlane.net/pandoc/

```
# rule 1 : from .md Markdown files to .html files.
$(Doc)%.html : $(Doc)%.md  
        @echo $@ 1>&2
        cp etc/$(Css) $(Doc)$(Css)
        @$(Pandoc) -s -S --toc -c $(Css) \
           --bibliography=my.bib          \
           --highlight-style kate          \
           --mathjax  $< -o  $@
```

Rule1 knows to build the .html file using some
Javascript tricks for displaying mathematical
equations (this is the `--mathjax` flag).  Rule1
includes two customization options for the generated
html:

+    The .html can reference articles defined in 
     a `my.bib` bibliography file.
+    The visual appearance of the .html can be tuned 
     via the cascaded style sheet `Css`.

Rule2, shown below, generates .pdf files.
This rule uses the `pdflatex`
_command_, which creates several temporary files such
as `X.aux`, `X.bbl`, `X.log` and `X.toc`.  In order
to keep the source code directories clean, the
following call to `pdflatex` stores these
temporaries in $HOME/tmp:

```
# rule 2 : from .tex Tex files to .pdf files
$(Doc)%.pdf: $(Doc)%.tex
        @echo $@ 1>&2
        pdflatex --output-directory $(HOME)/tmp $<      
        cp $(HOME)/tmp/$(shell basename $@) $@
```

(Aside: In the above, the last line is needed to
copy the generated PDF file over to
$HOME/opt/auk/doc.)

MAKE knows how to cascade updates. For example,
recall that the above rule converted .tex files to
.pdf. But where does the .tex file come from?
According to the following rule, MAKE generates the
.tex from .md Markdown files:

```
#rule 3 : from .md Markdown files to .tex Tex files 
$(Doc)%.tex: $(Doc)%.md
        @echo $@ 1>&2
        $(Pandoc) --standalone \
            --template=etc/default.latex --mathml \
            --bibliography=my.bib -s $< -o $@
        sed -i -f etc/tex.sed  $@ 
```

At this point, inexperienced programmers start
wondering if this is all really necessary. Surely,
they might think, we can remember to manually run
these updates when necessary.

Experienced programmers would disagree, saying that
it is very useful to automate these file
configuration dependencies. For one thing,
such automation makes it trivially easy to install
AUK on a new platform. Also, it is
very easy to encode that automation
since MAKE is a very
smart, very fast program:

+ When making a system, MAKE runs the _commands_ of
  any _targets_ that are older than their
  _dependents_. This means that the first time you
  make a system, it can be slow since everything
  gets made. However, the next time, MAKE only
  remakes things that have been updated since the
  last make.
+ Note that the .tex _target_ of rule3 is the
  _dependent_ of rule2.  MAKE knows how to chain
  these dependencies together; i.e.  MAKE knows to
  update _both_ the .tex _and_ the .pdf if ever the .md
  Markdown file is changed.
+ MAKE supports parallelism. When making on (say) a
  four-core machine, the command `make -j4` will
  inspect the dependencies to see what can be built
  separately. These separate jobs will be
  distributed over the different cores of the
  machine.

The reader will note that the above description has
skipped over three files that will be explained
later in this paper:

+   AUK uses a simple  text substitution facility that
    adds a structure-like syntax to AUK code. Those
    substitutions are controlled by _~/tmp/h.sed_.
+   The other two files are _~/tmp/awkprof_X.out_
    and _~/tmp/awkvars_X.out_. These are important for optimizing
    and debugging AUK programs. 

Tip 3: Use Version Control
==========================

This one is a no-brainer. Experienced programmers
use tools that let them branch current versions;
revert to old versions; and keep their code base
consistent on multiple machines.

I use subversion but know that one day I'll switch
to git. One nice feature of both these tools is that
they are supported by numerous free on-line tools
such as SourceForge, GitHub or GoogleCode.

All the code discussed in this paper is available
for download from `auk.googlecode.com`.  Just as a
note to the reader: if this article inspires you to
work on AUK/AWK, then please send me an email asking
to be a committer to this repository. Note that
on-line code repositories support issue tracking
systems
(e.g. `https://code.google.com/p/auk/issues/list`)
where developers can work together to find and fix
bugs and extend the functionality of shared code.

When working within a version-controlled code base,
it is useful to separate the _edit_ directories
from the _built_ directories. Programmers edit
in the former and code is auto-generated to the latter.
For an example of the AUK _built_ directories,
see above. As for the AUK _edit_ directories,
these are shown below. 

    svns/auk/
       trunk/
          auk      # AUK start-up scripts 
          Makefile # for demos and test cases
          my.bib   # paper bibliography, BIBTEX format
          # --------------------------------------------
          # "bin" is for files that change rarely
          bin/        
            auk2awk.awk   # auk to gawk translation 
            auk2md.awk    # auk to markdown translation
            gauk          # runtime controller
            getgawk       # installs gawk4  from the web
            getpandoc     # installs pandoc from the web
          # ---------------------------------------------
          # "etc" is for files that change sometimes.
          etc/         
            _footer.md    # standard footer for .md files
            boot.sh       # controls AUK environment
            css1.css      # controls html appearance
            default.latex # controls .pdf appearance
            details.mk    # controls file auto-generation
            dotemacs      # controls editor
            h2sed.awk     # controls macro expansion 
            tex.sed       # handles low-level tex issues
          # ---------------------------------------------   
          # "lib" is for files that change often.
          lib/         
            X.auk
            X.h

Another important aspect of working with shared
code directories is not to pollute them with
temporary files. For example, the EMACS editor
creates backup files in the same directory as
the code being edited, which is something of a nuisance
when working in a shared directory. 
Therefore the script `trunk/auk` changes the behavior
of the editor. An AUK developer starts her AUK session as follows:

    cd svns/auk/trunk
    . auk

This executes the following script. The first line
is just a nicety that makes directory listings look
better. The second line tells awk to look for
runtime .awk files in the _built_ directory
`~/opt/auk/lib`. The third line selects a gawk
interpreter from the _built_ directory
`~/opt/gawk/bin`.

    alias ls="ls --color"
    export AWKPATH="$PWD:$HOME/opt/auk/lib:$AWKPATH"
    export Gawk=$HOME/opt/gawk/bin/gawk
    dot=$PWD/etc/dotemacs

    e() {
       if   [ "$DISPLAY" ]
       then emacs -q -l $dot $* &
       else emacs -q -l $dot $* 
       fi
    }

The behavior of EMACS is altered by the `e` command.
If our AUK developer types

    e lib/X.auk

then that starts EMACS using the following
configuration file.  Note that the first lines of
the following stop backups being created in the
version-control directory.  The remaining lines of
this file offer editorial support for a new kind of
.auk file, discussed below.

     1	;; trunk/auk/etc/dotemacs
     2	(setq backup-directory-alist
     3	      `((".*" . ,temporary-file-directory)))
     4	(setq auto-save-file-name-transforms
     5	      `((".*" ,temporary-file-directory t)))
     6	;;;; define a new file type called .auk files
     7	(setq auto-mode-alist
     8	   (append '(("\\.auk\\'" . awk-mode)) 
     9	           auto-mode-alist))
    10	(setq awk-mode-hook
    11	   (function (lambda ()
    12	       (setq c-basic-offset 2)
    13	       (global-set-key "\C-f" 
    14	                       'auk-fill-paragraph))))
    15	(defun auk-fill-paragraph ()
    16	   (interactive)
    17	   (save-excursion
    18	      (let (pt1 pt2 num)
    19	         (setq-default fill-column 52)
    20	         (forward-paragraph)
    21	         (setq pt1 (point))
    22	         (backward-paragraph)
    23	         (setq pt2 (point))
    24	         (set-mark pt1)
    25	         (fill-region pt1 pt2))))
    26	(global-font-lock-mode t)     ; syntax highlight
    27	(show-paren-mode t)           ; bracket match
    28	(setq column-number-mode t)   ; know your widths
    
Tip 7: Tune Your Editor to Your Code
====================================

Much of a programmer's life is spent editing source
code files. It is hence important to make the most
of this tool. For example, AUK programmers
develop source code files with a .auk extension. These
files are the same as .awk files, with a few
extensions.

Many editors have customization tools that allow for
the simple extension of those editors to new kinds
of files. For example, the above `dotemacs` defines
EMACS support tools for .auk files:

+  Lines 7, 8, 9 of the above tell Emacs to treat .auk
   files like .awk files, plus some extra processing.
+  The extra processing is defined at lines
   10 to 14.  These lines tell Emacs to apply the
   .awk syntax highlighting rules to .auk
   files. Also, line 12 tells Emacs not to
   excessively indent .auk code. And _Control-f_
   is bound to the format function
   `auk-fill-paragraph`, which is used to
   reformat lines containing Markdown comments.
+  Finally, the last three lines turn on syntax 
   highlighting and bracket matching.

Tip 7a: Multi-line Comments
---------------------------

Also, as discussed below, .auk files implement
certain extensions to .awk files:

+   _Multi-line comments_. AUK assumes that comments
    use the Markdown syntax, which it uses to generate
    the .pdf, .html and .epub files. This section reviews
    AUK's support for such comments.
+   Macros that substitute some _commonly used substrings_.
    These macros are discussed in the next section.

### Translating .auk to .awk ###

AUK uses triple quotes (`"""`) to mark the start and
end of multi-line comments.  When translating a .auk
file to .awk, a simple Gawk script comments out all
lines between those quotes:

    # bin/auk2awk.awk
    gsub(/^"""/,"") { In = 1 - In } 
                    { print In ? "# " $0 : $0 }
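To see the translation in action (the sample .auk content below is invented):

```shell
cd "$(mktemp -d)"
# The two-line translator from above.
cat > auk2awk.awk <<'EOF'
gsub(/^"""/,"") { In = 1 - In }
                { print In ? "# " $0 : $0 }
EOF
printf '"""\nSome Markdown documentation.\n"""\nBEGIN { print "hello" }\n' > demo.auk
awk -f auk2awk.awk demo.auk
```

The documentation lines come out behind `#` comments, while the code passes through untouched.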

### Translating .auk to .md ###

The other client that uses .auk files is PANDOC's Markdown 
converter:

+   For files containing any bibliography references,
    add a header at the end of the file that starts the
    references section.
+   Also, AUK defines a standard footer for each .md
    file. This footer stores copyright and author
    attribution data and should be appended to each
    file.
+   One other detail is that PANDOC uses the first
    three lines of a file to define the title, author,
    and date displayed for the file. AUK files include
    those details at top-of-file, commented behind a
    hash character at front of line. The following
    script replaces those characters with the PANDOC
    percent character.

All these operations are implemented in the
following Gawk script:

    /\[@/   { Ref = 1 }  # file contains references
    NR < 4  { if (gsub(/^#/,"%")) {print; next}}
    gsub(/^"""/,"") {    # toggle between prose and code
              In = 1 - In
              if (Once) print "```"  # fence the code regions
              Once++
              next }
            { print $0 }
    END     { if (!In) print "```\n\n"
              system("cat etc/_footer.md")
              if (Ref) 
                  print "\n\nReferences\n==========\n\n" }

Tip 7b: Handle Common Strings
-----------------------------

A common situation in Gawk is that a set of
functions passes around a set of shared variables.
For those functions, programmers must carefully list
all those variables in every function where they are
used. Note that all these variables must be listed,
in full and in the same order, in every function
that uses them.

Speaking from bitter experience, this can be a major
source of errors in Gawk programs. The problem gets
worse when it is necessary to add, delete, or change
the names of those variables. Once, I spent two weeks
debugging an issue that came down to a single-letter
typographical error in such a set of shared
variables.

AUK removes that problem with a simple header
facility. Alongside the `lib/X.auk` files, AUK also
knows about `lib/X.h` header files. For example,
`lib/readcsv.auk` is an AUK tool for reading a
comma-separated file whose first line is a set of
column names. Alongside that file is a
`lib/readcsv.h` header file whose paragraphs define
the variables passed around by the AUK code.

    # lib/readcsv.h
    _Rows
    name # name of the csv file
    w   # w[i] is the weight of the "i"-th column    
    gp  # gp[i] == 1 if "i" is class column
    np  # np[i] == 1 if "i" is numeric column
    hi  # hi[i] is the maximum of numeric column "i"
    lo  # lo[i] is the minimum of numeric column "i"
    all # all[i] is a list of active rows
    d   # d[row][col] stores the read data

The first non-commented word in each paragraph names
a macro; here it is `_Rows`. If the AUK pre-processor
sees this word in a function header, it is replaced
by:

    name,w,gp,np,hi,lo,all,d

The AUK file `etc/h2sed.awk` is a small compiler
that takes all the header files in `lib/*.h` and
builds a set of substitution patterns. For example,
if there are two header files in `lib/*.h`, then AUK
writes the following patterns into `~/tmp/h.sed`:

    s/\<_Dash\([a-zA-Z0-9_]*\)\>/enough\1,tooMuch\1,prune\1,reweight\1,sampleth\1/g
    s/\<_Rows\([a-zA-Z0-9_]*\)\>/name\1,w\1,gp\1,np\1,hi\1,lo\1,all\1,d\1/g

Note the positional substitutions in the above
patterns. These allow a function to reference two
sets of variables with different suffixes. For example,
the following code copies data from `_Rows1` to `_Rows2`:

    function copy_Rows(_Rows1,_Rows2,  i) {
      for(i in   w1)   w2[i] =   w1[i]
      for(i in  gp1)  gp2[i] =  gp1[i]
      for(i in  np1)  np2[i] =  np1[i]
      for(i in  hi1)  hi2[i] =  hi1[i]
      for(i in  lo1)  lo2[i] =  lo1[i]
      for(i in all1) all2[i] = all1[i]
      for(row in d1)
        for(col in d1[row])
          d2[row][col] = d1[row][col]
    }

Tip 3: Easy Start-Up
====================

Tip 4: Separate Slow- and Fast-Changing Code
============================================

The above directory structure takes care to divide the files
into those that the programmer changes frequently and
those that are changed rarely or never. For example,
the makefile that controls the auto-generated _built_ material
is divided in two:

+    `trunk/Makefile`, where programmers can add lots of their own
     demonstration and testing code.
+    `trunk/etc/details.mk`, which contains all the general rules
     that, say, convert .tex files to .pdf.

Other examples of this division of rarely edited files from
frequently edited files are as follows:

`trunk/bin` (edited rarely or never)

:   `trunk/bin` stores code that will rarely need changing, such
    as the utilities to `getgawk` or `getpandoc` from the web and
    install them locally. The other files in this directory are
    discussed below.

`trunk/etc` (edited sometimes)

:   `trunk/etc` stores configuration files such as the `css1.css`
    cascading style sheet that controls the appearance of the .html files.
    Also found in this directory are:

    +     `default.latex`, which can be altered
          to affect how AUK builds .tex, then .pdf, files.
    +     `tex.sed`, where some quick extra hacks live (most of
          the .tex control is in `default.latex`):

              1,$s/{itemize}/{smallitem}/
              1,$s/{enumerate}/{smallenum}/
              1,$s/{verbatim}/{code}/

          For example, the last hack changes all verbatim displays
          to use a special `code` display environment defined in
          `default.latex`.
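
Run through sed with those three commands, a
verbatim block is rewritten in place (the LaTeX
fragment below is invented; in the real workflow the
commands live in `etc/tex.sed` and would be applied
with `sed -f`):

```shell
# Apply the three tex.sed substitutions to a small invented fragment.
printf '%s\n' '\begin{verbatim}' 'x=1' '\end{verbatim}' |
sed -e '1,$s/{itemize}/{smallitem}/' \
    -e '1,$s/{enumerate}/{smallenum}/' \
    -e '1,$s/{verbatim}/{code}/'
```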


In terms of workflow, `trunk/Makefile` controls the
build process that turns the .auk sources into .awk,
.md, .html, and .pdf files.

Introduction
============

I like AWK. A lot.

Other people don't like AWK as much as I do. They do not know how
astonishingly useful it can be. AWK is my trick for very rapid
prototyping that allows me to generate research results _and_ manage a
dozen graduate students _and_ write grants _and_ write papers.

Perhaps the reason for my fondness for AWK is that much of my work is
on data mining. If I were doing programming-language research, then it
might bother me that AWK does not support closures or monads or
inheritance or iterators or structures or (insert your favorite
programming construct here).

But I work on data mining, so what I care about is the structure of the
data that is input to the program, not the program itself.  AWK is
a very good data mining tool since it focuses me on the data. When
exploring a new technique, I want to spend my time looking at the
data, not fighting the program.

To be sure, the AWK distribution does
not come with all the tools supported by, say, ECLIPSE. But many of
those tools can be easily added using readily available free services.
For example, my preferred AWK integrated development environment is EMACS
plus Google Code (which gives me syntax highlighting, auto-indentation,
bracket matching, version control, and an on-line issue tracking system).
Also, if you use a modern AWK like GAWK4, then you also get an interactive
debugger that supports breakpoints and stepping through the code.

Perhaps another reason that other programmers are not fans of AWK is that
they are not in the AWK community, and so they lack the lore of AWK.
Lore is defined as the body of traditions and knowledge on a subject
or held by a particular group, typically passed from person to person
by word of mouth.

This article is about the lore of the AWK programming language.

When used by experienced programmers, AWK is an astonishingly useful
language. For example, the author is a busy academic who writes a
lot of grants and papers while managing many graduate students and
much undergraduate teaching. Yet somehow I can still get more research
done in a weekend than my graduate students can do in a month. The
reason? I program in a very simple language that prevents me from
getting clever about abstract data types and monads and closures and
design patterns and all the other machinery of more elaborate
languages.
While other academics study computation itself, I just use it to get results.

As for the title: the forefront of a bird is its "lore", the region
between the eyes and the bill.


Code-level Lore
===============

+    Lines at most 52 characters wide.
+    Functions less than 60 lines.
+    Optional arguments separated from the others by spaces.
+    Indentation of two spaces per level.
+    Comments in _discourse style_ that offer a running commentary
     on the code.

File-level Lore
===============

+    Self-documenting code.
     +    Comments in code
     +    Document options of main drivers
     +    Files come with demo code.

+   Use version control.
    +   Keep the repo clean:
        +   a clear directory structure
        +   no temporaries
            (exception: long-lived temporaries that record the
            output of a long process)

Use a good editor:

+   syntax highlighting
+   bracket matching
+   auto indenting
    + change the indentation to 2 spaces, not 4

Use a Repo
==========

Rule 1: Use Gawk
================

+    `@include` files
+    multi-dimensional arrays

Rule 2: Write Self-Documenting Source Files
===========================================

Code comes with comments. Comments tell a story.
 
+   Code is literature
+   Tell a story, run an example.
+   Show the high-level stuff first. Re-arrange code so tedious details are at the end


Code files come with an executable demo.

Rule 3: Sort Arrays with Zombie Indexes
=======================================

"""
