# -*- Mode: cperl; cperl-indent-level: 4 -*-
# $Id: Harness.pm,v 1.80 2003/12/31 02:39:21 andy Exp $

package Test::Harness;

require 5.004;
use Test::Harness::Straps;
use Test::Harness::Assert;
use Exporter;
use Benchmark;
use Config;
use strict;

use vars qw(
    $VERSION
    @ISA @EXPORT @EXPORT_OK
    $Verbose $Switches $Debug
    $verbose $switches $debug
    $Have_Devel_Corestack
    $Curtest
    $Columns
    $ML $Last_ML_Print
    $Strap
);

=head1 NAME

Test::Harness - Run Perl standard test scripts with statistics

=head1 VERSION

Version 2.40

    $Header: /home/cvs/test-harness/lib/Test/Harness.pm,v 1.80 2003/12/31 02:39:21 andy Exp $

=cut

$VERSION = '2.40';

# Backwards compatibility for exportable variable names.
*verbose  = *Verbose;
*switches = *Switches;
*debug    = *Debug;

$Have_Devel_Corestack = 0;

$ENV{HARNESS_ACTIVE} = 1;

END {
    # For VMS.
    delete $ENV{HARNESS_ACTIVE};
}

# Some experimental versions of the OS/2 build have a broken $?.
my $Ignore_Exitcode = $ENV{HARNESS_IGNORE_EXITCODE};

my $Files_In_Dir = $ENV{HARNESS_FILELEAK_IN_DIR};

my $Ok_Slow = $ENV{HARNESS_OK_SLOW};

$Strap = Test::Harness::Straps->new;

@ISA = ('Exporter');
@EXPORT    = qw(&runtests);
@EXPORT_OK = qw($verbose $switches);

$Verbose  = $ENV{HARNESS_VERBOSE} || 0;
$Debug    = $ENV{HARNESS_DEBUG} || 0;
$Switches = "-w";
$Columns  = $ENV{HARNESS_COLUMNS} || $ENV{COLUMNS} || 80;
$Columns--;             # Some shells have trouble with a full line of text.

=head1 SYNOPSIS

  use Test::Harness;

  runtests(@test_files);

=head1 DESCRIPTION

B<STOP!> If all you want to do is write a test script, consider using
Test::Simple.  Otherwise, read on.

(By using the Test module, you can write test scripts without
knowing the exact output this module expects.  However, if you need to
know the specifics, read on!)

Perl test scripts print to standard output C<"ok N"> for each single
test, where C<N> is an increasing sequence of integers.  The first line
output by a standard test script is C<"1..M">, with C<M> being the
number of tests that should be run within the test script.
Test::Harness::runtests(@tests) runs all the test scripts named as
arguments and checks standard output for the expected C<"ok N">
strings.

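For example, a minimal test script that emits this protocol by hand
(most people use the Test or Test::Simple modules instead) might look
like this:

    #!/usr/bin/perl
    print "1..2\n";                              # the plan: two tests follow
    print "ok 1\n";                              # test 1 passed
    print 2 + 2 == 4 ? "ok 2\n" : "not ok 2\n";  # test 2 passes if the check holds
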
After all tests have been performed, runtests() prints some
performance statistics that are computed by the Benchmark module.

=head2 The test script output

The following explains how Test::Harness interprets the output of your
test program.

=over 4

=item B<'1..M'>

This header tells how many tests there will be.  For example, C<1..10>
means you plan on running 10 tests.  This is a safeguard in case your
test dies quietly in the middle of its run.

It should be the first non-comment line output by your test program.

In certain instances, you may not know how many tests you will
ultimately be running.  In this case, it is permitted for the 1..M
header to appear as the B<last> line output by your test (it may
still be followed by further comments).

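A contrived sketch of a test that only discovers its test count as it
runs (the F<t/data> directory here is hypothetical):

    my @files = glob 't/data/*';
    my $n = 0;
    for my $file (@files) {
        $n++;
        print -r $file ? "ok $n\n" : "not ok $n\n";
    }
    print "1..$n\n";    # the plan comes last, once the count is known
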
Under B<no> circumstances should 1..M appear in the middle of your
output or more than once.


=item B<'ok', 'not ok'.  Ok?>

Any output from the test script to standard error is ignored by
Test::Harness and passed straight through, so it will be seen by the
user.  Lines written to standard output containing
C</^(not\s+)?ok\b/> are interpreted as feedback for runtests().  All
other lines are discarded.

C</^not ok/> indicates a failed test.  C</^ok/> is a successful test.


=item B<test numbers>

Perl normally expects the 'ok' or 'not ok' to be followed by a test
number.  The test number may be omitted, though; in that case
Test::Harness temporarily maintains its own counter until the script
supplies test numbers again.  So the following test script

    print <<END;
    1..6
    not ok
    ok
    not ok
    ok
    ok
    END

will generate

    FAILED tests 1, 3, 6
    Failed 3/6 tests, 50.00% okay

=item B<test names>

Anything after the test number but before the # is considered to be
the name of the test.

  ok 42 this is the name of the test

Currently, Test::Harness does nothing with this information.

=item B<Skipping tests>

If the standard output line contains the substring C< # Skip> (with
variations in spacing and case) after C<ok> or C<ok NUMBER>, it is
counted as a skipped test.  If the whole test script succeeds, the
count of skipped tests is included in the generated output.
C<Test::Harness> reports the text after C< # Skip\S*\s+> as a reason
for skipping.

  ok 23 # skip Insufficient flogiston pressure.

Similarly, an explanation can be included in a C<1..0> line emitted
when the test script is skipped completely:

  1..0 # Skipped: no leverage found

=item B<Todo tests>

If the standard output line contains the substring C< # TODO > after
C<not ok> or C<not ok NUMBER>, it is counted as a todo test.  The text
afterwards is the thing that has to be done before this test will
succeed.

  not ok 13 # TODO harness the power of the atom

Note that the TODO must have a space after it.

=begin _deprecated

Alternatively, you can specify a list of what tests are todo as part
of the test header.

  1..23 todo 5 12 23

This only works if the header appears at the beginning of the test.

This style is B<deprecated>.

=end _deprecated

These tests represent a feature to be implemented or a bug to be fixed
and act as something of an executable "thing to do" list.  They are
B<not> expected to succeed.  Should a todo test begin succeeding,
Test::Harness will report it as a bonus.  This indicates that whatever
you were supposed to do has been done and you should promote this to a
normal test.

=item B<Bail out!>

As an emergency measure, a test script can decide that further tests
are useless (e.g. missing dependencies) and testing should stop
immediately. In that case the test script prints the magic words

  Bail out!

to standard output. Any message after these words will be displayed by
C<Test::Harness> as the reason why testing is stopped.

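For instance, a test script might (hypothetically) bail out when a
module it cannot test without is missing:

    print "1..7\n";
    unless (eval { require DBI }) {
        print "Bail out! DBI is not installed\n";
        exit;
    }
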
=item B<Comments>

Additional comments may be put into the testing output on their own
lines.  Comment lines should begin with a '#'; Test::Harness will
ignore them.

  ok 1
  # Life is good, the sun is shining, RAM is cheap.
  not ok 2
  # got 'Bush' expected 'Gore'

=item B<Anything else>

Any other output is silently ignored by Test::Harness, B<BUT WE PLAN
TO CHANGE THIS!>  If you wish to place additional output in your
test script, please use a comment.

=back

=head2 Taint mode

Test::Harness will honor the C<-T> or C<-t> in the #! line on your
test files.  So if you begin a test with:

    #!perl -T

the test will be run with taint mode on.

=head2 Configuration variables

These variables can be used to configure the behavior of
Test::Harness.  They are exported on request.

=over 4

=item B<$Test::Harness::Verbose>

The global variable C<$Test::Harness::Verbose> is exportable and can be
used to let C<runtests()> display the standard output of the script
without otherwise altering its behavior.  The F<prove> utility's C<-v>
flag will set this.

=item B<$Test::Harness::switches>

The global variable C<$Test::Harness::switches> is exportable and can be
used to set the perl command line options used for running the test
script(s). The default value is C<-w>. It overrides C<HARNESS_SWITCHES>.

=back

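For example, a short driver script (a sketch only, not shipped with
this module) could set both variables before running its tests:

    use Test::Harness qw(runtests $verbose $switches);

    $verbose  = 1;          # show each test's raw output
    $switches = '-w -Ilib'; # run every test with warnings and ./lib on @INC
    runtests(glob 't/*.t');
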

=head2 Failure

It will happen: your tests will fail.  After you mop up your ego, you
can begin examining the summary report:

  t/base..............ok
  t/nonumbers.........ok
  t/ok................ok
  t/test-harness......ok
  t/waterloo..........dubious
          Test returned status 3 (wstat 768, 0x300)
  DIED. FAILED tests 1, 3, 5, 7, 9, 11, 13, 15, 17, 19
          Failed 10/20 tests, 50.00% okay
  Failed Test  Stat Wstat Total Fail  Failed  List of Failed
  -----------------------------------------------------------------------
  t/waterloo.t    3   768    20   10  50.00%  1 3 5 7 9 11 13 15 17 19
  Failed 1/5 test scripts, 80.00% okay. 10/44 subtests failed, 77.27% okay.

Everything passed but t/waterloo.t.  It failed 10 of 20 tests and
exited with a non-zero status, indicating something dubious happened.

The columns in the summary report mean:

=over 4

=item B<Failed Test>

The test file which failed.

=item B<Stat>

If the test exited with non-zero, this is its exit status.

=item B<Wstat>

The wait status of the test.

=item B<Total>

Total number of tests expected to run.

=item B<Fail>

Number which failed, either from "not ok" or because they never ran.

=item B<Failed>

Percentage of the total tests which failed.

=item B<List of Failed>

A list of the tests which failed.  Successive failures may be
abbreviated (e.g. 15-20 to indicate that tests 15, 16, 17, 18, 19 and
20 failed).

=back


=head2 Functions

Test::Harness currently has only one function; here it is.

=over 4

=item B<runtests>

  my $allok = runtests(@test_files);

This runs all the given @test_files and divines whether they passed
or failed based on their output to STDOUT (details above).  It prints
out each individual test which failed along with a summary report and
how long it all took.

It returns true if everything was ok.  Otherwise it will die() with
one of the messages in the DIAGNOSTICS section.

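Because runtests() die()s on failure, a wrapper that wants to carry on
afterwards (a sketch only, not part of this module) can trap that:

    use Test::Harness;

    my $all_passed = eval { runtests(@test_files) };
    unless ($all_passed) {
        warn $@ ? "Test run aborted: $@" : "Test run failed\n";
    }
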
=for _private

This is just _run_all_tests() plus _show_results().

=cut

sub runtests {
    my(@tests) = @_;

    local ($\, $,);

    my($tot, $failedtests) = _run_all_tests(@tests);
    _show_results($tot, $failedtests);

    my $ok = _all_ok($tot);

    assert(($ok xor keys %$failedtests),
           q{ok status jives with $failedtests});

    return $ok;
}

=begin _private

=item B<_all_ok>

  my $ok = _all_ok(\%tot);

Tells you if this test run is overall successful or not.

=cut

sub _all_ok {
    my($tot) = shift;

    return $tot->{bad} == 0 && ($tot->{max} || $tot->{skipped}) ? 1 : 0;
}


=item B<_globdir>

  my @files = _globdir $dir;

Returns all the files in a directory.  This exists for backwards
compatibility on systems where glob() doesn't work right.

=cut


sub _globdir {
    opendir DIRH, shift;
    my @f = readdir DIRH;
    closedir DIRH;

    return @f;
}

=item B<_run_all_tests>

  my($total, $failed) = _run_all_tests(@test_files);

Runs all the given C<@test_files> (as C<runtests()> does) but does it
quietly (no report).  $total is a hash ref summary of all the tests
run.  Its keys and values are these:

    bonus           Number of individual todo tests unexpectedly passed
    max             Number of individual tests run
    ok              Number of individual tests passed
    sub_skipped     Number of individual tests skipped
    todo            Number of individual todo tests

    files           Number of test files run
    good            Number of test files passed
    bad             Number of test files failed
    tests           Number of test files originally given
    skipped         Number of test files skipped

If C<< $total->{bad} == 0 >> and C<< $total->{max} > 0 >>, you've
got a successful test.

$failed is a hash ref of all the test scripts which failed.  Each key
is the name of a test script, each value is another hash representing
how that script failed.  Its keys are these:

    name        Name of the test which failed
    estat       Script's exit value
    wstat       Script's wait status
    max         Number of individual tests
    failed      Number which failed
    percent     Percentage of tests which failed
    canon       List of tests which failed (as string).

C<$failed> should be empty if everything passed.

B<NOTE> Currently this function is still noisy.  I'm working on it.

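A sketch of how a (purely internal) caller might read these structures
back, assuming only the keys documented above:

    my($total, $failed) = _run_all_tests(@test_files);
    printf "%d/%d test scripts passed\n", $total->{good}, $total->{tests};
    for my $script (sort keys %$failed) {
        print "$script: failed $failed->{$script}{failed} ",
              "of $failed->{$script}{max} ($failed->{$script}{canon})\n";
    }
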
=cut

#'#
sub _run_all_tests {
    my(@tests) = @_;
    local($|) = 1;
    my(%failedtests);

    # Test-wide totals.
    my(%tot) = (
                bonus    => 0,
                max      => 0,
                ok       => 0,
                files    => 0,
                bad      => 0,
                good     => 0,
                tests    => scalar @tests,
                sub_skipped  => 0,
                todo     => 0,
                skipped  => 0,
                bench    => 0,
               );

    my @dir_files = _globdir $Files_In_Dir if defined $Files_In_Dir;
    my $t_start = new Benchmark;

    my $width = _leader_width(@tests);
    foreach my $tfile (@tests) {
        if ( $Test::Harness::Debug ) {
            print "# Running: ", $Strap->_command_line($tfile), "\n";
        }

        $Last_ML_Print = 0;  # so each test prints at least once
        my($leader, $ml) = _mk_leader($tfile, $width);
        local $ML = $ml;

        print $leader;

        $tot{files}++;

        $Strap->{_seen_header} = 0;
        my %results = $Strap->analyze_file($tfile) or
          do { warn $Strap->{error}, "\n";  next };

        # state of the current test.
        my @failed = grep { !$results{details}[$_-1]{ok} }
                     1..@{$results{details}};
        my %test = (
                    ok          => $results{ok},
                    'next'      => $Strap->{'next'},
                    max         => $results{max},
                    failed      => \@failed,
                    bonus       => $results{bonus},
                    skipped     => $results{skip},
                    skip_reason => $results{skip_reason},
                    skip_all    => $Strap->{skip_all},
                    ml          => $ml,
                   );

        $tot{bonus}       += $results{bonus};
        $tot{max}         += $results{max};
        $tot{ok}          += $results{ok};
        $tot{todo}        += $results{todo};
        $tot{sub_skipped} += $results{skip};

        my($estatus, $wstatus) = @results{qw(exit wait)};

        if ($results{passing}) {
            if ($test{max} and $test{skipped} + $test{bonus}) {
                my @msg;
                push(@msg, "$test{skipped}/$test{max} skipped: $test{skip_reason}")
                    if $test{skipped};
                push(@msg, "$test{bonus}/$test{max} unexpectedly succeeded")
                    if $test{bonus};
                print "$test{ml}ok\n        ".join(', ', @msg)."\n";
            } elsif ($test{max}) {
                print "$test{ml}ok\n";
            } elsif (defined $test{skip_all} and length $test{skip_all}) {
                print "skipped\n        all skipped: $test{skip_all}\n";
                $tot{skipped}++;
            } else {
                print "skipped\n        all skipped: no reason given\n";
                $tot{skipped}++;
            }
            $tot{good}++;
        }
        else {
            # List unrun tests as failures.
            if ($test{'next'} <= $test{max}) {
                push @{$test{failed}}, $test{'next'}..$test{max};
            }
            # List overruns as failures.
            else {
                my $details = $results{details};
                foreach my $overrun ($test{max}+1..@$details)
                {
                    next unless ref $details->[$overrun-1];
                    push @{$test{failed}}, $overrun
                }
            }

            if ($wstatus) {
                $failedtests{$tfile} = _dubious_return(\%test, \%tot,
                                                       $estatus, $wstatus);
                $failedtests{$tfile}{name} = $tfile;
            }
            elsif($results{seen}) {
                if (@{$test{failed}} and $test{max}) {
                    my ($txt, $canon) = _canonfailed($test{max},$test{skipped},
                                                    @{$test{failed}});
                    print "$test{ml}$txt";
                    $failedtests{$tfile} = { canon   => $canon,
                                             max     => $test{max},
                                             failed  => scalar @{$test{failed}},
                                             name    => $tfile,
                                             percent => 100*(scalar @{$test{failed}})/$test{max},
                                             estat   => '',
                                             wstat   => '',
                                           };
                } else {
                    print "Don't know which tests failed: got $test{ok} ok, ".
                          "expected $test{max}\n";
                    $failedtests{$tfile} = { canon   => '??',
                                             max     => $test{max},
                                             failed  => '??',
                                             name    => $tfile,
                                             percent => undef,
                                             estat   => '',
                                             wstat   => '',
                                           };
                }
                $tot{bad}++;
            } else {
                print "FAILED before any test output arrived\n";
                $tot{bad}++;
                $failedtests{$tfile} = { canon       => '??',
                                         max         => '??',
                                         failed      => '??',
                                         name        => $tfile,
                                         percent     => undef,
                                         estat       => '',
                                         wstat       => '',
                                       };
            }
        }

        if (defined $Files_In_Dir) {
            my @new_dir_files = _globdir $Files_In_Dir;
            if (@new_dir_files != @dir_files) {
                my %f;
                @f{@new_dir_files} = (1) x @new_dir_files;
                delete @f{@dir_files};
                my @f = sort keys %f;
                print "LEAKED FILES: @f\n";
                @dir_files = @new_dir_files;
            }
        }
    }
    $tot{bench} = timediff(new Benchmark, $t_start);

    $Strap->_restore_PERL5LIB;

    return(\%tot, \%failedtests);
}

=item B<_mk_leader>

  my($leader, $ml) = _mk_leader($test_file, $width);

Generates the 't/foo........' $leader for the given C<$test_file> as well
as a similar version which will overwrite the current line (by use of
\r and such).  C<$ml> may be empty if Test::Harness doesn't think you're
on a TTY.

The C<$width> is the width of the "yada/blah.." string.

=cut


sub _mk_leader {
    my($te, $width) = @_;
    chomp($te);
    $te =~ s/\.\w+$/./;

    if ($^O eq 'VMS') { $te =~ s/^.*\.t\./\[.t./s; }
    my $blank = (' ' x 77);
    my $leader = "$te" . '.' x ($width - length($te));
    my $ml = "";

    $ml = "\r$blank\r$leader"
      if -t STDOUT and not $ENV{HARNESS_NOTTY} and not $Verbose;

    return($leader, $ml);
}

=item B<_leader_width>

  my($width) = _leader_width(@test_files);

Calculates how wide the leader should be based on the length of the
longest test name.

=cut

sub _leader_width {
    my $maxlen = 0;
    my $maxsuflen = 0;
    foreach (@_) {
        my $suf    = /\.(\w+)$/ ? $1 : '';
        my $len    = length;
        my $suflen = length $suf;
        $maxlen    = $len    if $len    > $maxlen;
        $maxsuflen = $suflen if $suflen > $maxsuflen;
    }
    # + 3 : we want three dots between the test name and the "ok"
    return $maxlen + 3 - $maxsuflen;
}


sub _show_results {
    my($tot, $failedtests) = @_;

    my $pct;
    my $bonusmsg = _bonusmsg($tot);

    if (_all_ok($tot)) {
        print "All tests successful$bonusmsg.\n";
    } elsif (!$tot->{tests}){
        die "FAILED--no tests were run for some reason.\n";
    } elsif (!$tot->{max}) {
        my $blurb = $tot->{tests}==1 ? "script" : "scripts";
        die "FAILED--$tot->{tests} test $blurb could be run, ".
            "alas--no output ever seen\n";
    } else {
        $pct = sprintf("%.2f", $tot->{good} / $tot->{tests} * 100);
        my $percent_ok = 100*$tot->{ok}/$tot->{max};
        my $subpct = sprintf " %d/%d subtests failed, %.2f%% okay.",
                              $tot->{max} - $tot->{ok}, $tot->{max},
                              $percent_ok;

        my($fmt_top, $fmt) = _create_fmts($failedtests);

        # Now write to formats
        for my $script (sort keys %$failedtests) {
          $Curtest = $failedtests->{$script};
          write;
        }
        if ($tot->{bad}) {
            $bonusmsg =~ s/^,\s*//;
            print "$bonusmsg.\n" if $bonusmsg;
            die "Failed $tot->{bad}/$tot->{tests} test scripts, $pct% okay.".
                "$subpct\n";
        }
    }

    printf("Files=%d, Tests=%d, %s\n",
           $tot->{files}, $tot->{max}, timestr($tot->{bench}, 'nop'));
}


my %Handlers = ();
$Strap->{callback} = sub {
    my($self, $line, $type, $totals) = @_;
    print $line if $Verbose;

    my $meth = $Handlers{$type};
    $meth->($self, $line, $type, $totals) if $meth;
};


$Handlers{header} = sub {
    my($self, $line, $type, $totals) = @_;

    warn "Test header seen more than once!\n" if $self->{_seen_header};

    $self->{_seen_header}++;

    warn "1..M can only appear at the beginning or end of tests\n"
      if $totals->{seen} &&
         $totals->{max}  < $totals->{seen};
};

$Handlers{test} = sub {
    my($self, $line, $type, $totals) = @_;

    my $curr = $totals->{seen};
    my $next = $self->{'next'};
    my $max  = $totals->{max};
    my $detail = $totals->{details}[-1];

    if( $detail->{ok} ) {
        _print_ml_less("ok $curr/$max");

        if( $detail->{type} eq 'skip' ) {
            $totals->{skip_reason} = $detail->{reason}
              unless defined $totals->{skip_reason};
            $totals->{skip_reason} = 'various reasons'
              if $totals->{skip_reason} ne $detail->{reason};
        }
    }
    else {
        _print_ml("NOK $curr");
    }

    if( $curr > $next ) {
        print "Test output counter mismatch [test $curr]\n";
    }
    elsif( $curr < $next ) {
        print "Confused test output: test $curr answered after ".
              "test ", $next - 1, "\n";
    }

};

$Handlers{bailout} = sub {
    my($self, $line, $type, $totals) = @_;

    die "FAILED--Further testing stopped" .
      ($self->{bailout_reason} ? ": $self->{bailout_reason}\n" : ".\n");
};


sub _print_ml {
    print join '', $ML, @_ if $ML;
}


# For slow connections, we save lots of bandwidth by printing only once
# per second.
sub _print_ml_less {
    if( !$Ok_Slow || $Last_ML_Print != time ) {
        _print_ml(@_);
        $Last_ML_Print = time;
    }
}

sub _bonusmsg {
    my($tot) = @_;

    my $bonusmsg = '';
    $bonusmsg = (" ($tot->{bonus} subtest".($tot->{bonus} > 1 ? 's' : '').
               " UNEXPECTEDLY SUCCEEDED)")
        if $tot->{bonus};

    if ($tot->{skipped}) {
        $bonusmsg .= ", $tot->{skipped} test"
                     . ($tot->{skipped} != 1 ? 's' : '');
        if ($tot->{sub_skipped}) {
            $bonusmsg .= " and $tot->{sub_skipped} subtest"
                         . ($tot->{sub_skipped} != 1 ? 's' : '');
        }
        $bonusmsg .= ' skipped';
    }
    elsif ($tot->{sub_skipped}) {
        $bonusmsg .= ", $tot->{sub_skipped} subtest"
                     . ($tot->{sub_skipped} != 1 ? 's' : '')
                     . " skipped";
    }

    return $bonusmsg;
}

# Test program go boom.
sub _dubious_return {
    my($test, $tot, $estatus, $wstatus) = @_;
    my ($failed, $canon, $percent) = ('??', '??');

    printf "$test->{ml}dubious\n\tTest returned status $estatus ".
           "(wstat %d, 0x%x)\n",
           $wstatus,$wstatus;
    print "\t\t(VMS status is $estatus)\n" if $^O eq 'VMS';

    if (_corestatus($wstatus)) { # until we have a wait module
        if ($Have_Devel_Corestack) {
            Devel::CoreStack::stack($^X);
        } else {
            print "\ttest program seems to have generated a core\n";
        }
    }

    $tot->{bad}++;

    if ($test->{max}) {
        if ($test->{'next'} == $test->{max} + 1 and not @{$test->{failed}}) {
            print "\tafter all the subtests completed successfully\n";
            $percent = 0;
            $failed = 0;        # But we do not set $canon!
        }
        else {
            push @{$test->{failed}}, $test->{'next'}..$test->{max};
            $failed = @{$test->{failed}};
            (my $txt, $canon) = _canonfailed($test->{max},$test->{skipped},@{$test->{failed}});
            $percent = 100*(scalar @{$test->{failed}})/$test->{max};
            print "DIED. ",$txt;
        }
    }

    return { canon => $canon,  max => $test->{max} || '??',
             failed => $failed,
             percent => $percent,
             estat => $estatus, wstat => $wstatus,
           };
}


sub _create_fmts {
    my($failedtests) = @_;

    my $failed_str = "Failed Test";
    my $middle_str = " Stat Wstat Total Fail  Failed  ";
    my $list_str = "List of Failed";

    # Figure out our longest name string for formatting purposes.
    my $max_namelen = length($failed_str);
    foreach my $script (keys %$failedtests) {
        my $namelen = length $failedtests->{$script}->{name};
        $max_namelen = $namelen if $namelen > $max_namelen;
    }

    my $list_len = $Columns - length($middle_str) - $max_namelen;
    if ($list_len < length($list_str)) {
        $list_len = length($list_str);
        $max_namelen = $Columns - length($middle_str) - $list_len;
        if ($max_namelen < length($failed_str)) {
            $max_namelen = length($failed_str);
            $Columns = $max_namelen + length($middle_str) + $list_len;
        }
    }

    my $fmt_top = "format STDOUT_TOP =\n"
                  . sprintf("%-${max_namelen}s", $failed_str)
                  . $middle_str
                  . $list_str . "\n"
                  . "-" x $Columns
                  . "\n.\n";

    my $fmt = "format STDOUT =\n"
              . "@" . "<" x ($max_namelen - 1)
              . "  @>> @>>>> @>>>> @>>> ^##.##%  "
              . "^" . "<" x ($list_len - 1) . "\n"
              . '{ $Curtest->{name}, $Curtest->{estat},'
              . '  $Curtest->{wstat}, $Curtest->{max},'
              . '  $Curtest->{failed}, $Curtest->{percent},'
              . '  $Curtest->{canon}'
              . "\n}\n"
              . "~~" . " " x ($Columns - $list_len - 2) . "^"
              . "<" x ($list_len - 1) . "\n"
              . '$Curtest->{canon}'
              . "\n.\n";

    eval $fmt_top;
    die $@ if $@;
    eval $fmt;
    die $@ if $@;

    return($fmt_top, $fmt);
}

{
    my $tried_devel_corestack;

    sub _corestatus {
        my($st) = @_;

        my $did_core;
        eval { # we may not have a WCOREDUMP
            local $^W = 0;  # *.ph files are often *very* noisy
            require 'wait.ph';
            $did_core = WCOREDUMP($st);
        };
        if( $@ ) {
            $did_core = $st & 0200;
        }

        eval { require Devel::CoreStack; $Have_Devel_Corestack++ }
          unless $tried_devel_corestack++;

        return $did_core;
    }
}

sub _canonfailed ($$@) {
    my($max,$skipped,@failed) = @_;
    my %seen;
    @failed = sort {$a <=> $b} grep !$seen{$_}++, @failed;
    my $failed = @failed;
    my @result = ();
    my @canon = ();
    my $min;
    my $last = $min = shift @failed;
    my $canon;
    if (@failed) {
        for (@failed, $failed[-1]) { # don't forget the last one
            if ($_ > $last+1 || $_ == $last) {
                if ($min == $last) {
                    push @canon, $last;
                } else {
                    push @canon, "$min-$last";
                }
                $min = $_;
            }
            $last = $_;
        }
        local $" = ", ";
        push @result, "FAILED tests @canon\n";
        $canon = join ' ', @canon;
    } else {
        push @result, "FAILED test $last\n";
        $canon = $last;
    }

    push @result, "\tFailed $failed/$max tests, ";
    if ($max) {
        push @result, sprintf("%.2f",100*(1-$failed/$max)), "% okay";
    } else {
        push @result, "?% okay";
    }
    my $ender = 's' x ($skipped > 1);
    my $good = $max - $failed - $skipped;
    if ($skipped) {
        my $skipmsg = " (less $skipped skipped test$ender: $good okay, ";
        if ($max) {
            my $goodper = sprintf("%.2f",100*($good/$max));
            $skipmsg .= "$goodper%)";
        } else {
            $skipmsg .= "?%)";
        }
        push @result, $skipmsg;
    }
    push @result, "\n";
    my $txt = join "", @result;
    ($txt, $canon);
}

=end _private

=back

=cut


1;
__END__


=head1 EXPORT

C<&runtests> is exported by Test::Harness by default.

C<$verbose> and C<$switches> are exported upon request.

=head1 DIAGNOSTICS

=over 4

=item C<All tests successful.\nFiles=%d,  Tests=%d, %s>

If all tests are successful, some statistics about the performance are
printed.

=item C<FAILED tests %s\n\tFailed %d/%d tests, %.2f%% okay.>

For any single script that has failing subtests, statistics like the
above are printed.

=item C<Test returned status %d (wstat %d)>

For scripts that return a non-zero exit status, both C<$? E<gt>E<gt> 8>
and C<$?> are printed in a message similar to the above.

=item C<Failed 1 test, %.2f%% okay. %s>

=item C<Failed %d/%d tests, %.2f%% okay. %s>

If not all tests were successful, the script dies with one of the
above messages.

=item C<FAILED--Further testing stopped: %s>

If a single subtest decides that further testing will not make sense,
the script dies with this message.

=back

=head1 ENVIRONMENT

=over 4

=item C<HARNESS_ACTIVE>

Harness sets this before executing the individual tests.  This allows
the tests to determine if they are being executed through the harness
or by any other means.

=item C<HARNESS_COLUMNS>

This value will be used for the width of the terminal. If it is not
set then it will default to C<COLUMNS>. If this is not set, it will
default to 80. Note that users of Bourne-sh based shells will need to
C<export COLUMNS> for this module to use that variable.

=item C<HARNESS_COMPILE_TEST>

When true, the harness will attempt to compile the test with
C<perlcc> before running it.

B<NOTE> This currently only works when sitting in the perl source
directory!

=item C<HARNESS_DEBUG>

If true, Test::Harness will print debugging information about itself as
it runs the tests.  This is different from C<HARNESS_VERBOSE>, which prints
the output from the test being run.  Setting C<$Test::Harness::Debug> will
override this, or you can use the C<-d> switch in the F<prove> utility.

=item C<HARNESS_FILELEAK_IN_DIR>

When set to the name of a directory, harness will check after each
test whether new files appeared in that directory, and report them as

  LEAKED FILES: scr.tmp 0 my.db

If relative, the directory name is interpreted with respect to the
current directory at the moment runtests() was called.  Putting an
absolute path into C<HARNESS_FILELEAK_IN_DIR> may give more
predictable results.

=item C<HARNESS_IGNORE_EXITCODE>

When defined, makes the harness ignore the exit status of child processes.

=item C<HARNESS_NOTTY>

When set to a true value, forces the harness to behave as though STDOUT
were not a console.  You may need to set this if you don't want harness
to output more frequent progress messages using carriage returns.  Some
consoles may not handle carriage returns properly (which results in a
somewhat messy output).

=item C<HARNESS_OK_SLOW>

If true, the C<ok> messages are printed out only every second.  This
reduces output and may help increase testing speed over slow
connections, or with very large numbers of tests.

=item C<HARNESS_PERL>

Usually your tests will be run by C<$^X>, the currently-executing Perl.
However, you may want to have it run by a different executable, such as
a threading perl, or a different version.

If you're using the F<prove> utility, you can use the C<--perl> switch.

=item C<HARNESS_PERL_SWITCHES>

Its value will be prepended to the switches used to invoke perl on
each test.  For example, setting C<HARNESS_PERL_SWITCHES> to C<-W> will
run all tests with all warnings enabled.

=item C<HARNESS_VERBOSE>

If true, Test::Harness will output the verbose results of running
its tests.  Setting C<$Test::Harness::verbose> will override this,
or you can use the C<-v> switch in the F<prove> utility.

=back

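As a sketch, a hypothetical driver script could set a few of these
before loading Test::Harness.  Several of the C<HARNESS_*> variables
(including C<HARNESS_VERBOSE> and C<HARNESS_FILELEAK_IN_DIR>) are read
when the module is loaded, so they are set in a C<BEGIN> block here:

    BEGIN {
        $ENV{HARNESS_VERBOSE}         = 1;        # like $Test::Harness::Verbose
        $ENV{HARNESS_FILELEAK_IN_DIR} = 't/tmp';  # hypothetical scratch directory
    }
    use Test::Harness qw(runtests);

    runtests(glob 't/*.t');
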
=head1 EXAMPLE

Here's how Test::Harness tests itself:

  $ cd ~/src/devel/Test-Harness
  $ perl -Mblib -e 'use Test::Harness qw(&runtests $verbose);
    $verbose=0; runtests @ARGV;' t/*.t
  Using /home/schwern/src/devel/Test-Harness/blib
  t/base..............ok
  t/nonumbers.........ok
  t/ok................ok
  t/test-harness......ok
  All tests successful.
  Files=4, Tests=24, 2 wallclock secs ( 0.61 cusr + 0.41 csys = 1.02 CPU)

=head1 SEE ALSO

L<Test> and L<Test::Simple> for writing test scripts, L<Benchmark> for
the underlying timing routines, L<Devel::CoreStack> to generate core
dumps from failed tests and L<Devel::Cover> for test coverage
analysis.

=head1 AUTHORS

Either Tim Bunce or Andreas Koenig, we don't know.  What we know for
sure is that it was inspired by Larry Wall's TEST script that came
with perl distributions for ages.  Numerous anonymous contributors
exist.  Andreas Koenig held the torch for many years, and then
Michael G Schwern.

Current maintainer is Andy Lester C<< <andy@petdance.com> >>.

=head1 LICENSE

This program is free software; you can redistribute it and/or
modify it under the same terms as Perl itself.

See L<http://www.perl.com/perl/misc/Artistic.html>.

=head1 TODO

Provide a way of running tests quietly (i.e. no printing) for automated
validation of tests.  This will probably take the form of a version
of runtests() which rather than printing its output returns raw data
on the state of the tests.  (Partially done in Test::Harness::Straps)

Document the format.

Fix HARNESS_COMPILE_TEST without breaking its core usage.

Figure a way to report test names in the failure summary.

Rework the test summary so long test names are not truncated as badly.
(Partially done with new skip test styles)

Deal with VMS's "not \nok 4\n" mistake.

Add option for coverage analysis.

Trap STDERR.

Implement Straps total_results()

Remember exit code

Completely redo the print summary code.

Implement Straps callbacks.  (experimentally implemented)

Straps->analyze_file() not taint clean, don't know if it can be

Fix that damned VMS nit.

HARNESS_TODOFAIL to display TODO failures

Add a test for verbose.

Change internal list of test results to a hash.

Fix stats display when there's an overrun.

Fix so perls with spaces in the filename work.

=for _private

Keep whittling away at _run_all_tests()

=for _private

Clean up how the summary is printed.  Get rid of those damned formats.

=head1 BUGS

HARNESS_COMPILE_TEST currently assumes it's run from the Perl source
directory.

Please use the CPAN bug ticketing system at L<http://rt.cpan.org/>.
You can also mail bugs, fixes and enhancements to
C<< <bug-test-harness@rt.cpan.org> >>.

=head1 COPYRIGHT

Copyright 2003 by Michael G Schwern C<< <schwern@pobox.com> >>,
                  Andy Lester C<< <andy@petdance.com> >>.

This program is free software; you can redistribute it and/or
modify it under the same terms as Perl itself.

See L<http://www.perl.com/perl/misc/Artistic.html>.

=cut
