179.art Benchmark

Origin:
  SPEC FP 2000

Syntax for running:

Note:  art has two reference command lines, identified as art110
and art470.

ref, art110:
art -scanfile c756hel.in -trainfile1 a10.img -trainfile2 hc.img
-stride 2 -startx 110 -starty 200 -endx 160 -endy 240 -objects 10

ref, art470:
art -scanfile c756hel.in -trainfile1 a10.img -trainfile2 hc.img
-stride 2 -startx 470 -starty 140 -endx 520 -endy 180 -objects 10

train:
art -scanfile c756hel.in -trainfile1 a10.img -stride 2 -startx 134
-starty 220 -endx 184 -endy 240 -objects 3

test:
art -scanfile c756hel.in -trainfile1 a10.img -stride 2 -startx 134
-starty 220 -endx 139 -endy 225 -objects 10

lgred:
art -scanfile c756hel.in -trainfile1 a10.img -stride 5 -startx 134
-starty 220 -endx 184 -endy 240 -objects 1 

mdred:
Not Available

smred:
Not Available

The input directory (under this directory) contains sample input files.
These input files and their sizes follow.

Input       Simulation Size (number of instructions at -O0)
-----       -----------------------------------------------
ref (110)   181.4 billion
ref (470)   198.9 billion
train        17.5 billion
test         16.0 billion
lgred         7.7 billion
mdred       Not Available
smred       Not Available


Benchmark Author: Charles Roberson & Max Domeika

Benchmark Description:
The Adaptive Resonance Theory 2 (ART 2) neural network is used to recognize
objects in a thermal image.  The objects are a helicopter and an airplane.
The neural network is first trained on the objects.  After training is
complete, the network searches for the learned images in the scanfield
image.  A window matching the size of the learned objects is scanned across
the scanfield image and serves as input to the neural network, which
attempts to match the windowed image against one of the images it has
learned.

Input Description: The training files consist of a thermal image of a
helicopter and an airplane.  The scanfile is a field of view containing other
thermal views of the helicopter and airplane.

Output Description: The output data consists of the confidence of a match
between the learned image and the windowed field of view.  In addition, each
F2 neuron's output is printed.  After the entire field of view has been
scanned, the window position with the highest match confidence is output.


