We discuss the following measurable characteristics of intelligent
behavior in computing systems: (1) speed and
scope of adaptability to unforeseen situations, including
recognition, assessment, proposals, selection, and execution;
(2) rate of effective learning of observations, behavior
patterns, facts, tools, methods, etc., which requires identification,
encapsulation, and recall; (3) accurate modeling
and prediction of the relevant external environment, which
includes the ability to make more effective abstractions;
(4) speed and clarity of problem identification and formulation;
(5) effective association and evaluation of disparate
information; (6) identification of more important assumptions
and prerequisites; (7) use of symbolic language, including
the range and use of analogies and metaphors (this
is about identification of similarities), and the invention of
symbolic language, which includes creating effective notations.
We make no claim that these are all the important
characteristics; discovering others is the point of our research.
Key Phrases: Intelligent Autonomous Systems, Measuring
Intelligence, Constructed Complex Systems, Reflective Infrastructure,
Problem Posing Interpretation of Behavior
This paper will describe some characteristics of intelligent
computing systems, and describe how to make those measurements
and what they might mean, though we know that
they do not cover the full spectrum of what is commonly
considered to be intelligent behavior.
Intelligence is difficult to measure, because it is thought to
be an intrinsic property of systems, like a potential capability
or competence, whereas the only things that can be
measured are actual performances under various kinds of
conditions. This problem has plagued evaluators of human
intelligence since the beginning: one can only measure some
postulated corresponding performance characteristics.
Therefore, metrics can only be based on observed system
behavior (though the observations can, of course, measure
internal processes from an internal perspective), since
we have no direct access to how internal organization and
structure affect intelligence. Even if we assume that intelligence
is entirely intrinsic, we cannot evaluate it separately
from its corresponding behavior (even if the behavior is
only observable introspectively).
2 Computing System Behaviors
Computer programs that play combinatorial games or
search the web are not very interesting to us from an intelligent
systems point of view, because their domain is so
limited and their goals are provided from the outside. Even
so, we're interested in computer programs as creative entities
(co-investigators, so to speak, instead of just tools),
and we think that a careful study of what we can make programs
do will be helpful in understanding what the issues
are [1]. In order to study these possibilities, we want to
define a set of measurements that can be used to differentiate
and understand the relationships among different kinds
of behavioral characteristics.
We'll start with the assumption that a computing system is
designed to help its users do something. That something
is a problem in some subject area, generally called an application
domain, which provides a certain context of use
and corresponding terminology, and a domain-specific language,
which includes more than just vocabulary terms. It
also has a set of abbreviations and conventions about what
can remain implicit, and a set of simplifications (which are
fruitful lies about the entities and behaviors in the domain).
What the user wants to do is called the problem, which
only makes sense within the context of interpretation provided
by the domain-specific language of the application
domain. These languages are used to define the problem
context or problem space, which is a specialized context
within the application domain, in which it makes sense to
state a problem.
0-7803-6583-6/00/$10.00 © 2000 IEEE
So now we have a well-specified problem defined in a
problem context. We are purposely setting aside creativity
for now. Explicitly identifying the problem, and separating
it from the possible solutions or required user actions, is an
important aspect of our approach. It allows many different
possible solution methods to be considered. Since no one
analysis or problem-solving method can deal with all problems
in a complex domain, it is important to have many methods available.
3 Characteristics of Intelligence
In this section, we discuss the following measurable characteristics
of intelligent systems (it can be seen that there
are non-trivial overlaps among them, which we try to unravel):
1. adaptability,
2. learning,
3. predictive modeling,
4. problem identification,
5. association and evaluation,
6. assumptions, and
7. use of symbols.
In each case, we offer an approach to at least one way to
compute a measurement value for the characteristic, which
we hope will stimulate others to invent and provide better ones.
We make no claim that these are all the important characteristics;
discovering others is the point of our research.
3.1 Adaptability
By far the most commonly expressed attribute of intelligence
is adaptability, which for us means the speed and
scope of adaptability to unforeseen situations, including
recognition (of the unforeseen situation), assessment, proposals
(for reacting to it), selection (of an activity), and execution.
Accurate prediction of effects is even better (and
more successful), but we save that one for a later section.
A common example of adaptability is flexible planning, in
which a system can react quickly to situations by changing
its plans. It seems clear that flexibility in plans is partly the
result of their incompleteness: if the detailed goals remain
partly unspecified, then there are more possible steps to
take. This phenomenon shows up in programming as 'late
binding', in which a resource used to address a problem is
often selected just before use (as in our Wrapping approach
to heterogeneous system integration in Constructed Complex
Systems [13]). The delaying of these decisions does,
of course, conflict with rapid execution, and the resulting
tradeoff is important and depends essentially on rapid elaboration
and evaluations of the choices.
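The late-binding idea above can be sketched in a few lines. This is an illustrative toy, not the Wrapping approach itself; the registry, its predicate fields, and the example resources are all invented names.

```python
# Hypothetical sketch of 'late binding': the resource used to address a
# problem is selected just before use, from whatever is currently
# registered, rather than being fixed when the plan was formed.
registry = []  # resources known to the system, possibly added at run time

def register(name, applies_to, run):
    registry.append({"name": name, "applies_to": applies_to, "run": run})

def dispatch(problem):
    # Selection happens here, at the last moment, so resources added
    # after the plan was made can still be chosen.
    candidates = [r for r in registry if r["applies_to"](problem)]
    if not candidates:
        raise LookupError("no resource applies to %r" % (problem,))
    return candidates[0]["run"](problem)

register("doubler", lambda p: p["kind"] == "number", lambda p: p["value"] * 2)
register("shouter", lambda p: p["kind"] == "text", lambda p: p["value"].upper())

print(dispatch({"kind": "number", "value": 21}))  # 42
print(dispatch({"kind": "text", "value": "ok"}))  # OK
```

Note the tradeoff mentioned above: the candidate scan in `dispatch` is work done at execution time, which is the price paid for the flexibility.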
To measure adaptability of a system, we have to present
it with different kinds of variability in its environment,
and measure its performance, then average that performance
over some variability measurement of the environment.
The variability in the environment can be static
(many different kinds of slowly changing environments) or
dynamic (rapidly changing phenomena within the environment),
and in both cases, we can describe the degradation
in performance as a function of the variability in the environment.
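The measurement procedure just described can be sketched as follows. This is a hedged toy: `run_system` stands in for the system under test, and the variability scale and noise model are arbitrary assumptions.

```python
# Sketch: measure adaptability as mean task performance at each level of
# environmental variability, yielding a degradation curve.
import random

def run_system(variability, rng):
    # Stand-in for the system under test: performance degrades with
    # environmental variability, plus measurement noise.
    return max(0.0, 1.0 - 0.6 * variability + rng.uniform(-0.05, 0.05))

def adaptability_profile(levels, trials=50, seed=0):
    rng = random.Random(seed)
    profile = {}
    for v in levels:
        scores = [run_system(v, rng) for _ in range(trials)]
        profile[v] = sum(scores) / trials  # mean performance at this level
    return profile

profile = adaptability_profile([0.0, 0.5, 1.0])
# Degradation curve: performance falls as variability rises.
assert profile[0.0] > profile[0.5] > profile[1.0]
```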
3.2 Learning
Another common attribute of intelligence is learning,
which for us is the rate of effective learning of observations,
behavior patterns, facts, tools, methods, etc. [17].
There is an enormous literature on learning in humans and
animals, but our interest here is mainly on the measurements
for computing systems that can learn. Learning is
about improving performance, so in a sense all of our proposed
measurements can be improved by learning. Part of
this learning includes concept formation and formulation,
which is a way to summarize different structures and processes
compactly. We return to this point later on, in the
section on symbol systems.
It is important to note here that there are some fundamental
limitations on the kinds of symbol systems that can be used
in the expressive tasks above. One of the limitations of any
discrete symbol system is the 'get stuck' theorems,
which show that unless a system can change its own
basic symbols, and re-express its knowledge and behavior
in new symbols, new knowledge gradually becomes harder
and harder to incorporate, leading to a kind of stagnation.
Measuring learning is a little easier than measuring adaptability.
We have long made a distinction between a smart
system, which has a lot of knowledge about its domain of
applicability, and an intelligent system, which can learn
new knowledge quickly about its domain of applicability.
Smartness is a performance characteristic that is relatively
easy to measure, and the ability to learn, which is about
improving that performance, is easy but time-consuming to measure.
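The smart/intelligent distinction can be made concrete with a minimal sketch: smartness as performance at a point in time, learning rate as improvement per unit of experience. The performance logs below are invented examples.

```python
# Sketch: separate smartness (current performance level) from learning
# (rate of performance improvement over experience).
def smartness(perf_log):
    # Current performance: the most recent measurement.
    return perf_log[-1]

def learning_rate(perf_log):
    # Average improvement per unit of experience (overall slope).
    if len(perf_log) < 2:
        return 0.0
    return (perf_log[-1] - perf_log[0]) / (len(perf_log) - 1)

system_a = [0.80, 0.81, 0.82, 0.82]  # smart: high, nearly flat
system_b = [0.30, 0.45, 0.60, 0.75]  # intelligent: lower, improving fast

assert smartness(system_a) > smartness(system_b)
assert learning_rate(system_b) > learning_rate(system_a)
```

Measuring the rate requires the whole log, not just the latest point, which is why learning is "easy but time-consuming" to measure.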
3.3 Predictive Modeling
An important way to be less surprised at environmental
phenomena is predictive modeling, which for us means accurate
modeling and prediction of the relevant external environment.
This kind of modeling includes the ability to
make more effective abstractions (which is treated below
in a later section). Since a system cannot know everything
about its environment, we assume that there will be multiple
models carried in parallel, with new data interpreted
into information using the model as an interpretive context,
and each model adjusted, assessed, and ranked for likelihood.
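Carrying multiple models in parallel and re-ranking them can be sketched as follows; the two model forms and the error-to-weight mapping are illustrative assumptions, not a prescribed method.

```python
# Sketch: each model predicts the next observation; its weight is
# penalized by its prediction error, and weights are renormalized,
# so the models are continually re-ranked for likelihood.
import math

models = {
    "constant": lambda history: history[-1],
    "trend": lambda history: 2 * history[-1] - history[-2],
}
weights = {name: 0.5 for name in models}

def update(history, observation):
    for name, model in models.items():
        error = abs(model(history) - observation)
        weights[name] *= math.exp(-error)  # penalize poor predictions
    total = sum(weights.values())
    for name in weights:
        weights[name] /= total

history = [1.0, 2.0]
for obs in [3.0, 4.0, 5.0]:  # a steadily rising signal
    update(history, obs)
    history.append(obs)

# On a rising signal, the trend model outranks the constant model.
assert weights["trend"] > weights["constant"]
```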
A concrete example of this kind of modeling is trying to
distinguish trends from fluctuations at different time scales
in a complex environment. In such an environment, activity
occurs at many time scales, so the only viable approach
is multiresolutional [21], that is, the system must maintain
several different filtering processes that examine the environment
at different resolutions (time, space, and even
conceptual), and look for local stationarity.
There are three kinds of models to be considered: empirical
models, which are computed according to the observed
data, a priori models, which are provided up front, and
fitted to the data (we think these are much less important
than the others), and deduced models, which are derived
from other models and knowledge available.
Measuring the modeling capability is not about comparing
the resulting models with the processes underlying the
environmental phenomena, but rather, it is about measuring
the correctness or appropriateness of the predictions.
Some predictions take the form 'this phenomenon is unimportant',
while some must be much more definite, such as
'the moving ball will be there at that time' or 'the closing
door will be open enough for a few seconds'. Once explicitly
formulated, these predictions can be compared, and the
results plotted against the complexity of the prediction task
(which we as evaluators must assess).
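Scoring explicitly formulated predictions and plotting the results against task complexity can be sketched as below; the prediction records and the integer complexity scale are invented examples, with complexity assessed by the evaluator as noted above.

```python
# Sketch: compare explicit predictions with outcomes, then group
# accuracy by evaluator-assigned complexity of the prediction task.
predictions = [
    {"claim": "phenomenon X is unimportant", "complexity": 1, "correct": True},
    {"claim": "ball at position P at time T", "complexity": 3, "correct": True},
    {"claim": "door open for a few seconds",  "complexity": 3, "correct": False},
    {"claim": "trend continues next hour",    "complexity": 2, "correct": True},
]

def accuracy_by_complexity(preds):
    buckets = {}
    for p in preds:
        buckets.setdefault(p["complexity"], []).append(p["correct"])
    return {c: sum(v) / len(v) for c, v in sorted(buckets.items())}

result = accuracy_by_complexity(predictions)
print(result)  # {1: 1.0, 2: 1.0, 3: 0.5}
```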
3.4 Problem Identification
The best way to respond to problems quickly is to identify
them quickly, which requires speed and clarity of problem
identification and formulation. In our opinion, speed of
problem solution is secondary. Even if we seem to specify
a problem as a constrained search, we seem to construct
search spaces that are very problem-specific, often
extremely intricate, constructed using the constraints directly
(i.e., not by searching a large encompassing space,
and ignoring the parts outside the constraints).
This problem identification problem is a special case of the
situation identification problem, in which acceptable performance
is often dependent on recognizing that a situation
is similar to one encountered before, and that, in turn,
depends on identifying the 'right' set of features of the situation
to explicitly notice and recall.
The ability to identify important situation features quickly
and correctly depends on having at hand the right specification
spaces to determine and describe the features.
Very often, the application domain and problem context
that allow a problem or even a situation to make sense must
be inferred from the observable environmental behavior.