Detection in Random Fields



Hupkens, Erik Peter (1997) Detection in Random Fields. Thesis.

Abstract: In general, a detection problem arises when changes may appear in a certain
process. These changes may be errors in a dynamic process, objects in images,
scratches on surfaces, or anything that is not supposed to be present.
This thesis is concerned with a generalization from one-parameter detection
theory to multi-parameter detection theory. The difference between these
problems is not just an increase in dimension of the independent parameter.
Consider for example a process that is defined as a function of time. At a
certain moment in time, a change in the process may appear, and the objective
of the problem is to detect this change as soon as possible after it has
occurred. On the other hand, if the parameter is given by the position on
a surface, so that it is two-dimensional, we cannot speak of a change time.
The way the change appears in the measurements depends on the observation
mechanism that is used to obtain these measurements. So, instead of a
one-dimensional time-scale that is perfectly ordered, a two-dimensional grid
or multi-dimensional index set without any natural ordering has to be used.
It is assumed that a model exists that describes the process under normal
circumstances, i.e., when no change is present. Furthermore, models are assumed
to exist that describe the process when a change is present. Based on
these models, a statistical test may be created in order to detect these changes.
Two classes of statistical tests exist: non-sequential tests and sequential
tests. A non-sequential test uses all available data to make a decision about
the presence of a change. The performance of such a test is measured by the
error probabilities. The disadvantage of these tests is that all data has to be
used, which may be expensive and not always necessary.
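As a toy instance of a non-sequential test, consider a mean shift in white Gaussian noise (an illustrative model with assumed parameters, not the thesis's random-field setting): all samples are used at once, and the total log-likelihood ratio is compared against a threshold that trades the two error probabilities against each other.

```python
def llr_gaussian_shift(z, mu=1.0, sigma=1.0):
    """Per-sample log-likelihood ratio for a hypothesised mean shift mu
    in Gaussian noise with standard deviation sigma (assumed toy model)."""
    return (mu * z - 0.5 * mu * mu) / (sigma * sigma)

def non_sequential_test(samples, mu=1.0, sigma=1.0, threshold=0.0):
    """Non-sequential test: decide on the whole data set at once.
    Raising the threshold lowers the false-alarm probability at the
    price of a higher missed-detection probability."""
    return sum(llr_gaussian_shift(z, mu, sigma) for z in samples) > threshold

# Samples scattered around the shifted mean trigger a detection;
# samples scattered around zero do not.
print(non_sequential_test([1.0, 1.2, 0.9]))   # True
print(non_sequential_test([0.0, -0.1, 0.2]))  # False
```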
A sequential test performs a sequence of tests on subsets of the available
data. At each stage, a decision is made whether to continue with the next
subset or to stop and make a final decision on the presence of a change. An
additional performance measure for these tests is given by the number of data
points that are required to make a decision.
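The classical one-parameter example of such a sequential test is Wald's sequential probability ratio test. A minimal sketch, assuming the per-observation log-likelihood ratios are already available and using Wald's threshold approximations for given error bounds alpha and beta:

```python
import math

def sprt(llr_stream, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test: accumulate per-observation
    log-likelihood ratios and stop as soon as the sum crosses one of two
    thresholds derived from the target error probabilities."""
    a = math.log(beta / (1 - alpha))     # lower threshold: decide "no change"
    b = math.log((1 - beta) / alpha)     # upper threshold: decide "change"
    s, n = 0.0, 0
    for llr in llr_stream:
        n += 1
        s += llr
        if s >= b:
            return "change", n
        if s <= a:
            return "no change", n
    return "undecided", n                # data exhausted before a decision

# A steady positive drift in the log-likelihood ratio is detected after
# only a handful of observations.
print(sprt([0.5] * 20))   # ('change', 6)
```

Note how the number of observations actually used is itself an output of the test, which is exactly the extra performance measure mentioned above.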
Chapter 3 deals with the detection of global changes in autoregressive
fields. An autoregressive field is characterized by a statistical dependence
between neighbouring sites on the grid. Because these random fields are
strongly non-causal, sequential tests cannot be evaluated easily. Hence, only
non-sequential tests are examined in this case. The resulting detection problem
is rewritten as a standard one-parameter detection problem. If the parameter
becomes three-dimensional, where the third parameter may represent something
like time or depth, and there is independence in this third dimension, it is
possible to use a sequential test. Again, the resulting detection problem is
rewritten as a sequential one-parameter detection problem.
The changes that may appear in the random fields of Chapter 3 are global
in the sense that they cover the entire grid. In practice, this will only
rarely be the case. Therefore, in Chapter 4 the emphasis lies on the detection
of local changes: only small parts of the grid are covered by the change. The
random fields in this chapter are assumed to be semi-causal: there exists a
dependence between the sites on the grid, but it is possible to define a
causal ordering on the grid. The random fields are described by a stochastic
dynamic system, and the changes are detected by using a bank of Kalman
filters. Although the tests that are used are sequential, the emphasis does
not lie on quickest detection.
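The bank-of-filters idea can be sketched in a minimal scalar form: one Kalman filter per hypothesised change, each scored by the log-likelihood of its innovations. The random-walk state model, noise variances, and hypothesised biases below are illustrative assumptions, not the thesis's semi-causal field model.

```python
import math
import random

def kalman_step(x, P, z, u=0.0, Q=0.01, R=0.1):
    """One predict/update cycle of a scalar Kalman filter (random-walk
    state, direct observation); u is the hypothesised additive change.
    Returns the updated state, variance, and innovation log-likelihood."""
    x_pred = x + u                 # predict under the hypothesised change
    P_pred = P + Q
    nu = z - x_pred                # innovation
    S = P_pred + R                 # innovation variance
    K = P_pred / S                 # Kalman gain
    x_new = x_pred + K * nu
    P_new = (1.0 - K) * P_pred
    ll = -0.5 * (math.log(2.0 * math.pi * S) + nu * nu / S)
    return x_new, P_new, ll

def filter_bank(measurements, biases):
    """Run one Kalman filter per hypothesised change and accumulate the
    innovation log-likelihoods; the best-explaining hypothesis wins."""
    states = {b: (0.0, 1.0, 0.0) for b in biases}   # (x, P, total ll)
    for z in measurements:
        for b in biases:
            x, P, total = states[b]
            x, P, ll = kalman_step(x, P, z, u=b)
            states[b] = (x, P, total + ll)
    return max(biases, key=lambda b: states[b][2])

# Data generated with a true drift of 0.5 per step is attributed to the
# matching hypothesis in the bank.
random.seed(1)
x, zs = 0.0, []
for _ in range(50):
    x += 0.5 + random.gauss(0.0, 0.1)
    zs.append(x + random.gauss(0.0, 0.3))
print(filter_bank(zs, [0.0, 0.5, 1.0]))   # 0.5
```

A mismatched filter accumulates biased innovations, so its likelihood falls quickly behind the matched one; this separation is what makes the bank usable as a detector.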
Finally, in Chapter 5 the quickest detection problem is addressed. So
far, the observation mechanism has been fixed. In practice, it may be
advantageous to allow some freedom in the observation mechanism. For example, if
previous observations indicate the presence of a change in a certain region, it
may be best to continue measuring in that region. The concept of a stopping
strategy is introduced to denote the combination of an observation mechanism
or sample path and a statistical test.
All random fields are assumed to be independent. The problem is approached
from both the Neyman-Pearson point of view and the Bayesian point
of view. From the Neyman-Pearson point of view, the quickest detection problem
is defined as the minimization of the expected number of observations
that are needed to make a decision, when the error probabilities are bounded
by given constants. The stopping strategy that minimizes this average site
number for all possible changes is said to be uniformly most efficient. Unfortunately, such a stopping strategy generally does not exist. A simplification is given by the use of so-called myopic strategies that only optimize the immediate result.
Using the Bayesian approach, a cost function, defined by a linear combination
of the error probabilities and the average site numbers, has to be
minimized. The weights of the different terms are determined by the prior
probabilities of the changes and the relative cost of the errors with respect
to the cost of making an observation. This problem does have a solution, but
the optimal stopping strategy is only implicitly defined. Approximations to the
optimal strategy are given by n-step look-ahead procedures, which only consider
the next n observations in the cost function.
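To make the trade-off concrete, here is a one-step (myopic) look-ahead rule for an assumed toy model: a single Bernoulli observation channel with parameters p1 and p0 under change and no change, an error cost c_err, and a per-observation cost c_obs. None of these numbers come from the thesis; they only illustrate how the linear cost combination drives the stopping decision.

```python
def stop_cost(pi, c_err=10.0):
    """Expected cost of stopping now: declare the likelier hypothesis,
    paying c_err weighted by the posterior of the other one."""
    return c_err * min(pi, 1.0 - pi)

def posterior_update(pi, z, p1=0.7, p0=0.3):
    """Bayes update of the change posterior pi after observing z in {0, 1}."""
    l1 = p1 if z == 1 else 1.0 - p1
    l0 = p0 if z == 1 else 1.0 - p0
    return pi * l1 / (pi * l1 + (1.0 - pi) * l0)

def myopic_should_continue(pi, p1=0.7, p0=0.3, c_err=10.0, c_obs=1.0):
    """1-step look-ahead: sample again only if the observation cost plus
    the expected stopping cost afterwards beats stopping immediately."""
    pz1 = pi * p1 + (1.0 - pi) * p0    # predictive probability of z = 1
    expected_after = (
        pz1 * stop_cost(posterior_update(pi, 1, p1, p0), c_err)
        + (1.0 - pz1) * stop_cost(posterior_update(pi, 0, p1, p0), c_err)
    )
    return c_obs + expected_after < stop_cost(pi, c_err)

# With maximal uncertainty another sample is worthwhile; once the
# posterior is nearly certain, the observation cost outweighs the gain.
print(myopic_should_continue(0.5))    # True
print(myopic_should_continue(0.95))   # False
```

An n-step procedure replaces the single expectation above with a short backward recursion over the next n observations, approaching the implicitly defined optimal strategy as n grows.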
Item Type:Thesis
Faculty:
Electrical Engineering, Mathematics and Computer Science (EEMCS)
Link to this item:http://purl.utwente.nl/publications/29618

Metis ID: 140252