PREFACE
At last a book that hopefully will take the mystery and drudgery out of the g–h,
α–β, g–h–k, α–β–γ, and Kalman filters and make them a joy. Many books
written in the past on this subject have been either geared to the tracking filter
specialist or difficult to read. This book covers these filters from very simple
physical and geometric approaches. Extensive, simple and useful design
equations, procedures, and curves are presented. These should permit the reader
to very quickly and simply design tracking filters and determine their
performance with even just a pocket calculator. Many examples are presented
to give the reader insight into the design and performance of these filters.
Extensive homework problems and their solutions are given. These problems
form an integral instructional part of the book through extensive numerical
design examples and through the derivation of very key results stated without
proof in the text, such as the derivation of the equations for the estimation of the
accuracies of the various filters [see Note (1) on page 388]. Covered also in
simple terms is the least-squares filtering problem and the orthonormal
transformation procedures for doing least-squares filtering.
The book is intended for those not familiar with tracking at all as well as for
those familiar with certain areas who could benefit from the physical insight
derived from learning how the various filters are related, and for those who are
specialists in one area of filtering but not familiar with other areas covered. For
example, the book covers in extremely simple physical and geometric terms the
Gram–Schmidt, Givens, and Householder orthonormal transformation procedures
for solving the filtering and least-squares estimation problems. How these
procedures reduce sensitivity to computer round-off errors is presented. A
simple explanation of both the classical and modified Gram–Schmidt procedures
is given. Why the latter is less sensitive to round-off errors is explained in
physical terms. For the first time the discrete-time orthogonal Legendre
polynomial (DOLP) procedure is related to the voltage-processing procedures.
Important real-world issues such as how to cope with clutter returns,
elimination of redundant target detections (observation-merging or clustering),
editing for inconsistent data, track-start and track-drop rules, and data
association (e.g., the nearest-neighbor approach and track before detection)
are covered in clear terms. The problem of tracking with the very commonly
used chirp waveform (a linear-frequency-modulated waveform) is explained
simply with useful design curves given. Also explained is the important
moving-target detector (MTD) technique for canceling clutter.
The Appendix gives a comparison of the Kalman filter (1960) with the
Swerling filter (1959). This Appendix is written by Peter Swerling. It is time for
him to receive due credit for his contribution to the ‘‘Kalman–Swerling’’ filter.
The book is intended for home study by the practicing engineer as well as for
use in a course on the subject. The author has successfully taught such a course
using the notes that led to this book. The book is also intended as a design
reference book on tracking and estimation due to its extensive design curves,
tables, and useful equations.
It is hoped that engineers, scientists, and mathematicians from a broad range
of disciplines will find the book very useful. In addition to covering and relating
the g–h, α–β, g–h–k, α–β–γ, and Kalman filters and the voltage-processing
methods for filtering and least-squares estimation, the use of the voltage-processing
methods for sidelobe canceling and adaptive-array processing is
explained and shown to be mathematically the same as the tracking and
estimation problems. The massively parallel systolic array sidelobe canceler
processor is explained in simple terms. Those engineers, scientists, and
mathematicians who come from a mathematical background should get a good
feel for how the least-squares estimation techniques apply to practical systems
like radars. Explained to them are matched filtering, chirp waveforms, methods
for dealing with clutter, the issue of data association, and the MTD clutter
rejection technique. Those with an understanding from the radar point of view
should find the explanation of the usually very mathematical Gram–Schmidt,
Givens, and Householder voltage-processing (also called square-root) techniques
very easy to understand. Introduced to them are the important concepts of
ill-conditioning and computational accuracy issues. The classical Gram–
Schmidt and modified Gram–Schmidt procedures are covered also, as well as
why one gives much more accurate results. Hopefully those engineers,
scientists, and mathematicians who like to read things for their beauty will
find it in the results and relationships given here. The book is primarily intended
to be light reading and to be enjoyed. It is a book for those who need or want to
learn about filtering and estimation but prefer not to plow through difficult
esoteric material and who would rather enjoy the experience. We could have
called it ‘‘The Joy of Filtering.’’
The first part of the text develops the g–h, g–h–k, α–β, α–β–γ, and
Kalman filters. Chapter 1 starts with a very easy heuristic development of g–h
filters for a simple constant-velocity target in ‘‘lineland’’ (one-dimensional
space, in contrast to the more complicated two-dimensional ‘‘flatland’’).
Section 1.2.5 gives the g–h filter, which minimizes the transient error resulting
from a step change in the target velocity. This is the well-known Benedict–
Bordner filter. Section 1.2.6 develops the g–h filter from a completely different,
common-sense, physical point of view, that of least-squares fitting a straight
line to a set of range measurements. This leads to the critically damped (also
called discounted least-squares and fading-memory) filter. Next, several
example designs are given. The author believes that the best way to learn a
subject is through examples, and so numerous examples are given in Section
1.2.7 and in the homework problems at the end of the book.
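To give a concrete taste of the filter Chapter 1 develops heuristically, the whole g–h update cycle fits in a few lines. The sketch below is the editor's illustration, not code from the book; the gain values g = 0.4, h = 0.1 and the scan time T are arbitrary assumptions chosen for the example.

```python
# A minimal g-h tracking filter for a constant-velocity target in "lineland".
# Each scan: predict the track ahead by T, then correct position and velocity
# by fractions g and h of the measurement residual (the innovation).

def gh_filter(measurements, x0, v0, g, h, T):
    """Run a g-h filter over a list of position measurements.
    Returns the filtered position estimates, one per measurement."""
    x, v = x0, v0
    estimates = []
    for z in measurements:
        # Predict one scan of duration T ahead.
        x_pred = x + v * T
        # Correct with the measurement residual.
        residual = z - x_pred
        x = x_pred + g * residual
        v = v + (h / T) * residual
        estimates.append(x)
    return estimates

# Noiseless target moving at 1 unit per scan; poor initial track (v0 = 0).
zs = [float(k) for k in range(10)]
est = gh_filter(zs, x0=0.0, v0=0.0, g=0.4, h=0.1, T=1.0)
```

Even with the deliberately wrong initial velocity, the estimates pull toward the true trajectory within a few scans, which is the transient behavior the Benedict–Bordner design of Section 1.2.5 is built to minimize.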
Section 1.2.9 gives the conditions (on g and h) for a g–h filter to be stable
(these conditions are derived in Problem 1.2.9-1). How to initiate tracking with
a g–h filter is covered in Section 1.2.10. A filter (the g–h–k filter) for tracking a
target having a constant acceleration is covered in Section 1.3. Coordinate
selection is covered in Section 1.5.
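The stability conditions of Section 1.2.9 amount to a one-line test on the gain pair. The region quoted below is the one commonly cited in the tracking literature (the book derives it in Problem 1.2.9-1); this helper and its name are the editor's illustration, not the book's code.

```python
def gh_filter_is_stable(g, h):
    """Check the commonly cited stability region for a g-h filter:
    g > 0, h > 0, and 4 - 2g - h > 0."""
    return g > 0 and h > 0 and (4 - 2 * g - h) > 0

# Typical design gains lie well inside the region; overly large gains do not.
print(gh_filter_is_stable(0.5, 0.2))   # a typical stable design point
print(gh_filter_is_stable(2.5, 0.2))   # g too large: unstable
```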
The Kalman filter is introduced in Chapter 2 and related to the Benedict–
Bordner filter, whose equations are derived from the Kalman filter in Problem
2.4-1. Reasons for using the Kalman filter are discussed in Section 2.2, while
Section 2.3 gives a physical feel for how the Kalman filter works in an optimum
way on the data to give us a best estimate. The Kalman filter is put in matrix
form in Section 2.4, not to impress, but because in this form the Kalman filter
applies way beyond lineland—to multidimensional space.
Section 2.6 gives a very simple derivation of the Kalman filter. It requires
differentiation of a matrix equation. But even if you have never done
differentiation of a matrix equation, you will be able to follow this derivation.
In fact, you will learn how to do matrix differentiation in the process! If
you had this derivation back in 1958 and told the world, the filter would bear
your name instead of Kalman's. You would have gotten the IEEE
Medal of Honor and $20,000 tax-free and the $340,000 Kyoto Prize,
equivalent to the Nobel Prize but also given to engineers. You would be world
famous.
In Section 2.9 the Singer g–h–k Kalman filter is explained and derived.
Extremely useful g–h–k filter design curves are presented in Section 2.10
together with an example in the text and many more in Problems 2.10-1 through
2.10-17. The issue of selecting the type of g–h filter is covered in
Section 2.11.
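To give a feel for the matrix form of Section 2.4, here is one predict-and-update cycle of the Kalman filter written out explicitly for the two-state (position, velocity), constant-velocity case, so no matrix library is needed. This is an editor's sketch under assumed noise values, not the book's code or notation.

```python
# One Kalman predict/update cycle for a constant-velocity model with a
# scalar position measurement. State x = [pos, vel]; covariance P is a
# 2x2 nested list; F = [[1, T], [0, 1]], H = [1, 0].

def kalman_step(x, P, z, T, q, r):
    """Return updated (x, P) after one scan; q, r are process and
    measurement noise variances (illustrative white-acceleration model)."""
    # Predict: x_pred = F x.
    xp = [x[0] + T * x[1], x[1]]
    # P_pred = F P F' + Q, expanded element by element.
    p00 = P[0][0] + T * (P[1][0] + P[0][1]) + T * T * P[1][1] + q * T**3 / 3
    p01 = P[0][1] + T * P[1][1] + q * T**2 / 2
    p10 = P[1][0] + T * P[1][1] + q * T**2 / 2
    p11 = P[1][1] + q * T
    # Update: innovation variance, Kalman gains, corrected state.
    s = p00 + r
    k0, k1 = p00 / s, p10 / s
    resid = z - xp[0]
    x_new = [xp[0] + k0 * resid, xp[1] + k1 * resid]
    # P_new = (I - K H) P_pred.
    P_new = [[(1 - k0) * p00, (1 - k0) * p01],
             [p10 - k1 * p00, p11 - k1 * p01]]
    return x_new, P_new

# Track a noiseless target at 1 unit/scan, starting from a vague prior.
x, P = [0.0, 0.0], [[100.0, 0.0], [0.0, 100.0]]
for k in range(20):
    x, P = kalman_step(x, P, z=float(k + 1), T=1.0, q=0.01, r=1.0)
```

Note how the gains are not fixed design constants as in the g–h filter: they are recomputed each scan from the covariance, which is the sense in which the Kalman filter works the data optimally.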
Chapter 3 covers the real-world problem of tracking in clutter. The use of the
track-before-detect retrospective detector is described (Section 3.1.1). Also
covered is the important MTD clutter suppression technique (Section 3.1.2.1).
Issues of eliminating redundant detections by observation merging or clustering
are covered (Section 3.1.2.2) as well as techniques for editing out inconsistent
data (Section 3.1.3), combining clutter suppression with track initiation
(Section 3.1.4), track-start and track-drop rules (Section 3.2), data association
(Section 3.3), and track-while-scan systems (Section 3.4).
In Section 3.5 a tutorial is given on matched filtering and the very commonly
used chirp waveform. This is followed by a discussion of the range bias error
problem associated with using this waveform and how this bias can be used to
advantage by choosing a chirp waveform that predicts the future—a fortune-telling
radar.
The second part of the book covers least-squares filtering, its power and
voltage-processing approaches. Also, the solution of the least-squares filtering
problem via the use of the DOLP technique is covered and related to voltage-processing
approaches. Another simple derivation of the Kalman filter is
presented and additional properties of the Kalman filter given. Finally, how to
handle nonlinear measurement equations and nonlinear equations of motion are
discussed (the extended Kalman filter).
Chapter 4 starts with a simple formulation of the least-squares estimation
problem and gives its power method solution, which is derived both by simple
differentiation (Section 4.1) and by simple geometry considerations (Section
4.2). This is followed by a very simple explanation of the Gram–Schmidt
voltage-processing (square-root) method for solving the least-squares problem
(Section 4.3). The voltage-processing approach has the advantage of being
much less sensitive to computer round-off errors, with about half as many bits
being required to achieve the same accuracy. The voltage-processing approach
has the advantage of not requiring a matrix inverse, as does the power method.
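The contrast between the two routes can be sketched concretely: the power method forms and inverts the normal equations, whereas the Gram–Schmidt route of Section 4.3 orthogonalizes the columns of the measurement matrix and back-substitutes, with no matrix inverse. The modified Gram–Schmidt solver below is the editor's illustration of the idea, not the book's code; the straight-line-fit example data are assumptions.

```python
# Least-squares solution via modified Gram-Schmidt QR (voltage processing).
# A is a list of rows; its columns are orthonormalized in place, then
# R x = Q'b is solved by back-substitution - no matrix inversion.

def mgs_lstsq(A, b):
    """Return x minimizing ||A x - b|| via modified Gram-Schmidt."""
    m, n = len(A), len(A[0])
    Q = [[A[i][j] for i in range(m)] for j in range(n)]  # Q[j] = column j
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for i in range(j):
            # Modified GS: project against the *current* partially
            # orthogonalized column, which reduces round-off sensitivity.
            R[i][j] = sum(Q[i][k] * Q[j][k] for k in range(m))
            Q[j] = [Q[j][k] - R[i][j] * Q[i][k] for k in range(m)]
        R[j][j] = sum(v * v for v in Q[j]) ** 0.5
        Q[j] = [v / R[j][j] for v in Q[j]]
    # Back-substitution on the upper-triangular system R x = Q'b.
    qtb = [sum(Q[j][k] * b[k] for k in range(m)) for j in range(n)]
    x = [0.0] * n
    for j in range(n - 1, -1, -1):
        x[j] = (qtb[j] - sum(R[j][i] * x[i] for i in range(j + 1, n))) / R[j][j]
    return x

# Fit range = r0 + v*t to noiseless measurements; exact answer r0 = 2, v = 3.
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
A = [[1.0, t] for t in ts]
b = [2.0 + 3.0 * t for t in ts]
r0, v = mgs_lstsq(A, b)
```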
In Section 4.4, it is shown that the mathematics for the solution of the
tracking least-squares problem is identical to that for the radar and
communications sidelobe canceling and adaptive nulling problems. Furthermore,
it is shown how the Gram–Schmidt voltage-processing approach can be
used for the sidelobe canceling and adaptive nulling problem.
Often the accuracy of the measurements of a tracker varies from one time to
another. For this case, in fitting a trajectory to the measurements, one would like
to make the trajectory fit closer to the accurate data. The minimum-variance
least-squares estimate procedure presented in Section 4.5 does this. The more
accurate the measurement, the closer the curve fit is to the measurement.
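A tiny worked example shows the effect. In the sketch below (editor's illustration, made-up numbers), each measurement is weighted by the inverse of its variance, so a wildly inaccurate point barely perturbs the fitted line.

```python
# Minimum-variance (weighted) least-squares straight-line fit: minimize
# sum((z - a - b*t)^2 / var) over intercept a and slope b, in closed form.

def weighted_line_fit(ts, zs, variances):
    """Return (intercept, slope) of the minimum-variance straight-line fit."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    st = sum(wi * t for wi, t in zip(w, ts))
    sz = sum(wi * z for wi, z in zip(w, zs))
    stt = sum(wi * t * t for wi, t in zip(w, ts))
    stz = sum(wi * t * z for wi, t, z in zip(w, ts, zs))
    det = sw * stt - st * st
    a = (stt * sz - st * stz) / det
    b = (sw * stz - st * sz) / det
    return a, b

# Three accurate points on the line 1 + 2t, plus one wild but low-weight point.
ts = [0.0, 1.0, 2.0, 3.0]
zs = [1.0, 3.0, 5.0, 100.0]
variances = [0.1, 0.1, 0.1, 1000.0]
a, b = weighted_line_fit(ts, zs, variances)
```

The fit stays close to a = 1, b = 2: the trajectory hugs the accurate data, exactly as Section 4.5 describes.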
The fixed-memory polynomial filter is covered in Chapter 5. In Section 5.3
the DOLP approach is applied to the tracking and least-squares problem for
the important cases where the target trajectory or data points (of which there
are a fixed number L + 1) are approximated by a polynomial fit of some
degree m. This method also has the advantage of not requiring a matrix
inversion (as does the power method of Section 4.1). Also, its solution is
much less sensitive to computer round-off errors, half as many bits being
required by the computer.
The convenient and useful representation of the polynomial fit of degree m in
terms of the target equation motion derivatives (first m derivatives) is given in
Section 5.4. A useful general solution to the DOLP least-squares estimate for a
polynomial fit that is easily solved on a computer is given in Section 5.5.
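The principle behind the DOLP approach can be illustrated briefly: build polynomials that are orthogonal over the L + 1 sample times, and each fit coefficient then falls out of a simple projection, with no matrix inversion. The sketch below constructs such a basis numerically by Gram–Schmidt; it illustrates the idea only, whereas the book gives the closed-form discrete Legendre polynomials.

```python
# Degree-m least-squares polynomial fit via a discretely orthogonalized
# basis: orthonormalize the monomials 1, t, t^2, ... over the sample times,
# then sum the projections of the data onto each basis polynomial.

def dolp_fit(ts, zs, degree):
    """Return the least-squares polynomial fit evaluated at the sample times."""
    m = len(ts)
    basis = []
    for d in range(degree + 1):
        p = [t ** d for t in ts]                 # start from the monomial t^d
        for q in basis:                          # orthogonalize over the samples
            c = sum(pi * qi for pi, qi in zip(p, q))
            p = [pi - c * qi for pi, qi in zip(p, q)]
        norm = sum(v * v for v in p) ** 0.5
        basis.append([v / norm for v in p])
    fit = [0.0] * m
    for q in basis:
        c = sum(z * qi for z, qi in zip(zs, q))  # projection coefficient
        fit = [f + c * qi for f, qi in zip(fit, q)]
    return fit

# Exactly quadratic data is reproduced exactly by a degree-2 fit.
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
zs = [t * t for t in ts]
fit = dolp_fit(ts, zs, degree=2)
```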
Sections 5.6 through 5.10 present the variance and bias errors for the least-squares
solution and discuss how to balance these errors. The important