[Resource] [EBOOK] Tracking and Kalman Filtering Made Easy

Posted on 2006-01-12 22:15:00
[File name]: 06112@52RD_[EBOOK]Tracking and Kalman Filtering Made Easy1.rar
[Format]: rar
[Size]: 1457K
[Description]: [EBOOK] Tracking and Kalman Filtering Made Easy
PART I TRACKING, PREDICTION, AND SMOOTHING BASICS
1 g–h and g–h–k Filters 3
1.1 Why Tracking and Prediction are Needed in a Radar 3
1.2 g–h Filters 14
1.2.1 Simple Heuristic Derivation of g–h Tracking and Prediction Equations 14
1.2.2 α–β Filter 23
1.2.3 Other Special Types of Filters 23
1.2.4 Important Properties of g–h Tracking Filters 24
1.2.4.1 Steady-State Performance for Constant-Velocity Target 24
1.2.4.2 For What Conditions is the Constant-Velocity Assumption Reasonable 24
1.2.4.3 Steady-State Response for Target with Constant Acceleration 25
1.2.4.4 Tracking Errors due to Random Range Measurement Error 27
1.2.4.5 Balancing of Lag Error and rms Prediction Error 27
1.2.5 Minimization of Transient Error (Benedict–Bordner Filter) 29
1.2.6 New Approach to Finding Optimum g–h Tracking Filters (The Critically Damped g–h Filter) 32
1.2.7 g–h Filter Examples 40
1.2.8 Circuit Diagram of General g–h Filter 46
1.2.9 Stability of Constant g–h Filter 47
1.2.10 Track Initiation 47
1.3 g–h–k Filter 51
1.4 Tracking in Multidimensions 59
1.5 Selection of Coordinates for Tracking Filter 60
2 Kalman Filter 64
2.1 Two-State Kalman Filter 64
2.2 Reasons for Using the Kalman Filter 66
2.3 Properties of Kalman Filter 68
2.4 Kalman Filter in Matrix Notation 69
2.5 Derivation of Minimum-Variance Equation 77
2.5.1 First Derivation 77
2.5.2 Second Derivation 79
2.6 Exact Derivation of r-Dimensional Kalman Filter 80
2.7 Table Lookup Approximation to the Kalman Filter 84
2.8 Asquith–Friedland Steady-State g–h Kalman Filter 84
2.9 Singer g–h–k Kalman Filter 88
2.10 Convenient Steady-State g–h–k Filter Design Curves 95
2.11 Selection of Tracking Filter 104
3 Practical Issues for Radar Tracking 111
3.1 Track Initiation and Clutter Rejection 111
3.1.1 Use of Track Initiation to Eliminate Clutter 111
3.1.2 Clutter Rejection and Observation-Merging Algorithms for Reducing Track Initiation Load 116
3.1.2.1 Moving-Target Detector 117
3.1.2.2 Observation Merging (Redundancy-Elimination, Clustering) 120
3.1.3 Editing for Inconsistencies 121
3.1.4 Combined Clutter Suppression and Track Initiation 121
3.2 Track-Start and Track-Drop Rules 127
3.3 Data Association 127
3.3.1 Nearest-Neighbor Approach 127
3.3.2 Other Association Approaches 129
3.4 Track-While-Scan System 130
3.5 Tracking with a Chirp Waveform 132
3.5.1 Chirp Waveform, Pulse Compression, Match Filtering of Chirp Waveform, and Pulse Compression 132
3.5.1.1 Matched Filter 137
3.5.1.2 Alternate Way to View Pulse Compression and Pulse Coding 137
3.5.1.3 Effect of Tracking with a Chirp Waveform 141
3.5.1.4 Range–Doppler Ambiguity Problem of Chirp Waveform 141
3.5.2 Effect of Using Chirp Waveform on Tracker Filtering Accuracy 146
PART II LEAST-SQUARES FILTERING, VOLTAGE PROCESSING, ADAPTIVE ARRAY PROCESSING, AND EXTENDED KALMAN FILTER
4 Least-Squares and Minimum-Variance Estimates for Linear Time-Invariant Systems 155
4.1 General Least-Squares Estimation Results 155
4.2 Geometric Derivation of Least-Squares Solution 167
4.3 Orthonormal Transformation and Voltage-Processing (Square-Root) Method for LSE 174
4.4 Adaptive Nulling, the Orthonormal Transformation, and the LSE 188
4.5 Minimum-Variance Estimate 200
5 Fixed-Memory Polynomial Filter 205
5.1 Introduction 205
5.2 Direct Approach (Using Nonorthogonal mth-Degree Polynomial Fit) 206
5.3 Discrete Orthogonal Legendre Polynomial Approach 208
5.4 Representation of Polynomial Fit in Terms of Its Derivatives (State Variable Representation of Polynomial Fit in Terms of Process Derivatives) 212
5.5 Representation of Least-Squares Estimate in Terms of Derivative State Vector 214
5.6 Variance of Least-Squares Polynomial Estimate 217
5.7 Simple Example 219
5.8 Dependence of Covariance on L, T, m, and h 219
5.9 Systematic Errors (Bias, Lag, or Dynamic Error) 225
5.10 Balancing Systematic and Random Estimation Errors 229
5.11 Trend Removal 230
6 Expanding-Memory (Growing-Memory) Polynomial Filters 233
6.1 Introduction 233
6.2 Extrapolation from Fixed-Memory Filter Results 234
6.3 Recursive Form 234
6.4 Stability 236
6.5 Track Initiation 236
6.6 Variance Reduction Factor 237
6.7 Systematic Errors 238
7 Fading-Memory (Discounted Least-Squares) Filter 239
7.1 Discounted Least-Squares Estimate 239
7.2 Orthogonal Laguerre Polynomial Approach 240
7.3 Stability 244
7.4 Variance Reduction Factors 244
7.5 Comparison with Fixed-Memory Polynomial Filter 245
7.6 Track Initiation 248
7.7 Systematic Errors 251
7.8 Balancing the Systematic and Random Prediction Error 251
8 General Form for Linear Time-Invariant System 252
8.1 Target Dynamics Described by Polynomial as a Function of Time 252
8.1.1 Introduction 252
8.1.2 Linear Constant-Coefficient Differential Equation 253
8.1.3 Constant-Coefficient Linear Differential Vector Equation for State Vector X(t) 254
8.1.4 Constant-Coefficient Linear Differential Vector Equation for Transition Matrix Φ(t) 256
8.2 More General Model Consisting of the Sum of the Product of Polynomials and Exponentials 258
9 General Recursive Minimum-Variance Growing-Memory Filter (Bayes and Kalman Filters without Target Process Noise) 260
9.1 Introduction 260
9.2 Bayes Filter 261
9.3 Kalman Filter (Without Process Noise) 262
9.4 Comparison of Bayes and Kalman Filters 262
9.5 Extension to Multiple Measurement Case 263
10 Voltage Least-Squares Algorithms Revisited 264
10.1 Computation Problems 264
10.2 Orthogonal Transformation of Least-Squares Estimate Error 267
10.2.1 Physical Interpretation of Orthogonal Transformation 271
10.2.2 Physical Interpretation of U 275
10.2.3 Reasons the Square-Root Procedure Provides Better Accuracy 278
10.2.4 When and Why Inaccuracies Occur 280
11 Givens Orthonormal Transformation 283
11.1 The Transformation 283
11.2 Example 295
11.3 Systolic Array Implementation 298
11.3.1 Systolic Array 298
11.3.2 CORDIC Algorithm 307
12 Householder Orthonormal Transformation 315
12.1 Comparison of Householder and Givens Transformations 315
12.2 First Householder Transformation 317
12.3 Second and Higher Order Householder Transformations 320
13 Gram–Schmidt Orthonormal Transformation 322
13.1 Classical Gram–Schmidt Orthonormal Transformation 322
13.2 Modified Gram–Schmidt Orthonormal Transformation 333
14 More on Voltage-Processing Techniques 339
14.1 Comparison of Different Voltage Least-Squares Algorithm Techniques 339
14.2 QR Decomposition 342
14.3 Sequential Square-Root (Recursive) Processing 343
14.4 Equivalence between Voltage-Processing Methods and Discrete Orthogonal Legendre Polynomial Approach 345
14.5 Square-Root Kalman Filters 353
15 Linear Time-Variant System 354
15.1 Introduction 354
15.2 Dynamic Model 355
15.3 Transition Matrix Differential Equation 355
16 Nonlinear Observation Scheme and Dynamic Model (Extended Kalman Filter) 357
16.1 Introduction 357
16.2 Nonlinear Observation Scheme 357
16.3 Nonlinear Dynamic Model 360
17 Bayes Algorithm with Iterative Differential Correction for Nonlinear Systems 367
17.1 Determination of Updated Estimates 367
17.2 Extension to Multiple Measurement Case 370
17.3 Historical Background 374
18 Kalman Filter Revisited 375
18.1 Introduction 375
18.2 Kalman Filter Target Dynamic Model 375
18.3 Kalman’s Original Results 376
Appendix Comparison of Swerling’s and Kalman’s Formulations of Swerling–Kalman Filters 383
Problems 388
Symbols and Acronyms 402
Solution to Selected Problems 419
References 456
Index 465
OP | Posted on 2006-01-12 22:16:00
PREFACE
At last a book that hopefully will take the mystery and drudgery out of the g–h,
α–β, g–h–k, α–β–γ, and Kalman filters and make them a joy. Many books
written in the past on this subject have been either geared to the tracking filter
specialist or difficult to read. This book covers these filters from very simple
physical and geometric approaches. Extensive, simple and useful design
equations, procedures, and curves are presented. These should permit the reader
to very quickly and simply design tracking filters and determine their
performance with even just a pocket calculator. Many examples are presented
to give the reader insight into the design and performance of these filters.
Extensive homework problems and their solutions are given. These problems
form an integral instructional part of the book through extensive numerical
design examples and through the derivation of key results stated without
proof in the text, such as the derivation of the equations for the estimation of the
accuracies of the various filters [see Note (1) on page 388]. Covered also in
simple terms is the least-squares filtering problem and the orthonormal
transformation procedures for doing least-squares filtering.
The book is intended for those not familiar with tracking at all as well as for
those familiar with certain areas who could benefit from the physical insight
derived from learning how the various filters are related, and for those who are
specialists in one area of filtering but not familiar with other areas covered. For
example, the book covers in extremely simple physical and geometric terms the
Gram–Schmidt, Givens, and Householder orthonormal transformation procedures
for doing the filtering and least-squares estimation problem. How these
procedures reduce sensitivity to computer round-off errors is presented. A
simple explanation of both the classical and modified Gram–Schmidt procedures
is given. Why the latter is less sensitive to round-off errors is explained in
physical terms. For the first time the discrete-time orthogonal Legendre
polynomial (DOLP) procedure is related to the voltage-processing procedures.
Important real-world issues such as how to cope with clutter returns,
elimination of redundant target detections (observation-merging or clustering),
editing for inconsistent data, track-start and track-drop rules, and data
association (e.g., the nearest-neighbor approach and track before detection)
are covered in clear terms. The problem of tracking with the very commonly
used chirp waveform (a linear-frequency-modulated waveform) is explained
simply with useful design curves given. Also explained is the important
moving-target detector (MTD) technique for canceling clutter.
The Appendix gives a comparison of the Kalman filter (1960) with the
Swerling filter (1959). This Appendix is written by Peter Swerling. It is time for
him to receive due credit for his contribution to the “Kalman–Swerling” filter.
The book is intended for home study by the practicing engineer as well as for
use in a course on the subject. The author has successfully taught such a course
using the notes that led to this book. The book is also intended as a design
reference book on tracking and estimation due to its extensive design curves,
tables, and useful equations.
It is hoped that engineers, scientists, and mathematicians from a broad range
of disciplines will find the book very useful. In addition to covering and relating
the g–h, α–β, g–h–k, α–β–γ, and Kalman filters, and the voltage-processing
methods for filtering and least-squares estimation, the use of the voltage-processing
methods for sidelobe canceling and adaptive-array processing is
explained and shown to be the same mathematically as the tracking and
estimation problems. The massively parallel systolic array sidelobe canceler
processor is explained in simple terms. Those engineers, scientists, and
mathematicians who come from a mathematical background should get a good
feel for how the least-squares estimation techniques apply to practical systems
like radars. Explained to them are matched filtering, chirp waveforms, methods
for dealing with clutter, the issue of data association, and the MTD clutter
rejection technique. Those with an understanding from the radar point of view
should find the explanation of the usually very mathematical Gram–Schmidt,
Givens, and Householder voltage-processing (also called square-root) techniques
very easy to understand. Introduced to them are the important concepts of
ill-conditioning and computational accuracy issues. The classical Gram–
Schmidt and modified Gram–Schmidt procedures are covered also, as well as
why one gives much more accurate results. Hopefully those engineers,
scientists, and mathematicians who like to read things for their beauty will
find it in the results and relationships given here. The book is primarily intended
to be light reading and to be enjoyed. It is a book for those who need or want to
learn about filtering and estimation but prefer not to plow through difficult
esoteric material and who would rather enjoy the experience. We could have
called it “The Joy of Filtering.”
The first part of the text develops the g–h, g–h–k, α–β, α–β–γ, and
Kalman filters. Chapter 1 starts with a very easy heuristic development of g–h
filters for a simple constant-velocity target in “lineland” (one-dimensional
space, in contrast to the more complicated two-dimensional “flatland”).
Section 1.2.5 gives the g–h filter, which minimizes the transient error resulting
from a step change in the target velocity. This is the well-known Benedict–
Bordner filter. Section 1.2.6 develops the g–h filter from a completely different,
common-sense, physical point of view, that of least-squares fitting a straight
line to a set of range measurements. This leads to the critically damped (also
called discounted least-squares and fading-memory) filter. Next, several
example designs are given. The author believes that the best way to learn a
subject is through examples, and so numerous examples are given in Section
1.2.7 and in the homework problems at the end of the book.
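To make the heuristic concrete, here is a minimal sketch of the constant-velocity g–h tracking loop in Python (illustrative function and variable names, not the book's; T is the time between scans, and g and h are the position and velocity gains):

    # Minimal g-h tracker sketch: predict ahead one scan, then correct
    # the prediction with gains g (position) and h (velocity).
    def gh_track(measurements, T, g, h, x0=0.0, v0=0.0):
        x_pred, v_pred = x0, v0                # one-step-ahead predictions
        history = []
        for y in measurements:
            resid = y - x_pred                 # innovation: measurement minus prediction
            x_filt = x_pred + g * resid        # updated (filtered) position
            v_filt = v_pred + (h / T) * resid  # updated velocity
            x_pred = x_filt + T * v_filt       # predict one scan ahead
            v_pred = v_filt
            history.append((x_filt, v_filt, x_pred))
        return history

Small gains smooth measurement noise at the price of lag; gains near unity follow the data closely but smooth little. That trade-off is exactly what Sections 1.2.4.4 and 1.2.4.5 quantify.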
Section 1.2.9 gives the conditions (on g and h) for a g–h filter to be stable
(these conditions are derived in problem 1.2.9-1). How to initiate tracking with
a g–h filter is covered in Section 1.2.10. A filter (the g–h–k filter) for tracking a
target having a constant acceleration is covered in Section 1.3. Coordinate
selection is covered in Section 1.5.
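For reference, the stability region derived there is the standard one for the constant g–h filter; in the usual notation,

\[ g > 0, \qquad h > 0, \qquad 4 - 2g - h > 0. \]

For example, g = 0.5, h = 0.3 lies inside the stable region, while g = 1.5, h = 1.5 does not.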
The Kalman filter is introduced in Chapter 2 and related to the Benedict–
Bordner filter, whose equations are derived from the Kalman filter in Problem
2.4-1. Reasons for using the Kalman filter are discussed in Section 2.2, while
Section 2.3 gives a physical feel for how the Kalman filter works in an optimum
way on the data to give us a best estimate. The Kalman filter is put in matrix
form in Section 2.4, not to impress, but because in this form the Kalman filter
applies way beyond lineland—to multidimensional space.
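As a preview of that matrix form, the generic Kalman recursion (in one common notation; the book's symbols may differ) is

\[ \hat{x}_{n,n} = \hat{x}_{n,n-1} + K_n \left( y_n - H \hat{x}_{n,n-1} \right), \qquad K_n = P_{n,n-1} H^T \left( H P_{n,n-1} H^T + R \right)^{-1}, \]
\[ P_{n,n} = \left( I - K_n H \right) P_{n,n-1}, \qquad \hat{x}_{n+1,n} = \Phi \hat{x}_{n,n}, \qquad P_{n+1,n} = \Phi P_{n,n} \Phi^T + Q, \]

where \( \Phi \) is the state transition matrix, \( H \) the observation matrix, and \( Q \) and \( R \) the process- and measurement-noise covariances. With a two-element state (position, velocity) and a steady-state gain vector (g, h/T), the two-state filter of Section 2.1 reappears.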
Section 2.6 gives a very simple derivation of the Kalman filter. It requires
differentiation of a matrix equation. But even if you have never done
differentiation of a matrix equation, you will be able to follow this derivation.
In fact, you will learn how to do matrix differentiation in the process! If
you had published this derivation back in 1958 and told the world, it would be
your name on the filter instead of Kalman’s. You would have gotten the IEEE
Medal of Honor and $20,000 tax-free and the $340,000 Kyoto Prize,
equivalent to the Nobel Prize but also given to engineers. You would be world
famous.
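The matrix differentiation in question amounts to little more than the vector analogues of elementary calculus; identities of the following kind (stated here for a symmetric matrix A) carry the derivation:

\[ \frac{\partial}{\partial x} \left( b^T x \right) = b, \qquad \frac{\partial}{\partial x} \left( x^T A x \right) = 2 A x, \]

the direct counterparts of d(bx)/dx = b and d(ax^2)/dx = 2ax.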
In Section 2.9 the Singer g–h–k Kalman filter is explained and derived.
Extremely useful g–h–k filter design curves are presented in Section 2.10
together with an example in the text and many more in Problems 2.10-1 through
2.10-17. The issue of selecting the type of g–h filter is covered in
Section 2.11.
Chapter 3 covers the real-world problem of tracking in clutter. The use of the
track-before-detect retrospective detector is described (Section 3.1.1). Also
covered is the important MTD clutter suppression technique (Section 3.1.2.1).
Issues of eliminating redundant detections by observation merging or clustering
are covered (Section 3.1.2.2) as well as techniques for editing out inconsistent
data (Section 3.1.3), combining clutter suppression with track initiation
(Section 3.1.4), track-start and track-drop rules (Section 3.2), data association
(Section 3.3), and track-while-scan systems (Section 3.4).
In Section 3.5 a tutorial is given on matched filtering and the very commonly
used chirp waveform. This is followed by a discussion of the range bias error
problem associated with using this waveform and how this bias can be used to
advantage by choosing a chirp waveform that predicts the future—a
fortune-telling radar.
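The bias in question is the range-Doppler coupling of a linear-FM pulse: after pulse compression a Doppler shift f_d is indistinguishable from a time shift, so for a chirp of bandwidth B swept over duration T the measured range is displaced by approximately

\[ \Delta R = \frac{c \, T \, f_d}{2B}, \]

and with the appropriate choice of chirp slope this displacement points toward where the target will be, hence the fortune-telling behavior.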
The second part of the book covers least-squares filtering, its power and
voltage-processing approaches. Also, the solution of the least-squares filtering
problem via the use of the DOLP technique is covered and related to
voltage-processing approaches. Another simple derivation of the Kalman filter is
presented, and additional properties of the Kalman filter are given. Finally, how to
handle nonlinear measurement equations and nonlinear equations of motion is
discussed (the extended Kalman filter).
Chapter 4 starts with a simple formulation of the least-squares estimation
problem and gives its power method solution, which is derived both by simple
differentiation (Section 4.1) and by simple geometry considerations (Section
4.2). This is followed by a very simple explanation of the Gram–Schmidt
voltage-processing (square-root) method for solving the least-squares problem
(Section 4.3). The voltage-processing approach has the advantage of being
much less sensitive to computer round-off errors, with about half as many bits
being required to achieve the same accuracy. It has the further advantage of
not requiring a matrix inverse, as the power method does.
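A concrete way to see the two routes side by side (an illustrative sketch, not code from the book) is to solve the same least-squares problem by the normal equations and by an orthonormal (QR) factorization:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 3))   # observation matrix
    y = rng.standard_normal(100)        # measurement vector

    # Power method: form and solve the normal equations
    # (this squares the conditioning of the problem).
    x_power = np.linalg.solve(A.T @ A, A.T @ y)

    # Voltage-processing (square-root) method: orthonormal transformation,
    # then back-substitution on the triangular factor; no normal equations formed.
    Q, R = np.linalg.qr(A)
    x_voltage = np.linalg.solve(R, Q.T @ y)

    assert np.allclose(x_power, x_voltage)

The "half as many bits" remark corresponds to the fact that the normal equations work with a matrix whose condition number is the square of the original's.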
In Section 4.4, it is shown that the mathematics for the solution of the
tracking least-squares problem is identical to that for the radar and
communications sidelobe canceling and adaptive nulling problems. Furthermore,
it is shown how the Gram–Schmidt voltage-processing approach can be
used for the sidelobe canceling and adaptive nulling problem.
Often the accuracy of the measurements of a tracker varies from one time to
another. For this case, in fitting a trajectory to the measurements, one would like
to make the trajectory fit closer to the accurate data. The minimum-variance
least-squares estimate procedure presented in Section 4.5 does this. The more
accurate the measurement, the closer the curve fit is to the measurement.
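In one common notation (the book's symbols may differ), with observation matrix M, measurement vector Y, and measurement-noise covariance R, the minimum-variance estimate weights each residual by the inverse of its variance:

\[ \hat{X} = \left( M^T R^{-1} M \right)^{-1} M^T R^{-1} Y, \]

which collapses to the ordinary least-squares solution of Section 4.1 when all measurements are equally accurate (R proportional to the identity).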
The fixed-memory polynomial filter is covered in Chapter 5. In Section 5.3
the DOLP approach is applied to the tracking and least-squares problem for
the important cases where the target trajectory or data points (of which there
are a fixed number L + 1) are approximated by a polynomial fit of some
degree m. This method also has the advantage of not requiring a matrix
inversion (as the power method of Section 4.1 does). Also, its solution is
much less sensitive to computer round-off errors, half as many bits being
required by the computer.
The convenient and useful representation of the polynomial fit of degree m in
terms of the target equation motion derivatives (first m derivatives) is given in
Section 5.4. A useful general solution to the DOLP least-squares estimate for a
polynomial fit that is easily solved on a computer is given in Section 5.5.
Sections 5.6 through 5.10 present the variance and bias errors for the
least-squares solution and discuss how to balance these errors. The important