Book Information
ADAPTIVE CONTROL PROCESSES: A GUIDED TOUR (PDF/EPUB/TXT/Kindle ebook, cloud-drive download)
![ADAPTIVE CONTROL PROCESSES:A GUIDED TOUR](https://www.shukui.net/cover/46/33958449.jpg)
- Author: RICHARD BELLMAN
- Publisher: PRINCETON UNIVERSITY PRESS
- ISBN:
- Publication year: 1961
- Listed page count: 255 pages
- File size: 10 MB
- File page count: 266 pages
- Subject terms:
PDF Download
Download Notes
ADAPTIVE CONTROL PROCESSES: A GUIDED TOUR (PDF ebook download)
The downloaded file is a RAR archive; extract it with decompression software to obtain the PDF. All resources on this site are packaged as BitTorrent seeds, so a BitTorrent client is required. We recommend Free Download Manager (FDM), which is free, ad-free, and available on multiple platforms; other BT clients such as BitComet, qBittorrent, and uTorrent also work. Thunder (Xunlei) is not recommended for now, since this title is not a popular resource on its network; once the resource becomes popular, Thunder can be used as well.
(The file page count should exceed the listed page count, except for multi-volume ebooks.)
Note: all archives on this site use the same extraction password: click to download an extraction tool.
Table of Contents
CHAPTER I FEEDBACK CONTROL AND THE CALCULUS OF VARIATIONS  13
1.1 Introduction  13
1.2 Mathematical description of a physical system  13
1.3 Parenthetical  15
1.4 Hereditary influences  16
1.5 Criteria of performance  17
1.6 Terminal control  18
1.7 Control process  18
1.8 Feedback control  19
1.9 An alternate concept  20
1.10 Feedback control as a variational problem  21
1.11 The scalar variational problem  22
1.12 Discussion  24
1.13 Relative minimum versus absolute minimum  24
1.14 Nonlinear differential equations  26
1.15 Two-point boundary value problems  27
1.16 An example of multiplicity of solution  29
1.17 Non-analytic criteria  30
1.18 Terminal control and implicit variational problems  31
1.19 Constraints  32
1.20 Linearity  34
1.21 Summing up  35
Bibliography and discussion  36
CHAPTER II DYNAMICAL SYSTEMS AND TRANSFORMATIONS  41
2.1 Introduction  41
2.2 Functions of initial values  41
2.3 The principle of causality  42
2.4 The basic functional equation  42
2.5 Continuous version  43
2.6 The functional equations satisfied by the elementary functions  43
2.7 The matrix exponential  44
2.8 Transformations and iteration  44
2.9 Carleman's linearization  45
2.10 Functional equations and maximum range  46
2.11 Vertical motion—I  46
2.12 Vertical motion—II  47
2.13 Maximum altitude  47
2.14 Maximum range  48
2.15 Multistage processes and differential equations  48
Bibliography and discussion  48
CHAPTER III MULTISTAGE DECISION PROCESSES AND DYNAMIC PROGRAMMING  51
3.1 Introduction  51
3.2 Multistage decision processes  51
3.3 Discrete deterministic multistage decision processes  52
3.4 Formulation as a conventional maximization problem  53
3.5 Markovian-type processes  54
3.6 Dynamic programming approach  55
3.7 A recurrence relation  56
3.8 The principle of optimality  56
3.9 Derivation of recurrence relation  57
3.10 "Terminal" control  57
3.11 Continuous deterministic processes  58
3.12 Discussion  59
Bibliography and comments  59
CHAPTER IV DYNAMIC PROGRAMMING AND THE CALCULUS OF VARIATIONS  61
4.1 Introduction  61
4.2 The calculus of variations as a multistage decision process  62
4.3 Geometric interpretation  62
4.4 Functional equations  63
4.5 Limiting partial differential equations  64
4.6 The Euler equations and characteristics  65
4.7 Discussion  66
4.8 Direct derivation of Euler equation  66
4.9 Discussion  67
4.10 Discrete processes  67
4.11 Functional equations  68
4.12 Minimum of maximum deviation  69
4.13 Constraints  70
4.14 Structure of optimal policy  70
4.15 Bang-bang control  72
4.16 Optimal trajectory  73
4.17 The brachistochrone  74
4.18 Numerical computation of solutions of differential equations  76
4.19 Sequential computation  77
4.20 An example  78
Bibliography and comments  79
CHAPTER V COMPUTATIONAL ASPECTS OF DYNAMIC PROGRAMMING  85
5.1 Introduction  85
5.2 The computational process—I  86
5.3 The computational process—II  87
5.4 The computational process—III  88
5.5 Expanding grid  88
5.6 The computational process—IV  88
5.7 Obtaining the solution from the numerical results  89
5.8 Why is dynamic programming better than straightforward enumeration?  90
5.9 Advantages of dynamic programming approach  90
5.10 Absolute maximum versus relative maximum  91
5.11 Initial value versus two-point boundary value problems  91
5.12 Constraints  91
5.13 Non-analyticity  92
5.14 Implicit variational problems  92
5.15 Approximation in policy space  93
5.16 The curse of dimensionality  94
5.17 Sequential search  95
5.18 Sensitivity analysis  95
5.19 Numerical solution of partial differential equations  95
5.20 A simple nonlinear hyperbolic equation  96
5.21 The equation f_T = g_1 + g_2 f_c + g_3 f_c^2  97
Bibliography and comments  98
CHAPTER VI THE LAGRANGE MULTIPLIER  100
6.1 Introduction  100
6.2 Integral constraints  101
6.3 Lagrange multiplier  102
6.4 Discussion  103
6.5 Several constraints  103
6.6 Discussion  104
6.7 Motivation for the Lagrange multiplier  105
6.8 Geometric motivation  106
6.9 Equivalence of solution  108
6.10 Discussion  109
Bibliography and discussion  110
CHAPTER VII TWO-POINT BOUNDARY VALUE PROBLEMS  111
7.1 Introduction  111
7.2 Two-point boundary value problems  112
7.3 Application of dynamic programming techniques  113
7.4 Fixed terminal state  113
7.5 Fixed terminal state and constraint  115
7.6 Fixed terminal set  115
7.7 Internal conditions  116
7.8 Characteristic value problems  116
Bibliography and comments  117
CHAPTER VIII SEQUENTIAL MACHINES AND THE SYNTHESIS OF LOGICAL SYSTEMS  119
8.1 Introduction  119
8.2 Sequential machines  119
8.3 Information pattern  120
8.4 Ambiguity  121
8.5 Functional equations  121
8.6 Limiting case  122
8.7 Discussion  122
8.8 Minimum time  123
8.9 The coin-weighing problem  123
8.10 Synthesis of logical systems  124
8.11 Description of problem  124
8.12 Discussion  125
8.13 Introduction of a norm  125
8.14 Dynamic programming approach  125
8.15 Minimum number of stages  125
8.16 Medical diagnosis  126
Bibliography and discussion  126
CHAPTER IX UNCERTAINTY AND RANDOM PROCESSES  129
9.1 Introduction  129
9.2 Uncertainty  130
9.3 Sour grapes or truth?  131
9.4 Probability  132
9.5 Enumeration of equally likely possibilities  132
9.6 The frequency approach  134
9.7 Ergodic theory  136
9.8 Random variables  136
9.9 Continuous stochastic variable  137
9.10 Generation of random variables  138
9.11 Stochastic process  138
9.12 Linear stochastic sequences  138
9.13 Causality and the Markovian property  139
9.14 Chapman-Kolmogoroff equations  140
9.15 The forward equations  141
9.16 Diffusion equations  142
9.17 Expected values  142
9.18 Functional equations  144
9.19 An application  145
9.20 Expected range and altitude  145
Bibliography and comments  146
CHAPTER X STOCHASTIC CONTROL PROCESSES  152
10.1 Introduction  152
10.2 Discrete stochastic multistage decision processes  152
10.3 The optimization problem  153
10.4 What constitutes an optimal policy?  154
10.5 Two particular stochastic control processes  155
10.6 Functional equations  155
10.7 Discussion  156
10.8 Terminal control  156
10.9 Implicit criteria  157
10.10 A two-dimensional process with implicit criterion  157
Bibliography and discussion  158
CHAPTER XI MARKOVIAN DECISION PROCESSES  160
11.1 Introduction  160
11.2 Limiting behavior of Markov processes  160
11.3 Markovian decision processes—I  161
11.4 Markovian decision processes—II  162
11.5 Steady-state behavior  162
11.6 The steady-state equation  163
11.7 Howard's iteration scheme  164
11.8 Linear programming and sequential decision processes  164
Bibliography and comments  165
CHAPTER XII QUASILINEARIZATION  167
12.1 Introduction  167
12.2 Continuous Markovian decision processes  168
12.3 Approximation in policy space  168
12.4 Systems  170
12.5 The Riccati equation  170
12.6 Extensions  171
12.7 Monotone approximation in the calculus of variations  171
12.8 Computational aspects  172
12.9 Two-point boundary-value problems  173
12.10 Partial differential equations  174
Bibliography and comments  175
CHAPTER XIII STOCHASTIC LEARNING MODELS  176
13.1 Introduction  176
13.2 A stochastic learning model  176
13.3 Functional equations  177
13.4 Analytic and computational aspects  177
13.5 A stochastic learning model—II  177
13.6 Inverse problem  178
Bibliography and comments  178
CHAPTER XIV THE THEORY OF GAMES AND PURSUIT PROCESSES  180
14.1 Introduction  180
14.2 A two-person process  181
14.3 Multistage process  181
14.4 Discussion  182
14.5 Borel-von Neumann theory of games  182
14.6 The min-max theorem of von Neumann  184
14.7 Discussion  184
14.8 Computational aspects  185
14.9 Card games  185
14.10 Games of survival  185
14.11 Control processes as games against nature  186
14.12 Pursuit processes—minimum time to capture  187
14.13 Pursuit processes—minimum miss distance  189
14.14 Pursuit processes—minimum miss distance within a given time  189
14.15 Discussion  190
Bibliography and discussion  190
CHAPTER XV ADAPTIVE PROCESSES  194
15.1 Introduction  194
15.2 Uncertainty revisited  195
15.3 Reprise  198
15.4 Unknown—I  199
15.5 Unknown—II  200
15.6 Unknown—III  200
15.7 Adaptive processes  201
Bibliography and comments  201
CHAPTER XVI ADAPTIVE CONTROL PROCESSES  203
16.1 Introduction  203
16.2 Information pattern  205
16.3 Basic assumptions  207
16.4 Mathematical formulation  208
16.5 Functional equations  208
16.6 From information patterns to distribution functions  209
16.7 Feedback control  209
16.8 Functional equations  210
16.9 Further structural assumptions  210
16.10 Reduction from functionals to functions  211
16.11 An illustrative example—deterministic version  211
16.12 Stochastic version  212
16.13 Adaptive version  213
16.14 Sufficient statistics  215
16.15 The two-armed bandit problem  215
Bibliography and discussion  216
CHAPTER XVII SOME ASPECTS OF COMMUNICATION THEORY  219
17.1 Introduction  219
17.2 A model of a communication process  220
17.3 Utility a function of use  221
17.4 A stochastic allocation process  221
17.5 More general processes  222
17.6 The efficient gambler  222
17.7 Dynamic programming approach  223
17.8 Utility of a communication channel  224
17.9 Time-dependent case  224
17.10 Correlation  225
17.11 M-signal channels  225
17.12 Continuum of signals  227
17.13 Random duration  228
17.14 Adaptive processes  228
Bibliography and comments  230
CHAPTER XVIII SUCCESSIVE APPROXIMATION  232
18.1 Introduction  232
18.2 The classical method of successive approximations  233
18.3 Application to dynamic programming  234
18.4 Approximation in policy space  235
18.5 Quasilinearization  236
18.6 Application of the preceding ideas  236
18.7 Successive approximations in the calculus of variations  237
18.8 Preliminaries on differential equations  239
18.9 A terminal control process  240
18.10 Differential-difference equations and retarded control  241
18.11 Quadratic criteria  241
18.12 Successive approximations once more  243
18.13 Successive approximations—II  244
18.14 Functional approximation  244
18.15 Simulation techniques  246
18.16 Quasi-optimal policies  246
18.17 Non-zero sum games  247
18.18 Discussion  248
Bibliography and discussion  249