Nonparametric Modelling/Identification Methods
Modern nonparametric statistical methods (e.g. methods based
on Gaussian Process Priors) offer many advantages including:
- The model is directly based on the data. Contrast this
with parametric modelling approaches, where a model structure with
a small number of parameters is postulated. The parameter values
are selected to achieve a good fit to the measured data, and in
this way the information contained in the measured data is distilled
into a small number of parameters. Unfortunately, the reverse procedure
is often ill-conditioned, with biases and errors in the estimated
parameters often leading to unnecessarily poor predictions.
- Direct adaptation. Additional data can be directly incorporated
into an existing model at little cost.
- A wide range of prior knowledge can be accommodated. By
suitable choice of interpolation strategy, prior knowledge ranging
from almost none to almost complete (e.g. a full prior model of
the system) can be supported.
- Direct regularisation. Measured data is generally relatively
sparse at operating points far from equilibrium. Proper interpolation
(based on smoothing, or so-called regularisation) can greatly improve
the generalisation ability of the model in such operating regions
and avoids the numerical ill-conditioning in conventional parametric
models associated with the need to estimate parameters from such
sparse data.
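The points above can be made concrete with a minimal Gaussian process regression sketch. Everything here (the squared-exponential kernel, its length-scale, the noise level, and the toy data) is an illustrative assumption rather than a prescription from the text; the point is that predictions are computed directly from the stored data, and the noise term provides the smoothing/regularisation discussed above.

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of scalar inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(x_train, y_train, x_test, noise=0.1):
    """Posterior mean and pointwise variance of zero-mean GP regression.

    The noise term acts as regularisation: it keeps the linear solve
    well conditioned even when the data are sparse."""
    K = rbf_kernel(x_train, x_train) + noise**2 * np.eye(x_train.size)
    K_s = rbf_kernel(x_test, x_train)
    mean = K_s @ np.linalg.solve(K, y_train)
    cov = rbf_kernel(x_test, x_test) - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.diag(cov)

# Sparse, noisy observations of an unknown nonlinearity (illustrative data)
rng = np.random.default_rng(0)
x_obs = np.array([-2.0, -1.5, 0.0, 0.3, 2.0])
y_obs = np.sin(x_obs) + 0.05 * rng.standard_normal(x_obs.size)

x_new = np.linspace(-3.0, 3.0, 7)
mean, var = gp_predict(x_obs, y_obs, x_new)
```

Incorporating an additional observation amounts to appending a row and column to the covariance matrix and re-solving, which reflects the direct-adaptation property, and the posterior variance grows away from the data, flagging the sparsely covered operating regions.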
While nonparametric models offer a number of
significant advantages, they are essentially "black box"
in nature. Although black-box representations are useful for many
purposes, their utility for analysis and design is limited. Fortunately,
given a nonparametric Bayesian model, the corresponding velocity-based
linearisation family can be derived immediately. Velocity-based
representations are well-suited to linearisation-based analysis
and design and complement nonparametric representations in many
ways. The use of a dual nonparametric/velocity-based representation
to exploit the considerable synergy which exists between these
representations is therefore quite natural and attractive.
By using a dual nonparametric/velocity-based representation,
it is in principle possible to directly incorporate prior knowledge
of the velocity-based linearisation family, including the likely linearisations
at certain operating points (e.g. equilibria), the likely scheduling
variable, any known decomposition into operating regions and any knowledge
of the likely smoothness of the system in different operating regions. Conversely,
when such information is unknown or needs to be validated, it can
be inferred from an identified model. It should be noted that structural
information such as knowledge of the scheduling variable is extremely
valuable in many contexts, including control
design.
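To make the velocity-based linearisation family concrete, the following sketch computes the frozen Jacobians (A, B) of a simple nonlinear system at an arbitrary, not necessarily equilibrium, operating point. The pendulum dynamics and the finite-difference helper are hypothetical choices for illustration only; in the velocity-based form, the frozen system w_dot = A w + B u_dot (with w = x_dot) is a valid local description at every operating point, not just at equilibria.

```python
import numpy as np

def f(x, u):
    """Illustrative nonlinear dynamics: damped pendulum with torque input.

    x = [angle, angular rate], u = [torque]."""
    g_over_l, damping = 9.81, 0.2
    return np.array([x[1], -g_over_l * np.sin(x[0]) - damping * x[1] + u[0]])

def velocity_linearisation(f, x, u, eps=1e-6):
    """Jacobians A = df/dx, B = df/du at an arbitrary operating point (x, u).

    Each (A, B) pair is one member of the velocity-based linearisation
    family; no equilibrium information is required."""
    n, m = x.size, u.size
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    fx = f(x, u)
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        A[:, i] = (f(x + dx, u) - fx) / eps
    for j in range(m):
        du = np.zeros(m)
        du[j] = eps
        B[:, j] = (f(x, u + du) - fx) / eps
    return A, B

# Family member at a non-equilibrium operating point
A, B = velocity_linearisation(f, x=np.array([0.5, 1.0]), u=np.array([0.0]))
```

Evaluating this over a grid of operating points yields the linearisation family that the dual representation exposes for analysis and design.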
Divide & Conquer Identification
The identification of linear time-invariant (LTI) systems from measured
experimental data has received considerable attention over the last
thirty years and there exists a wealth of theoretical results relating
to issues such as structure identification, parameter estimation, experiment
design and model validation testing together with a great deal of accumulated
practical experience. However, all systems are in reality nonlinear
and identification techniques are less well developed for systems which
cannot be accurately approximated by a single LTI system. It is, therefore,
often attractive to consider a divide and conquer strategy whereby the
analysis/design of a nonlinear system is decomposed into the analysis/design
of a collection of LTI systems. Note that the attraction of such an
approach extends beyond theoretical considerations: in many situations
it is impractical and/or unsafe to collect global test data; a series
of experiments providing information on the local behaviour is often
much preferable and naturally leads to consideration of a divide
and conquer approach to identification. In the context of system
identification it is common practice, when faced with the task of modelling
a nonlinear system, to initially identify a number of LTI approximations
to the system, each of which is locally valid. However, in the system
identification literature there is a notable lack of research relating
to the natural next step: namely, attempting to properly reconstruct
a nonlinear system from an appropriate family of identified linear systems.
Velocity-based methods appear
to have the potential to address precisely this task and thus open up
a new paradigm for system identification.
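The divide and conquer strategy described above (identify locally valid linear models, then recombine them) can be illustrated as follows. The Gaussian validity functions, operating-point centres and toy nonlinearity are illustrative assumptions in the spirit of local model networks, not the velocity-based reconstruction the text proposes.

```python
import numpy as np

def fit_local_models(x, y, centres, width):
    """Fit a locally valid affine model y ~ a*x + b around each operating
    point by Gaussian-weighted least squares."""
    models = []
    Phi = np.column_stack([x, np.ones_like(x)])
    for c in centres:
        w = np.exp(-0.5 * ((x - c) / width) ** 2)  # local validity weight
        sw = np.sqrt(w)
        theta, *_ = np.linalg.lstsq(Phi * sw[:, None], y * sw, rcond=None)
        models.append(theta)
    return models

def blended_predict(x_new, centres, width, models):
    """Recombine the local models with normalised Gaussian validity
    functions to reconstruct a global nonlinear model."""
    w = np.exp(-0.5 * ((x_new[:, None] - np.asarray(centres)[None, :]) / width) ** 2)
    w /= w.sum(axis=1, keepdims=True)
    local = np.stack([a * x_new + b for a, b in models], axis=1)
    return (w * local).sum(axis=1)

# "Measured" data from an unknown nonlinearity (illustrative)
rng = np.random.default_rng(1)
x = np.linspace(-2.0, 2.0, 80)
y = np.tanh(x) + 0.02 * rng.standard_normal(x.size)

centres = [-1.5, 0.0, 1.5]  # assumed operating points for local experiments
models = fit_local_models(x, y, centres, width=0.5)
y_hat = blended_predict(x, centres, 0.5, models)
```

Each local fit uses only data near its operating point, mirroring a series of local experiments; the open question the text raises is how to make the recombination step rigorous rather than ad hoc.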
Modular Modelling of Interacting Systems
The common practice, at least in the first instance, is to use linearisation-based
approaches to analyse a nonlinear model. This is reflected, for
example, in the ubiquity of tools for trimming and linearising nonlinear
simulation models. However, conventional linearisation-based
representations do not readily support modular analysis and design.
Conventional linearisations require knowledge of the equilibrium points
of a system which, in a tightly integrated system, are a global property:
the equilibria of a subsystem depend on the characteristics of
the rest of the system. Clearly, this is at odds with modular analysis
and design methodologies, which require each subsystem to have a well-defined
interface that is insensitive to the implementation
details of the rest of the system. (Such modular approaches enable the detailed
design and implementation of each subsystem to be carried out separately
and are particularly important in projects where subcontractors are
involved.)
Velocity-based linearisation
methods can provide a framework which genuinely supports modular
analysis and design methods. Features include: no dependence
on detailed equilibrium information; tools which are valid globally
(rather than only close to equilibrium operation); and the ability to
integrate analysis and design results obtained for a specific subsystem
in a direct and transparent manner with those obtained for other subsystems.
Other important practical advantages include: neither trimming of simulation
models (to determine equilibrium points) nor numerical differentiation
is needed, since linearisation is achieved by 'freezing' rather than
differentiating. Both trimming and numerical differentiation are
highly non-trivial for the complex large-scale simulation models frequently
encountered in industry.
