LMU Study Group Methodological Foundations of Statistics and their Applications

Second Workshop on Principles and Methods of Statistical Inference with Interval Probability

Munich, 09 - 15 September 2009

Day Subject Organised by
Wed 9th Reliability Frank Coolen, Lev Utkin
Thu 10th Graphical models and Markov chains Thomas Augustin, Gert de Cooman, Damjan Skulj
Fri 11th Foundations Frank Coolen, Marco Cattaneo
Sat 12th Public lecture day (starting at 14:15) honouring Kurt Weichselberger on the occasion of his 80th birthday Speakers: Michael Goldstein, Lev Utkin, Kurt Weichselberger
Laudatio: Hans Schneeweiß

The public lecture day will take place in the Vice Chancellor's room ("Rektorzimmer"), located on the first floor of the LMU main building (Geschwister-Scholl-Platz 1).
Please note the details on the accessibility of the main building in the detailed programme.
Sun 13th Excursion
Mon 14th Regression Jochen Einbeck, Gero Walter
Tue 15th Classification, computation, and numerical analysis Matthias Troffaes

Wed 9th: Reliability

Time Event / Speaker Abstracts / Topic
Wed 9th
09:00 Thomas Augustin, Frank Coolen Informal opening
09:15 Lev Utkin Imprecise software reliability

A new framework for general software reliability growth models, called imprecise software reliability models (ISRM), is proposed. The known probabilistic and possibilistic models can be considered as special cases of the ISRM. The main idea of the ISRM is to consider a set of possible probability distributions of the time to software failure, restricted by some lower and upper bounds, and to combine imprecise Bayesian inference or generalized moment methods with maximum likelihood estimation.
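The "set of distributions restricted by lower and upper bounds" can be pictured as a p-box on the time to failure. The following numerical sketch (our own illustration, not taken from the talk; all names and rates are hypothetical) shows how such bounds on the CDF translate into bounds on the expected time to failure:

```python
import math

def pbox_expectation_bounds(ts, F_low, F_up):
    """Bounds on E[T] for a nonnegative time to failure T whose CDF is
    only known to lie between F_low and F_up pointwise (a p-box),
    using E[T] = integral of the survival function 1 - F(t)."""
    def survival_integral(F):
        # left Riemann sum of 1 - F(t) over the grid ts
        return sum((1.0 - F(t0)) * (t1 - t0) for t0, t1 in zip(ts, ts[1:]))
    # a pointwise larger CDF means stochastically shorter lifetimes,
    # so the upper CDF bound yields the lower expectation bound
    return survival_integral(F_up), survival_integral(F_low)

# Failure rate only known to lie in [0.5, 1.0], exponential lifetimes:
F_low = lambda t: 1.0 - math.exp(-0.5 * t)  # slowest plausible failures
F_up = lambda t: 1.0 - math.exp(-1.0 * t)   # fastest plausible failures
grid = [i * 0.01 for i in range(2001)]      # time grid from 0 to 20
lo, hi = pbox_expectation_bounds(grid, F_low, F_up)
# lo and hi approximate the exact bounds 1.0 and 2.0 (the mean
# lifetimes 1/lambda at the two extreme rates)
```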
10:00 Michael Oberguggenberger Reliability assessment of engineering structures

In civil engineering, the reliability of a structure is mostly interpreted as safety against failure (or unusability). In this field, reliability assessments often face the problems of lack of data, ignorance about input parameter distributions, and high computational cost. We argue that a good understanding of the behavior of a structure near failure should be obtained through sensitivity analysis, that is, the assessment of the impact of individual (or sets of) input parameters on the output variables indicating the onset of failure. We exemplify the approach with results on the buckling of shell structures in aerospace engineering. The key ingredients are small-scale Monte Carlo simulations, computation of statistical properties of input-output correlation measures, iterative methods, and random fields. We will also touch upon some further applications in geotechnical engineering.
10:30 Matthias Troffaes Dependence learning

When dealing with fault trees, the probabilities of component failures are often well known, but the correlation between components is not. Naively assuming independence in such cases can lead to gross underestimation of failure rates. Dropping the independence assumption without further information, on the other hand, leads to overly conservative and very imprecise estimates. In this talk I investigate various challenges for dependence learning using copulas, how these might be generalised when we lack prior information about the dependence structure, and how joint data could be used in fault trees to improve the Fréchet bounds at fault-tree AND-gates.
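The Fréchet bounds at an AND-gate follow from the marginal failure probabilities alone; a small sketch (the function name is ours):

```python
def frechet_and(p_a, p_b):
    """Fréchet bounds on P(A and B) given only the marginal
    probabilities P(A) and P(B), with no dependence assumption."""
    lower = max(0.0, p_a + p_b - 1.0)  # attained under maximal negative dependence
    upper = min(p_a, p_b)              # attained under maximal positive dependence
    return lower, upper

# Two components each failing with probability 0.1: independence would
# give 0.01, but without any dependence information the AND-gate failure
# probability can only be bounded to [0, 0.1].
print(frechet_and(0.1, 0.1))  # (0.0, 0.1)
```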
11:00 Coffee break
11:30 Thomas Fetz Credal sets and reliability bounds for series systems

Engineering approaches to design make use of limit states, which are conditions beyond which a system or some of its components fail. Many failure modes (each one being a limit state) may arise, where the system fails if any of the limit states is violated (series system). If information about the variables of the system is given in terms of probability distributions, then one can either attempt to directly calculate the probability of failure for the system or revert to the upper and lower bounds (Fréchet bounds) on the system's probability of failure, calculated from the single modes' probabilities of failure. The latter approach is useful because directly calculating a system's probability of failure may be extremely time consuming and computationally intensive. Modelling the uncertainty of the variables of the system by credal sets (random sets, p-boxes, parameterized probability measures) instead leads to intervals for the probability of failure for each mode, where in general there are interactions between these probability intervals. The implications of these interactions for the computation of the system's reliability are illustrated by the failure modes of a portal frame. Further, it is determined when calculating the failure probability bounds for each failure mode and then calculating bounds for the system is advantageous with respect to directly calculating the system's probability of failure.
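For a series system, the Fréchet bounds on the system failure probability obtained from the single-mode probabilities alone can be sketched as follows (our own minimal illustration; the numbers are hypothetical):

```python
def series_system_bounds(mode_probs):
    """Fréchet bounds on the failure probability of a series system
    (the system fails if any single failure mode occurs), given only
    the per-mode failure probabilities and no dependence information."""
    lower = max(mode_probs)            # failure modes fully overlapping
    upper = min(1.0, sum(mode_probs))  # failure modes mutually exclusive
    return lower, upper

# Three failure modes: the system failure probability can only be
# bounded to [0.05, 0.08] (up to floating point rounding), however
# the modes happen to interact.
lo, hi = series_system_bounds([0.02, 0.05, 0.01])
```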
12:00 Frank Coolen Nonparametric predictive inference for system reliability

We consider reliability of systems with basic structures, focussing on the lower probability of system functioning based on test data for the components. We briefly review some recent results, including a powerful algorithm for redundancy allocation. We discuss research progress and challenges within nonparametric predictive inference for Bernoulli random quantities, that are important for developing this approach for more complex systems. Interestingly, some proofs have been based on combinatorial arguments counting paths on rectangular lattices, which hopefully provides a suitable approach for further progress.
12:30 Bernhard Schmelzer Set-valued stochastic processes in vibration analysis

At ISIPTA 09 a theoretical concept was presented for obtaining a set-valued stochastic process from stochastic differential equations involving uncertain parameters modeled by random sets. Furthermore, analogues of first entrance times were defined and indicated to be an important tool for assessing system reliability. We use this theoretical concept to analyze a problem in applied mechanics. An earthquake-excited vibration-prone structure is equipped with a tuned mass damper (TMD). Modelling the earthquake excitation as white noise, we investigate the seismic performance of the TMD, assuming its damping ratio and natural frequency to be uncertain and modeled by random sets.
13:00 Lunch
14:30 Everyone Open discussion (WORK!) - including topics following from the morning presentations.

Thu 10th: Graphical models and Markov chains

Time Event / Speaker Abstracts / Topic
Thu 10th
09:00 Michael Goldstein Graphical Models

09:45 Everyone Discussion round: On the role of graphical models in statistics
10:00 Marco Cattaneo Hierarchical Networks

Hierarchical networks generalize credal networks by also describing the relative likelihood of the different values in the probability intervals. This leads in particular to a more powerful and more robust updating rule.
10:30 Coffee break
11:00 Gert de Cooman An overview of the MePiCTIr message passing algorithm for doing expert system inference in Credal Trees under Epistemic Irrelevance
12:00 Matthias Troffaes, Ricardo Shirota Filho Solving act-state independent imprecise decision processes

In this talk we identify sufficient (and maybe even necessary) conditions under which a simple extensive form solution exists for sequential decision processes with imprecision, assuming gain locality and act-state independence. We intend to demonstrate our result using sequential betting on a bag of marbles.
12:30 Lunch
14:30 Damjan Skulj Distances between probability measures and coefficients of ergodicity for imprecise Markov chains

Coefficients of ergodicity are an important tool in measuring the convergence of Markov chains. We explore possibilities to generalise the concept to imprecise Markov chains. We find that this can be done in at least two different ways, both of which have interesting implications in the study of convergence of imprecise Markov chains. Thus we extend the existing definition of the uniform coefficient of ergodicity and define a new, so-called weak coefficient of ergodicity. The latter definition is based on endowing the class of imprecise probabilities with the structure of a metric space.
In the classical theory, several different coefficients of ergodicity exist, depending on different distance functions between probability measures. A question that still remains to be answered is how to extend some other important coefficients to imprecise Markov chains.
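As a point of reference for the generalisation discussed above, the classical uniform (Dobrushin) coefficient of ergodicity of a precise transition matrix can be computed directly. This sketch is our own and covers only the precise case, not the imprecise extensions of the talk:

```python
def dobrushin(P):
    """Dobrushin's (uniform) coefficient of ergodicity of a stochastic
    matrix P, i.e. half the maximum total-variation distance between
    two rows; a value < 1 guarantees geometric convergence of the
    chain to its unique stationary distribution."""
    n = len(P)
    return 0.5 * max(
        sum(abs(P[i][k] - P[j][k]) for k in range(n))
        for i in range(n) for j in range(n)
    )

P = [[0.9, 0.1],
     [0.2, 0.8]]
# dobrushin(P) is approximately 0.7, so this chain is ergodic;
# the identity matrix would give 1.0 (no contraction at all).
```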
15:00 Filip Hermans Markov chains, and the geometry of convergence and accessibility
15:30 Coffee break
16:00 Everyone Discussion: Crazy Ideas and Short Statements on Perspectives and Challenges, includes:
  • characteristic functions, and how they could help in Markov chain research (Filip Hermans)
  • the difference between the expectation approach and the sets of probabilities approach in Markov chains, which amounts to the difference between epistemic irrelevance and strong independence. This is of course also relevant for credal sets. (Filip Hermans and Gert de Cooman)
  • overview of avenues for future research (Gert de Cooman)

Fri 11th: Foundations

Time Event / Speaker Abstracts / Topic
Fri 11th
09:00 Frank Coolen Hybrid methods

In larger scale applications of statistical methods, there may be benefits in using what could be called 'hybrid methods', where a combination of methods is used simultaneously for subparts of the model or problem. For example, one may wish to use imprecise methods for part of the problem, while precise methods (classical, Bayesian or other) may be considered appropriate for another part. This raises important questions on foundational aspects, including the interpretation of results. We briefly illustrate one idea, namely the combined use of nonparametric predictive inference and Bayesian methods on different random variables in a basic exposure model in risk assessment.
09:20 Lev Utkin Imprecise Two-Stage Maximum Likelihood Estimation

In order to deal with statistical data characterized by changeable parameters, it is proposed to maximize the likelihood function over a set of probability distributions at the first stage, and then to maximize over the changeable parameters. The set of distributions is defined by its lower and upper bounds, which can be constructed by using imprecise Bayesian models, the generalized method of moments, or confidence intervals on the mean and variance. The shortcomings and virtues of each approach are analyzed.
09:50 Michael Goldstein Temporal coherence

I will talk about temporal coherence, temporal sure preference and conditioning as a "model" for inference, and raise several issues that are important for foundations of statistics, and may be interesting to explore within an imprecise framework.
10:20 Coffee break
10:45 Marco Cattaneo The likelihood approach to statistics as a theory of imprecise probability

I will show how the likelihood approach to statistics can be interpreted as a theory of imprecise probability, and compare it with the precise and imprecise Bayesian approaches.
11:30 Matthias Troffaes Factuality AKA subgame perfection

We characterise the precise conditions under which counterfactuals arise. A normal form solution will contain a number of optimal policies. One can examine these policies restricted to a subtree of the original tree, and one can also consider finding the normal form solution of the subtree (conditional on the preceding events). If this solution differs from the restriction of the full solution, then one's actions in a subtree are determined by the full tree in which it is embedded: one is counterfactual.
12:00 Tomaz Podobnik On probabilistic parametric inference

The term probabilistic parametric inference stands for specifying a posterior (real-valued, unitary, and countably additive) probability distribution for a (possibly multi-dimensional) parameter in a family of direct probability distributions. Probabilistic inference is characteristic of the Bayesian schools of statistical inference, as opposed to the frequentist schools. In the Bayesian paradigm, it is also possible to make statements about the values of the inferred parameters in the absence of data, and these statements can be summarized by prior distributions. A Bayesian statistical model thus comprises a parametric family of direct probability distributions and an unconditional probability distribution on the parameter space, called the non-informative prior probability distribution.
Here, we formulate a theory of probabilistic parametric inference that, unlike Bayesian statistical models, does not comprise (non-informative) prior probability distributions. We resolve the seeming inconsistencies, the marginalization paradox and the strong inconsistency, by showing that they stem from applying a rule from outside the theory and from improper notation. The theory is objective in that a particular likelihood function always results in the same posterior probability distribution. Bayesian statistical models, empirical Bayesian analysis, fiducial inference, the reference prior approach and the form invariant approach are all incompatible with objective probabilistic parametric inference. The theory is also operational: it is possible to construct credible regions with posterior-probability content equal to the coverage of the regions. From an operational perspective, we object to the unreserved usage of marginalization.
12:30 Lunch
14:30 Everyone Open discussion (WORK!) - including the following topics:
penalization of imprecision, set-valued data

Sat 12th: Public lecture day honouring Kurt Weichselberger on the occasion of his 80th birthday

Time Event / Speaker Abstracts / Topic
Sat 12th
from 14:15 Testimonial lectures ("Festvorträge"), by
Michael Goldstein,
Lev Utkin, and
Kurt Weichselberger.

Laudatio:
Hans Schneeweiß.
The lectures will take place in the Vice Chancellor's room ("Rektorzimmer"), located on the first floor of the LMU main building (Geschwister-Scholl-Platz 1).

Please note that on Saturday 12th (and Sunday 13th) there is a street fair, the Streetlife Festival (page in German), in the street in front of the main building. The central entrance of the main building is thus not reachable by car (see the notes for residents, also only in German, but the included map should be self-explanatory). If you need to get as close as possible to the venue by car, consider heading to the corner Schellingstraße / Ludwigstraße (destination address for satellite navigation: Schellingstr. 3 or 4, or Ludwigstr. 25). If using public transport poses no problem, we suggest using one of the numerous Park and Ride car parks around the city centre (page on Munich Park and Ride car parks, in German).

The underground station "Universität" is served as usual on Saturday 12th.

For information on getting around in Munich, please see the venue page.
Michael Goldstein Bayesian uncertainty analysis for complex physical models

Accounting for, and analysing, all the sources of uncertainty that arise when using complex models to describe a large scale physical system (such as climate) is a very challenging task. I will give an overview of some Bayesian approaches for assessing such uncertainties, for purposes such as model calibration and forecasting.
Lev Utkin Imprecise inference models in decision making and risk analysis

The traditional approach to decision analysis in the framework of expected utility theory calls for single, precise distributions of the states of nature. Usually, however, we have only partial information about the probabilities of the states of nature. Some decision problems where direct data (precise or interval-valued) on the states are available are considered, and imprecise Bayesian models are proposed for solving them. Moreover, some imprecise models are studied by proceeding from particular applications (risk models of insurance, warranty contracts). A decision problem with a non-monotone utility function, where the states of nature are described by sets of continuous probability distributions restricted by known lower and upper distributions, is also studied.
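A minimal sketch of the interval-probability setting described above: with a credal set given by finitely many extreme points, lower and upper expected utilities are obtained by optimising over those points, and a Gamma-maximin style choice picks the act with the best lower expectation. All numbers and names here are hypothetical, not from the talk:

```python
def lower_upper_expectation(utilities, credal_set):
    """Lower and upper expected utility of an act, where the credal set
    is given by a finite list of probability vectors (its extreme points)."""
    values = [sum(p * u for p, u in zip(dist, utilities)) for dist in credal_set]
    return min(values), max(values)

# Hypothetical example: two states of nature, credal set with two extreme points.
credal = [[0.3, 0.7], [0.6, 0.4]]
act_a = [10.0, 2.0]  # utilities of act a in each state
act_b = [5.0, 5.0]   # utilities of act b in each state

# Gamma-maximin: choose the act with the greatest lower expected utility.
best = max([act_a, act_b], key=lambda u: lower_upper_expectation(u, credal)[0])
# act_a has expectation interval [4.4, 6.8], act_b has [5.0, 5.0], so
# Gamma-maximin prefers act_b despite act_a's higher upper expectation.
```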
Kurt Weichselberger Symmetric Probability Theory
around 18:00 Reception

Sun 13th: Excursion

Time Event / Speaker Abstracts / Topic
Sun 13th

Mon 14th: Regression

Time Event / Speaker Abstracts / Topic
Mon 14th
09:30 Jochen Einbeck Are there 15 different ways of thinking of imprecise regression?
10:00 Damjan Skulj Linear regression with NPI

The nonparametric predictive inference (NPI) model is combined with the assumption of a linear dependence between random variables to obtain predictions about the dependent variable, conditional on the values of the independent variables. The basic idea of the method is explained and a numerical example is given.
10:30 Coffee break
11:15 Thomas Augustin Credal maximum likelihood -- an imprecise probability alternative to mixed models?

The talk considers credal parametric sampling models, where the underlying parameter may vary in an interval. Based on this, we suggest a notion of credal maximum likelihood, leading to interval-valued point estimators arising from maximizing individual likelihood contributions subject to constraints on the interval-width. The approach may provide an imprecise probability alternative to handle unobserved heterogeneity in regression models.
11:45 Gero Walter Linear regression with sets of conjugate priors

A new conjugate prior model for linear regression is presented, which is a special case of the normal-inverse-gamma model common in classical Bayesian analysis. In contrast to the latter, the new model can easily be used for inference with sets of priors via the (generalized) iLUCK-model technique.
12:30 Lunch
14:30 Lev Utkin,
Frank Coolen
Imprecise regression analysis

A framework for constructing imprecise regression models is presented, based on combining maximum likelihood estimation with imprecise inference models. The main idea is that the noise in regression models is assumed to be governed by an unknown probability distribution from a set of distributions. The models differ in the method used to define the set of distributions. Various special models are studied as examples of applying the proposed framework.

Regression analysis of interval data

A new regression model with interval-valued statistical data is developed. The main idea underlying the model is to maximize the "density" or "overcrowding" of the biased intervals. The likelihood function for determining the model parameters is written by using extended belief and plausibility functions. The model has some interesting properties.
15:15 Short coffee break
15:30 Everyone Crazy hour
16:30 Jochen Einbeck (moderation) Rejoinder

Tue 15th: Classification, computation, and numerical analysis

Time Event / Speaker Abstracts / Topic
Tue 15th
09:30 Rebecca Baker Classification trees with NPI

We consider the construction of classification trees using the NPI model for multinomial data. We present two algorithms, one approximate and one exact, for finding the maximum entropy distribution consistent with the NPI model. These methods are compared to previous precise and imprecise approaches.
10:00 Luc Jaulin Probabilistic set-membership estimation

Interval constraint propagation methods have been shown to be efficient and reliable for solving difficult nonlinear bounded-error estimation problems. However, they are considered unsuitable in a probabilistic context, where the approximation of a probability density function by a set cannot be accepted as reliable. This talk shows how probabilistic estimation problems can be transformed into set estimation problems by assuming that some rare events will never happen. Since the probability of occurrence of those rare events can be computed, we can give prior lower bounds for the probability associated with the solution set of the corresponding set estimation problem. The approach will be illustrated on a parameter estimation problem and on the dynamic localization of an underwater robot.
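The basic building block of interval constraint propagation is a contractor that shrinks interval domains under a single constraint. A minimal sketch for the constraint z = x + y (our own toy example, not the algorithms of the talk):

```python
def contract_add(x, y, z):
    """Contract interval domains under the constraint z = x + y
    (one forward-backward step, the building block of interval
    constraint propagation). Intervals are (lo, hi) pairs."""
    # forward: z must be consistent with x + y
    z = (max(z[0], x[0] + y[0]), min(z[1], x[1] + y[1]))
    # backward: x = z - y and y = z - x
    x = (max(x[0], z[0] - y[1]), min(x[1], z[1] - y[0]))
    y = (max(y[0], z[0] - x[1]), min(y[1], z[1] - x[0]))
    return x, y, z

# x in [1, 5], y in [2, 6], and a measurement says z = x + y lies in [4, 6]:
print(contract_add((1, 5), (2, 6), (4, 6)))  # ((1, 4), (2, 5), (4, 6))
```

Iterating such contractors over all constraints until a fixed point is reached yields the bounded-error solution set enclosure.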
11:00 Coffee break
11:30 Nathan Huntley Backward induction

We study the conditions under which backward induction can be used to solve decision trees. A large decision tree induces a very large set of gambles, and if the choice function is computationally expensive, it may be too difficult to apply it to all gambles. Backward induction works by solving subtrees of the decision tree, and eliminating gambles that are not present in these smaller solutions.
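For precise expected utility, where backward induction and the normal form solution agree, the procedure can be sketched as follows (a minimal illustration with hypothetical numbers, not the imprecise choice functions studied in the talk):

```python
def solve(node):
    """Backward induction on a simple decision tree. A node is
    ('leaf', utility), ('chance', [(prob, child), ...]), or
    ('decision', {action: child, ...}); returns the node's expected
    utility under optimal play."""
    kind = node[0]
    if kind == 'leaf':
        return node[1]
    if kind == 'chance':
        return sum(p * solve(child) for p, child in node[1])
    # decision node: solve each subtree and take the best value
    return max(solve(child) for child in node[1].values())

# A hypothetical one-stage tree: a safe payoff versus a fair gamble.
tree = ('decision', {
    'safe':  ('leaf', 5.0),
    'risky': ('chance', [(0.5, ('leaf', 12.0)), (0.5, ('leaf', 0.0))]),
})
print(solve(tree))  # 6.0: the risky branch is optimal here
```

With an imprecise choice function returning sets of non-eliminated gambles instead of a single value, the same recursion only prunes the tree when the eliminations in subtrees are compatible with the normal form solution, which is exactly the question the talk addresses.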
12:30 Lunch
14:30 Everyone Open research time, for those that still have energy after a long and exciting week of research!
Last modification: Gero Walter,