First Lecture, September 9, Overview
The leaves of the honey locusts outside my office have turned light yellow. Fall is here.
As always, the first lecture needs to give students an overview of the subject and then tell them the focus of the course. Nobody can cover all the topics of a subject. Selecting the topics to be taught in the next 11 or 12 sessions is not an easy task. When you are new to the subject, you may struggle to fill out a full course. But when you have more than the time allows, you have to cut. And cutting hurts.
Last night Amazon finally delivered my copy of Der Kiureghian (2020) Structural and System Reliability to my doorstep. I gave it a quick scan on the way to school. It is a great text, but I still hesitate to use it as the text for this course. It can be a good one for structural engineering students, but probably only half of my class comes from structural/geotechnical engineering; the other half comes from transportation, environmental, and construction management.
Reliability itself may need two courses: one for structural reliability, the other for engineering reliability. Structural reliability takes a deductive approach, starting from random variables and their functions and working toward reliability-based design. Monte Carlo simulation, surrogate modelling, code calibration and development, load combinations, and reliability evaluation of existing structures can be decently covered. Engineering reliability takes an inductive approach, starting from reliability or failure data analysis, moving through stochastic deterioration modelling, and ending at inspection and maintenance optimization. There is no single textbook to my knowledge that covers both. Civil engineers still don’t talk to mechanical or electrical engineers. Can I teach students to build this bridge? I have tried a few times, without success. Students only complained that it was a hard course. Sad!
I ran a pre-course survey through D2L last week. One question asks students to put down their learning expectations. One student wrote: “I hope to learn about Canadian laws and building codes.” I used to put ‘Beyond great expectations’ as the subtitle of the first lecture slide, but this expectation is huge! It is not irrelevant. I will try to entice some students toward code comparisons (e.g., NBCC vs. ASCE 7, AASHTO vs. CHBDC). Laws, ultimately, are one of the most effective frameworks for managing risks. At one time, I thought of initiating a private research project matching construction contracts with construction project risks. So far, that idea remains only an idea. So, I am sorry, dear student; I won’t be able to cover laws.
If the word reliability alone has already kept you so busy, why bother to introduce risk?
Reliability is an abstract concept. In structural reliability, the concept arises from the need to define safety and serviceability. Most of us can agree that there is no absolute safety, so we need a measure of relative safety that helps address the ‘how safe is safe enough?’ question. Probability turns out to be a convenient, ready-made mathematical notion. Hence, reliability denotes the probability of no failure, or 1 minus the failure probability. Set aside for now all the analysis details, which Prof. Der Kiureghian takes the whole book to explain. But what is the meaning of probability? Can we validate the calculated probability of failure empirically? If not, is structural reliability a scientific theory?
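For concreteness, one standard way to write this, with $g(\mathbf{X})$ denoting the limit-state function of the random inputs $\mathbf{X}$ and failure meaning $g \le 0$:

$$
R = 1 - P_f = 1 - \Pr\!\big[\, g(\mathbf{X}) \le 0 \,\big]
$$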
These questions bring in the subjective interpretation of probability. The whole structural reliability community takes this interpretation, a tradition that starts from Prof. Cornell (see the preface of his 1971 textbook). In a recent article for JCSS 50, “Interpretation of probability in structural safety – A philosophical conundrum”, Ton Vrouwenvelder et al. (2024) reiterated this Bayesian, subjective stand.
Students won’t fully buy into this until they are presented with the question in a decision-making setting. Past engineering education places too much emphasis on the scientific foundations of engineering, yet gives students little discussion of decision making. In my view, introducing the basic setting and categorization of engineering decisions is a good start to the subject.
A decision involves three basic elements: alternatives or options, outcomes, and criterion or criteria. To decide on the best option, one needs to list all the meaningful options available, obtain the outcome of every option, and know how to compare the outcomes based on the decision maker’s value system. Decision theory focuses on the last question, i.e., how to compare outcomes in order to conclude the decision. This seems an easy task, but it is actually not. There are two basic categories:
- Multi-criteria decision making
- Decision making under uncertainty
In the first category, the outcomes of a decision alternative involve multiple aspects. For example, when we buy a new car, we need to compare multiple models. In this comparison, we may look at not only the price, but also the power, fuel consumption, safety, aesthetics, and so on. Some of the attributes can be converted to a standard unit. For example, the purchase price and the fuel consumption may be lumped into a lifecycle cost if we can properly estimate the expected service life. However, not all attributes can be converted to a unified measure. This brings the decision maker to a difficult situation where they have to compare ‘apples and oranges.’
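As a toy illustration of the lumping step (every number below is made up):

```python
# Toy lifecycle-cost calculation lumping purchase price and fuel cost.
# All numbers are hypothetical, for illustration only.
price = 30_000          # purchase price, $
fuel_use = 8.0          # fuel consumption, L/100 km
km_per_year = 15_000    # annual driving distance, km
fuel_price = 1.60       # fuel price, $/L
service_life = 10       # expected service life, years

fuel_cost = fuel_use / 100 * km_per_year * fuel_price * service_life
lifecycle_cost = price + fuel_cost
print(f"Lifecycle cost: ${lifecycle_cost:,.0f}")  # -> Lifecycle cost: $49,200
```

Safety and aesthetics, on the other hand, admit no such natural conversion.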
Decision making under uncertainty is different. In this case, the outcome of a decision alternative is uncertain. In general, it can be described as a random variable with a given probability distribution. The question thus turns to the comparison of two random variables. In the simplest case, we face a decision like the following between two options:
- Option 1: The net benefit is zero.
- Option 2: The net benefit is $5000 with 20% probability and -$1000 with 80% probability.
Do you prefer Option 1 or 2?
At the core, decision theory analyzes the coherence of decision rules. For example, do we take the option based on the mean value (in probability, also called the expected value, or expectation) of the random outcome? Does the mean criterion provide a coherent decision? Does the variance or standard deviation matter? How do we account for the conservatism embedded in engineering? Pushing it to the limit, we often face hard engineering decisions of “low probability, large consequences.” That’s why we need to know a little decision theory. More importantly, statistical decision theory takes probability purely as the machinery of logic. It does not matter whether the probability is interpreted as an empirical frequency, an inherent nature of the subject under study, or a degree of belief of the decision maker about the subject. Once we understand this, we can settle, for the most part, the struggle over whether we need to separate aleatory and epistemic uncertainties.
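To make the mean-versus-dispersion question concrete, here is a quick computation for the two options above (a sketch, nothing more):

```python
import math

# Option 2 from the example above: $5000 with probability 0.2,
# -$1000 with probability 0.8. Option 1 is a certain $0.
outcomes = [5000.0, -1000.0]
probs = [0.2, 0.8]

mean = sum(p * x for p, x in zip(probs, outcomes))
variance = sum(p * (x - mean) ** 2 for p, x in zip(probs, outcomes))
std = math.sqrt(variance)

print(f"mean = ${mean:.0f}, std = ${std:.0f}")  # mean = $200, std = $2400
```

The mean criterion favours Option 2 ($200 versus $0), yet many of us would still take Option 1 because of the large dispersion; resolving that tension coherently is the business of decision theory.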
This course focuses on decision making under uncertainty (DMUU), in which the uncertainty can be modelled using probability theory. We will not discuss game theory, which deals with another DMUU situation, where a decision involves multiple opponents and the uncertainty is primarily caused by the opponents’ decisions.
Engineering decisions add two complications to DMUU problems: First, the number of decision alternatives can be huge or even infinite. Second, the outcome analysis of each alternative often involves complicated system analysis. Mathematical optimization is used to address the first aspect, whereas probabilistic system analysis, or simply, reliability analysis, is used to address the second aspect.
Broadly speaking, reliability analysis addresses two major problems:
- Characterization and quantification of uncertainties
- Propagation of uncertainties from inputs to outputs
In terms of uncertainty characterization, there are probability models, random variable models, random vector models, stochastic process models, and random field models. For each model, we need to understand both the full and the partial characterization. For example, for a random variable model, the full characterization requires the determination of its probability distribution, but it can be partially characterized by its mean, standard deviation, skewness, kurtosis, etc. How do the full and partial characterizations relate to each other? This is also an interesting question. Understanding it will help you understand the various reliability methods and their relationships.
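A small illustration (a lognormal chosen arbitrarily): the full characterization by distribution parameters determines every partial descriptor, but not the other way around.

```python
import math

# Full characterization: X = exp(Z) with Z ~ Normal(mu, sigma).
# The two parameters fix the entire lognormal distribution of X.
mu, sigma = 0.0, 0.5

# Partial characterization: the moments follow from the parameters.
mean = math.exp(mu + sigma**2 / 2)
var = (math.exp(sigma**2) - 1) * math.exp(2 * mu + sigma**2)
skew = (math.exp(sigma**2) + 2) * math.sqrt(math.exp(sigma**2) - 1)

print(f"mean = {mean:.3f}, std = {math.sqrt(var):.3f}, skewness = {skew:.3f}")
# The reverse does not hold: many different distributions share
# the same mean and standard deviation.
```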
We need empirical data to quantify the uncertainty, which brings in the study of statistics. The likelihood function is the bridge between probability and statistics, and it plays a pivotal role in Bayesian statistics as well. Therefore, we will focus on the construction of likelihood functions for various models. Reliability data often involve missing and incomplete information, and the likelihood function provides a proper mechanism to address these data imperfections as well.
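As a sketch of what such a construction looks like (exponential lifetimes with right-censoring; all data invented):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical lifetime data (years). Failures observed at these times:
failures = np.array([2.1, 3.5, 4.0, 6.2])
# Units still working when observation stopped (right-censored):
censored = np.array([5.0, 5.0, 7.5])

def neg_log_likelihood(rate):
    """Exponential model: density f(t) for observed failures,
    survival function S(t) for right-censored observations."""
    log_f = np.log(rate) - rate * failures  # log f(t) = log(rate) - rate * t
    log_S = -rate * censored                # log S(t) = -rate * t
    return -(log_f.sum() + log_S.sum())

res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 10.0), method="bounded")
print(f"MLE of the failure rate: {res.x:.4f} per year")
```

The censored observations enter through the survival function rather than the density; dropping them would bias the estimated rate upward.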
Regarding uncertainty propagation, we shall study both analytical methods and simulation-based methods.
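A minimal sketch of the simulation side, using a linear limit state with normal inputs so the exact answer is available for comparison (all numbers made up):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# Limit state g = R - S: resistance R and load S, both normal.
mu_R, sd_R = 10.0, 1.5
mu_S, sd_S = 6.0, 2.0

# Analytical result: for linear g with normal inputs,
# beta = (mu_R - mu_S) / sqrt(sd_R^2 + sd_S^2) and Pf = Phi(-beta).
beta = (mu_R - mu_S) / np.hypot(sd_R, sd_S)
pf_exact = norm.cdf(-beta)

# Crude Monte Carlo: push samples through g and count failures.
n = 1_000_000
R = rng.normal(mu_R, sd_R, n)
S = rng.normal(mu_S, sd_S, n)
pf_mc = np.mean(R - S <= 0)

print(f"beta = {beta:.2f}, exact Pf = {pf_exact:.4e}, MC Pf = {pf_mc:.4e}")
```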
If time permits, we will introduce the response surface method as a prelude to surrogate modelling for complicated system analysis.
Various application areas will be largely left for students to explore by themselves through their course projects.