Joseph Butler (1692-1752): Probability is the very guide of life.
Arnold Yuan: Uncertainty is the very guide to research; where there’s uncertainty there is research.
Infrastructure Canada (http://www.infrastructure.gc.ca/index-eng.html), under the Minister of Transport, Infrastructure and Communities, is the lead federal department responsible for infrastructure policy development and program delivery. Infrastructure Canada delivers a broad range of infrastructure programs, providing flexible and effective funding support to provincial, territorial, municipal, private sector and not-for-profit infrastructure projects. It creates partnerships and makes investments to build, upgrade and renew public infrastructure. The current Infrastructure Canada programs include:
Infrastructure Ontario (http://www.infrastructureontario.ca/home.aspx) is a crown corporation wholly owned by the Province of Ontario, established by the Ontario Infrastructure and Lands Corporation Act, 2011, which defines its responsibilities to its shareholder. Infrastructure Ontario is guided by Provincial Capital Plans, which build on the success of ReNew Ontario and the Province's Building a Better Tomorrow framework.
(to be updated)
The following graph speaks more than a thousand words to anyone trying to coax young people into studying civil engineering.
Source: Canadian Construction Overview (PDF), by Mark Casaletto, Reed Construction Data (2012)
Dr Mihailo D. Trifunac is a professor of earthquake engineering at the University of Southern California. He recently published a review paper on earthquake response spectra in Soil Dynamics and Earthquake Engineering. The opening paragraphs, in my opinion, represent an excellent essay on the relationship between engineering research and design code writing.
Design codes should be simple and robust, and their procedures directly based on sound understanding of the physical nature of the problem. Achieving this is difficult, especially when large, nonlinear, and chaotic dynamic response is involved. Those who work on code formulation must possess a wide range of expertise spanning all disciplines from strong earthquake ground motion to dynamics of structures, so that all relevant advances can be synthesized and the uncertainties minimized. Such requirements seem at first to demand a multi-author work. Yet that approach may be doomed from the onset, because the essence of the problem is to develop a unified and balanced synthesis. That consideration dictates single authorship, despite all the difficulties it poses. Inevitably, that single author would have to perform the most difficult task of assimilating material from many disciplines, and would require guidance and help from many colleagues. This view, which is at odds with the manner in which collective committee decisions have been made in the past developments of earthquake design codes, may explain in part why we have inherited so many incompatible and inconsistent formulations in the contemporary codes.
Ideally, the design codes should be formulated based directly on in-depth understanding of the intricacies of the nonlinear response of structures, in a way, which wisely simplifies the complex phenomena and grasps the significant and dominant phenomena. This view is also at odds with most of the current code development approaches, which rarely start anew from the evolving knowledge about the physical nature of the problem, and typically focus only on fine-tuning of the governing parameters and on the development of correction procedures aimed at extending or correcting the contemporary codes for the observed discrepancies.
from Trifunac, M. D. (2012). Earthquake response spectra for performance based design: a critical review. Soil Dynamics and Earthquake Engineering, 37:73-83.
Indeed, the current incremental code development approach makes codes thicker and thicker, design equations longer and longer (with more and more parameters), and the codes themselves harder and harder to understand. In many ways, this has also made engineering education more difficult. Oftentimes, professors are so busy trying to cover all the correction parameters that no time can be set aside to explain the first principles that dominate the design equations.
From the water we drink to the roads we drive on, public infrastructure forms the fabric of a nation. Year after year, governments of all orders, from the federal and provincial governments to municipalities, invest heavily in public infrastructure. Listed below are several important projects of historical significance.
(Source of the three graphs: Infrastructure Canada (2011). Building for Prosperity: Public Infrastructure in Canada)
Decision making under uncertainty is a hard subject. It is hard because there is no single criterion that is universally accepted by decision makers. For example, when the probability of the 'state' of the world is hard to evaluate, a risk averter (a person who avoids risk) may make decisions based on the worst scenario, whereas a risk seeker would rather bet on the optimistic scenario. Even if the worst scenario is used as the decision criterion, the decision still depends on which side of the coin is examined. On one side of the coin is the payoff; on the other, regret. If the focus is on the worst payoff, the decision maker might choose, among the alternative actions, the one with the maximum minimum payoff, i.e., the maximin criterion. Otherwise, she might choose the action that minimizes the maximum regret, and hence exercise the minimax regret criterion.
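To make the two criteria concrete, here is a minimal sketch in Python. The payoff table, action names, and states are entirely hypothetical, invented purely for illustration; no probabilities are assumed, as in the situation described above.

```python
# Hypothetical payoff table: rows are actions an asset manager might take,
# columns are two possible states of the world (e.g., the asset deteriorates
# quickly vs. slowly). All numbers are made up for illustration.
payoffs = {
    "replace": [100, -40],
    "repair":  [30, 10],
    "defer":   [0, 0],
}

# Maximin: look at each action's worst payoff, then pick the best of those.
maximin_choice = max(payoffs, key=lambda a: min(payoffs[a]))

# Minimax regret: regret in a state is the best payoff achievable in that
# state minus the payoff of the chosen action; pick the action whose
# largest regret is smallest.
n_states = len(payoffs["replace"])
best_in_state = [max(v[s] for v in payoffs.values()) for s in range(n_states)]
regrets = {a: [best_in_state[s] - v[s] for s in range(n_states)]
           for a, v in payoffs.items()}
minimax_regret_choice = min(regrets, key=lambda a: max(regrets[a]))

print(maximin_choice)         # 'repair'  (worst payoff 10 beats 0 and -40)
print(minimax_regret_choice)  # 'replace' (worst regret 50 beats 70 and 100)
```

Note that the two criteria recommend different actions for the same table: focusing on worst payoffs favours the cautious 'repair', while focusing on worst regrets favours 'replace'. This is exactly the two-sides-of-the-coin dependence described above.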
Similarly, even when the probabilities of the states of the world are given, the decision criteria are not unique. For instance, one might make a decision based solely on the most likely scenario. Nevertheless, the most often used decision criterion is the so-called maximized expected value criterion. Under this criterion, a decision maker chooses the action with the maximum expected value. Suppose that the 'world' of the decision problem has only a finite number of states, and that there is a known probability p_i associated with each state i. Then the expected value of an action is calculated as
EV = p_1 × v_1 + … + p_n × v_n
where v_i is the value of the outcome of that action when the world is in state i. Here value is used as a synonym for utility.
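The calculation is straightforward. Below is a minimal sketch of the maximized expected value criterion, reusing the hypothetical payoff table from the earlier sketch and assuming, purely for illustration, that the two states have known probabilities.

```python
# Same hypothetical payoff table as before; the probabilities are also
# assumed for illustration only.
payoffs = {
    "replace": [100, -40],
    "repair":  [30, 10],
    "defer":   [0, 0],
}
p = [0.3, 0.7]  # p_i: assumed probability of state i; must sum to 1

# EV(action) = p_1*v_1 + ... + p_n*v_n
ev = {a: sum(p_i * v_i for p_i, v_i in zip(p, v)) for a, v in payoffs.items()}
best_action = max(ev, key=ev.get)

print(ev)           # {'replace': 2.0, 'repair': 16.0, 'defer': 0.0}
print(best_action)  # 'repair'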
As shown in the expression above, the expected value criterion involves two major terms: probability and utility. Unfortunately, neither of the terms is easy to understand.
A simple question to ask is: Why should the maximized expected value criterion be considered rational? In other words, why is it rational for a decision maker to maximize the expected value?
There are two different arguments for the expected value principle. The first is based on the law of large numbers (LLN) in probability: the criterion is rational because, in the long run, the decision maker will be better off if s/he maximizes expected value. The second takes an axiomatic approach, aiming to derive the expected value criterion from a few fundamental, well-accepted axioms of rational decision making.
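The long-run argument can be made concrete with a short simulation, again using the hypothetical payoffs and probabilities from the sketches above (the numbers and the seed are assumptions for illustration only).

```python
import random

# Simulate facing the same decision many times: each action's average
# realized payoff converges to its expected value, so the EV-maximizing
# action wins in the long run. All numbers are hypothetical.
random.seed(1)
payoffs = {"replace": [100, -40], "repair": [30, 10], "defer": [0, 0]}
p = [0.3, 0.7]

n = 100_000
totals = {a: 0.0 for a in payoffs}
for _ in range(n):
    state = 0 if random.random() < p[0] else 1  # draw a state of the world
    for a, v in payoffs.items():
        totals[a] += v[state]

for a, total in totals.items():
    print(a, round(total / n, 2))  # averages approach the EVs: 2.0, 16.0, 0.0
```

Over many repetitions the averages settle near the expected values, which is precisely the sense in which the decision maker is "better off in the long run".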
However, the LLN argument has been refuted by many prominent decision theorists on several grounds. Keynes famously objected to the LLN by stating: "In the long run we are all dead". He suggested that no real-life decision maker will ever face any decision an infinite number of times; mathematical facts about what would happen after an infinite number of repetitions are therefore of little normative relevance. On the contrary, real-life decisions are often one-shot in nature.