Fully Funded PhD Student Positions Available in the RIII Lab

The Risk-Informed Infrastructure Innovation (RIII) Lab, led by Professor Arnold Yuan, is looking for one or two exceptional PhD students to work in the area of digital-twin-enabled bridge asset management. Students are required to have a solid background in civil engineering. Research experience in building information modelling, structural health monitoring, deep reinforcement learning, or infrastructure asset management will be an important asset.

Young researchers looking for research opportunities are encouraged to contact Prof. Arnold Yuan via email (arnold dot yuan at torontomu dot ca).

When you send your first inquiry email, please

  • indicate what research topics you would like to work on,
  • identify at least three major active researchers (not your references) in the world, and
  • explain why you have chosen TMU and me as your potential supervisor for further study.

Last update: 2024-10-04

Posted in Personal Life | Comments Off on Fully Funded PhD Student Positions Available in the RIII Lab

CV8311 F24 Lecture 3 Probabilistic Risk Analysis

After introducing the concept of probability and the rules of probability calculation, I jumped to the applications. Probabilistic Risk Analysis (PRA) is a subject that uses probability and only probability. It does not even need the notion of random variables, which will be covered in the next lecture.

The lecture starts with two student presentations. Bara, a PhD student, presented a summary of the following two papers:

  • [KG81] Kaplan S, Garrick BJ (1981). On the quantitative definition of risk. Risk Analysis 1(1): 11-27.
  • [K97] Kaplan S (1997). The words of risk analysis. Risk Analysis 17(4): 407-417.

Jason, an MASc student, presented a summary of the following two papers:

  • [PC96] Pate-Cornell ME (1996). Uncertainties in risk analysis: six levels of treatment. Reliability Engineering & System Safety 54: 95-111.
  • [DK09] Der Kiureghian A, Ditlevsen O (2009). Aleatory or epistemic? Does it matter? Structural Safety 31: 105-112.

The first set focuses on the definition of risk, whereas the second focuses on the treatment of uncertainties. In my view, any student interested in engineering risk and reliability must read these four papers carefully and re-read them from time to time. Such re-reading opportunities are a professor’s privilege.

Risk

Bara and Jason summarized these two topics very well. KG81 is the second paper of the flagship journal in risk analysis. It lays the cornerstone for the whole subject of risk analysis. In the paper, Kaplan and Garrick defined risk and expressed it as a triplet <si, pi, xi>, with si denoting a hazard scenario, pi the frequency of that scenario, and xi the consequence or outcome of the scenario. Suppose N hazard scenarios have been identified. A special scenario, the (N+1)th, is then added: “All else”. This tactic addresses a criticism that the paper quoted without naming its source:

A risk analysis is essentially a listing of scenarios. In reality, the list is infinite. Your analysis, and any analysis, is perforce finite, hence incomplete. Therefore no matter how thoroughly and carefully you have done your work, I am not going to trust your results. I’m not worried about the scenarios you have identified, but about those you haven’t thought of. Thus I am never going to be satisfied.

This criticism actually came from the famous Lewis Report, a review of the Reactor Safety Study (WASH-1400), the very first probabilistic (or quantitative) risk analysis study in human history. I personally think this criticism is still valid. In modern language, it points to our helplessness in the face of ‘Black Swan’ events. There are always things that we don’t know we don’t know about. Risk analysis is a very humbling exercise! In this sense, I fully agree with Terje Aven’s proposal that risk analysis is a process of documenting what we know, how much we know, and what we don’t know. Aven has two papers that are worth careful reading:

  • Aven T (2010). Some reflections on uncertainty analysis and management. Reliability Engineering & System Safety 95: 195-201.
  • Aven T (2013). A conceptual framework for linking risk and the elements of the data–information–knowledge–wisdom (DIKW) hierarchy. Reliability Engineering & System Safety 111: 30-36.

In KG81, the authors presented the so-called two levels of risk analysis. The simple triplet <si, pi, xi> explained above, with pi interpreted as frequency – an objective measure of likelihood – is the level-1 risk analysis. To account for modelling uncertainty – an epistemic uncertainty – the triplet is modified to <si, qi(pi), xi>, with qi(pi) representing the degree of belief about the frequency pi. In other words, Kaplan and Garrick proposed a two-stage treatment of uncertainties in probabilistic risk analysis.

It is because of the unknown unknowns and modelling uncertainties that we should use the term risk-informed decision making, rather than risk-based decision making. Risk analysis can never be truly complete.

In terms of results visualization, Kaplan and Garrick explain the risk curve, sometimes also referred to as Farmer’s curve, which is essentially the survival function of the consequence. With epistemic uncertainties considered, the risk curve turns into a family of risk curves. The paper also explained the multi-dimensional consequence situation, where the risk curve becomes a risk surface. The multi-dimensional consequence is also known as the multi-attribute consequence.
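The level-1 risk curve can be sketched numerically. A minimal illustration of my own (the scenario frequencies and consequences below are made up, not taken from the paper): the curve is just the complementary cumulative frequency of the consequence.

```python
# Illustrative level-1 scenario list: (frequency per year, consequence x_i).
scenarios = [
    (1e-2, 1.0),
    (1e-3, 10.0),
    (1e-4, 100.0),
    (1e-5, 1000.0),
]

def risk_curve(x, scenarios):
    """Annual frequency of the consequence exceeding x (Farmer's curve)."""
    return sum(p for p, xi in scenarios if xi > x)

# Frequency of exceeding 5 consequence units:
print(risk_curve(5.0, scenarios))  # ≈ 1e-3 + 1e-4 + 1e-5 = 0.00111
```

Plotting `risk_curve` against x on log-log axes gives the familiar downward-sloping curve; a family of such curves arises when each pi is itself uncertain.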

In the second paper [K97], Kaplan extended the risk triplet to the format <si, qi(pi), qi(xi)> to cover the complexity of technological systems, where the consequence of a single hazard scenario may be best modelled as a random variable rather than a deterministic value. Another interesting point that Kaplan explained in the new paper was the dynamic view of scenarios (see the figures extracted from K97 below). This immediately connects risk analysis to event tree analysis and decision analysis.

Uncertainty

PC96 classifies risk analyses by their treatment of uncertainty. Prof. Pate-Cornell (Prof. Cornell’s wife) ranked them from Level 0 to Level 5 (see below). This roughly corresponds to the three levels of structural reliability analysis and design first proposed by Madsen, Krenk and Lind (1985). Level 3 structural reliability covers Pate-Cornell’s levels 4 and 5 of uncertainty treatment.

Uncertainty is the raison d’être of risk and reliability analysis. How to deal with uncertainties is a fundamental question. Historically, uncertainties have been divided into aleatory uncertainty and epistemic uncertainty. But is this categorization necessary? Does it make risk communication easier or more difficult? [DK09] was first presented at a workshop organized by the JCSS in tribute to Stanford Professor C. Allin Cornell in March 2007.

I first read DK09 not long after I wrote a discussion paper with Field and Grigoriu on model selection (see below). Indeed, I fully agree with Field and Grigoriu, and much earlier George Box, that there are no correct models, only useful ones. Therefore, utility is the very guide. Later I studied the relationship between measurement error and residual error in a regression model. I also studied the nature of chaos and quantum uncertainty. After that I was convinced that, at the philosophical level, there are only two types of inherent or aleatory uncertainty: chaos and quantum uncertainty. Interestingly enough, both have to do with measurement. All other uncertainties are epistemic, although you may treat them (wholly or partly) as aleatory, depending on the resources and time you can afford to spend reducing them. DK09 states that “the characterization of uncertainty becomes a pragmatic choice dependent on the purpose of the application.” I agree, and would add that the pragmatic choice is often forced upon us for managerial reasons.

  • Yuan, X.-X. (2009). Discussion of “Model selection in applied science and engineering: a decision-theoretic approach” by R. V. Field and M. Grigoriu, Journal of Engineering Mechanics, ASCE. 135(4), 358-359.
  • Box GEP (1976). Science and Statistics. Journal of the American Statistical Association. 71 (356): 791-799

Network Analysis

Probabilistic Risk Analysis uses two major techniques: (1) total probability and (2) conditioning. In the lecture, I used a simple bridge network as an example to explain network reliability analysis. I first explained the reliability analysis of a series system and a parallel system. For the simple network, I then explained the path view and the cut view, and the concepts of minimal paths and minimal cuts. Next came the inclusion-exclusion equation (and Bonferroni’s inequality). Finally, I introduced the conditioning method, which combines conditional probabilities with the total probability formula.
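The conditioning method can be sketched for the classic five-component bridge network (the component reliabilities below are illustrative assumptions, not the lecture’s numbers): condition on the bridge component 3, then combine the two conditional system reliabilities with the total probability formula.

```python
# Sketch of the conditioning method for the classic five-component bridge
# network. Component reliabilities are illustrative.
r = {i: 0.9 for i in range(1, 6)}  # Pr(component i works)

def series(*ps):
    """Series system: all components must work."""
    out = 1.0
    for p in ps:
        out *= p
    return out

def parallel(*ps):
    """Parallel system: at least one component works."""
    out = 1.0
    for p in ps:
        out *= 1 - p
    return 1 - out

# Condition on the bridge component 3 and apply total probability:
#   3 works -> (1 or 2) in series with (4 or 5)
#   3 fails -> path {1,4} in parallel with path {2,5}
R_given_up = series(parallel(r[1], r[2]), parallel(r[4], r[5]))
R_given_down = parallel(series(r[1], r[4]), series(r[2], r[5]))
R = r[3] * R_given_up + (1 - r[3]) * R_given_down
print(R)  # ≈ 0.97848
```

The same decomposition generalizes: conditioning on the component that destroys the series-parallel structure reduces the network to two simpler networks.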

Qualitative risk analysis techniques such as event tree and fault tree methods will be introduced through student presentations next week. Common cause failures are explained in the lecture notes but left for students to read. Statistical estimation of frequency from data will be explained later when we get to the statistical module. Estimation of degrees of belief using expert judgment is not explained, but Cooke’s monograph is a good reference. With all these, students should be able to carry out a rudimentary probabilistic risk analysis.

  • Cooke, R.M. (1991). Experts in Uncertainty: Opinion and Subjective Probability in Science. Oxford University Press.

CV8311 F24, Lecture 2 Probability

This lecture introduces the concept of probability and basic probability laws. Applications are deferred to next lecture.

There are two types of lecturers: one follows a plan strictly, be it in the form of PowerPoint slides, lecture notes, or a textbook; the other jumps around, even though they have a plan. Apparently, I belong to the second kind. The only plan I followed was the following informal definition of probability:

Probability is a measure of how likely it is that something will happen, or that a statement is true.

This teaching approach requires students to do their own readings. To me, learning involves two processes: private learning and absorbing; and discussions and debates. The purpose of discussions is to help integrate the new knowledge into our own existing knowledge network – a process of Bayesian updating.

Back to the informal definition of probability given above: I forget where I took this definition from – likely Wikipedia – but it fits my purpose of emphasizing the two faces of probability: a probability model can be applied both to a natural system (e.g., a building, a transport network), which is objective, and to a decision system (e.g., a design process, an asset management planning process).

I provided students with lecture notes. The notes include a section “What is probability?” explaining various interpretations of probability. Although there are at least six different interpretations, my notes focus on only the classical, frequentist, and Bayesian ones. I decided not to discuss this during the lecture. These are interesting but hard topics; a thorough discussion would require more than three hours, which I cannot afford. On the other hand, not every student is philosophically oriented. From an applications point of view, most students probably just want to know which stance you are taking and why. This is probably a safe approach.

My selection is a Bayesian one. From a pragmatic view, a coin to be tossed and a coin already covered on the desk are, to me (a modeller), no different. Both can be modelled by a probability model. Unless there is a way (or we can afford a way) to collect all the data needed to describe the coin dynamics from the start of the toss to the landing, these two coins can take the same probability model. From a theoretic view, separating probability into frequentist probability and subjective (or Bayesian) probability is a fact of the history of knowledge discovery, but we need not fall into the trap of history. All knowledge is relative – using the jargon of probability, conditional! All theories have boundaries. Therefore, a full discourse on any serious matter must take the background knowledge (or assumptions) into consideration. This is the essence of risk analysis, and it is the major reason we speak of ‘risk-informed’ rather than ‘risk-based’ decision making. Separating probability into subjective and objective probabilities creates a new trap for ourselves: what is the sum (or product) of an apple and an orange?

Suppose the coin to be tossed is modelled by an ‘objective’ probability, and the coin covered on the desk by a ‘subjective’ probability. If a coin shows heads, score 1; otherwise, 0. Ask: what is the probability that the sum of the two coins will be 1? There are three possibilities: 0 (TT), 1 (HT and TH), and 2 (HH). The answer is 50%. Is this like adding an apple to an orange?
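The two-coin question can be checked by brute-force enumeration; a minimal sketch assuming both coins are fair:

```python
from itertools import product

# Score 1 for heads, 0 for tails; both coins assumed fair.
p = {1: 0.5, 0: 0.5}
prob_sum_is_1 = sum(p[a] * p[b] for a, b in product((0, 1), repeat=2) if a + b == 1)
print(prob_sum_is_1)  # 0.5
```

The arithmetic goes through identically whether each 0.5 is read as a frequency or as a degree of belief.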

Why do we need to formalize the definition of probability?

Andrey Kolmogorov axiomatized probability theory. As engineering students, do we really need to care about the theory?

I provided two motivational examples in the lecture. One is taken from Kahneman’s Thinking, Fast and Slow:

Linda is thirty-one years old, single, outspoken, and very bright.  She majored in philosophy.  As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in antinuclear demonstrations.

Does Linda look more like

  • A. A bank teller,
  • B. An insurance salesperson, or
  • C. A bank teller active in the feminist movement?

This example shows the significance of following probability laws in daily reasoning.

The second motivational example is the famous Bertrand’s paradox:

A random chord is drawn on the unit circle.  What is the probability that its length exceeds √3, the length of the side of the equilateral triangle inscribed in the unit circle?

What do we mean by randomness? Three different definitions of ‘a random chord’ yield three different probabilities: 1/2, 1/3, and 1/4. Amazing!
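A Monte Carlo sketch of the paradox (my own implementation of the three common chord definitions, with an illustrative sample size) reproduces the three answers:

```python
import math
import random

random.seed(0)
N = 100_000
THRESH = math.sqrt(3)  # side length of the inscribed equilateral triangle

def random_endpoints():
    """Chord between two uniform random points on the unit circle."""
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2 * math.sin(abs(a - b) / 2)

def random_radial_midpoint():
    """Chord whose midpoint lies at a uniform distance along a radius."""
    d = random.uniform(0, 1)
    return 2 * math.sqrt(1 - d * d)

def random_midpoint_in_disk():
    """Chord whose midpoint is uniform inside the disk (rejection sampling)."""
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return 2 * math.sqrt(1 - (x * x + y * y))

estimates = {}
for method, exact in [(random_endpoints, 1 / 3),
                      (random_radial_midpoint, 1 / 2),
                      (random_midpoint_in_disk, 1 / 4)]:
    est = sum(method() > THRESH for _ in range(N)) / N
    estimates[method.__name__] = est
    print(f"{method.__name__}: {est:.3f} (exact {exact:.3f})")
```

Each sampling scheme is a legitimate reading of ‘a random chord’, which is exactly the point of the paradox: the answer depends on the model, not just the question.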

Every Probability is Conditional!

The conditional probability of A given B is defined as Pr(A|B) = Pr(AB)/Pr(B). The event B becomes a normalizer. Based on this, we can take Pr(A) as Pr(A|O), where O represents the sample space $\Omega$ (thanks, TMU’s WordPress, for not supporting LaTeX!).

Two events A and B are said to be independent of each other if Pr(A|B) = Pr(A), or Pr(B|A) = Pr(B), or Pr(AB) = Pr(A)Pr(B). In daily language, independence means that learning the new information B does not change your estimate (or belief) of the probability of A.

A common confusion among beginners is between the notions of mutual exclusion and statistical independence. Events A and B are mutually exclusive if Pr(AB) = 0. In the Venn diagram, A and B have no overlapping area, so mutual exclusion is easy to visualize. Independence is much harder to visualize, although an area analogy to probability may help – that is, think of Pr(A) as the proportion of the whole sample space occupied by A, Area(A)/Area(O); then Pr(A|B) becomes Area(AB)/Area(B). If the proportion of A to O happens to be the same as the proportion of AB to B, then A and B are independent.
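The area analogy becomes concrete on a finite sample space. In this small sketch of my own (two fair dice), the events A = ‘first die is even’ and B = ‘the sum is 7’ happen to be independent:

```python
# Sample space: all 36 equally likely outcomes of rolling two fair dice.
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]
A = [w for w in omega if w[0] % 2 == 0]   # first die is even
B = [w for w in omega if sum(w) == 7]     # the sum is 7
AB = [w for w in A if w in B]

pA = len(A) / len(omega)        # "area" of A relative to the sample space
pA_given_B = len(AB) / len(B)   # "area" of AB relative to B
print(pA, pA_given_B)  # 0.5 0.5 -> A and B are independent
```

Note that A and B overlap (they are not mutually exclusive), yet they are independent: the two notions are entirely different.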

Conditional Independence and Bayesian Network

Two events A and B are said to be conditionally independent of each other given C if Pr(AB|C) = Pr(A|C) Pr(B|C).

Conditional independence is a powerful assumption for dealing with complex joint events. A special case is the Markov property:

Pr(X1 X2 … Xn) = Pr(X1) Pr(X2|X1) Pr(X3|X2) … Pr(Xn|Xn-1)
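As a sketch of how this factorization is used, consider a toy two-state Markov chain (the transition numbers below are made up for illustration):

```python
# Two-state Markov chain on states {0, 1}; numbers are illustrative.
p0 = {0: 0.5, 1: 0.5}              # initial distribution Pr(X1)
P = {0: {0: 0.9, 1: 0.1},          # transition probabilities Pr(X_{k+1} | X_k)
     1: {0: 0.3, 1: 0.7}}

def path_probability(path):
    """Pr(X1, ..., Xn) computed via the Markov factorization."""
    prob = p0[path[0]]
    for a, b in zip(path, path[1:]):
        prob *= P[a][b]
    return prob

print(path_probability([0, 0, 1, 1]))  # ≈ 0.5 * 0.9 * 0.1 * 0.7 = 0.0315
```

Without the conditional-independence assumption, the joint probability of an n-step path would require a table exponential in n; the Markov factorization reduces it to a product of pairwise terms.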

A modern application of conditional independence is the notion of a Bayesian network, which will be explained in the next lecture or the one after.

Total Probability Formula

The total probability formula is a powerful tool in probabilistic risk analysis. The formula looks deceptively simple:

Pr(A) = Pr(AE1) + … + Pr(AEn) = Pr(A|E1) Pr(E1) + … + Pr(A|En) Pr(En)

In real-world problems, there may be multiple, mutually exclusive ways to trigger an event A. These exhaustive, mutually exclusive events are denoted E1, …, En (sometimes referred to as initiating events). The probability of A can then be evaluated piece by piece, following the old ‘divide-and-conquer’ wisdom.
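A minimal numerical sketch of this divide-and-conquer, with made-up numbers: three mutually exclusive, exhaustive initiating events Ei and conditional probabilities Pr(A|Ei):

```python
# Illustrative initiating events (exclusive, exhaustive) and conditionals.
p_E = [0.7, 0.2, 0.1]              # Pr(E1), Pr(E2), Pr(E3)
p_A_given_E = [0.001, 0.01, 0.1]   # Pr(A | Ei)

# Total probability formula: Pr(A) = sum_i Pr(A|Ei) Pr(Ei)
p_A = sum(pe * pa for pe, pa in zip(p_E, p_A_given_E))
print(p_A)  # ≈ 0.7*0.001 + 0.2*0.01 + 0.1*0.1 = 0.0127
```

The rare-but-severe initiating event E3 dominates Pr(A) here, a pattern that recurs throughout risk analysis.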

Bayes Rule

Many people underestimate the value of the Bayes formula. This time, I introduced it with a toy problem first:

A city is served by three overnight mail carriers called A, B, and C.  The past record indicates that they fail to deliver the mail on time 1%, 2%, and 3% of the time, respectively.  If the overnight letter arrived late, what is the probability that it was sent via A? via B? via C?

The background information was intentionally left out. This aligns with the historical development of the so-called inverse probability problem: knowing Pr(A|B), how do we find Pr(B|A)?
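For illustration, here is the toy problem worked out under one loudly stated assumption: the problem deliberately omits the carriers’ shares of the mail, so this sketch simply assumes each carrier handles one third of the letters.

```python
# ASSUMED equal market shares -- the problem leaves this background missing.
prior = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}
p_late = {"A": 0.01, "B": 0.02, "C": 0.03}  # Pr(late | carrier), from the problem

# Bayes rule: posterior ∝ prior × likelihood; the total probability of a
# late letter is the normalizer.
evidence = sum(prior[c] * p_late[c] for c in prior)
posterior = {c: prior[c] * p_late[c] / evidence for c in prior}

for c, p in posterior.items():
    print(f"Pr({c} | late) = {p:.4f}")   # 1/6, 2/6, 3/6 under equal priors
```

A different assumption about the shares gives different posteriors, which is exactly why the missing background information matters.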

The value of the Bayes formula is not the mathematical formula itself; it lies in the interpretations of Pr(A) and Pr(A|E), with E representing the new ‘evidence’ and A the event of major interest.

With new evidence E arriving, how do we update our belief about A? There are two layers of issues here. The first is that A and E must somehow be dependent. If they are independent of each other, the new evidence provides no information about A, and hence there is no updating.

The second layer of significance is the ‘outside-of-the-box’, or systems, viewpoint. To complete the updating, we need to consider not only how A would affect the occurrence of E, but also the other ‘reasons’ that may trigger the occurrence of E. Only through this holistic assessment can one properly update their belief about A.


CV8311 Risk and Reliability for Engineers Fall 2024, Lecture 1

First Lecture, September 9, Overview

Leaves of the honey locusts outside of my office have turned light yellow. The fall is here.

As always, the first lecture needs to give students an overview of the subject and then tell them the focus of the course. Nobody can cover all the topics of a subject. Selecting the topics for the next 11 or 12 sessions is not an easy task. When you are new to the subject, you may struggle to fill the full course. But when you have more than time allows, you have to cut. And cutting hurts.

Last night Amazon finally delivered my copy of Der Kiureghian (2020), Structural and System Reliability, to my doorstep. I had a quick scan on the way to school. It is a great text, but I still hesitate to use it as the text for this course. It can be a good one for structural engineering students, but probably only half of my class comes from structural/geotechnical engineering; the other half comes from transportation, environmental, and construction management.

Reliability itself may need two courses: one for structural reliability, the other for engineering reliability. Structural reliability takes a deductive approach, moving from random variables and their functions to reliability-based design. Monte Carlo simulation, surrogate modelling, code calibration and development, load combinations, and reliability evaluation of existing structures can be decently covered. Engineering reliability takes an inductive approach, moving from reliability or failure data analysis through stochastic deterioration modelling to inspection and maintenance optimization. There is no single textbook, to my knowledge, that covers both. Civil engineers still don’t talk to mechanical or electrical engineers. Can I teach students to build this bridge? I tried a few times, without success. Students only complained that it was a hard course. Sad!

I did a pre-course survey through D2L last week. One question asks students to write down their learning expectations. One student put: “I hope to learn about Canadian laws and building codes.” I used to put ‘Beyond great expectations’ as the subtitle of the first lecture slide, but this expectation is huge! It is not irrelevant. I will try to entice some students into code comparisons (e.g., NBCC vs. ASCE 7, AASHTO vs. CHBDC). Laws, ultimately, are among the most effective frameworks for managing risks. At one time, I thought of initiating a private research project matching construction contracts with construction project risks. So far, this idea remains only an idea. So, I am sorry, dear student; I won’t be able to cover laws.

Since the word reliability has already made you so busy, why bother to introduce risk?

Reliability is an abstract concept. In structural reliability, the concept arises from the need to define safety and serviceability. Most of us can agree that there is no absolute safety, so we need a measure of relative safety to help address the ‘how safe is safe enough’ question. Probability turns out to be a convenient, existing mathematical notion. Hence, reliability denotes the probability of no failure, or one minus the failure probability. Neglect for now all the analysis details, which Prof. Der Kiureghian takes a whole book to explain. But what is the meaning of probability? Can we validate the calculated probability of failure empirically? If not, is structural reliability a scientific theory?

This brings in the subjective interpretation of probability. The whole structural reliability community takes this interpretation, a tradition that starts with Prof. Cornell (see the preface of his 1971 textbook). In a recent article for JCSS 50, “Interpretation of probability in structural safety – A philosophical conundrum”, Ton Vrouwenvelder et al. (2024) reiterated this Bayesian, subjective stand.

Students won’t fully buy into this until they are presented with the question in a decision-making setting. Past engineering education places too much emphasis on the scientific foundations of engineering, yet gives students little discussion of decision making. In my view, introducing the basic setting and categorization of engineering decisions is a good start to the subject.

A decision involves three basic elements: alternatives or options, outcomes, and a criterion or criteria. To choose the best option, one needs to list all meaningful options, obtain the outcome of every option, and know how to compare the outcomes based on the decision maker’s value system. Decision theory focuses on the last question, i.e., how to compare outcomes in order to conclude the decision. This seems an easy task, but it is actually not. There are two basic categories:

  • Multi-criteria decision making
  • Decision making under uncertainty

In the first category, the outcomes of a decision alternative involve multiple aspects. For example, when we buy a new car, we need to compare multiple models. In this comparison, we may look at not only the price but also the power, fuel consumption, safety, aesthetics, and so on. Some of the attributes can be converted to a standard unit. For example, the purchase price and the fuel consumption may be lumped into a lifecycle cost if we can properly estimate the expected service life. However, apparently not all attributes can be converted to a unified measure. This puts the decision maker in the difficult situation of comparing ‘apples and oranges.’

Decision making under uncertainty is different. In this case, the outcome of a decision alternative is uncertain. In general, it can be described as a random variable with a given probability distribution. The question thus turns into a comparison of two random variables. In the simplest case, we face a decision like the following between two options:

  • Option 1: The net benefit is zero.
  • Option 2: The net benefit is $5000 with 20% probability and −$1000 with 80% probability.

Do you prefer Option 1 or 2?

At its core, decision theory analyzes the coherence of decision rules. For example, do we take the option based on the mean value (in probability, also called the expected value, or expectation) of the random outcome? Does the mean criterion provide a coherent decision? Does the variance or standard deviation matter? How do we settle the conservatism embedded in engineering? Pushed to the limit, we often face hard engineering decisions of “low probability, large consequences.” That is why we need to know a little decision theory. More importantly, statistical decision theory takes probability purely as the machinery of logic. It does not matter whether the probability is interpreted as an empirical frequency, an inherent nature of the subject under study, or the decision maker’s degree of belief about the subject. Once we understand this, we can, for the most part, settle the struggle over whether we need to separate aleatory and epistemic uncertainties.
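The two options above can be compared numerically under the mean and standard-deviation criteria:

```python
# Option 2 from the text: (net benefit, probability) pairs.
option2 = [(5000.0, 0.2), (-1000.0, 0.8)]

mean2 = sum(x * p for x, p in option2)                      # expected net benefit
sd2 = sum(p * (x - mean2) ** 2 for x, p in option2) ** 0.5  # standard deviation

print(mean2, sd2)  # ≈ 200, 2400
```

The mean criterion prefers Option 2 ($200 > $0), yet its standard deviation of $2400 dwarfs the mean; a risk-averse decision maker may well prefer Option 1, which is exactly why the coherence of decision rules deserves study.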

This course focuses on decision making under uncertainty (DMUU), in which the uncertainty can be modelled using probability theory. We will not discuss game theory, which deals with another DMUU situation, where a decision involves multiple opponents and the uncertainty is primarily caused by the opponents’ decisions.

Engineering decisions add two complications to the DMUU problems: First, the number of decision alternatives can be huge or even infinite. Second, the outcome analysis of each alternative often involves complicated system analysis. Mathematical optimization is used to address the first aspect, whereas probabilistic system analysis, or simply, reliability analysis, is used to address the second aspect.

Broadly speaking, reliability analysis addresses two major problems:

  1. Characterization and quantification of uncertainties
  2. Propagation of uncertainties from inputs to outputs

In terms of uncertainty characterization, there are probability models, random variable models, random vector models, stochastic process models, and random field models. For each model, we need to understand full and partial characterization. For example, for a random variable model, full characterization requires determining its probability distribution, but the variable can be partially characterized by its mean, standard deviation, skewness, kurtosis, etc. How do the full and partial characterizations relate to each other? This is also an interesting question. Understanding it will help you understand the various reliability methods and their relationships.
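A small sketch of the full-versus-partial distinction, using an illustrative lognormal variable (my choice of distribution, not the course’s): the distribution itself is the full characterization, while the sample mean and standard deviation form a partial characterization that can be checked against the exact moments.

```python
import math
import random

random.seed(1)
mu, sigma = 0.0, 0.5                      # parameters of ln X (illustrative)
xs = [math.exp(random.gauss(mu, sigma)) for _ in range(200_000)]

# Partial characterization from data: first two sample moments.
sample_mean = sum(xs) / len(xs)
sample_sd = (sum((x - sample_mean) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

# Exact moments implied by the full characterization (the distribution).
exact_mean = math.exp(mu + sigma ** 2 / 2)
exact_sd = exact_mean * math.sqrt(math.exp(sigma ** 2) - 1)

print(f"mean: sample {sample_mean:.4f}, exact {exact_mean:.4f}")
print(f"sd:   sample {sample_sd:.4f}, exact {exact_sd:.4f}")
```

The moments agree with the distribution, but not conversely: many distributions share the same first two moments, which is why moment-based (partial) reliability methods and distribution-based (full) methods can give different answers.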

We need empirical data to quantify the uncertainty. This brings in the study of statistics. The likelihood function is the bridge between probability and statistics, and it plays a pivotal role in Bayesian statistics as well. Therefore, we will focus on the construction of likelihood functions for various models. Reliability data often involve missing and incomplete information; the likelihood function provides a proper mechanism for addressing these data imperfections as well.

Regarding uncertainty propagation, we shall study both analytical methods and simulation-based methods.

If time permits, we will introduce Response Surface as the prelude to surrogate modelling for complicated system analysis.

Various application areas will be largely left for students to explore by themselves through their course projects.


TMU AM Hybinar No. 6

Role and Importance of Resilience and Engineering Asset Management at Times of Major, Large-Scale Instabilities and Disruptions

  • Speaker: Dr. Dragan Komljenovic, Senior Research Scientist, Institut de recherche d’Hydro-Québec (IREQ)
  • Time & Date: 14:10-15:30 EST, Friday, February 24, 2023
  • Location: Online
  • Zoom Link: Please register for the Zoom link. If you want to be included in the subscription list, please write to Dr. Arnold Yuan.

Abstract

Contemporary organizations function in a complex business and operational environment composed of closely interdependent systems. They are also complex by their internal structure, management and deployed modern technologies. This complexity is not always well understood and cannot be efficiently controlled. As the complexity and interdependencies increase, man-made systems become more unstable creating conditions for cascading, system-level failures causing serious threats to both themselves and society in general.

Such breakdowns may consist of a) serious physical damage to and destruction of their physical assets (caused by natural disasters, extreme weather phenomena and climate change, malicious human actions, etc.), b) large functional disruptions with no physical damage to assets (caused by major internal disturbances within the organization, market crashes, pandemics, wars, disruptions of supply chains, etc.), or c) both. These sources of risk are essentially external to organizations, which cannot control them but are deeply affected by them.

Recent examples of such functional disruptions include the Covid-19 pandemic and the Russia-Ukraine war, which affect all sectors of life and business worldwide. They convincingly show that we need to think, plan and act globally to deal with such situations, which will also occur in the future. Thus, organizations must find ways of coping with this reality to remain economically viable. We are of the opinion that the concepts of structured Asset Management (AM) and resilience, put together, may provide an efficient framework in this regard.

Two case studies in a major North American electrical utility (Hydro-Quebec) demonstrate the applicability of this approach: i) an exceptional ice storm with significant damage to physical assets, and ii) coping with the challenges of COVID-19 with no destruction of physical assets.

About the Speaker

Dragan Komljenovic received his BSc at the University of Tuzla, his MSc at the University of Belgrade, his first PhD at Laval University (Quebec City, Canada) in 2002, and his second PhD in Industrial Engineering, in the field of Engineering Asset Management, at the University of Quebec in Trois-Rivieres (UQTR), Canada, in 2018. He works as a Senior Research Scientist at Hydro-Quebec’s Research Institute (IREQ) in the fields of reliability, asset management, risk analysis and maintenance optimization. Dragan worked for almost 12 years as a Reliability and Nuclear Safety Engineer at the Gentilly-2 Nuclear Power Plant, Hydro-Quebec. He collaborates with several universities in Canada and abroad. Dragan has published more than 90 refereed journal and conference papers. He is a Fellow of the International Society of Engineering Asset Management (ISEAM) and Vice-President of the Montreal Chapter of the Society of Reliability Engineers (SRE). Dragan has professional engineer status in the province of Quebec, Canada.

Dragan Komljenovic, ing., Ph.D., FISEAM

Prediction and Reliability

Institut de recherche d’Hydro-Québec (IREQ)

1800, boul. Lionel-Boulet

Varennes, QC; J3X 1S1, Canada

Phone: +1-450-652-8741

Email: komljenovic.dragan@ireq.ca

https://www.hydroquebec.com/about/

Scopus Author Identifier: 6505846970

ORCID iD: https://orcid.org/0000-0002-1542-4426

https://www.scopus.com/authid/detail.uri?authorId=6505846970
https://scholar.google.ca/citations?user=otcTRjQAAAAJ&hl=en

TMU AM Hybinar No. 5

Sustainable infrastructure is a two-way street: balancing environmental and condition performance goals

  • Speaker: Dr. Omar Swei, Assistant Professor, Department of Civil Engineering, University of British Columbia
  • Time & Date: 14:10-15:30 EST, Friday, January 27, 2023
  • Location: Online
  • Zoom Link: Please register here for the zoom link.

Abstract

Governmental agencies are under increasing pressure to mitigate the global warming impact of our infrastructure systems. This objective, however, must be carefully balanced against other performance metrics of interest to agencies (e.g., pavement condition). This presentation will highlight recent work aimed at understanding these tradeoffs for managing infrastructure systems at both the facility and network levels. Through a series of case studies, it will distill key takeaways and opportunities for future research.
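The tradeoff between environmental and condition performance can be sketched with a toy Pareto-front filter over hypothetical maintenance policies. All policy names and numbers below are invented for illustration; this is not Dr. Swei's model:

```python
# Toy illustration: screening maintenance policies that trade off
# global-warming impact (kg CO2e, lower is better) against pavement
# condition (index, higher is better). All numbers are hypothetical.

def pareto_front(policies):
    """Return the policies not dominated on (emissions down, condition up)."""
    front = []
    for name, co2, cond in policies:
        dominated = any(
            c2 <= co2 and k2 >= cond and (c2 < co2 or k2 > cond)
            for _, c2, k2 in policies
        )
        if not dominated:
            front.append((name, co2, cond))
    return front

policies = [
    ("do-nothing",     10.0, 55.0),
    ("thin-overlay",   40.0, 75.0),
    ("thick-overlay",  90.0, 92.0),
    ("reconstruct",   180.0, 95.0),
    ("bad-mix",       120.0, 70.0),  # dominated by thick-overlay
]

for p in pareto_front(policies):
    print(p)
```

An agency would then pick a point on this front according to how it weighs carbon against condition; the dominated policy ("bad-mix") is never worth choosing.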

About the Speaker

Omar Swei joined the Department of Civil Engineering at The University of British Columbia (UBC) as an Assistant Professor in 2018. He presently oversees the department’s graduate program in project and construction management. His research emphasizes the use of operations research methods to improve the design, delivery, and maintenance of infrastructure systems.

Posted in Personal Life | Comments Off on TMU AM Hybinar No. 5

Asset Management Hybinar No. 4

History of Infrastructure Assets Management in Canada

  • Speaker: Dr. Guy Félio, P.Eng., IRP [Climate], Fellow CSCE, Fellow IAM
  • Time & Date: 14:10-15:30, Friday, November 11, 2022 (rescheduled to 13:10-14:30 EST, Monday, November 21, 2022)
  • Location: Online
  • Zoom Link: https://ryerson.zoom.us/j/4612874120



Abstract

In the 1970s, the US Army Corps of Engineers was asked by the US Air Force to develop a tool to manage the runways they operated in the USA and around the world, and thus was born PAVER (a process and software still available and used today). Pavement management, which used the basic steps found in modern asset management frameworks, was Dr. Felio’s first introduction (in the late 1970s) to the tools and processes that today form asset management planning.

The presentation covers three key periods in the development, adoption and evolution of Asset Management in Canada: the pre-2000 years – a period of recognition and first actions; the decade between 2000 and 2010 when the AM industry was born; and the years since 2010 during which AM has been institutionalized across the country. Some of the major milestones and key people in these periods will be presented.

About the Speaker

Dr. Guy Félio (LinkedIn profile) is a national leader in infrastructure asset management, with exceptional research and professional experience in infrastructure planning, management, climate risk assessment, mitigation, and adaptation. Guy obtained a PhD in Civil Engineering from Texas A&M University after undergraduate and graduate studies in Ottawa and a few years of consulting work. He started his career as a university professor at UCLA and returned to Canada in the early 1990s, spending two years as a consultant before joining the National Research Council, where he became Head of the Urban Infrastructure Rehabilitation research group. The group’s research focused on finding practical solutions to municipal infrastructure challenges, including the management of their assets. It was during that time that he developed the concept of, and built awareness and engagement to create, the “National Guide to Sustainable Municipal Infrastructure”, known as InfraGuide. While at NRC, he was also seconded to Infrastructure Canada’s Program Operations to support the development and implementation of programs.

After leaving the National Research Council, Guy worked in various consulting functions; he also did a tour of duty as an elected city councillor in the eastern-Ontario municipality he and his family have called home for the last 30 years. Recently, he has continued working as an independent consultant focusing on asset management, climate risk assessments, and adaptation projects in Canada and internationally. He was involved in the ISO committee that developed the ISO 55000 Asset Management Standard, and now serves on various ISO committees related to climate change and infrastructure resilience.

Posted in Infrastructure Management, Public Infrastructure | Tagged , , | Comments Off on Asset Management Hybinar No. 4

Asset Management Hybinar No. 3

Efficient Scenario Analysis for Optimal Adaptation of Bridge Networks under Deep Uncertainties through Knowledge Transfer

  • Speaker: Dr. Minghui Cheng, Postdoctoral Associate, Systems Engineering, Cornell University
  • Time & Date: 14:10-15:30, Friday, October 28, 2022
  • Location: Online
  • Zoom Link: https://ryerson.zoom.us/j/4612874120

Abstract

Due to the deep uncertainties associated with climate change and socioeconomic growth, managing bridge networks requires performing optimization under many different scenarios. A large number of scenarios arise when various sources of uncertainty, such as population growth and the increasing magnitude and frequency of natural hazards due to climate change, are compounded. Traditionally, scenarios are analyzed sequentially; however, when the optimization of even a single scenario is time-consuming, only a limited number of scenarios can be considered. To accelerate scenario analysis, this presentation introduces a novel scheme based on knowledge transfer between scenarios. Specifically, after a certain number of scenarios have been optimized, the analysis of any new scenario is accelerated by reusing the knowledge obtained from the previous optimizations. The scheme builds on meta-learning-based surrogate modelling (MLSM), previously developed by Dr. Cheng and his colleagues, to realize the concept of knowledge transfer. The presentation first introduces MLSM and shows several applications. A proper definition of similar scenarios for the adaptation of bridge networks under deep uncertainties is then given to stipulate when knowledge transfer can occur. A bridge network in Camden County, New Jersey, is used as an illustrative example to demonstrate the computational efficiency of the proposed scheme. The full paper covered by the presentation can be found here.
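The core transfer idea, reusing results from already-optimized scenarios when tackling a similar new one, can be sketched in a few lines. This is only a nearest-neighbour warm-start illustration, not the MLSM method itself; the scenario features (population growth rate, hazard frequency multiplier) and budget splits are hypothetical:

```python
# Toy sketch of knowledge transfer between scenarios (not MLSM itself):
# after a few scenarios are optimized exactly, a new scenario is
# initialized from the solution of its most similar solved scenario,
# so its optimization starts near a good answer instead of from scratch.
import math

# (population growth rate, hazard frequency multiplier) -> optimal budget split
# over three hypothetical intervention types; all values are invented.
solved = {
    (0.01, 1.0): [0.6, 0.3, 0.1],
    (0.03, 1.5): [0.4, 0.4, 0.2],
    (0.02, 2.0): [0.3, 0.4, 0.3],
}

def nearest_solution(scenario):
    """Warm-start a new scenario from the closest already-solved scenario."""
    key = min(solved, key=lambda s: math.dist(s, scenario))
    return key, solved[key]

new_scenario = (0.025, 1.6)
key, warm_start = nearest_solution(new_scenario)
print(f"warm-start from {key}: {warm_start}")
```

In the actual scheme, a learned surrogate replaces this naive lookup, and a formal definition of scenario similarity governs when transfer is allowed.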

About the Speaker

Dr. Minghui Cheng (Google Scholar Profile) is currently a postdoctoral associate in Systems Engineering at Cornell University. He obtained his Ph.D. in Structural Engineering from Lehigh University, USA, in 2021 under the supervision of Prof. Dan Frangopol. Prior to that, he earned his B.E. in Civil Engineering from Hunan University, China, in 2016. His research is primarily focused on (1) establishing digital twins for bridge networks, (2) optimal life-cycle management of structures and infrastructure systems, (3) realizing knowledge transfer in engineering analysis, and (4) calibrating decision-making models for civil engineering stakeholders. He is the recipient of multiple awards, including the University Fellowship and the P.C. Rossin Doctoral Fellowship at Lehigh University, and the National Scholarship (twice) and the Chinese Government Scholarship during his time at Hunan University. He has published in prestigious journals such as Structural Safety, Reliability Engineering & System Safety, Computers & Structures, and the Journal of Bridge Engineering, among others; two of these papers were selected for the ASCE Bridge Asset Management Collection.

Posted in Personal Life | Comments Off on Asset Management Hybinar No. 3

Asset Management Hybinar No. 2

Wastewater Asset Management – A Canadian’s Experience

  • Speaker: Fayi Zhou, PhD, PMP, P.Eng., Manager, Drainage Design, EPCOR Utilities, Edmonton
  • Time & Date: 14:10-15:30, Friday, October 14, 2022
  • Location: CUI 219 (44 Gerrard Street East) – Map
  • Zoom Link: https://ryerson.zoom.us/j/4612874120

For in-person attendance, please arrive at the front of the CUI building (facing Gerrard St.) at 2 pm, where a student will be waiting to let you in. Since the door is card-controlled, you may be locked out if you arrive after 2 pm.

Abstract

Urban wastewater infrastructure plays a key role in providing essential service to over 86% of the population in Canada. As these infrastructure systems age and deteriorate, there are increasing demands on municipalities to invest significant amounts of money to repair or replace aging facilities in order to upgrade or maintain the level of service to the public. According to Canada’s Core Public Infrastructure Survey released on July 26, 2022, water and wastewater systems comprised 4,126 wastewater treatment plants and lagoons, 3,342 water treatment facilities, 472,488 kilometres of underground pipes, and other assets. In 2020, 28% of total capital spending on infrastructure by municipal, local, and regional governments went to water and sewer infrastructure. According to the Infrastructure Report Card released in 2019, 16% of potable water, 20% of wastewater, and 16% of stormwater infrastructure is in poor or very poor condition. What needs rehabilitation, when to rehabilitate it, and how much it will cost: these are the basic questions that decision makers in municipalities must answer in order to direct funding to the right places, and answering them relies on best practices in asset management.

On October 14, Dr. Zhou will present a general overview of current and future water and wastewater asset management practices in Canada, from inventory development and condition assessment to capital investment, with an emphasis on risk-based asset management. He will also provide insights on areas requiring further research attention, such as big-data-based quantitative risk ranking and AI applications in automated condition assessment.

About the Speaker

Dr. Zhou is the Manager of Drainage Design at EPCOR Utilities, City of Edmonton. He is also an adjunct professor at Concordia University in Montreal. He has over 35 years of experience in water and wastewater engineering in both industry and academia, specializing in urban drainage planning and design. He has been with the City of Edmonton since 2000, serving in various roles including engineer, project manager, and general supervisor. He led the drainage team in developing the first Low Impact Development Design Guide in Alberta, as well as in developing the drainage asset management system. He obtained his bachelor’s and master’s degrees from Dalian University of Technology and his Ph.D. from the University of Alberta. He has published over 70 papers in journals and conferences, and is a reviewer for the Journal of Hydraulic Engineering (ASCE), the Journal of Hydraulic Research (JHR), and the Journal of Fluids Engineering (JFE). He has served on the boards of directors of the North Saskatchewan Water Alliance (NSWA) and the Alberta Low Impact Development Partners (ALIDP).

Posted in Budget Allocation, Infrastructure Management, News, Public Infrastructure | Comments Off on Asset Management Hybinar No. 2

Asset Management Research Hybinar No. 1

Best Practices for Managing Municipal Infrastructure – A Coordinated Approach

  • Speaker: Soliman Abusamra, PhD, PMP, ENV SP, Senior Project Manager, Corridor Fleet Replacement Program, VIA Rail Canada
  • Time & Date: 14:10-15:30, Friday, September 16, 2022
  • Location: CUI 219 (44 Gerrard Street East) – Map
  • Zoom Link: https://ryerson.zoom.us/j/4612874120

Abstract

Infrastructure plays a key role in determining the quality of people’s lives and is instrumental to economic growth. Over the past decade, aging infrastructure systems have been placing tremendous pressure on governments through steeply growing budget deficits and an urgent need for replacement. According to the Canadian Infrastructure Report Card released in 2019, one-third of Canada’s municipal infrastructure is in fair, poor, or failing condition, increasing the risk of service disruption and leaving decision-makers no choice but to undertake immediate interventions. Furthermore, recent studies estimate Canada’s infrastructure deficit at between $110 billion and $270 billion. In addition, the massive number of infrastructure intervention activities occurring in cities leads to detrimental social, environmental, and economic impacts on the community. Have you ever experienced the same road being closed more than once in a very short time span? This lack of coordination results in increased service disruption for users, less efficient use of taxpayers’ money, higher maintenance costs for municipalities with limited budgets, and so on. Thus, coordinating the interventions of co-located assets (e.g., roads, water, and sewer) is progressively becoming paramount to coping with these tough challenges and improving infrastructure spending to derive the best value for money.

On September 16, Dr. Abusamra will present a coordination and optimization framework for managing municipal infrastructure under performance-based contracts. The framework presents an integrated contractual and asset management solution to aid decision-makers in both the pre-contract and post-contract phases. He will also share some KPIs and case studies to highlight the value that coordination can bring to municipalities and asset owners. Finally, he will share some insights on the role of digital technology in asset management and a roadmap for the years to come.
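The economic logic of coordinating co-located interventions can be illustrated with a toy calculation, under the invented assumption that each separate closure of the same corridor incurs a fixed mobilization and closure cost; this is not Dr. Abusamra's framework, just the intuition behind it:

```python
# Toy illustration: coordinating interventions on co-located assets
# (road, water, sewer in one corridor) lets them share the fixed
# closure cost instead of paying it once per intervention.
# All costs are hypothetical.

CLOSURE_COST = 50_000  # fixed cost per street closure (mobilization, detours)

interventions = {
    "road resurfacing":   200_000,
    "watermain relining": 150_000,
    "sewer repair":       120_000,
}

# Uncoordinated: each intervention triggers its own closure.
uncoordinated = sum(cost + CLOSURE_COST for cost in interventions.values())

# Coordinated: all three interventions share a single closure.
coordinated = sum(interventions.values()) + CLOSURE_COST

print(f"uncoordinated: ${uncoordinated:,}")
print(f"coordinated:   ${coordinated:,}")
print(f"savings:       ${uncoordinated - coordinated:,}")
```

With n co-located interventions, coordination saves (n - 1) closure costs before even counting the reduced service disruption to users.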

About the Speaker

Dr. Abusamra (LinkedIn Profile, Google Scholar Profile) is a Senior Project Manager for the Maintenance Facilities Upgrades at VIA Rail. He has 10+ years of experience in industry and academia across three continents. In Africa, he worked as a Cost Engineer at Gleeds Cost Consultancy, mainly involved in quantity surveying, cost estimation, and claim analysis. In Asia, he worked as a Project Controls Engineer at Consolidated Contractors Company on the Riyadh Metro Project, a design-build rapid transit system (6 lines, 176 km, 85 stations, $22.5 billion), where he was involved in construction supervision, progress monitoring, quantity surveying, BIM, planning, and management. In North America, prior to joining VIA Rail as a Senior Project Manager, he worked as a Manager on KPMG’s Global Infrastructure Advisory team, supporting various municipalities and clients in developing their asset management strategic and tactical plans. He also worked as a cost consultant at LCO Construction Consultant on several projects, most notably the REM project in Montreal. He received his Ph.D. in Civil Engineering from Concordia University, has published numerous books, journal papers, and conference articles on asset management for municipal infrastructure, and has spoken at various public events (TED Talk, CBC, IAM, etc.). His key areas of interest include asset management, artificial intelligence, digital twins, optimization and decision-making, project management, civil engineering, highways, condition assessment, and sustainability. He is a certified PMP, ENV SP, ISO 19011:2018 Lead Auditor, OSHA-certified, LCA-certified asset management professional with extensive knowledge of the ISO 55000 series, and serves as an editorial member and reviewer for top-ranked journals published by ASCE, CSCE, Elsevier, and others.

Posted in Infrastructure Management, Public Infrastructure | Tagged | Comments Off on Asset Management Research Hybinar No. 1