Case Study

Cantilever beam

Thyge Vinther Ludvigsen, s203591

22 Apr, 2026

Time Plan

Introduction

Goal of my work since last time

What I test in this case study:

  • Inclusion of additional features

What needs to be done in future work:

  • Optimization algorithms for sensor placement
  • Feature selection methods (e.g., PCA)

New Stochastic model

Sensor error model

Multivariate normal distribution:

\[ \epsilon_{\text{OMA}} \sim \mathcal{N}(0, \boldsymbol\Sigma_{\text{OMA}}(e)) \]

Diagonal covariance matrix:

\[ \Sigma_{\text{OMA}}(e)= \begin{bmatrix} \Sigma_{\omega}(e) & 0 \\ 0 & \Sigma_{\phi}(e) \end{bmatrix} \]

Natural frequencies

Frequency covariance Matrix: \[ \Sigma_{\omega}(e) = \operatorname{diag}\!\left( \operatorname{Var}(\hat{\omega}_1 \mid e), \dots, \operatorname{Var}(\hat{\omega}_m \mid e) \right) \]

Error approximation for frequencies:

\[ \boxed{ \operatorname{Var}(\hat{\omega}_i \mid e) \;\approx\; \frac{\sigma_\varepsilon^2} {\;\|\mathbf S(e) \odot \phi_i\|_2^2} } \]

Where:

  • \(\sigma_\varepsilon = CV \cdot \mu_{\omega_i}\): sensor noise, given as the sensor CV (coefficient of variation) multiplied by the mean value of the \(i\)-th natural frequency.
  • \(\|\mathbf S(e) \odot \phi_i\|_2^2\): the modal observability.
    • \(\| x \|_2\) is the L2 (Euclidean) norm of the vector \(x\).
    • \(\mathbf S(e)\) is the sensor configuration vector.
    • \(\phi_i\) is the mode shape vector for the \(i\)-th mode, which describes the deformation pattern of the structure at that mode.
    • \(\odot\) denotes the element-wise (Hadamard) product, which multiplies corresponding elements of the two vectors.
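The frequency-variance approximation can be sketched directly in NumPy; the numbers below are illustrative, not case-study results:

```python
import numpy as np

# Illustrative values: two modes, three sensor positions
CV = 0.1                                  # sensor coefficient of variation
omega = np.array([1.0, 6.72])             # natural frequencies (undamaged state)
phi = np.array([[0.4, 0.5, 1.0],          # mode shape 1
                [-0.3, -1.0, 0.5]])       # mode shape 2
S = np.array([1, 0, 1])                   # sensor configuration vector

sigma_eps = CV * omega                               # sensor noise per mode
observability = np.linalg.norm(S * phi, axis=1)      # ||S(e) ⊙ phi_i||_2
var_omega = sigma_eps**2 / observability**2          # Var(omega_i | e)
print(var_omega)
```

Note that removing the middle sensor (`S[1] = 0`) shrinks the observability norm and therefore inflates the frequency variance, which is the mechanism the sensor-placement optimization will exploit.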

Mode shapes

Mode shape covariance matrix:

\[ \Sigma_{\phi}(e) = \operatorname{diag}\!\left( \operatorname{Var}(\hat{\phi}_1 \mid e), \dots, \operatorname{Var}(\hat{\phi}_m \mid e) \right) \]

Error approximation for mode shapes: \[ \boxed{ \operatorname{Var}(\hat{\phi}_i \mid e) \;\approx\; \frac{ \sigma_\varepsilon^2 } {\;\|\mathbf S(e) \odot \phi_i\|_2} } \]

Where:

  • \(\sigma_\varepsilon = CV \cdot \mathbf S(e) \odot \phi_i\): sensor noise, given as the sensor CV (coefficient of variation) multiplied by the observed mode-shape components.
  • \(\|\mathbf S(e) \odot \phi_i\|_2\): the modal observability.
    • \(\| x \|_2\) is the L2 (Euclidean) norm of the vector \(x\).
    • \(\mathbf S(e)\) is the sensor configuration vector.
    • \(\phi_i\) is the mode shape vector for the \(i\)-th mode, which describes the deformation pattern of the structure at that mode.
    • \(\odot\) denotes the element-wise (Hadamard) product, which multiplies corresponding elements of the two vectors.

Features

Feature vector:

\[ \boxed{ \mathbf y = h(\bar{\boldsymbol\phi}, \bar{\boldsymbol\omega}) } \]

Where \(h(\cdot)\) is a function that maps the modal parameters to the feature space.

  • Natural frequencies, \(\omega\)
  • Total Modal Assurance Criterion (TMAC)
    • \(MAC(\phi_a,\phi_b)=\frac{|\phi_a^T\phi_b|^2}{(\phi_a^T\phi_a)(\phi_b^T\phi_b)}\)
    • \(TMAC(H_{\text{no damage}}, \bar{\boldsymbol\phi}) = \frac{1}{n}\sum_{i=1}^{n} \frac{|\phi_{0,i}^T \bar{\phi}_i|^2}{(\phi_{0,i}^T \phi_{0,i})(\bar{\phi}_i^T \bar{\phi}_i)}\)
  • Modal Flexibility: \(F = \sum_{i=1}^{n} \frac{\phi_i \phi_i^T}{\omega_i^2}\)
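As a quick numerical sanity check of the MAC definition (the mode shapes below are illustrative):

```python
import numpy as np

def mac(phi_a, phi_b):
    # MAC(phi_a, phi_b) = |phi_a^T phi_b|^2 / ((phi_a^T phi_a)(phi_b^T phi_b))
    num = np.dot(phi_a, phi_b)**2
    den = np.dot(phi_a, phi_a) * np.dot(phi_b, phi_b)
    return num / den

phi_ref = np.array([0.4, 0.5, 1.0])
print(mac(phi_ref, phi_ref))                       # identical shapes -> 1.0
print(mac(phi_ref, 2.0 * phi_ref))                 # MAC is scale-invariant -> 1.0
print(mac(phi_ref, np.array([-0.3, -1.0, 0.5])))   # dissimilar shapes -> low value
```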

Problems

  • Features that are nonlinear combinations of stochastic variables lead to implicit likelihoods.

Code example - Stochastic model

Goal of this code example

  1. Set up a new stochastic model.
  2. Generate modal parameters given a sensor configuration.
  3. Compute the features for a given sensor configuration.

Given:

  • Modal parameters for all states, \(\omega\) and \(\phi\)
  • Sensor coefficients of variation (CV)
  • Sensor configuration vector, \(S\)
  • Prior probabilities for the states, \(p(\theta)\)

Test:

  • Calculate the variance of the modal parameters given \(S\) and CV
  • Generate samples of modal parameters given \(S\) and CV
  • Compute the features for a given sensor configuration.
import numpy as np
from numpy.random import multivariate_normal

class BR:
  def __init__(self, omega_all, phi_all, CV_omega, CV_phi, prior, N = 10**4, print_code = False):
    # all parameters for later use
    self.omega_all = omega_all
    self.phi_all = phi_all
    # No damage parameters
    self.omega_H0 = omega_all[0]
    self.phi_H0 = phi_all[0]
    # Damage parameters
    self.omega_H1 = omega_all[1:]
    self.phi_H1 = phi_all[1:] 
    
    # Coefficient of variation for omega and phi
    self.CV_omega = CV_omega
    self.CV_phi = CV_phi

    # number of Monte Carlo samples
    self.N = N
    
    # prior probabilities for theta
    self.prior = prior

    # print code
    self.print_code = print_code



  # Statistical parameters for the multivariate normal distribution
  def covariance_matrix(self, S):
    # Sensor vector
    S = np.array(S)

    # Standard deviation
    sigma_omega = self.CV_omega * self.omega_H0 
    sigma_phi = self.CV_phi * np.ones(self.phi_H0.shape[0]) 

    # modal observability
    mo = np.array([np.linalg.norm(np.multiply(S, phi_i)) for phi_i in self.phi_H0])

    # Variance Omega
    var_omega = sigma_omega**2 / (mo**2)

    # Variance Phi
    var_phi = sigma_phi**2 / (mo**(0.5))

    # Diagonal covariance matrix
    mu = np.concatenate([self.omega_H0, self.mask_and_normalize(self.phi_H0, S).flatten()])  # mean vector
    diagonal = np.concatenate([var_omega, np.repeat(var_phi, np.sum(S, dtype=int))])
    cov = np.diag(diagonal)  # covariance matrix
    
    # save for later use
    self.mu = mu
    self.cov = cov 


  def mask_and_normalize(self,X, S):
    mask = S == 1

    # Step 1: apply mask on last axis
    filtered = X[..., mask]

    # Step 2: compute max along last axis
    max_vals = np.max(np.abs(filtered), axis=-1, keepdims=True)

    # Step 3: avoid division by zero
    max_vals[max_vals == 0] = 1

    # Step 4: normalize
    return filtered / max_vals


  def MCS_modal(self, S):
    # S is the sensor vector
    S = np.array(S)

    # get covariance matrix and mean vector
    self.covariance_matrix(S)

    # Generate all states and H labels at once: theta ~ P(theta)
    states = np.random.choice(range(len(self.prior)), size=self.N, p=self.prior)
    H_vec = (states > 0).astype(int)  # 0 if state == 0, else 1
    self.states = states
    self.H_vec = H_vec

    # sample modal parameters: zero-mean OMA errors, shifted below by the state-dependent means
    lambda_star = np.random.multivariate_normal(mean=np.zeros(self.cov.shape[0]), cov=self.cov, size=self.N)
    lambda_star[:, :len(self.omega_H0)] += self.omega_all[states]
    lambda_star[:, len(self.omega_H0):] += self.mask_and_normalize(self.phi_all[states], S).reshape(self.N, -1)
    self.lambda_star = lambda_star

    # extract omega and phi from lambda_star
    omega_bar = lambda_star[:, :len(self.omega_H0)]
    phi_bar = lambda_star[:, len(self.omega_H0):].reshape(self.N, self.phi_H0.shape[0], np.sum(S, dtype=int))
    # normalize each mode-shape vector per sample (along the last axis)
    max_abs = np.max(np.abs(phi_bar), axis=2, keepdims=True)  # shape: (N, N_phi, 1)
    phi_bar = phi_bar / np.where(max_abs == 0, 1, max_abs)

    self.omega_bar = omega_bar
    self.phi_bar = phi_bar
    

  def MAC(self, phi1, phi2):
    # Compute the Modal Assurance Criterion (MAC) between two mode shapes
    numerator = np.abs(np.dot(phi1, phi2))**2
    denominator = np.dot(phi1, phi1) * np.dot(phi2, phi2)
    return numerator / denominator if denominator != 0 else 0

  def TMAC(self, phi_theta_1, phi_theta_2):
    N_samples = phi_theta_1.shape[0]
    TMAC_matrix = np.zeros((N_samples, N_samples))
    for i in range(N_samples):
        for j in range(N_samples):
            TMAC_matrix[i, j] = self.MAC(phi_theta_1[i], phi_theta_2[j])

    # sum of diagonal of the MAC 
    self.print2("TMAC matrix:\n", TMAC_matrix)  
    TMAC = np.sum(np.diag(TMAC_matrix)) / (N_samples)

    return TMAC
  
  # Modal Flexibility (MF)
  def MF(self, phi_vec, omega_vec):
    # Compute the Modal Flexibility (MF) for a given mode shape and natural frequency
    return sum(np.diag(np.abs(np.dot(phi_vec, phi_vec.T) / (omega_vec**2))))

  def print2(self, *args):
    if self.print_code:
        print(*args)
    

Code - Setup

np.random.seed(42)

# frequencies and mode shapes
omega_all = np.array([[1, 6.72223044],    # H0, theta_0
                      [1.2, 8.72223044],  # H1, theta_1
                      [1.6, 10.8]])          # H1, theta_2
phi_all = np.array([[[0.4, 0.5, 1], [-0.3, -1, .5]],   # H0, theta_0
                    [[0.2, 0.2, 1], [-0.5, -1, .5]],   # H1, theta_1
                    [[-0.1, 0.1, 1], [-0.9, -1, .5]]]) # H1, theta_2

# Coefficient of variation (CV)
CV_omega = 0.1
CV_phi = 0.2

# Prior probabilities for the states (theta)
prior = np.array([0.5, 0.4, 0.1])

# Number of Samples 
N = 10**4

# Instantiate the BR class
BRinf = BR(omega_all, phi_all, CV_omega, CV_phi, prior, N)

# Sensor configuration
S = np.array([1, 0, 1])

# Generate modal parameters given the sensor configuration
BRinf.MCS_modal(S)

"""Simple results"""
print("Sampled states (theta):", BRinf.states)
print(f"#θ_0 = {sum(BRinf.states == 0)}, fraction = {sum(BRinf.states == 0) / BRinf.N:.2f}")
print(f"#θ_1 = {sum(BRinf.states == 1)}, fraction = {sum(BRinf.states == 1) / BRinf.N:.2f}")
print(f"#θ_2 = {sum(BRinf.states == 2)}, fraction = {sum(BRinf.states == 2) / BRinf.N:.2f}")

print("\nSampled modal parameters")
print("-" * 40)
print("Frequencies:")
print(np.round(BRinf.omega_bar[0], 4))

print("\nMode shapes:")
print(np.round(BRinf.phi_bar[0], 4))
Sampled states (theta): [0 2 1 ... 2 0 0]
#θ_0 = 5076, fraction = 0.51
#θ_1 = 3963, fraction = 0.40
#θ_2 = 961, fraction = 0.10

Sampled modal parameters
----------------------------------------
Frequencies:
[0.9934 5.0069]

Mode shapes:
[[ 0.1391  1.    ]
 [-0.6883  1.    ]]

Plot - Omega

Plot - Mode shapes 1

Plot - MAC + TMAC

Plot - Modal Flexibility (MF)

Questions

Likelihood double use

Workflow MCS for Bayesian risk:

  1. For a sensor configuration \(S\), calculate the statistical parameters for the likelihood function.
  2. Sample structural states, \(\theta\), from the prior distribution, \(p(\theta)\).
  3. Sample modal parameters, \((\bar{\boldsymbol\omega}, \bar{\boldsymbol\phi})\), from the likelihood function, \(p(\bar{\boldsymbol\omega}, \bar{\boldsymbol\phi} \mid \theta, S)\).
  4. Compute the likelihood of the modal properties for being in state \(H_0\) or \(H_1\).
    • But I use the error function for the sensor configuration as if it were perfectly known?
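Step 4 could be sketched as follows for the diagonal-covariance case; the means, variances, and function name below are illustrative assumptions, not the case-study implementation:

```python
import numpy as np

def diag_gauss_logpdf(x, mu, var):
    # log N(x; mu, diag(var)) for a diagonal covariance matrix
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu)**2 / var, axis=-1)

# Two hypotheses with known means in modal-parameter space and the
# same diagonal sensor-error covariance (illustrative numbers)
var = np.array([0.01, 0.05])
mu_H0 = np.array([1.0, 6.72])
mu_H1 = np.array([1.2, 8.72])

sample = np.array([1.05, 6.9])    # one sampled modal-parameter vector
log_L0 = diag_gauss_logpdf(sample, mu_H0, var)
log_L1 = diag_gauss_logpdf(sample, mu_H1, var)
print(log_L0 > log_L1)            # the sample is more likely under H0
```

This is exactly where the question above bites: `var` is treated as known once \(S\) is fixed, even though it is itself a model of the sensor error.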

Question ?

In this setup of the model, do I assume perfect knowledge of the statistical error parameters for a given sensor configuration when I calculate the likelihood of the modal properties for being in state \(H_0\) or \(H_1\)?

  • Is this a defensible approach, or is it problematic for the validity of the model?

likelihood: explicit to implicit

When we introduce the stochasticity in the model parameters, and not in the features, do we end up with a problem for the likelihood function?

  • We have nonlinear combinations of normally distributed variables, which lead to non-normal distributions of the features.
  • This means that we cannot use a multivariate normal distribution as the likelihood function for the features, which is what we have done so far.
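A quick numerical illustration of why this happens (illustrative values, not the case-study features): a ratio of squared normals, like a modal-flexibility term, has clearly non-zero skewness, so it cannot itself be normal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Normally distributed modal parameters (illustrative values)
omega = rng.normal(1.0, 0.1, size=100_000)
phi = rng.normal(0.5, 0.1, size=100_000)

# A modal-flexibility-like feature: nonlinear in both inputs
y = phi**2 / omega**2

# A normal distribution has zero skewness; this feature does not
skew = np.mean((y - y.mean())**3) / np.std(y)**3
print(skew)
```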

Question ?

What to do now?

Solution 1: Explicit likelihood

  • Use some of the modal properties directly as features
  • Select those with the biggest difference between the damaged and undamaged states.

Solution 2: Implicit likelihood

General work flow:

  • Given a sensor configuration
  • Sample the modal parameters
  • Compute the features for each sample
  • Build an empirical distribution of the features for each state, which can be used as an implicit likelihood function for the features. (\(\star\))
  • Use the implicit likelihood for the features in the Bayesian decision analysis.

Idea: build a surrogate model that links the variance in the OMA error to the likelihood of the features. (Fast computation)
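One minimal way to realize the starred step, assuming a scalar feature and using a kernel density estimate as the empirical likelihood (the samples below are stand-ins for the MCS output):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(42)

# Illustrative feature samples per state (stand-ins for the Monte Carlo output)
features_H0 = rng.normal(0.0, 1.0, size=5000)   # feature under the undamaged state
features_H1 = rng.normal(2.0, 1.2, size=5000)   # same feature under damage

# Build empirical (implicit) likelihoods via kernel density estimation
like_H0 = gaussian_kde(features_H0)
like_H1 = gaussian_kde(features_H1)

y_obs = 0.2
print(like_H0(y_obs)[0] > like_H1(y_obs)[0])    # y_obs is more plausible under H0
```

For a feature vector, `gaussian_kde` also accepts multivariate samples, but it scales poorly with dimension, which is one reason a surrogate model may be worth building.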

Solution 3: No likelihood

Idea:

Introduce a decision rule that does not use the likelihood function, but instead directly uses the features to make decisions.

Example:

  • Train a supervised machine learning (ML) algorithm to classify the structural states based on the features.
  • Use Monte Carlo simulations to estimate the risk (Bayesian risk) for this decision rule.
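A minimal sketch of that Monte Carlo risk estimate, with a simple threshold rule standing in for the trained ML classifier (all numbers below are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample states from the prior, generate a 1-D feature per state,
# and apply a decision rule (illustrative setup, not the case study)
prior = np.array([0.6, 0.4])                      # P(H0), P(H1)
states = rng.choice([0, 1], size=10_000, p=prior)
feature = rng.normal(states * 2.0, 1.0)           # feature shifts under damage

# A simple threshold rule stands in for the trained classifier here
decision = (feature > 1.0).astype(int)

# Monte Carlo estimate of the Bayes risk under a 0/1 loss
risk = np.mean(decision != states)
print(risk)
```

With an asymmetric loss (missed damage costing more than a false alarm), the `np.mean` line would become a weighted average over the two error types, but the workflow is unchanged.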

Problems:

Implicit likelihood

Question ?

Have you built an implicit likelihood function before? Do you have any tips / resources for how to do it?

Problems

  • What data to use for building the implicit likelihood?
  • How to build the implicit likelihood for a vector of features?
  • Ideas for building a surrogate model that links the variance in the OMA error for a given sensor configuration to the likelihood of the features? (Fast computation)

Note: all of this depends on the structure that is evaluated.

No likelihood

Use supervised machine learning (ML) to directly classify the structural states \(H_0\) and \(H_1\) based on the features.

Question ?

  • What data to use for training the ML algorithm? (★ ★ ★)
    • Use data for different sensor configurations, but don't label the configurations?
    • Set up bounds for the statistical parameters of the OMA error and choose different noise levels for the training data?
    • Generate data independently of the prior distributions?
  • Do you think that Tree-based classification methods would be a good starting point for this problem?

Note: all of this depends on the structure that is evaluated.

Case study

Goal of this case study

  • Set up a new stochastic model
  • Evaluate the Bayes risk with the new stochastic model

Chosen Features

Modal properties:

  • x first natural frequencies
  • x first mode shapes

Features:

  • x first natural frequencies
  • Total Modal Assurance Criterion (TMAC)
    • \(MAC(\phi_a,\phi_b)=\frac{|\phi_a^T\phi_b|^2}{(\phi_a^T\phi_a)(\phi_b^T\phi_b)}\)
    • with no damage as reference
  • Modal Flexibility
    • \(F = \sum_{i=1}^{n} \frac{\phi_i \phi_i^T}{\omega_i^2}\)