A Simple Seldonian Algorithm

We follow the three steps of the Seldonian framework:

  1. Define the goal for the algorithm design process. Our goal is to create an algorithm \(a\) that attempts to maximize an objective function \(f\) and ensures that the behavioral constraints are satisfied. That is, we are searching for an algorithm \(a\) that is an approximate solution to:

    $$ \arg\max_{a \in \mathcal A}f(a)\\\text{s.t. }\forall i \in \{1,\dotsc,n\}, \quad \Pr(g_i(a(D)) \leq 0) \geq 1-\delta_i. $$

    Since this is a regression problem with two parameters, \(\Theta=\mathbb R^2\), and \(f(a)\) is the expected negative MSE of the solution that algorithm \(a\) will return for the user's application. We do not know the value of \(f(a)\) for any \(a\), as we do not even know the user's application when creating the algorithm. Instead, we require the user to provide a sample objective function \(\hat f:\Theta \times \mathcal D \to \mathbb R\) such that \(\hat f(\theta,D)\) is an estimate of the utility of an algorithm that returns the solution \(\theta\) given input data \(D\). In our case, this is the definition of \(\hat f\) from the previous tutorial, the negative sample MSE: \(\hat f(\theta,D)=-\frac{1}{n}\sum_{i=1}^n (\hat y(X_i,\theta)-Y_i)^2\), where here \(n=|D|\) is the number of data points.
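
    As a point of reference, here is a minimal Python sketch of this sample objective. The two-parameter linear model \(\hat y(x,\theta)=\theta_1+\theta_2 x\) is an assumption about the parameterization used in the previous tutorial, and all function names here are illustrative only:

    ```python
    import numpy as np

    def predict(X, theta):
        # Assumed two-parameter linear model: yhat(x, theta) = theta_1 + theta_2 * x.
        return theta[0] + theta[1] * X

    def fhat(theta, X, Y):
        # Sample objective: the negative sample MSE, so that larger values are better.
        return -np.mean((predict(X, theta) - Y) ** 2)
    ```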

  2. Define the interface that the user will use. We will begin with an interface that makes creating the Seldonian algorithm easier. Later we will focus on interfaces that make it even easier for users to define undesirable behavior. Our initial interface will require the user to provide functions \(\hat g_i\) as described in the previous tutorial.
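
    As a concrete (and purely illustrative) example of this interface, a user whose behavioral constraint requires the MSE to be at most some threshold \(b\) could supply a \(\hat g\) that returns one unbiased estimate of \(g(\theta)=\text{MSE}(\theta)-b\) per data point. The sketch below reuses the `predict` function sketched above; the threshold and names are assumptions, not the values from the example problem:

    ```python
    import numpy as np

    def ghat_mse_at_most(theta, X, Y, b):
        # Returns a vector of unbiased estimates of g(theta) = MSE(theta) - b:
        # one per data point, so that its mean is an unbiased estimate of g(theta).
        return (predict(X, theta) - Y) ** 2 - b  # predict is from the sketch above
    ```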

  3. Create the algorithm.

    We will implement the algorithm in Figure S15 (in the supplemental materials of our paper). A diagram of this algorithm is provided below:

    Simple Seldonian Algorithm

    From P. S. Thomas, B. Castro da Silva, A. G. Barto, S. Giguere, Y. Brun, and E. Brunskill. Preventing undesirable behavior of intelligent machines. Science, 366:999–1004, 2019. Reprinted with permission from AAAS.

    First, the data \(D\) is partitioned into two sets, \(D_1\) and \(D_2\). We call \(D_1\) the candidate data and \(D_2\) the safety data. In other work we write \(D_c\) for \(D_1\) and \(D_s\) for \(D_2\). Once the data has been partitioned, a method called Candidate Selection uses the candidate data to pick a single solution \(\theta_c\), called the candidate solution, which the algorithm considers returning. The candidate solution is then passed to the Safety Test mechanism, which uses the safety data to test whether the candidate solution is safe to return. If it is, the candidate solution is returned; if not, the algorithm returns No Solution Found (NSF).
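
    A minimal sketch of this partitioning step, assuming an illustrative 50/50 split of shuffled data (the split fraction is a design choice of the algorithm, not something fixed by the framework):

    ```python
    import numpy as np

    def partition(X, Y, frac_candidate=0.5, seed=0):
        # Shuffle the data, then split it into candidate data D1 and safety data D2.
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(X))
        n_cand = int(frac_candidate * len(X))
        return (X[idx[:n_cand]], Y[idx[:n_cand]]), (X[idx[n_cand:]], Y[idx[n_cand:]])
    ```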

    Pseudocode for this algorithm is provided below, where \(\hat \mu(v)\) and \(\hat \sigma(v)\) return the sample mean and sample standard deviation (with Bessel's correction) of the vector \(v\). That is, \(\hat \mu(v)=1/|v|\sum_{i=1}^{|v|}v_i\) and \(\hat \sigma(v)=\sqrt{(|v|-1)^{-1}\sum_{i=1}^{|v|}(v_i-\hat \mu(v))^2}\), where \(|v|\) is the number of elements in \(v\). Also, let \(t_{1-\delta,\nu}\) be the \(100(1-\delta)\) percentile of the Student \(t\)-distribution with \(\nu\) degrees of freedom, i.e., tinv(\(1-\delta,\nu\)) in Matlab [documentation].
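
    In Python these quantities can be computed with NumPy and SciPy; in this small sketch, `scipy.stats.t.ppf` plays the role of Matlab's `tinv`:

    ```python
    import numpy as np
    from scipy import stats

    def sample_mean(v):
        # Sample mean of the vector v.
        return np.mean(v)

    def sample_std(v):
        # Sample standard deviation with Bessel's correction (divide by |v| - 1).
        return np.std(v, ddof=1)

    def tinv(p, nu):
        # 100*p percentile of the Student t-distribution with nu degrees of freedom.
        return stats.t.ppf(p, nu)
    ```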

    Inputs: Feasible set \(\Theta\), data \(D\), probabilities \(\delta_1,\delta_2,\dotsc,\delta_n\), functions \(\hat g_1, \hat g_2, \dotsc, \hat g_n\), function \(\hat f\).
    Output: Either a solution in \(\Theta\) or No Solution Found (NSF).
    1. Partition \(D\) into two data sets, \(D_1\) and \(D_2\).
    2. Candidate Selection: Use a black-box optimization algorithm to compute a \(\theta_c\) that approximates a solution to: $$ \theta_c \in \arg\max_{\theta \in \Theta} \hat f(\theta,D_1)\\ \text{s.t. } \forall i \in \{1,2,\dotsc,n\}, \quad \hat \mu(\hat g_i(\theta,D_1)) + 2\frac{\hat \sigma(\hat g_i(\theta,D_1))}{\sqrt{|D_2|}}t_{1-\delta_i,|D_2|-1} \leq 0 $$
    3. Safety Test: Return \(\theta_c\) if $$ \forall i \in \{1,2,\dotsc,n\}, \quad \hat \mu(\hat g_i(\theta_c,D_2)) + \frac{\hat \sigma(\hat g_i(\theta_c,D_2))}{\sqrt{|D_2|}}t_{1-\delta_i,|D_2|-1} \leq 0, $$ and No Solution Found (NSF) otherwise.
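
    The pseudocode above can be translated into a short Python sketch. The helper names (`sample_mean`, `sample_std`, `tinv`) refer to the sketch earlier, the user-supplied `fhat` and `ghats` follow the interfaces sketched in steps 1 and 2, and using `scipy.optimize.minimize` with the Powell method plus a penalty term is only one illustrative way to run the black-box search for the candidate solution:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # sample_mean, sample_std, and tinv refer to the helper sketch above.

    def upper_bound(v, delta, m, inflate=1.0):
        # Student t-based high-confidence upper bound on E[v_1]; note that m (the size
        # of the safety data set) is used in the bound even when v is computed from D1.
        return sample_mean(v) + inflate * (sample_std(v) / np.sqrt(m)) * tinv(1 - delta, m - 1)

    def candidate_selection(fhat, ghats, D1, n_safety, deltas, theta0, inflate=2.0):
        # Step 2: black-box search for a candidate solution that maximizes fhat on D1
        # while being predicted to pass the safety test. Here the predicted safety test
        # is enforced with a simple penalty term, which is only one illustrative choice.
        def objective(theta):
            penalty = sum(max(upper_bound(g(theta, *D1), d, n_safety, inflate), 0.0)
                          for g, d in zip(ghats, deltas))
            return -fhat(theta, *D1) + 1e5 * penalty  # illustrative penalty weight
        return minimize(objective, theta0, method="Powell").x

    def safety_test(theta_c, ghats, D2, deltas):
        # Step 3: return theta_c only if every high-confidence upper bound is at most zero.
        m = len(D2[0])
        if all(upper_bound(g(theta_c, *D2), d, m) <= 0 for g, d in zip(ghats, deltas)):
            return theta_c
        return None  # No Solution Found (NSF)
    ```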

    Here the safety test is using the result that, if \(X=(X_1,\dotsc,X_m)\) are independent and identically distributed random variables and \(\hat \mu(X)\) is normally distributed, then $$ \Pr\left (\mathbf E[X_1] \leq \hat \mu(X) + \frac{\hat \sigma(X)}{\sqrt{m}}t_{1-\delta,m-1} \right ) \geq 1-\delta. $$ The assumption that \(\hat \mu(X)\) is normally distributed is reasonable if \(m\) is sufficiently large, due to the central limit theorem. Recall that \(\mathbf E[\hat g_{i,j}(\theta,D)]=g_i(\theta)\), where \(\hat g_{i,j}\) denotes the \(j^\text{th}\) element of the vector returned by \(\hat g_i\). So, the safety test mechanism computes a high-confidence upper bound on each \(g_i(\theta)\) using a confidence interval derived from the Student \(t\)-statistic. If this high-confidence upper bound is at most zero for every behavioral constraint, then the candidate solution is safe to return; otherwise the algorithm returns NSF.

    With the safety test in place, the algorithm will be Seldonian regardless of how the candidate solution is computed, as long as it is not computed using the data \(D_2\) that the safety test uses. First, consider what would happen if we simply used \(\theta_c \in \arg\max_{\theta \in \Theta}\hat f(\theta,D_1)\). In this case, the candidate solution would be the one predicted to perform best according to the primary objective. However, it often will not be safe, and will fail the safety test, causing the algorithm to return NSF.

    To fix this, we want to select a candidate solution that satisfies: $$\theta_c \in \arg\max_{\theta \in \Theta}\hat f(\theta,D_1)\\\text{s.t. $\theta_c$ predicted to pass the safety test}.$$ This suggests a simple approach: predict the outcome of the safety test using \(D_1\) to compute the sample mean and sample standard deviation: $$\theta_c \in \arg\max_{\theta \in \Theta}\hat f(\theta,D_1)\\\text{s.t. }\forall i \in \{1,2,\dotsc,n\},\quad \hat \mu(\hat g_i(\theta,D_1)) + \frac{\hat \sigma(\hat g_i(\theta,D_1))}{\sqrt{|D_2|}}t_{1-\delta_i,|D_2|-1}\leq 0.$$ Notice that we use the size of the safety data set, \(|D_2|\), in two places above: in the \(\sqrt{|D_2|}\) term and in the degrees of freedom of the \(t\)-statistic, because we are predicting a test that will be run with \(|D_2|\) points.

    The above method for computing the candidate solution can work well when the primary objective and the behavioral constraints are aligned. However, when they conflict, the candidate selection mechanism tends to be over-confident that the candidate solution will pass the safety test (likely due to over-fitting to the candidate data). To fix this, we use a hack: we double the width of the confidence interval when predicting the outcome of the safety test, giving the expression that appears in the pseudocode: $$\theta_c \in \arg\max_{\theta \in \Theta}\hat f(\theta,D_1)\\\text{s.t. }\forall i \in \{1,2,\dotsc,n\},\quad \hat \mu(\hat g_i(\theta,D_1)) + 2\frac{\hat \sigma(\hat g_i(\theta,D_1))}{\sqrt{|D_2|}}t_{1-\delta_i,|D_2|-1}\leq 0.$$ After implementing this quasi-Seldonian algorithm, we encourage you to experiment with removing the \(2\) that doubles the width of the confidence interval to see how it impacts the performance of the algorithm.
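
    Using the sketches above, an illustrative end-to-end run looks like the following. The synthetic data, the constraint threshold \(b=2\), and \(\delta=0.05\) are made up for the example, and the `inflate` argument makes it easy to try the suggested experiment of removing the factor of \(2\):

    ```python
    import numpy as np

    # partition, candidate_selection, safety_test, fhat, and ghat_mse_at_most
    # refer to the sketches above; the data and thresholds here are synthetic.
    rng = np.random.default_rng(0)
    X = rng.normal(size=5000)
    Y = X + rng.normal(size=5000)

    deltas = [0.05]
    ghats = [lambda theta, Xs, Ys: ghat_mse_at_most(theta, Xs, Ys, b=2.0)]

    D1, D2 = partition(X, Y)
    theta_c = candidate_selection(fhat, ghats, D1, n_safety=len(D2[0]),
                                  deltas=deltas, theta0=np.zeros(2),
                                  inflate=2.0)  # set inflate=1.0 to remove the doubling
    result = safety_test(theta_c, ghats, D2, deltas)
    print("NSF" if result is None else result)
    ```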
