Candidate Selection

Recall the pseudocode for computing the candidate solution:

2. Candidate Selection: Use a black-box optimization algorithm to compute \(\theta_c\) that approximates a solution to: $$ \theta_c \in \arg\max_{\theta \in \Theta} \hat f(\theta,D_1)\\ \text{s.t. } \forall i \in \{1,2,\dotsc,n\}, \quad \hat \mu(\hat g_i(\theta,D_1)) + 2\frac{\hat \sigma(\hat g_i(\theta,D_1))}{\sqrt{|D_2|}}t_{1-\delta_i,|D_2|-1} \leq 0 $$

The black-box optimization algorithm that we choose to use when searching for a candidate solution is Powell's method (referred to below simply as Powell). Powell, however, does not handle constraints directly. One way of addressing this limitation is to incorporate the constraints into the objective function as a barrier function. That is, we will now find an approximate solution to the following unconstrained problem:

$$ \theta_c \in \arg \max_{\theta \in \mathbb R^2} \begin{cases} \hat f(\theta,D_1) &\mbox{if } \operatorname{predictedUpperBound}(i) \leq 0, \,\, \forall i \in \{1,2,\dotsc,n\},\\ -100,\!000 - \sum_{i=1}^n \max\left(0,\operatorname{predictedUpperBound}(i)\right ) &\mbox{otherwise,} \end{cases} $$ where \(\operatorname{predictedUpperBound}(i)\) is a conservative prediction of what the upper bound will be in the safety test for the \(i^\text{th}\) constraint, as computed by predictTTestUpperBound: $$ \operatorname{predictedUpperBound}(i)=\hat \mu(\hat g_i(\theta,D_1)) + 2\frac{\hat \sigma(\hat g_i(\theta,D_1))}{\sqrt{|D_2|}}t_{1-\delta_i,|D_2|-1}. $$
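This conservative prediction can be written directly from the formula above. The following is a minimal sketch, assuming numpy and scipy.stats are available; the predictTTestUpperBound defined earlier in this tutorial may differ in its details:

import numpy as np
from scipy.stats import t

# A minimal sketch of predictTTestUpperBound, written directly from the formula above.
#     data: unbiased estimates of g_i(theta) computed from the candidate data D1.
#     delta: the confidence level delta_i of the i-th constraint.
#     safetyDataSize: |D2|, the number of points in the safety data set.
def predictTTestUpperBound(data, delta, safetyDataSize):
  # Sample mean plus twice the t-test interval half-width, using the sample standard
  # deviation computed from D1 but the sample size of D2.
  return np.mean(data) + 2.0 * (np.std(data, ddof=1) / np.sqrt(safetyDataSize)) \
         * t.ppf(1.0 - delta, safetyDataSize - 1)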

According to this new unconstrained optimization problem, solutions that are predicted to pass the safety test have a performance of \(\hat f(\theta, D_1)\). Solutions that are predicted to fail the safety test will not be selected by the optimization algorithm, because we assign a large negative performance to them: \(-100,\!000 - \sum_{i=1}^n \max\left(0,\operatorname{predictedUpperBound}(i)\right)\). Here, \(-100,\!000\) is a constant chosen to be far smaller than \(\hat f(\theta, D_1)\) for any \(\theta\), so that every solution predicted to pass the safety test is preferred over every solution predicted to fail it. The term \(\sum_{i=1}^n \max(0,\operatorname{predictedUpperBound}(i))\) shapes the barrier: among solutions predicted to fail, those whose predicted upper bounds are closer to zero receive a larger (less negative) objective value, which encourages Powell to move toward solutions that will pass the safety test.
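To see how the shaping term orders solutions that are predicted to fail, consider the following toy computation (the numbers are purely illustrative and not part of the tutorial's code):

# Illustrative only: two hypothetical solutions, each predicted to fail a single constraint.
predictedUpperBounds_A = [0.3]   # further from satisfying the constraint
predictedUpperBounds_B = [0.1]   # closer to satisfying the constraint

def barrier(bounds):
  return -100000.0 - sum(max(0.0, b) for b in bounds)

print(barrier(predictedUpperBounds_A))  # -100000.3
print(barrier(predictedUpperBounds_B))  # -100000.1, preferred when maximizing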

Let us now write the objective function that we want Powell to maximize. (Notice that the function below returns the negated value, because the optimizer we will call is a minimizer.)

# The objective function maximized by getCandidateSolution.
#     thetaToEvaluate: the candidate solution to evaluate.
#     (candidateData_X, candidateData_Y): the data set D1 used to evaluate the solution.
#     (gHats, deltas): vectors containing the behavioral constraints and confidence levels.
#     safetyDataSize: |D2|, used when computing the conservative upper bound on each behavioral constraint.
def candidateObjective(thetaToEvaluate, candidateData_X, candidateData_Y, gHats, deltas, safetyDataSize): 

  # Get the primary objective of the solution, fHat(thetaToEvaluate)
  result = fHat(thetaToEvaluate, candidateData_X, candidateData_Y)

  predictSafetyTest = True     # Prediction of what the safety test will return. Initialized to "True" = pass
  for i in range(len(gHats)):  # Loop over behavioral constraints, checking each
    g         = gHats[i]       # The current behavioral constraint being checked
    delta     = deltas[i]      # The confidence level of the constraint

    # This is a vector of unbiased estimates of g_i(thetaToEvaluate)
    g_samples = g(thetaToEvaluate, candidateData_X, candidateData_Y)

    # Get the conservative prediction of what the upper bound on g_i(thetaToEvaluate) will be in the safety test
    upperBound = predictTTestUpperBound(g_samples, delta, safetyDataSize)

    # We don't think the i-th constraint will pass the safety test if we return this candidate solution
    if upperBound > 0.0:

      if predictSafetyTest:
        # Set this flag to indicate that we don't think the safety test will pass
        predictSafetyTest = False  

        # Put a barrier in the objective. Any solution that we think will fail the safety test will have a
        # large negative performance associated with it
        result = -100000.0    

      # Add a shaping to the objective function that will push the search toward solutions that will pass 
      # the prediction of the safety test
      result = result - upperBound

  # Negative because our optimizer (Powell) is a minimizer, but we want to maximize the candidate objective
  return -result  

Now that we have our candidate objective function, we can write getCandidateSolution, which uses Powell to search for a solution that maximizes candidateObjective.

# Use the provided data to get a candidate solution expected to pass the safety test.
#    (candidateData_X, candidateData_Y): data used to compute a candidate solution.
#    (gHats, deltas): vectors containing the behavioral constraints and confidence levels.
#    safetyDataSize: |D2|, used when computing the conservative upper bound on each behavioral constraint.
def getCandidateSolution(candidateData_X, candidateData_Y, gHats, deltas, safetyDataSize):
  
  # Chooses the black-box optimizer we will use (Powell)
  minimizer_method = 'Powell'
  minimizer_options={'disp': False}

  # Initial solution given to Powell: simple linear fit we'd get from ordinary least squares linear regression
  initialSolution = leastSq(candidateData_X, candidateData_Y)

  # Use Powell to get a candidate solution that tries to maximize candidateObjective
  res = minimize(candidateObjective, x0=initialSolution, method=minimizer_method, options=minimizer_options, 
    args=(candidateData_X, candidateData_Y, gHats, deltas, safetyDataSize))

  # Return the candidate solution we believe will pass the safety test
  return res.x
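getCandidateSolution relies on scipy.optimize.minimize and on the leastSq helper defined earlier in this tutorial. For completeness, the snippet below is one possible sketch of that initializer; it assumes the parameterization \(\hat y(x) = \theta_0 + \theta_1 x\) with \(\theta = [\theta_0, \theta_1]\), which may differ from the leastSq in the tutorial's main.py:

import numpy as np
from scipy.optimize import minimize  # used by getCandidateSolution above

# One possible ordinary least squares initializer (a sketch, not necessarily the tutorial's leastSq).
# Fits yHat(x) = theta[0] + theta[1] * x and returns theta as a numpy array.
def leastSq(X, Y):
  theta1, theta0 = np.polyfit(X, Y, 1)  # np.polyfit returns [slope, intercept] for degree 1
  return np.array([theta0, theta1])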

That's it! If you now run main.py, you should get the following console output: A solution was found: [0.5844721756, 1.0560192943]. In other words, our Quasi-Seldonian algorithm found a solution that approximately minimizes the sample mean squared error while ensuring, with high probability, that all of the behavioral constraints are satisfied!

The final main.py file with all of the code up until this point can be downloaded here:
Download main.py

In the next tutorial, we show how you can change the code in main() to repeatedly run our Quasi-Seldonian algorithm, QSA, with different amounts of data and to print the results to files that can be used to create plots like Fig. 3 of our paper. This will allow us to analyze:

  • How much performance (mean squared error minimization) is lost due to the behavioral constraints?
  • How much data does it take for the algorithm to frequently return solutions?
  • How often does the algorithm exhibit undesirable behavior?
