Research Article Volume 5 Issue 3

Department of Statistics, University of Florida, USA

**Correspondence:** Andre I. Khuri, Department of Statistics, University of Florida, USA

Received: October 29, 2017 | Published: March 10, 2017

**Citation:** Khuri AI. A general overview of response surface methodology. *Biom Biostat Int J*. 2017;5(3):87-93. DOI: 10.15406/bbij.2017.05.00133

Response surface methodology (RSM) is a collection of statistical and mathematical techniques used for the purpose of

- Setting up a series of experiments (design) for adequate predictions of a response y.
- Fitting a hypothesized (empirical) model to data obtained under the chosen design.
- Determining optimum conditions on the model's input (control) variables that lead to maximum or minimum response within a region of interest.

Formal work on RSM began with the publication of the article "On the Experimental Attainment of Optimum Conditions" by Box and Wilson.^{1}

Review articles on RSM include:

- Hill and Hunter^{2}
- Mead and Pike^{3}
- Myers, Khuri, and Carter^{4}

See also the article by Steinberg and Hunter.^{5}

Books on RSM include:

- Myers and Montgomery^{6}
- Khuri and Cornell^{7}
- Box and Draper^{8}

The present article provides a general overview of response surface methodology (RSM).

Let $\eta $ denote the mean of a response variable $y$ . Let ${x}_{1},\dots ,{x}_{k}$ denote a set of input, or control, variables. Then,

$\eta \left(x\right)=\varphi \left({x}_{1},\dots ,{x}_{k}\right).$

For example, we have the first-order model

$$\eta \left(x\right)={\beta}_{0}+{\displaystyle \sum _{i=1}^{k}{\beta}_{i}}{x}_{i}$$

or the second-order model

$\eta \left(x\right)={\beta}_{0}+{\displaystyle \sum _{i=1}^{k}{\beta}_{i}}{x}_{i}+{\displaystyle \sum _{i=1}^{k}{\displaystyle \sum _{j=1}^{k}{\beta}_{ij}}}{x}_{i}{x}_{j}$

The observed response is modeled as

$y=\eta +\in ,$

where $E\left(\in \right)=0$ and $Var\left(\in \right)={\sigma}_{\in}^{2}$ .

Let ${x}_{ui}$ denote the design setting of variable ${x}_{i}$ at the uth experimental run $\left(u=1,2,\dots ,n;i=1,2,\dots ,k\right)$ . Let ${y}_{u}$ denote the corresponding response value. By definition, the design matrix $D$ is the $n\times k$ matrix

$$D=\left[\begin{array}{cccc}{x}_{11}& {x}_{12}& \dots & {x}_{1k}\\ {x}_{21}& {x}_{22}& \dots & {x}_{2k}\\ \dots & \dots & \dots & \dots \\ {x}_{n1}& {x}_{n2}& \dots & {x}_{nk}\end{array}\right]$$

We have the linear model

$$y=X\beta +\in ,$$

where $X$ is $n\times p$ of rank $p$ and $\beta $ is a vector of unknown parameters.

The least-squares estimator of
$\beta $
is

$$\widehat{\beta}={\left({X}^{\text{'}}X\right)}^{-1}{X}^{\text{'}}y,$$

with variance-covariance matrix

$$Var\left(\widehat{\beta}\right)={\left({X}^{\text{'}}X\right)}^{-1}{\sigma}_{\in}^{2}.$$

The predicted response at a point $x$ is

$$\widehat{y}\left(x\right)={f}^{\text{'}}\left(x\right)\widehat{\beta},$$

where ${f}^{\text{'}}\left(x\right)$ is a vector of order $1\times p$ of the same form as a row of $X$ , but evaluated at the point $x={\left({x}_{1},\dots ,{x}_{k}\right)}^{\text{'}}$ .

For a first-order model, the predicted response is

$\widehat{y}\left(x\right)={\widehat{\beta}}_{0}+{\displaystyle \sum _{i=1}^{k}{\widehat{\beta}}_{i}{x}_{i}},$

and for a second-order model we have

$$\widehat{y}\left(x\right)={\widehat{\beta}}_{0}+{\displaystyle \sum _{i=1}^{k}{\widehat{\beta}}_{i}{x}_{i}}+{\displaystyle \sum _{i=1}^{k}{\displaystyle \sum _{j=1}^{k}{\widehat{\beta}}_{ij}{x}_{i}{x}_{j}}}.$$

The prediction variance is given by

$$Var\left[\widehat{y}\left(x\right)\right]={\sigma}_{\in}^{2}{f}^{\text{'}}\left(x\right){\left({X}^{\text{'}}X\right)}^{-1}f\left(x\right).$$
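As a concrete illustration, the least-squares estimator and the prediction variance factor ${f}^{\text{'}}\left(x\right){\left({X}^{\text{'}}X\right)}^{-1}f\left(x\right)$ can be computed directly. The design and response values below are hypothetical, chosen only to exercise the formulas; this is a sketch in NumPy, not part of the article.

```python
import numpy as np

# Hypothetical 2^2 factorial design with one center point (k = 2 inputs)
D = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0]], dtype=float)
y = np.array([52.0, 61.0, 55.0, 70.0, 59.0])  # illustrative responses

# Model matrix X for the first-order model: a column of ones, then x1, x2
X = np.column_stack([np.ones(len(D)), D])

# Least-squares estimator: beta_hat = (X'X)^{-1} X'y
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y

def pred_var_factor(x):
    """f'(x) (X'X)^{-1} f(x); multiply by sigma^2 to get Var[y_hat(x)]."""
    f = np.concatenate([[1.0], x])
    return f @ XtX_inv @ f
```

For this orthogonal design, ${X}^{\text{'}}X$ is diagonal, so each coefficient estimate is a simple ratio of cross-products to column sums of squares.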

Some important properties of a response surface design (Box and Draper^{8}) are the following:

- Generation of a satisfactory distribution of information throughout the region of interest, $R$ .
- “Closeness" of $\widehat{y}\left(x\right)$ to $\eta \left(x\right)$ over $R$ .
- Good detectability of lack of fit.
- Insensitivity (robustness) to extreme observations and to violations of the usual normal theory assumptions.
- Ability to perform experiments in blocks.
- Extendibility to a higher-order design.
- Requiring a small number of experimental runs.

Commonly used design optimality criteria include the following:

- D-optimality: Maximization of $\left|{X}^{\text{'}}X\right|$
- A-optimality: Minimization of $tr\left[{\left({X}^{\text{'}}X\right)}^{-1}\right]$
- E-optimality: Minimization of the largest eigenvalue of ${\left({X}^{\text{'}}X\right)}^{-1}$
- G-optimality: Minimization of $Ma{x}_{R}\left\{Var\left[\widehat{y}\left(x\right)\right]\right\}$
- The above are referred to as alphabetic optimality criteria. Some authors, including Box,^{9} have questioned the applicability of alphabetic optimality theory to response surface experiments, since such optimal designs are sensitive to the form of the model.
- Rotatability: A design D is rotatable if $Var\left[\widehat{y}\left(x\right)\right]$ is constant at all points that are equidistant from the design center (Box and Hunter (1957)).
- A rotatable design has the uniform precision property (UP) if the prediction variance at $x=0$ is equal to the prediction variance at a distance $\rho =1$ from the origin. A design is rotatable if and only if the ${X}^{\text{'}}X$ matrix has a particular form.
- Rotatability is a desirable property to have, especially when there is a need to optimize $\widehat{y}\left(x\right)$ over the surface of a hypersphere, as is the case in ridge analysis.
- Orthogonality: A design is orthogonal if it can provide independent information about the effects of the various terms in the model.

${\left|{X}^{\text{'}}X\right|}^{-\frac{1}{2}}$ is proportional to the volume of an ellipsoidal confidence region on $\beta $ .
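The D-, A-, and E-criteria above are straightforward functions of ${X}^{\text{'}}X$ , so two candidate designs can be compared numerically. The two four-run designs below are hypothetical examples for a first-order model in $k=2$; this is an illustrative sketch, not a prescription.

```python
import numpy as np

# Two hypothetical 4-run designs for the first-order model in k = 2
D1 = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]], dtype=float)  # 2^2 factorial
D2 = np.array([[-1, 0], [1, 0], [0, -1], [0, 1]], dtype=float)    # axial points only

def criteria(D):
    X = np.column_stack([np.ones(len(D)), D])
    M = X.T @ X
    M_inv = np.linalg.inv(M)
    d_crit = np.linalg.det(M)                 # D-optimality: maximize |X'X|
    a_crit = np.trace(M_inv)                  # A-optimality: minimize tr[(X'X)^{-1}]
    e_crit = np.linalg.eigvalsh(M_inv).max()  # E-optimality: minimize largest eigenvalue
    return d_crit, a_crit, e_crit

d1, a1, e1 = criteria(D1)
d2, a2, e2 = criteria(D2)
```

Here the factorial design dominates the axial-only design on all three criteria, consistent with the well-known efficiency of two-level factorials for first-order models.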

**Examples:**

- The ${3}^{k}$ factorial design is orthogonal, but is not rotatable.
- The central composite design can be made rotatable and can, in addition, have either the UP property or the orthogonality property.
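The rotatability of a central composite design can be verified numerically: with axial distance $\alpha ={\left({2}^{k}\right)}^{1/4}$ , the prediction variance factor ${f}^{\text{'}}\left(x\right){\left({X}^{\text{'}}X\right)}^{-1}f\left(x\right)$ is constant on any circle about the design center. The sketch below assumes $k=2$ and four (arbitrarily chosen) center runs.

```python
import numpy as np
from itertools import product

k = 2
alpha = (2 ** k) ** 0.25  # axial distance making the CCD rotatable

factorial = np.array(list(product([-1.0, 1.0], repeat=k)))
axial = np.vstack([alpha * np.eye(k), -alpha * np.eye(k)])
center = np.zeros((4, k))  # replicated center runs (number is arbitrary)
D = np.vstack([factorial, axial, center])

def f(x):
    """Second-order model terms: 1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2])

X = np.vstack([f(row) for row in D])
M_inv = np.linalg.inv(X.T @ X)

# Rotatability check: f'(x) (X'X)^{-1} f(x) should be constant on a circle
r = 0.8
angles = np.linspace(0.0, 2 * np.pi, 12, endpoint=False)
vals = [f(r * np.array([np.cos(t), np.sin(t)])) @ M_inv
        @ f(r * np.array([np.cos(t), np.sin(t)])) for t in angles]
spread = max(vals) - min(vals)
```

The spread of the variance factor around the circle is zero up to floating-point error, confirming rotatability for this choice of $\alpha$.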

The fitted model is

$$y={f}^{\text{'}}\left(x\right)\beta +\in $$

The “true" model is

$$\eta \left(x\right)={f}^{\text{'}}\left(x\right)\beta +{g}^{\text{'}}\left(x\right)\gamma .$$

Integrated Mean Squared Error Criterion:^{10,11} This amounts to the minimization of J, where

$$J=\frac{n\Omega}{{\sigma}_{\in}^{2}}{\displaystyle \underset{R}{\int}E{\left[\widehat{y}\left(x\right)-\eta \left(x\right)\right]}^{2}}dx,$$

where $\Omega $ is the reciprocal of the volume of $R$ .

$J$ partitions into a variance component $V$ and a bias component $B$ :

$$J=V+B,$$

where

$$V=\frac{n\Omega}{{\sigma}_{\in}^{2}}{\displaystyle \underset{R}{\int}Var\left[\widehat{y}\left(x\right)\right]}\text{\hspace{0.17em}}dx=n\Omega {\displaystyle \underset{R}{\int}{f}^{\text{'}}}\left(x\right){\left({X}^{\text{'}}X\right)}^{-1}f\left(x\right)dx$$

$$B=\frac{n\Omega}{{\sigma}_{\in}^{2}}{\displaystyle \underset{R}{\int}{\left\{E\left[\widehat{y}\left(x\right)\right]-\eta \left(x\right)\right\}}^{2}}dx=\frac{n\Omega}{{\sigma}_{\in}^{2}}{\displaystyle \underset{R}{\int}{\gamma}^{\text{'}}}\left[{A}^{\text{'}}f\left(x\right)-g\left(x\right)\right]\left[{f}^{\text{'}}\left(x\right)A-{g}^{\text{'}}\left(x\right)\right]\gamma dx$$

where $A={\left({X}^{\text{'}}X\right)}^{-1}{X}^{\text{'}}Z$ is the alias matrix, and $Z$ is the matrix whose rows are of the form ${g}^{\text{'}}\left(x\right)$ evaluated at the design points.

The Box and Draper criterion for the minimization of $B$ is:

$$\frac{1}{n}{X}^{\text{'}}X=\Omega {\displaystyle \underset{R}{\int}f\left(x\right){f}^{\text{'}}\left(x\right)dx}$$

$$\frac{1}{n}{X}^{\text{'}}Z=\Omega {\displaystyle \underset{R}{\int}f\left(x\right){g}^{\text{'}}\left(x\right)dx}$$

Designs that minimize $J$ have characteristics similar to those of designs which minimize $B$ alone (all-bias designs).
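The variance component $V$ can be evaluated numerically for a given design and region. The sketch below estimates $V$ by Monte Carlo integration for a hypothetical ${2}^{2}$ factorial design and a first-order model over the cuboidal region $R={\left[-1,1\right]}^{2}$; averaging over uniform draws from $R$ supplies the factor $\Omega =1/vol\left(R\right)$ automatically.

```python
import numpy as np

rng = np.random.default_rng(0)

# 2^2 factorial design, first-order model, cuboidal region R = [-1, 1]^2
D = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1]], dtype=float)
X = np.column_stack([np.ones(len(D)), D])
n = len(D)
M_inv = np.linalg.inv(X.T @ X)

# V = n * Omega * integral over R of f'(x)(X'X)^{-1}f(x) dx.
# The average of the integrand over uniform draws from R equals Omega * integral.
pts = rng.uniform(-1.0, 1.0, size=(200_000, 2))
F = np.column_stack([np.ones(len(pts)), pts])
integrand = np.einsum('ij,jk,ik->i', F, M_inv, F)  # row-wise quadratic forms
V = n * integrand.mean()
```

For this design the integral is available in closed form ($V=5/3$), so the Monte Carlo estimate doubles as a correctness check.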

A design is robust if it helps reduce the impact of nonideal conditions on data analysis. Some of these conditions include

- Nonnormal errors
- Missing observations
- Outliers

Box^{12} introduced the word “robust" when examining the effect of departure from normality on the analysis of variance. Some relevant articles are:^{13,15,16,48}

Designs for slope estimation are designs for the estimation of the partial derivatives of the mean response $\eta \left(x\right)$ with respect to ${x}_{1},\dots ,{x}_{k}$ .

Myers and Lahoda^{17} extended the Box-Draper integrated mean squared error criterion to find appropriate designs for the joint estimation of the partial derivatives of $\eta \left(x\right)$ .

Hader and Park^{18} introduced the concept of slope rotatability for second-order models. Under this design criterion, $Var\left[\frac{\partial \widehat{y}\left(x\right)}{\partial {x}_{i}}\right]$ is constant at all points that are equidistant from the design center $\left(i=1,2,\dots ,k\right)$ .

Park^{19} considered “slope rotatability over all directions":

$$\widehat{y}\left(x\right)={f}^{\text{'}}\left(x\right)\widehat{\beta}.$$

$$\frac{\partial \widehat{y}}{\partial v}={v}^{\text{'}}grad\left[\widehat{y}\left(x\right)\right]={v}^{\text{'}}G\widehat{\beta},$$

where $v$ is a unit vector and $G$ is the $k\times p$ matrix of partial derivatives of the elements of ${f}^{\text{'}}\left(x\right)$ with respect to ${x}_{1},\dots ,{x}_{k}$ .

$$Var\left[\frac{\partial \widehat{y}\left(x\right)}{\partial v}\right]={\sigma}_{\in}^{2}{v}^{\text{'}}G{\left({X}^{\text{'}}X\right)}^{-1}{G}^{\text{'}}v$$

$$Av{g}_{v}Var\left[\frac{\partial \widehat{y}\left(x\right)}{\partial v}\right]=c{\displaystyle \underset{s}{\int}Var\left[\frac{\partial \widehat{y}\left(x\right)}{\partial v}\right]}dA,$$

where $S$ denotes the surface of a hypersphere of unit radius and $c$ is the reciprocal of the surface area of $S$ .

$$W\left(x\right)=Av{g}_{v}Var\left[\frac{\partial \widehat{y}\left(x\right)}{\partial v}\right]=\frac{{\sigma}_{\in}^{2}}{k}tr\left[G{\left({X}^{\text{'}}X\right)}^{-1}{G}^{\text{'}}\right]$$

A design $D$ is slope rotatable over all directions if $W\left(x\right)$ is constant at all points equidistant from the design center.

Park gave necessary and sufficient conditions for a design to be slope rotatable over all directions for second-order models.

$$\text{Rotatability}\Rightarrow \text{Slope rotatability over all directions}$$

**Related articles**

- Huda and Mukerjee^{20}
- Mukerjee and Huda^{21}
- Park^{22}

Designs for detecting lack of fit are designs that induce a certain degree of sensitivity to possible inadequacy of the fitted model.

Construction of such designs is done by the maximization of

$$\lambda ={\gamma}^{\text{'}}L\gamma ,$$

where

$$L={Z}^{\text{'}}\left[I-X{\left({X}^{\text{'}}X\right)}^{-1}{X}^{\text{'}}\right]Z.$$

- ${\Lambda}_{1}$ -optimality: Maximize ${\Lambda}_{1}$ , the minimum of $\lambda $ over a specified region $\varphi $ in the $\gamma $ space.
- ${\Lambda}_{2}$ -optimality: Maximize ${\Lambda}_{2}$ , the average value of $\lambda $ over the boundary of $\varphi $ .

**Estimation of optimum conditions**

**The method of steepest ascent**

This is a maximum-region-seeking procedure introduced by Box and Wilson^{1} for RSM: to maximize

$$\widehat{y}\left(x\right)={\widehat{\beta}}_{0}+{\displaystyle \sum _{i=1}^{k}{\widehat{\beta}}_{i}{x}_{i}}$$

subject to the constraint ${\displaystyle \sum _{i=1}^{k}{x}_{i}^{2}}={r}^{2}$ .

The solution is ${x}_{i}=r{\upsilon}_{i}$ $\left(i=1,2,\dots ,k\right)$ , where $\upsilon ={\left({\upsilon}_{1},\dots ,{\upsilon}_{k}\right)}^{\text{'}}$ is a unit vector in the direction of $grad\left[\widehat{y}\left(x\right)\right]$ . A related article is Myers and Khuri.^{23}

**Ridge analysis**

Related articles include Hoerl^{24} and Draper.^{25}

To maximize (minimize)

$$\widehat{y}\left(x\right)={\widehat{\beta}}_{0}+{\displaystyle \sum _{i=1}^{k}{\widehat{\beta}}_{i}}{x}_{i}+{\displaystyle \sum _{i=1}^{k}{\displaystyle \sum _{j=1}^{k}{\widehat{\beta}}_{ij}}}{x}_{i}{x}_{j},$$

or, in matrix form,

$$\widehat{y}\left(x\right)={\widehat{\beta}}_{0}+{x}^{\text{'}}b+{x}^{\text{'}}Bx,$$

subject to the constraint ${\displaystyle \sum _{i=1}^{k}{x}_{i}^{2}}={r}^{2}$ , consider

$$h\left(x\right)=\widehat{y}\left(x\right)-\mu \left({\displaystyle \sum _{i=1}^{k}{x}_{i}^{2}-{r}^{2}}\right),$$

where $\mu $ is a Lagrange multiplier, $b={\left({\widehat{\beta}}_{1},\dots ,{\widehat{\beta}}_{k}\right)}^{\text{'}}$ , and $B$ is the $k\times k$ symmetric matrix whose $i$ th diagonal element is ${\widehat{\beta}}_{ii}$ and whose $\left(i,j\right)$ th off-diagonal element is $\frac{1}{2}{\widehat{\beta}}_{ij}$ $\left(i\ne j\right)$ .

For a maximum, $\mu $ should be larger than ${e}_{\mathrm{max}}\left(B\right)$ , the largest eigenvalue of $B$ . For a minimum, $\mu $ should be smaller than ${e}_{\mathrm{min}}\left(B\right)$ , the smallest eigenvalue of $B$ .

By selecting several values of $\mu $ as indicated above, we obtain plots of the maximum of $\widehat{y}$ vs. $r$ and plots of ${x}_{i}$ vs. $r$ for $i=1,2,\dots ,k$ .
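Setting the gradient of $h\left(x\right)$ to zero gives $b+2Bx=2\mu x$, so each choice of $\mu $ yields a stationary point by solving a linear system. The sketch below sweeps $\mu $ above ${e}_{\mathrm{max}}\left(B\right)$ for a hypothetical second-order fit; all coefficient values are invented for illustration.

```python
import numpy as np

# Hypothetical second-order fit: y_hat(x) = b0 + x'b + x'Bx
b0 = 60.0
b = np.array([2.0, 1.0])
B = np.array([[-1.5, 0.4],
              [0.4, -1.0]])  # symmetric matrix of quadratic coefficients

e_max = np.linalg.eigvalsh(B).max()

def ridge_point(mu):
    """Stationary point of h(x): solves (2B - 2*mu*I) x = -b."""
    return np.linalg.solve(2.0 * (B - mu * np.eye(2)), -b)

# For mu > e_max(B) each solution is a constrained maximum at radius ||x(mu)||;
# as mu decreases toward e_max, the radius r grows, tracing the ridge plot data.
mus = np.linspace(e_max + 0.2, e_max + 2.0, 10)
radii = [np.linalg.norm(ridge_point(mu)) for mu in mus]
y_vals = [b0 + b @ ridge_point(mu) + ridge_point(mu) @ B @ ridge_point(mu)
          for mu in mus]
```

Plotting `y_vals` against `radii` (and each coordinate of `ridge_point(mu)` against the radius) reproduces the ridge analysis plots described above.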

Khuri and Myers^{26} introduced a certain modification to the method of ridge analysis in cases in which the design is not rotatable and may even be ill conditioned. In such cases, $Var\left[\widehat{y}\left(x\right)\right]$ can vary appreciably on a hypersphere $S\left(r\right)$ centered at the origin inside a region of interest. The proposed modification optimizes $\widehat{y}\left(x\right)$ on $S\left(r\right)$ subject to maintaining a certain constraint on the size of $Var\left[\widehat{y}\left(x\right)\right]$ :

$$Var\left[\widehat{y}\left(x\right)\right]={\sigma}_{\in}^{2}{f}^{\text{'}}\left(x\right){\left({X}^{\text{'}}X\right)}^{-1}f\left(x\right)$$

$$Var\left[\widehat{y}\left(x\right)\right]={\sigma}_{\in}^{2}{\displaystyle \sum _{i=1}^{p}\frac{{\left[{f}^{\text{'}}\left(x\right){\omega}_{i}\right]}^{2}}{{v}_{i}}}$$

$${f}^{\text{'}}\left(x\right){\omega}_{i}={a}_{i}+{x}^{\text{'}}{\tau}_{i}+{x}^{\text{'}}{T}_{i}x,$$

where $p$ is the number of parameters in the model, ${\omega}_{1},\dots ,{\omega}_{p}$ are orthonormal eigenvectors of ${X}^{\text{'}}X$ , and ${v}_{1},\dots ,{v}_{p}$ are the corresponding eigenvalues of ${X}^{\text{'}}X$ . Let ${v}_{\mathrm{min}}$ be the smallest eigenvalue of ${X}^{\text{'}}X$ and let ${\omega}_{m}$ be the corresponding eigenvector. The modified procedure optimizes $\widehat{y}\left(x\right)$ subject to the constraints

$${\displaystyle \sum _{i=1}^{k}{x}_{i}^{2}}={r}^{2}$$

$$\left|{f}^{\text{'}}\left(x\right){\omega}_{m}\right|\text{\hspace{1em}}\le \text{\hspace{1em}}q$$

where $q$ is a small positive constant, or equivalently,

$$\left|{a}_{m}+{x}^{\text{'}}{\tau}_{m}+{x}^{\text{'}}{T}_{m}x\right|\text{\hspace{1em}}\le \text{\hspace{1em}}q$$

The estimated optimum of a mean response and the location of the optimum are obtained by using $\widehat{y}\left(x\right)$ . They are therefore random variables. In order to have a better assessment of optimum conditions, confidence intervals on the optimum of the mean response as well as confidence regions on the location of the optimum are needed. Relevant references include

- Box and Hunter^{27}
- Stablein, Carter, and Wampler^{28}
- Carter, Chinchilli, Campbell, and Wampler^{29}
- Carter, Chinchilli, Myers, and Campbell^{30}

A design is rotatable if and only if the ${X}^{\text{'}}X$ matrix has a particular form, which we denote by ${X}^{\text{'}}{X}_{rot}$ . For any given design $D$ , the “closeness" of the corresponding ${X}^{\text{'}}X$ matrix to ${X}^{\text{'}}{X}_{rot}$ can be determined. On this basis, a measure $\psi \left(D\right)$ , which quantifies the percent rotatability that is inherent in $D$ , can be derived. A design $D$ is rotatable if and only if $\psi \left(D\right)=100$ .

For example, the ${3}^{2}$ factorial design is 93.08% rotatable.

Properties of $\psi \left(D\right)$ :

- It is invariant to a change of scale (same change for all the input variables).
- It is invariant to the addition of experimental runs at the design center (design augmentation at the center).
- It applies to any response surface model and design.
- It is not invariant to design rotation.

Draper and Pukelsheim^{32} introduced a similar measure that is invariant to design rotation, but it applies only to second-order models.

The measure $\psi \left(D\right)$ :

- Can be used to compare designs with respect to their degree of rotatability.
- Can be used to construct a design that satisfies a certain desirable criterion, such as a bias or a variance criterion, in addition to having a relatively high percent rotatability.
- Can be used to “repair" rotatability by the addition of properly chosen experimental runs to a given design.

**Giovannitti-Jensen and Myers (1989)**

They plotted the maximum and minimum values of $\frac{1}{{\sigma}_{\in}^{2}}Var\left[\widehat{y}\left(x\right)\right]$ on a hypersphere $S\left(r\right)$ against $r$ (variance dispersion graphs).

Khuri^{33} used the measure of rotatability $\psi \left(D\right)$ to derive several upper bounds on the range of $\frac{1}{{\sigma}_{\in}^{2}}Var\left[\widehat{y}\left(x\right)\right]$ . These bounds are easy to compute since they only require determining eigenvalues and traces associated with the matrices ${X}^{\text{'}}X$ and $F$ , where

$$F={\left({X}^{\text{'}}X\right)}^{-1}-{U}_{rot}^{-1}$$

and ${U}_{rot}$ is the “rotatable" portion of ${X}^{\text{'}}X$ .

Park and Kim^{34} defined a measure
${Q}_{k}\left(D\right)$
to quantify the degree of slope rotatability over axial directions^{18} for second-order models. It takes the value zero if and only if the design is slope rotatable. It can also be used to “repair" slope rotatability by design augmentation.

Jang and Park^{35} considered the maximum, $Vmax\left(r\right)$ , and minimum, $Vmin\left(r\right)$ , values of $\frac{1}{{\sigma}_{\in}^{2}}W\left(x\right)$ over a hypersphere $S\left(r\right)$ , where

$$W\left(x\right)=\frac{{\sigma}_{\in}^{2}}{k}tr\left[G{\left({X}^{\text{'}}X\right)}^{-1}{G}^{\text{'}}\right]$$

The quantity

$$\Delta \left(r\right)=Vmin\left(r\right)-Vmax\left(r\right)$$

is zero if and only if the design is slope rotatable over all directions. Jang and Park plotted $Vmax\left(r\right)$ and $Vmin\left(r\right)$ against $r$ (slope variance dispersion graphs).

**Khuri and Vining^{36}**

Specification problem: To determine conditions on the input variables in a process that cause the mean response $\eta \left(x\right)$ to fall within specified bounds, e.g., $a<\eta \left(x\right)<b$ , where $a$ and $b$ are given, with a specified degree of confidence.

**Procedure**

Start with an initial $n$ -point design ${D}_{n}$ . Calculate

$${\widehat{\beta}}_{n}={\left({X}_{n}^{\text{'}}{X}_{n}\right)}^{-1}{X}_{n}^{\text{'}}{y}_{n},$$

$$\widehat{y}\left(x\right)={f}^{\text{'}}\left(x\right){\widehat{\beta}}_{n}.$$

Compute

$${y}_{1n}\left(x\right)=\widehat{y}\left(x\right)-ST{D}_{n}{t}_{\alpha /2,n-p}$$

$${y}_{2n}\left(x\right)=\widehat{y}\left(x\right)+ST{D}_{n}{t}_{\alpha /2,n-p},$$

where

$$ST{D}_{n}={\left[M{S}_{E}{f}^{\text{'}}\left(x\right){\left({X}_{n}^{\text{'}}{X}_{n}\right)}^{-1}f\left(x\right)\right]}^{1/2}$$

If there is a point ${x}_{0}$ such that $a<{y}_{1n}\left({x}_{0}\right)$ and ${y}_{2n}\left({x}_{0}\right)<b$ , then ${x}_{0}$ has a probability of at least $1-\alpha $ of satisfying

$a<\eta \left({x}_{0}\right)<b,$

and hence ${x}_{0}$ is a solution to the specification problem.

Khuri and Vining presented a sequential procedure of adding points to
${D}_{n}$
, if necessary, until a solution can be found to the inequalities

$$a<{y}_{1n}\left(x\right),{y}_{2n}\left(x\right)<b$$

for a sufficiently large $n$ .
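One iteration of the check can be sketched as follows: fit the current model, form the confidence band at a candidate point, and test whether the band lies inside the specification limits. The design, responses, and bounds below are hypothetical, and the t critical value is a tabled constant for the example's degrees of freedom.

```python
import numpy as np

# Hypothetical first-order fit in k = 2 from an n = 6 run design (p = 3)
D = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0], [0, 0]], dtype=float)
y = np.array([48.0, 55.0, 50.0, 57.0, 52.0, 53.0])
X = np.column_stack([np.ones(len(D)), D])
n, p = X.shape

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_hat
MSE = resid @ resid / (n - p)
t_crit = 3.182  # tabled t_{0.025, n-p} for n - p = 3 (alpha = 0.05, two-sided)

def solves_specification(x, spec_lo, spec_hi):
    """True if the whole CI for eta(x) lies inside (a, b) at this point."""
    f = np.concatenate([[1.0], x])
    std = np.sqrt(MSE * f @ np.linalg.inv(X.T @ X) @ f)
    y_hat = f @ beta_hat
    y1, y2 = y_hat - t_crit * std, y_hat + t_crit * std
    return spec_lo < y1 and y2 < spec_hi

ok_center = solves_specification(np.zeros(2), 45.0, 60.0)
```

If no point satisfies the check, the sequential procedure would augment the design and refit, shrinking the band until a solution appears (or the budget is exhausted).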

An experiment in which a number of responses can be measured for each setting of a group of input variables, ${x}_{1},\dots ,{x}_{k}$ , is called a multiresponse experiment.

When several responses are considered simultaneously, any statistical investigation of the responses should take into consideration the multivariate nature of the data. The response variables should not be analyzed individually or independently of one another. This is particularly true when the response variables are correlated.

Traditional response surface techniques that apply to single response models are, in general, not adequate to analyze multiresponse models.

For a general multiresponse model

$$Y=F\left(D,\beta \right)+\in ,$$

an estimate of $\beta $ is obtained by minimizing the determinant of

$$S\left(D,\beta \right)={\left[Y-F\left(D,\beta \right)\right]}^{\text{'}}\left[Y-F\left(D,\beta \right)\right]$$

(the Box-Draper determinant criterion^{37}). In particular, if the individual response models are linear, then

$${y}_{i}={X}_{i}{\beta}_{i}+{\in}_{i},\text{\hspace{1em}}i=1,2,\dots ,r.$$

Relevant articles include Box, Hunter, MacGregor, and Erjavec;^{38} Khuri.^{39}

The r linear response models,

$${y}_{i}={X}_{i}{\beta}_{i}+{\in}_{i},$$ $i=1,2,\mathrm{...},r,$

can be represented as a single linear multiresponse model

$y=X\Theta +\delta ,$

where $y={\left[{y}_{1}^{\text{'}},{y}_{2}^{\text{'}},\dots ,{y}_{r}^{\text{'}}\right]}^{\text{'}}$ , $X$ is the block-diagonal matrix $diag\left({X}_{1},{X}_{2},\dots ,{X}_{r}\right)$ , $\Theta ={\left[{\beta}_{1}^{\text{'}},{\beta}_{2}^{\text{'}},\dots ,{\beta}_{r}^{\text{'}}\right]}^{\text{'}}$ , and

$$\delta ={\left[{\in}_{1}^{\text{'}},{\in}_{2}^{\text{'}},\dots ,{\in}_{r}^{\text{'}}\right]}^{\text{'}}.$$

The variance-covariance matrix of $\delta $ is given by the direct (Kronecker) product $\Sigma \otimes {I}_{n}$ , where $\Sigma $ is the variance-covariance matrix of the $r$ response variables. The generalized least-squares estimator of $\Theta $ is

$$\widehat{\Theta}={\left[{X}^{\text{'}}\left({\Sigma}^{-1}\otimes {I}_{n}\right)X\right]}^{-1}{X}^{\text{'}}\left({\Sigma}^{-1}\otimes {I}_{n}\right)y$$

If $\Sigma $ is unknown, an estimate $\widehat{\Sigma}=\left({\widehat{\sigma}}_{ij}\right)$ can be used, where

$${\widehat{\sigma}}_{ij}=\frac{1}{n}{y}_{i}^{\text{'}}\left[{I}_{n}-{P}_{i}\right]\left[{I}_{n}-{P}_{j}\right]{y}_{j},\text{\hspace{1em}}i,j=1,2,\dots ,r,$$

and

$${P}_{i}={X}_{i}{\left({X}_{i}^{\text{'}}{X}_{i}\right)}^{-1}{X}_{i}^{\text{'}},\text{\hspace{1em}}i=1,2,\dots ,r.$$

This is Zellner's^{40} seemingly unrelated regressions (SUR) estimator.

In particular, if ${X}_{1}={X}_{2}=\cdots ={X}_{r}={X}_{0}$ , then

$${\widehat{\beta}}_{i}={\left({X}_{0}^{\text{'}}{X}_{0}\right)}^{-1}{X}_{0}^{\text{'}}{y}_{i},\text{\hspace{1em}}i=1,2,\dots ,r,$$

that is, the estimator reduces to ordinary least squares applied to each response separately.
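This reduction can be verified numerically: with a common design matrix, the generalized least-squares estimator of the stacked model coincides with per-equation ordinary least squares for any $\Sigma $. The data and covariance below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two responses (r = 2) sharing the same design matrix X0 (n = 5, p = 2)
n, p = 5, 2
X0 = np.column_stack([np.ones(n), rng.normal(size=n)])
y1 = rng.normal(size=n)
y2 = rng.normal(size=n)

# Stacked model y = X Theta + delta with X = diag(X0, X0), Var(delta) = Sigma x I_n
X = np.kron(np.eye(2), X0)
y = np.concatenate([y1, y2])
Sigma = np.array([[1.0, 0.6], [0.6, 2.0]])    # error covariance across responses
W = np.linalg.inv(np.kron(Sigma, np.eye(n)))  # (Sigma kron I_n)^{-1}

theta_gls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
theta_ols = np.concatenate([np.linalg.lstsq(X0, y1, rcond=None)[0],
                            np.linalg.lstsq(X0, y2, rcond=None)[0]])
```

The agreement holds because ${X}^{\text{'}}\left({\Sigma}^{-1}\otimes {I}_{n}\right)X$ factors as ${\Sigma}^{-1}\otimes {X}_{0}^{\text{'}}{X}_{0}$ , so $\Sigma $ cancels out of the estimator.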

Draper and Hunter^{41} and Fedorov^{42} considered D-optimal designs for multiresponse models; here $\Sigma $ must be known. Wijesinha and Khuri^{43} used an estimate of $\Sigma $ . Wijesinha and Khuri^{44} considered designs that maximize the power of the multivariate lack-of-fit test.

Khuri^{45} considered multiresponse rotatability for linear multiresponse models.

A design is multiresponse rotatable if $Var\left[\widehat{y}\left(x\right)\right]$ is constant at all points $x$ that are equidistant from the origin, where $\widehat{y}\left(x\right)$ is the vector of $r$ predicted responses.

This design property can be achieved if and only if the design is rotatable for a single-response model having the highest degree among all the r response models.

Khuri^{46} developed a test for lack of fit of a linear multiresponse model.

The models

${y}_{i}={X}_{i}{\beta}_{i}+{\in}_{i},$ $i=1,2,\dots ,r,$

can be written as

$Y={X}^{*}B+\in $

where

$$Y=\left[{y}_{1}:\text{\hspace{0.17em}}{y}_{2}:\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{...}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}:{y}_{r}\right]$$

$${X}^{*}=\left[{X}_{1}:\text{\hspace{0.17em}}\text{\hspace{0.17em}}{X}_{2}:\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\dots \text{\hspace{0.17em}}\text{\hspace{0.17em}}:\text{\hspace{0.17em}}{X}_{r}\right]$$

$$B=diag\left({\beta}_{1},{\beta}_{2},\text{\hspace{0.17em}}\mathrm{...}\text{\hspace{0.17em}},{\beta}_{r}\right)$$

$$\in =\left[{\in}_{1}\text{\hspace{0.17em}}:\text{\hspace{0.17em}}{\in}_{2}\text{\hspace{0.17em}}:\text{\hspace{0.17em}}\mathrm{...}\text{\hspace{0.17em}}:\text{\hspace{0.17em}}{\in}_{r}\right]$$

Assume that replicated observations are available on all the responses at some points in a region of interest.

The above model suffers from lack of fit if and only if there exists a linear combination of the responses that suffers from lack of fit.

The multivariate lack-of-fit test is based on Roy's largest-root statistic, ${e}_{\mathrm{max}}\left({G}_{2}^{-1}{G}_{1}\right)$ ,

where

$${G}_{1}={Y}^{\text{'}}\left[{I}_{n}-X{({X}^{\text{'}}X)}^{-1}{X}^{\text{'}}-K\right]Y$$

$${G}_{2}={Y}^{\text{'}}KY$$

$$K=diag\left({K}_{1},{K}_{2},\text{\hspace{0.17em}}\dots \text{\hspace{0.17em}},{K}_{m},0\right)$$

${K}_{i}={I}_{{v}_{i}}-\frac{1}{{v}_{i}}{J}_{{v}_{i}}$ $i=1,2,\dots ,m$

( ${v}_{i}$ repeated observations are taken at the $i$ th design point, $i=1,2,\dots ,m$ ).

Large values of the test statistic are significant.

Note: The test may be significant even though the response models do not individually exhibit a significant lack of fit.
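The statistic can be assembled directly from the definitions above. The sketch below uses a hypothetical single-input design with three duplicated points plus one unreplicated point, and simulated data in which the second response contains a quadratic term that the first-order model misses.

```python
import numpy as np

rng = np.random.default_rng(2)

# One input variable, first-order model, r = 2 responses.
# Design: three distinct points replicated twice each, plus one single point.
x = np.array([-1.0, -1.0, 0.0, 0.0, 1.0, 1.0, 0.5])
X = np.column_stack([np.ones(len(x)), x])
n = len(x)
Y = np.column_stack([2.0 + 3.0 * x + rng.normal(0, 0.1, n),
                     1.0 - 2.0 * x + x ** 2 + rng.normal(0, 0.1, n)])  # y2 lacks fit

# K = diag(K_1, K_2, K_3, 0) with K_i = I_2 - (1/2) J_2 for each replicated point
K_i = np.eye(2) - np.ones((2, 2)) / 2.0
K = np.zeros((n, n))
for start in (0, 2, 4):
    K[start:start + 2, start:start + 2] = K_i

P = X @ np.linalg.inv(X.T @ X) @ X.T
G1 = Y.T @ (np.eye(n) - P - K) @ Y   # lack-of-fit sum of squares and products
G2 = Y.T @ K @ Y                     # pure-error sum of squares and products

# Roy's largest-root statistic for the multivariate lack-of-fit test
stat = np.linalg.eigvals(np.linalg.solve(G2, G1)).real.max()
```

Because the design points are constant within replicate groups, $KX=0$, which makes ${I}_{n}-P-K$ an orthogonal projection and guarantees that ${G}_{1}$ is positive semidefinite.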

Richert et al.^{47} investigated the effects of heating temperature $\left({x}_{1}\right)$ , pH level $\left({x}_{2}\right)$ , redox potential $\left({x}_{3}\right)$ , sodium oxalate $\left({x}_{4}\right)$ , and sodium lauryl sulfate $\left({x}_{5}\right)$ on foaming properties of whey protein concentrates. The responses are:

- ${y}_{1}$ = whipping time
- ${y}_{2}$ = maximum overrun
- ${y}_{3}$ = percent soluble protein

A second-order model was assumed for each response.

Khuri and Conlon^{14} proposed a distance-based approach to the simultaneous optimization of multiple responses.

The purpose of a multiresponse optimization technique is to find operating conditions on the input variables that lead to optimal, or near optimal, values of several response functions.

$${y}_{i}={X}_{0}{\beta}_{i}+{\in}_{i},$$ $i=1,2,\dots ,r.$

In this case, an unbiased estimate of $\Sigma $ , the variance-covariance matrix of the $r$ response variables, is given by

$$\widehat{\Sigma}={Y}^{\text{'}}\left[{I}_{n}-{X}_{0}{\left({X}_{0}^{\text{'}}{X}_{0}\right)}^{-1}{X}_{0}^{\text{'}}\right]Y/\left(n-p\right),$$

provided that $r\le n-p$ .

${\widehat{y}}_{i}\left(x\right)={f}^{\text{'}}\left(x\right){\widehat{\beta}}_{i},$ $i=1,2,\dots ,r.$

Let $\widehat{y}\left(x\right)={\left[{\widehat{y}}_{1}\left(x\right),{\widehat{y}}_{2}\left(x\right),\dots ,{\widehat{y}}_{r}\left(x\right)\right]}^{\text{'}}.$

Let ${\phi}_{i}$ be the optimum value of ${\widehat{y}}_{i}\left(x\right)$ optimized individually over a region of interest R $\left(i=1,2,\text{\hspace{0.17em}}\dots \text{\hspace{0.17em}},r\right)$ .

Let $\phi ={\left({\phi}_{1},{\phi}_{2},\dots ,{\phi}_{r}\right)}^{\text{'}}$ . The variance-covariance matrix of $\widehat{y}\left(x\right)$ is

$$Var\left[\widehat{y}\left(x\right)\right]={f}^{\text{'}}\left(x\right){\left({X}_{0}^{\text{'}}{X}_{0}\right)}^{-1}f\left(x\right)\Sigma $$

An “ideal” optimum occurs when ${\widehat{y}}_{i}\left(x\right)$ , for $i=1,2,\dots ,r$ , attain their individual optima at the same set of conditions.

To find “compromise conditions" that are somewhat favorable to all the responses, we consider the metric (distance function)

$$\rho \left[\widehat{y}\left(x\right),\phi \right]={\left[{\left(\widehat{y}\left(x\right)-\phi \right)}^{\text{'}}{\left\{Var\left[\widehat{y}\left(x\right)\right]\right\}}^{-1}\left(\widehat{y}\left(x\right)-\phi \right)\right]}^{1/2}$$

Multiresponse optimization is then reduced to minimizing $\rho \left[\widehat{y}\left(x\right),\phi \right]$ with respect to $x$ over $R$ .
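A minimal sketch of this distance-based approach is given below, assuming two hypothetical first-order response fits over $R={\left[-1,1\right]}^{2}$, an invented covariance estimate, and a simple grid search in place of a formal optimizer.

```python
import numpy as np

# Two responses fitted with the same first-order model over R = [-1, 1]^2
X0 = np.array([[1, -1, -1], [1, 1, -1], [1, -1, 1], [1, 1, 1], [1, 0, 0]],
              dtype=float)
B_hat = np.array([[50.0, 40.0],
                  [4.0, -3.0],
                  [2.0, 5.0]])                 # columns: beta_hat_1, beta_hat_2
Sigma_hat = np.array([[1.0, 0.3], [0.3, 0.8]])  # hypothetical covariance estimate
M_inv = np.linalg.inv(X0.T @ X0)

def f(x):
    return np.array([1.0, x[0], x[1]])

def y_hat(x):
    return B_hat.T @ f(x)

# Individual maxima over R (a first-order model peaks at a vertex of the square)
verts = [np.array([sx, sy]) for sx in (-1, 1) for sy in (-1, 1)]
phi = np.array([max(y_hat(v)[i] for v in verts) for i in range(2)])

def rho(x):
    """Distance of y_hat(x) from the ideal optimum phi, weighted by Var[y_hat(x)]."""
    V = (f(x) @ M_inv @ f(x)) * Sigma_hat
    d = y_hat(x) - phi
    return np.sqrt(d @ np.linalg.solve(V, d))

# Grid search for compromise conditions minimizing rho over R
grid = np.linspace(-1.0, 1.0, 41)
best = min(((gx, gy) for gx in grid for gy in grid),
           key=lambda g: rho(np.array(g)))
```

Here the two responses attain their individual maxima at different vertices, so the minimizer of $\rho$ is a compromise point; in this example both fitted responses increase with ${x}_{2}$, so the compromise lies on the boundary ${x}_{2}=1$.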

- Research workers in RSM should provide the means to facilitate the use of RSM by data analysts, practitioners, and experimenters.
- Research workers in RSM need feedback from experimenters. This can help create methodology that applies to realistic situations.
- Better software support. The software currently available for RSM is limited. This is particularly true in the multiresponse area.
- More research work is needed in the multiresponse case, especially in the design area.
- Multiresponse techniques should be modified to allow the presence of a block effect in the model.
- There is a need to explore new RSM techniques suitable for more general models under less restrictive assumptions (generalized linear models).
- Develop designs which are robust with respect to several criteria.
- There is a need for nonparametric techniques in RSM (model-free techniques).


1. Box GEP, Wilson KB. On the experimental attainment of optimum conditions. *Journal of the Royal Statistical Society B*. 1951;13:1–45.
2. Hill WJ, Hunter WG. A review of response surface methodology: a literature survey. *Technometrics*. 1966;8(4):571–590.
3. Mead R, Pike DJ. A review of response surface methodology from a biometric point of view. *Biometrics*. 1975;31(4):803–851.
4. Myers RH, Khuri AI, Carter WH. Response surface methodology: 1966–1988. *Technometrics*. 1989;31(2):137–157.
5. Steinberg DM, Hunter WG. Experimental design: review and comment (with discussion and response). *Technometrics*. 1984;26(2):105–109.
6. Myers RH, Montgomery DC. Response Surface Methodology: Process and Product Optimization Using Designed Experiments. New York: Wiley; 1995.
7. Khuri AI, Cornell JA. Response Surfaces. 2nd ed. New York: Dekker; 1996.
8. Box GEP, Draper NR. Response Surfaces, Mixtures, and Ridge Analyses. 2nd ed. Hoboken, New Jersey: Wiley; 2007.
9. Box GEP. Choice of response surface design and alphabetic optimality. *Utilitas Mathematica*. 1982;21B:11–55.
10. Box GEP, Draper NR. A basis for the selection of a response surface design. *Journal of the American Statistical Association*. 1959;54(287):622–654.
11. Box GEP, Draper NR. The choice of a second order rotatable design. *Biometrika*. 1963;50(3/4):335–352.
12. Box GEP. Non-normality and tests on variances. *Biometrika*. 1953;40(3/4):318–335.
13. Herzberg AM, Andrews DF. Some considerations in the optimal design of experiments in non-optimal situations. *Journal of the Royal Statistical Society B*. 1976;38(3):284–289.
14. Khuri AI, Conlon M. Simultaneous optimization of multiple responses represented by polynomial regression functions. *Technometrics*. 1981;23(4):363–375.
15. McKee B, Kshirsagar AM. Effect of missing plots in some response surface designs. *Communications in Statistics - Theory and Methods*. 1982;11(14):1525–1549.
16. Akhtar M, Prescott P. Response surface designs robust to missing observations. *Communications in Statistics - Simulation and Computation*. 1986;15(2):345–363.
17. Myers RH, Lahoda SJ. A generalization of the response surface mean square error criterion with a specific application to the slope. *Technometrics*. 1975;17(4):481–486.
18. Hader RJ, Park SH. Slope rotatable central composite designs. *Technometrics*. 1978;20(4):413–417.
19. Park SH. A class of multifactor designs for estimating the slope of response surfaces. *Technometrics*. 1987;29(4):449–453.
20. Huda S, Mukerjee R. Minimizing the maximum variance of the difference between two estimated responses. *Biometrika*. 1984;71(2):381–385.
21. Mukerjee R, Huda S. Minimax second- and third-order designs to estimate the slope of a response surface. *Biometrika*. 1985;72(1):173–178.
22. Park JY. Designs for estimating the difference between two responses. *Communications in Statistics - Theory and Methods*. 1990;19(12):4773–4787.
23. Khuri AI, Myers RH. Modified ridge analysis. *Technometrics*. 1979;21(4):467–473.
24. Hoerl AE. Optimum solution of many variables equations. *Chemical Engineering Progress*. 1953;55:67–78.
25. Draper NR. “Ridge analysis" of response surfaces. *Technometrics*. 1963;5(4):469–479.
26. Myers RH, Khuri AI. A new procedure for steepest ascent. *Communications in Statistics - Theory and Methods*. 1979;8(14):1359–1376.
27. Box GEP, Hunter JS. A confidence region for the solution of a set of simultaneous equations with an application to experimental design. *Biometrika*. 1954;41(1/2):190–199.
28. Stablein DM, Carter WH, Wampler GL. Confidence regions for constrained optima in response-surface experiments. *Biometrics*. 1983;39(3):759–763.
29. Carter WH, Chinchilli VM, Campbell ED, et al. Confidence interval about the response at the stationary point of a response surface, with an application to preclinical cancer therapy. *Biometrics*. 1984;40(4):1125–1130.
30. Carter WH, Chinchilli VM, Myers RH, et al. Confidence intervals and improved ridge analysis of response surfaces. *Technometrics*. 1986;28(4):339–346.
31. Khuri AI. A measure of rotatability for response surface designs. *Technometrics*. 1988;30:95–104.
32. Draper NR, Pukelsheim F. Another look at rotatability. *Technometrics*. 1990;32(2):195–202.
33. Khuri AI. Response surface models with random block effects. *Technometrics*. 1992;34(1):26–37.
34. Park SH, Kim HJ. A measure of slope-rotatability for second order response surface experimental designs. *Journal of Applied Statistics*. 1992;19(3):391–404.
35. Jang DH, Park SH. A measure and a graphical method for evaluating slope rotatability in response surface designs. *Communications in Statistics - Theory and Methods*. 1993;22:1849–1863.
36. Khuri AI, Vining GG. Conditions required for the mean response to fall within specified bounds. *Journal of Statistical Planning and Inference*. 1991;28(1):125–136.
37. Box GEP, Draper NR. The Bayesian estimation of common parameters from several responses. *Biometrika*. 1965;52(3/4):355–365.
38. Box GEP, Hunter WG, MacGregor JF, et al. Some problems associated with the analysis of multiresponse data. *Technometrics*. 1973;15(1):33–51.
39. Khuri AI. Multiresponse rotatability. *Journal of Statistical Planning and Inference*. 1990;25(1):1–6.
40. Zellner A. An efficient method of estimating seemingly unrelated regressions and tests for aggregation bias. *Journal of the American Statistical Association*. 1962;57:348–368.
41. Draper NR, Hunter WG. Design of experiments for parameter estimation in multiresponse situations. *Biometrika*. 1966;53(3/4):525–533.
42. Fedorov VV. Theory of Optimal Experiments. New York: Academic Press; 1972.
43. Wijesinha MC, Khuri AI. The sequential generation of multiresponse D-optimal designs when the variance-covariance matrix is not known. *Communications in Statistics - Simulation and Computation*. 1987;16(1):239–259.
44. Wijesinha MC. Construction of optimal designs to increase the power of the multiresponse lack of fit test. *Journal of Statistical Planning and Inference*. 1987;16:179–192.
45. Khuri AI. The analysis of multiresponse experiments: a review. In: Subir Ghosh, editor. Statistical Design and Analysis of Industrial Experiments. New York; 1990:231–246.
46. Khuri AI. A test for lack of fit of a linear multiresponse model. *Technometrics*. 1985;27(3):213–218.
47. Richert SH, Morr CV, Conney CM. Effect of heat and other factors upon foaming properties of whey protein concentrates. *Journal of Food Science*. 1974;39(1):42–48.
48. Khuri AI, Myers RH. Design related robustness of tests in regression models. *Communications in Statistics - Theory and Methods*. 1981;10(3):223–235.

©2017 Khuri. This is an open access article distributed under the terms of the Creative Commons Attribution-NonCommercial License, which permits unrestricted use, distribution, and building upon the work non-commercially.
