- BMC Bioinformatics
- v.8; 2007
- PMC1800867

# Efficient classification of complete parameter regions based on semidefinite programming

^{1}Institute of Molecular Systems Biology, ETH Zürich, CH-8093 Zürich, Switzerland

^{2}Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge MA 02139, USA

Corresponding author.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## Abstract

### Background

Current approaches to parameter estimation are often inappropriate or inconvenient for the modelling of complex biological systems. For systems described by nonlinear equations, the conventional approach is to first numerically integrate the model, and then, in a second *a posteriori* step, check for consistency with experimental constraints. Hence, only single parameter sets can be considered at a time. Consequently, it is impossible to conclude that the "best" solution was identified or that no good solution exists, because parameter spaces typically cannot be explored in a reasonable amount of time.

### Results

We introduce a novel approach based on semidefinite programming to directly identify consistent steady state concentrations for systems consisting of mass action kinetics, i.e., polynomial equations and inequality constraints. The duality properties of semidefinite programming make it possible to rigorously certify infeasibility for whole regions of parameter space, thus enabling the simultaneous multi-dimensional analysis of entire parameter sets.

### Conclusion

Our algorithm reduces the computational effort of parameter estimation by several orders of magnitude, as illustrated through conceptual sample problems. Of particular relevance for systems biology, the approach can discriminate between structurally different candidate models by proving inconsistency with the available data.

## Background

Systems biology is a framework integrating diverse disciplines such as molecular biology and genetics with mathematical modelling and simulation, in which computational models have assumed an increasingly important role in recent years. Simulations are an essential element for quantitative understanding because the highly nonlinear behavior of complex biological systems can sometimes be counterintuitive [1,2]. System identification, which comprises both parameter estimation and structural network analysis in biological systems, can be extremely difficult since the governing principles and topological interactions are often not known [3]. The flood of data from novel high-throughput and genome-wide analyses further adds to the problem, and their diversity poses additional challenges for consistency testing and integration. Exact values of kinetic parameters (e.g., association or dissociation constants for protein-protein interaction) are very difficult to determine experimentally, because they are a function of the heterogeneous spatial and developmental cellular conditions.

Most approaches to mathematical modelling of biological systems either avoid parametrization by focusing on static steady-state models of reaction stoichiometry or topology [4,5], approximate behavior and regulation through heuristic-based approaches [6,7], or reduce the solution space by applying enzyme kinetics such as traditional Michaelis-Menten or *linlog* kinetics [8,9]. While this enables the detailed modelling of time-dependent processes [10], the underlying quasi steady-state assumption *a priori* neglects the dynamics of enzyme complexes. Thus, in many cases the use of mass-action kinetics in the form of elementary reactions becomes necessary, for instance when some of the enzyme kinetics do not follow Michaelis-Menten kinetics [11], a steady state of the system cannot be assumed [12], transport functions have to be described [13], or the network structure is uncertain [14,15].

The systems of equations that arise in time-dependent models based on physico-chemical kinetics of any kind are usually hard to parameterize, since their components are tightly coupled and there is limited information about the time evolution of the concentrations of all the species. Several approaches for efficient system identification have been developed, ranging from reduction of system dimensionality [16] to decoupling of the differential equations [17]. The actual parametrization algorithm, here referred to as the *wrapping algorithm*, is of particular importance [18] and was for example highlighted in a comparison between gradient-based methods and genetic and evolutionary algorithms [19]. All of these approaches, however, consider single, isolated points in the parameter space; this can be very time-consuming due to the iterative nature of the parametrization process [19] and, furthermore, cannot guarantee that existing solutions will be found.

We present here a conceptually novel approach based on techniques from numerical convex optimization, in particular *semidefinite programming* (*SDP*) [20], that can efficiently partition the parameter space into feasible and infeasible regions. Basically, our approach allows the analysis of sets of mass action kinetics, i.e., ordinary differential equations (ODEs) at steady state, under consideration of additional algebraic equality and inequality constraints (e.g., mass balances or formation rates). It is thus possible to simultaneously consider all the available information during the parameter estimation itself, rather than checking consistency *a posteriori* as in most current methods.

Previous applications used SDP either for rational experimental design based on covariance analysis [21] or for the derivation of barrier certificates by construction of surrogate models [22]. In contrast, the algorithm presented here needs no problem reformulation but is based directly on the original parameter estimation problem. By exploiting convexity in the search for feasible steady state concentrations, our methods enable a multi-dimensional perturbation analysis for *sets *of parameters. Our algorithm therefore provides rigorous proofs for the classification of the parameter space into feasible and infeasible regions, thus reducing the number of points needed for consideration by several orders of magnitude.

## Methods

### Steady state analysis

The natural starting point in the analysis of dynamical systems in biology is the determination of a steady state equilibrium of the model, since it represents the reference point for any kind of perturbation introduced to the system. It is therefore of utmost importance that the species' concentrations at an equilibrium point, which is henceforth synonymously referred to as steady state, are realistic and validated against as much data as possible.

In the present study we consider models in the form of mass action kinetics, hence all reaction rates are direct functions of the concentrations *y *and the kinetic constants *k*. At an equilibrium point, the corresponding system of *n *ODEs reduces to a set of polynomial equations

*f*_{i }(*k, y*) = 0, *i *= 1, ..., *n*. (1)

Besides the equations above, the state concentrations also have to satisfy various kinds of constraints based on experimental data, e.g., formation rates or overall mass balances. Since these will in practice inevitably include certain errors, *inequality *constraints also arise naturally. Consider for instance the general mass balance equations, here described by a matrix *M *that, when multiplied by the vector of state variables *y*, satisfies

*M*·*y *= *b*, *M *∈ ℝ^{m × n}, (2)

where *b *∈ ℝ^{m }is the total amount of each species. Hence, Eqn. (2) represents overall mass balances while Eqn. (1) comprises the mass action kinetics, i.e., the formation and consumption rates of each single species. Since components can exist either unbound or bound in a complex, there are in general fewer mass balance equations than differential equations (*n *≥ *m*). An experimental error *ε *can be taken into account by relaxing the exact mass balance equations to inequalities where

(1 - *ε*)·*b *≤ *M*·*y *≤ (1 + *ε*)·*b*. (3)
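The relaxation of Eqn. (2) into the interval constraints of Eqn. (3) can be sketched in a few lines; the helper name below is ours, not from the paper.

```python
# A minimal sketch (helper name is ours, not from the paper): relaxing the
# exact mass balances M·y = b of Eqn. (2) to the interval form of Eqn. (3)
# and testing a candidate concentration vector y against it.

def satisfies_relaxed_balances(M, y, b, eps):
    """True iff (1 - eps)*b_i <= (M·y)_i <= (1 + eps)*b_i for every row i."""
    for row, b_i in zip(M, b):
        total = sum(m_ij * y_j for m_ij, y_j in zip(row, y))
        if not (1 - eps) * b_i <= total <= (1 + eps) * b_i:
            return False
    return True

# Toy example: two species whose total is measured as b = 1.0 +/- 20%.
M = [[1.0, 1.0]]
b = [1.0]
assert satisfies_relaxed_balances(M, [0.5, 0.6], b, eps=0.2)       # sum 1.1
assert not satisfies_relaxed_balances(M, [0.5, 0.8], b, eps=0.2)   # sum 1.3
```

Each row of *M *contributes one lower and one upper bound, which is exactly how the linear inequality constraints of the optimization problem below arise.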

Thus, the parameter estimation problem can be rewritten as a nonlinear optimization

$\begin{array}{ll}\underset{y}{\mathrm{min}}\hfill & {f}_{0}(k,y)\hfill \\ \text{s}.\text{t}.\hfill & {f}_{i}(k,y)=0\hfill \\ \hfill & {g}_{i}(k,y)\ge 0,\hfill \end{array}\text{}\left(4\right)$

that depends on the steady state concentrations *y *and the set of kinetic parameters *k*. Here, *f*_{0 }(*k, y*) is an arbitrary objective function, e.g., the overall model error, the equality constraints *f*_{i }(*k, y*) represent the steady state condition, and the *g*_{i }(*k, y*) are a set of inequality constraints, e.g., the mass balances.

### Semidefinite programming

In the following, we will introduce a reformulation of the generalized parameter estimation problem (4) based on semidefinite programming (SDP). SDP is a specific kind of convex optimization problem [20,23,24], with very appealing numerical properties. An SDP problem corresponds to the optimization of a linear function subject to a matrix inequality. The only prerequisite for the applicability of the SDP is a quadratic (or polynomial) representation of the original set of equalities and inequalities (4), which allows a reformulation/relaxation in terms of symmetric matrices. For clarity of exposition, we focus on the case where the *f*_{i }are quadratic and the *g*_{i }are linear, although our methods extend to the fully polynomial case; see [25] for details.

For the SDP relaxation we define new variables *x, X *in terms of the original state variables *y *by:

$\begin{array}{cc}x=\left[\begin{array}{c}1\\ y\end{array}\right]\in {\mathbb{R}}^{n+1},& X=x\cdot {x}^{T}=\left[\begin{array}{cc}1& {y}^{T}\\ y& y{y}^{T}\end{array}\right]\in {\mathbb{R}}^{(n+1)\times (n+1)}\end{array}.\text{}\left(5\right)$
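The lifting in Eqn. (5) is easy to make concrete; the sketch below (with a helper name of our choosing) builds *x *and *X *from *y *and checks the properties the relaxation relies on.

```python
# Sketch of the lifting in Eqn. (5): x = [1; y] and X = x·x^T (the helper
# name `lift` is ours, not from the paper).

def lift(y):
    x = [1.0] + list(y)
    X = [[xi * xj for xj in x] for xi in x]
    return x, X

x, X = lift([1/3, 1/3, 1/3, 1/3])
assert X[0][0] == 1.0            # X_11 = 1 by construction
assert X[0][1:] == x[1:]         # first row (and column) recovers y
# Rank one: every entry equals the product of its border entries.
n = len(x)
assert all(abs(X[i][j] - X[i][0] * X[0][j]) < 1e-12
           for i in range(n) for j in range(n))
```

The last check is the practical rank-one test used later: because *X*_{11 }= 1, a rank-one *X *must satisfy *X*_{ij }= *X*_{i1}·*X*_{1j}, so *y *can be read off the first column.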

Based on these definitions, the original problem (4), including both steady state equations and data consistency inequalities, can be rewritten as:

$\begin{array}{ccc}\underset{x}{\mathrm{min}}& {x}^{T}{Q}_{obj}x& \\ \text{s}.\text{t}.& {x}^{T}{Q}_{i}x& =0\\ & L\cdot x& \ge 0\\ & x& =\left[\begin{array}{c}1\\ y\end{array}\right].\end{array}\text{}\left(6\right)$

Here, the symmetric matrix *Q*_{obj }defines an objective function (e.g., the identity matrix), that selects a specific solution out of the possibly many equilibria. The symmetric matrices *Q*_{i }correspond to the set of ODEs at steady state (Eqn. (1)) and the matrix *L *to linear inequality constraints derived, for instance, from approximate mass-balance equations (Eqn. (3)). Note, that in general the matrices *Q*_{i }are a function of the set of (generally unknown) rate parameters *k*_{j}, while *L*, which represents the mass balances, is not. The basic convex relaxation described in the *Appendix *then yields the following SDP relaxation of (6):

$\begin{array}{ccc}\underset{X}{\mathrm{min}}& Tr({Q}_{obj}\cdot X)& \\ \text{s}.\text{t}.& Tr({Q}_{i}\cdot X)& =0\\ & Tr({e}_{1}\cdot {e}_{1}^{T}\cdot X)& =1\\ & L\cdot X\cdot {e}_{1}& \ge 0\\ & L\cdot X\cdot {L}^{T}& \ge 0\\ & X& \succcurlyeq 0,\end{array}\text{}\left(7\right)$

where Tr is the trace operator, which adds up the diagonal elements, and the inequality in the last line indicates that the matrix *X *must be positive semidefinite, i.e., all its eigenvalues should be greater than or equal to zero. The vector *e*_{1 }is an all-zero vector, except for its first entry, which equals one to enforce *X*_{11 }= 1. The set of feasible solutions, i.e., the set of matrices *X *that satisfy the constraints, is always a convex set. Recall that the matrix *X *as defined in Eqn. (5) is by construction a rank one matrix. This rank condition, however, is not guaranteed for the *X *obtained from the optimization, and thus this property has to be checked independently after a solution to the SDP is computed. This is necessary because the relaxed problem formulation (7) is less strict than the original form (6), hence the set of feasible solutions becomes larger. Note that the introduction of additional nonlinear constraints helps to reduce the set of "false positive" solutions, an aspect discussed in more detail in an additional section below. Finally, in the particular case of *Q*_{obj }= 0, the problem reduces to whether or not the constraints can be satisfied for some matrix *X*. In this case, the SDP is referred to as a *feasibility problem*, where we are interested in proving the mere existence of solutions rather than finding any particular one.

### Dual SDP problems

The convexity of SDP problems has made it possible to develop sophisticated and reliable analytical and numerical methods to solve them [20]. A very important feature of SDP problems, from both the theoretical and applied viewpoints, is the associated *duality theory *(see also the *Appendix*). For every SDP of the form (7) (usually called the *primal problem*), there is another associated SDP, called the *dual problem*, which can be derived via Lagrangian duality:

$\begin{array}{ll}\mathrm{max}\hfill & \gamma \hfill \\ \text{s}.\text{t}.\hfill & {Q}_{obj}\succcurlyeq \gamma \cdot {e}_{1}\cdot {e}_{1}^{T}+{\displaystyle \sum _{i}{\lambda}_{i}\cdot {Q}_{i}+{e}_{1}\cdot {r}^{T}\cdot L+{L}^{T}\cdot r\cdot {e}_{1}^{T}+{L}^{T}\cdot S\cdot L,}\hfill \end{array}\text{}\left(8\right)$

where the constraint represents a matrix inequality with *r *≥ 0, *S *≥ 0, *S*_{ii }= 0. The dual variables *γ*, *λ*, *r*, *S *are Lagrange multipliers associated with the different constraints in the primal problem. In the case of feasibility problems (i.e., *Q*_{obj }= 0), the dual problem can be used to certify the nonexistence of solutions of the primal. This property will be crucial in our developments.

## Results

### Parameter estimation for nonlinear mass action kinetics at steady state

Identifying a satisfactory equilibrium point of a set of ODEs requires the solution of a system of algebraic equations (those that define the steady-state conditions) subject to additional equality or inequality constraints (consistency with experimental data). This can be computationally demanding, because the system must often be simulated from a given set of initial conditions until it settles to an equilibrium [11,26]. This is particularly troublesome when the computed solution violates the experimental constraints and must hence be discarded. Traditional heuristic tools for this optimization problem require, for instance, an iterative procedure where the candidate sets of parameters are generated by some kind of wrapping parametrization algorithm (e.g., gradient-based or evolutionary), followed by a consistency check that integrates the system equations for these parameter values [19]. This trial and error method can be very time consuming because consistency can only be checked in a subsequent step. Hence, an algorithm that searches for steady state solutions that are guaranteed to be consistent with all experimental data would be extremely valuable.

### SDP and nonlinear systems of equations

As opposed to these "indirect" techniques, our method is a conceptually novel direct approach based on a *convex relaxation *of the generalized parameter estimation problem at steady state (4). Our techniques apply whenever the model representation is in polynomial form, since in this case the resulting system can be *relaxed *into a semidefinite programming problem (7), which in turn can be solved using efficient interior-point methods [27-30].

We illustrate the application of our approach with the following nonlinear system given by

$\begin{array}{ccc}[A]+[B]& \stackrel{{k}_{1}}{\to}& [A\cdot B]\\ [A\cdot B]& \stackrel{{k}_{2}}{\to}& [A\cdot {B}^{\ast}]\\ [A\cdot {B}^{\ast}]& \stackrel{{k}_{3}}{\to}& [A]+[B].\end{array}\text{}\left(9\right)$

Here, two components *A *and *B *form a complex, *A *• *B*, that transforms into a modified form, *A *• *B**, and finally decays into its building blocks. This system yields a set of four ODEs that at steady state reduces to a polynomial system:

$\begin{array}{ccc}[A]:& -{k}_{1}\cdot [A]\cdot [B]+{k}_{3}\cdot [A\cdot {B}^{\ast}]& =0\\ [B]:& -{k}_{1}\cdot [A]\cdot [B]+{k}_{3}\cdot [A\cdot {B}^{\ast}]& =0\\ [A\cdot B]:& {k}_{1}\cdot [A]\cdot [B]-{k}_{2}\cdot [A\cdot B]& =0\\ [A\cdot {B}^{\ast}]:& {k}_{2}\cdot [A\cdot B]-{k}_{3}\cdot [A\cdot {B}^{\ast}]& =0.\end{array}\text{}\left(10\right)$

If it is furthermore known from experimental data that, at equimolar total concentrations of *A *and *B*, one third of each component is unbound while the rest is associated in one of the two complexes, we can write the mass balances as

$\begin{array}{ccc}[A]=\frac{1}{3},& [B]=\frac{1}{3},& [A\cdot B]+[A\cdot {B}^{\ast}]=\frac{2}{3},\end{array}\text{}\left(11\right)$

which have to be kept within a certain experimental accuracy *ε *= 20%.

The example above is thus a system of nonlinear polynomial relations based on mass action kinetics that comprises both equalities (Eqns. (10)) and inequalities (Eqns. (11)). As explained earlier, we define the state variables

*y*^{T }= ([*A*], [*B*], [*A *• *B*], [*A *• *B**]), (12)

which are then used to produce a new matrix variable *X *according to Eqn. (5). To write the SDP problem, the equality and inequality constraints, Eqns. (10) and Eqns. (11), respectively, have to be rewritten into the form of Eqn. (6). For example, the steady state equation for species *B *becomes

${Q}_{B}=\left(\begin{array}{ccccc}0& 0& 0& 0& 0.5\cdot {k}_{3}\\ 0& 0& -0.5\cdot {k}_{1}& 0& 0\\ 0& -0.5\cdot {k}_{1}& 0& 0& 0\\ 0& 0& 0& 0& 0\\ 0.5\cdot {k}_{3}& 0& 0& 0& 0\end{array}\right),\text{}\left(13\right)$

while the rows of *L *corresponding to the mass balance inequalities for component *A *are

${L}_{A}=\left(\begin{array}{ccccc}-1/3\cdot \left(1-\epsilon \right)& 1& 0& 0& 0\\ 1/3\cdot \left(1+\epsilon \right)& -1& 0& 0& 0\end{array}\right).\text{}\left(14\right)$
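The encoding of Eqns. (13) and (14) can be verified numerically; the following sketch (not the authors' code, variable names are ours) checks that the quadratic form recovers the steady-state rate of species *B *and that the rows of *L*_{A }encode the relaxed bounds on [*A*].

```python
# Numeric check (a sketch, not the authors' code): the quadratic form
# x^T·Q_B·x with Q_B from Eqn. (13) reproduces the steady-state rate of
# species B in Eqn. (10), and the rows L_A from Eqn. (14) encode the
# relaxed bounds (1 - eps)/3 <= [A] <= (1 + eps)/3.

def quad_form(Q, x):
    n = len(x)
    return sum(x[i] * Q[i][j] * x[j] for i in range(n) for j in range(n))

k1, k3, eps = 3.0, 1.0, 0.2
Q_B = [[0.0,       0.0,       0.0,      0.0, 0.5 * k3],
       [0.0,       0.0,      -0.5 * k1, 0.0, 0.0],
       [0.0,      -0.5 * k1,  0.0,      0.0, 0.0],
       [0.0,       0.0,       0.0,      0.0, 0.0],
       [0.5 * k3,  0.0,       0.0,      0.0, 0.0]]

A, B, AB, ABs = 0.3, 0.4, 0.1, 0.2      # arbitrary test concentrations
x = [1.0, A, B, AB, ABs]
assert abs(quad_form(Q_B, x) - (-k1 * A * B + k3 * ABs)) < 1e-12

L_A = [[-1/3 * (1 - eps),  1.0, 0.0, 0.0, 0.0],   # [A] >= (1 - eps)/3
       [ 1/3 * (1 + eps), -1.0, 0.0, 0.0, 0.0]]   # [A] <= (1 + eps)/3
assert all(sum(l * xi for l, xi in zip(row, x)) >= 0 for row in L_A)
```

The off-diagonal entries of *Q*_{B }carry the factor 0.5 because each bilinear term [*A*]·[*B*] appears twice in the symmetric quadratic form.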

The feasible set of the relaxed SDP problem (7) is always convex and includes all the equilibria of the original problem. Thus, SDP optimization makes it possible to directly solve the underlying nonlinear system of algebraic equations, or to prove its inconsistency. Furthermore, this property enables a direct consistency check based on feasibility or infeasibility of the SDP optimization. Typically, a parameterization approach would require identification of a stable steady state, either by root-finding or numerical integration, at which point the simulation can be verified or falsified for a given set of parameters *k*. In contrast, the SDP-based approach bypasses this time consuming step: if the problem (7) is infeasible, the parameters *k *are inconsistent with the experimental data, and if a feasible solution of rank one can be found, the given parameters are valid.

For our running example, consider the set of parameters *k*_{1 }= 3, *k*_{2 }= 1 and *k*_{3 }= 1, for which the SDP is feasible. Let the solution be

$X=1/9\cdot \left(\begin{array}{ccccc}9& 3& 3& 3& 3\\ 3& 1& 1& 1& 1\\ 3& 1& 1& 1& 1\\ 3& 1& 1& 1& 1\\ 3& 1& 1& 1& 1\end{array}\right),\text{}\left(15\right)$

which is rank one. Using the largest eigenvalue of *X *to scale the corresponding eigenvector yields

${x}_{sol}^{T}$ = (1,1/3,1/3,1/3,1/3) and ${y}_{sol}^{T}$ = (1/3,1/3,1/3,1/3).

A simple check shows that ${y}_{sol}^{T}$ directly fulfills the steady state condition and all additional constraints.
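This "simple check" can be spelled out explicitly; the sketch below (our verification code, not the authors') substitutes the recovered solution into the steady-state equations (10) and the relaxed mass balances (11).

```python
# Consistency check (a sketch of the verification, not the authors' code):
# y_sol = (1/3, 1/3, 1/3, 1/3) satisfies the steady-state equations (10)
# for k1 = 3, k2 = k3 = 1 and the mass balances (11) within eps = 0.2.

k1, k2, k3, eps = 3.0, 1.0, 1.0, 0.2
A = B = AB = ABs = 1/3

residuals = [
    -k1 * A * B + k3 * ABs,   # d[A]/dt at steady state
    -k1 * A * B + k3 * ABs,   # d[B]/dt
     k1 * A * B - k2 * AB,    # d[A·B]/dt
     k2 * AB - k3 * ABs,      # d[A·B*]/dt
]
assert all(abs(r) < 1e-12 for r in residuals)

# Mass balances of Eqn. (11), relaxed by the 20% experimental tolerance:
for value, target in [(A, 1/3), (B, 1/3), (AB + ABs, 2/3)]:
    assert (1 - eps) * target <= value <= (1 + eps) * target
```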

### Additional nonlinear constraints

The decision variable *X *in Eqn. (5) is by construction a rank one matrix. This property of *X*, however, is not necessarily guaranteed when solving the SDP relaxation (7) (including this constraint would destroy convexity). Hence, it has to be verified independently once the optimization is completed. In our numerical experience, only a very limited number of solutions fails the rank one condition, and the corresponding results have to be discarded. A simple but effective combination of constraints, however, makes it possible to reduce the number of these "false positive" solutions: since most of the inequality constraints are linear (e.g., mass balance equations), they can be used to tighten the description of the feasible set by adding redundant constraints. By exhaustive multiplication of *m *pairs of linear inequality constraints, i.e., each upper bound is multiplied with every possible lower bound, $\left(\begin{array}{c}m\\ 2\end{array}\right)$ new quadratic inequalities are generated. This multiplication of constraints corresponds to the term *L*·*X*·*L*^{T }appearing in (7). Note that even the product of a lower and an upper bound of the same constraint can be helpful, because the matrix generated in this way has nonzero diagonal (i.e., quadratic) elements. These additional nonlinear algebraic constraints improve the performance of the algorithm significantly, because the set of feasible solutions that do not meet the rank one condition becomes much smaller.
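The construction of these redundant quadratic constraints can be sketched as follows (helper names are ours, not from the paper): each product of two valid linear inequalities is itself a valid quadratic inequality.

```python
from itertools import combinations

# Sketch (hypothetical helper names, not the authors' code): multiplying pairs
# of linear inequalities l·x >= 0 and u·x >= 0 yields redundant quadratic
# inequalities (l·x)(u·x) >= 0, which after lifting become the entrywise
# condition L·X·L^T >= 0 used to tighten the relaxation.

def outer(u, v):
    return [[ui * vj for vj in v] for ui in u]

def pairwise_products(rows):
    """One matrix per product: each row squared, plus every unordered pair."""
    quads = [outer(r, r) for r in rows]
    quads += [outer(l, u) for l, u in combinations(rows, 2)]
    return quads

def qform(Q, x):
    n = len(x)
    return sum(x[i] * Q[i][j] * x[j] for i in range(n) for j in range(n))

eps = 0.2
rows = [[-1/3 * (1 - eps), 1, 0],    # [A] >= (1 - eps)/3
        [ 1/3 * (1 + eps), -1, 0]]   # [A] <= (1 + eps)/3
quads = pairwise_products(rows)

# For any x that satisfies the linear inequalities, all products are >= 0.
x = [1.0, 0.3, 0.0]
assert all(qform(Q, x) >= 0 for Q in quads)
```

The product of the lower and upper bound on the same quantity is the case singled out in the text: its matrix has nonzero diagonal entries, so it constrains the quadratic elements of *X*.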

### Analyzing whole regions of parameter space

The approach presented in the previous paragraphs can directly handle nonlinear algebraic equations in polynomial form. However, it considers only *single *sets of parameters, which have to be provided by an external parameter estimation algorithm. Since in general the parameter space can be quite large, it would be very helpful to be able to discard *a priori *large regions of the space where we know for sure that no consistent rates can be found. As we will see, we can achieve this goal in a very efficient way by considering the dual optimization problem (8) presented earlier.

As in general convex optimization, infeasibility of the primal problem can be proven by the existence of a dual feasible solution, and conversely, dual infeasibility follows from a primal solution. Thus, the existence of dual variables satisfying the dual constraints in (8) directly proves primal infeasibility. This criterion yields an efficient procedure for the analysis of the parameter space, in which large regions can be detected where the given model is inconsistent with the experimental data. The feasible region will usually represent a small fraction of the overall parameter space. Therefore, an efficient search of the parameter space will focus on negative (inconsistent) rather than positive (consistent) regions.

The use of SDP-based parametrizations provides exactly this information. Once dual feasibility has been proven for a given set of parameters, this point in the solution space is associated with specific values of the dual variables. This information can then be used to extend the feasibility check to larger regions in the parameter space. Recall that only the matrices *Q*_{i}, i.e., the mass action kinetics, actually depend on the rate parameters *k*. We thus use a *bisection *approach in which the current parameter set is an *n*-dimensional box. Since this is a polytope, it follows that if the conditions are valid on the corners then they are also valid on the interior, which can thus be neglected in further analysis. In particular, we determine a factor *η *by which each parameter can be perturbed such that dual feasibility holds. We remark that the choice of cuboid-like regions is only a matter of convenience; the results extend in a direct fashion to any other polyhedral partition scheme (for instance, using simplices).
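The corner-checking idea can be sketched in a few lines. In the paper the test at each corner solves the dual SDP (8); below that test is replaced by a stand-in predicate, so the code illustrates only the box logic, not the actual certificate computation.

```python
from itertools import product

# Illustrative sketch of the box-classification idea. In the paper the test at
# each corner solves the dual SDP (8); here `dual_feasible_at` is a stand-in
# predicate supplied by the caller (a hypothetical oracle, not a real solver).
# If a dual infeasibility certificate is valid at every corner of an
# axis-aligned box of parameters, the whole box can be discarded.

def grow_box(center, eta):
    """Perturb each nominal parameter by a factor eta in both directions."""
    return [(k * (1 - eta), k * (1 + eta)) for k in center]

def box_certified_infeasible(bounds, dual_feasible_at):
    """bounds: one (lo, hi) pair per parameter; checks all 2^n corners."""
    return all(dual_feasible_at(corner) for corner in product(*bounds))

# Toy stand-in oracle: pretend the model is infeasible whenever k1 > 2*k3
# (a condition linear in k, so corner validity extends to the whole box).
toy_certificate = lambda k: k[0] > 2 * k[1]

box = grow_box([10.0, 1.0], eta=0.5)          # k1 in [5, 15], k3 in [0.5, 1.5]
assert box_certified_infeasible(box, toy_certificate)
```

The corner argument works precisely because the dual constraints in (8) are affine in *k*; for a nonlinear predicate, checking corners alone would not suffice.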

A formal proof of the guaranteed inconsistency of whole regions is provided in the *Appendix*. The pseudo-code of the complete algorithm for an efficient classification of the parameter space into feasible and infeasible regions is shown in Table 1. The procedure extends the infeasibility information from an isolated set of parameters to larger regions of parameter space by using SDP optimization. All parameter sets which lie in these extended regions need not be considered any longer for feasibility purposes. Moreover, in contrast to common grid-like discretization approaches, the algorithm allows an efficient and continuous exploration of the parameter space without the danger of missing a feasible solution by accident.

### Efficient classification of the parameter space

The results in Fig. 1 correspond to the example discussed earlier (further examples are shown in the *Appendix*). In the figure, the contour plot representing the feasible and infeasible regions in the 3-dimensional parameter space was obtained by exhaustive evaluation of sets of parameters. Particular solutions obtained with the SDP-based algorithm that prove infeasibility of whole regions are indicated in the graph. The respective regions of feasibility and infeasibility are shown, as well as a few cuboids that illustrate how we can discard the existence of solutions on whole regions of parameter space. Each box is obtained from the solution of a single small SDP optimization problem. It is clear that the SDP-based search in the infeasible region of the parameter space provides a powerful analysis tool, as it allows the efficient exploration of whole areas instead of the time consuming consistency check at single points. The figure also shows how the size of the boxes varies depending on the relative position (Fig. 1). In other words, the allowed perturbation *η *increases with the distance from the feasible region. This is highlighted in Fig. 2, where the distribution of *η *for one slice of the feasible region (in the *k*_{1 }- *k*_{3 }plane) is shown. Finally, since the systems of equations and inequalities are all linear in *k*, the feasible region is invariant under non-negative scalings (i.e., it is a cone), as can be seen in the three-dimensional plot in Fig. 1.

**Graph of the feasible parameter space**. Contour plot of the three-dimensional parameter space of *k*_{1}, *k*_{2 }and *k*_{3 }from the example given in the text. The cone with the black edges marks the feasible region, infeasible areas are illustrated as cuboids with **...**

### Rigorous proofs for model discrimination

The basic algorithm for exploration of the parameter space can also be used to discriminate between different model alternatives, for instance when the system structure is uncertain [14]. This is possible since SDP allows for the exploration of the complete solution space; the failure of parametrization (i.e., the lack of suitable values for the model parameters) then directly proves model inconsistency. Moreover, we avoid the danger of missing possible solutions by accident since we consider whole regions of parameter space instead of isolated points. As an example, consider a branching point in a metabolic pathway (Fig. 3). It is known that all conversion steps are catalyzed by enzymes (*E*_{i}). However, while the reaction scheme for the pathway from *A *to *C *via *B *is well established, it may not be known whether the conversion from *B *to *D *is catalyzed by enzyme *E*_{3 }or whether component *A *exerts some cooperative effect (Fig. 3). To discriminate between these two scenarios, two model alternatives that describe possible reaction schemes can be proposed (Table 2).

**Uncertain model structure**. It is unknown whether *A *exerts a positive influence on the conversion of *B *to *D*. By numerical integration of model alternative II, concentrations were obtained and used in lieu of experimental data. Overall component concentrations, **...**

The consistency of both models is analyzed through defined variations in the incoming carbon flux *v*_{i}: an artificial data set is generated by simulation of model II, which in turn is used for parameter analysis (Figs. 3 and 4). Hence, two steady states can be simultaneously considered for SDP optimization. As expected, parameter estimation for model II identifies the feasible region in the solution space without any problems. In contrast, no feasible set of parameters is found for model I when an experimental error of *ε *= 0.25 is assumed (Fig. 4). This is a direct proof that model I is incorrect and needs to be modified. Only when an intolerably high deviation of *ε *= 0.75 is allowed does the parametrization algorithm find solutions that are consistent with the experimental data (Fig. 4). The basic ideas underlying this small example can be directly applied to real biological problem sets where the structure is unknown. Since the search of the parameter space can be completed in a few steps, SDP provides a powerful tool for model discrimination and can be used for model invalidation and subsequent redesign of experimental set-ups.

## Discussion

We introduced a conceptually novel approach for parameter estimation and parameter space classification based on a direct search for solutions that are simultaneously consistent with the steady-state concentrations and the inequalities arising from experimental data. The method is based on semidefinite optimization, whose convexity properties allow for the direct solution (of relaxations) of nonlinear optimization problems in polynomial form. It is thus possible to establish the possible infeasibility of steady state concentrations for a given set of parameters under direct consideration of experimental constraints. Our approach avoids the time-consuming numerical identification of a stable steady state, where feasibility can only be validated in retrospect [11,26]. Besides the direct consistency analysis for single sets of parameters, the duality properties of SDP optimization problems allow the direct proof of feasibility (or infeasibility) of whole regions in the parameter space instead of the mere consideration of isolated points. This significantly reduces the total number of possibilities that have to be evaluated. Moreover, our approach is based on a simple convex relaxation of the original parametrization problem and can hence easily be applied without time consuming problem reformulation.

The possibility of discarding whole regions of the parameter space from further exploration is extremely valuable for model discrimination, where determination of model consistency or inconsistency with experimental data is the desired goal [14]. Conventional approaches can be very time-intensive due to the trial and error method that includes several integrations per set of parameters. Our algorithm thus provides a valuable tool for model discrimination.

Despite the immediate practical applications of our method, further work is necessary to fully exploit the potential of the algorithm presented. Since the problem size increases with the number of variables of the system, the corresponding optimization problems will inevitably result in large matrices. Current versions of SDP solvers [27-30], however, slow down considerably when larger problem sets with more than about 30 state variables are analyzed. Stiffness of the underlying system of equations is another difficulty, which remains unsolved to date. Hence, problems with large differences in the kinetic parameters, e.g., when Michaelis-Menten kinetics are transformed to mass action kinetics, can probably not be solved without sophisticated preprocessing. An adequate way of scaling therefore needs to be developed. Additionally, the efficiency of the wrapping parametrization algorithm itself could also be improved, since currently there is no control-level algorithm that supervises the search direction in the parameter space. Valuable information, however, is available, since the shape of the boxes indicates the maximal possible perturbation *η*, and this could be used to determine the direction of the next step (Fig. 2). A promising approach would thus be a method that simultaneously uses primal and dual information.

## Conclusion

In conclusion, we believe the results shown here are an important first step towards the integration of SDP as a tool to solve and analyze polynomial systems in chemical and biochemical engineering. Since our SDP-based algorithm increases the efficiency of the pivotal steps in parameter estimation, it has great potential for the identification of the nonlinear systems that prevail in biology.

## Competing interests

The authors declare that they have no competing interests.

## Authors' contributions

LK and PAP designed the study. LK conceived the modeling background. PAP derived and supervised the SDP part of the work. US participated in the design and coordinated the project. All authors read and approved the final manuscript.

## Appendix

### Convex relaxation

Consider a set of quadratic equations and linear inequalities for the vector *x *as defined in (5).

*x*^{T }·*Q*_{i}·*x *= 0, *L*·*x *≥ *0*. (16)

Defining *X *= *x*·*x*^{T}, we find that these equations can be rewritten as *affine *expressions in the matrix *X*. A semidefinite relaxation of Eqns. (16) is then obtained by replacing the nonconvex constraint *X *= *x*·*x*^{T }with the weaker (but convex) alternative:

*X*_{11 }= 1, *X *≽ 0,

yielding the system

Tr(*Q*_{i}·*X*) = 0, *L*·*X*·*e*_{1 }≥ 0, (17)

with *X*_{11 }= 1, *X *≽ 0.

In general, the original formulation (Eqns. (16)) and the relaxed one (Eqns. (17)) are equivalent only if the matrix *X *has rank one. However, since such a rank constraint is nonconvex, it cannot be directly included in the SDP optimization. The relaxation therefore enlarges the set of solutions, and as a consequence the rank condition must be checked independently after the optimization is completed. Notice also that the formulation can be strengthened by adding the redundant constraints *L*·*X*·*L*^{T }≥ 0.
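
The lifting step can be checked numerically on a toy instance; the vector *x *and the matrices *Q *and *L *below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Toy instance: one quadratic equation x^T Q x = 0 and sign
# constraints L x >= 0, with the convention x_1 = 1.
x = np.array([1.0, 2.0, 3.0])
Q = np.array([[ 6.0, 0.0, -2.0],
              [ 0.0, 1.5,  0.0],
              [-2.0, 0.0,  0.0]])     # symmetric; x^T Q x = 6 + 6 - 12 = 0
L = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])       # encodes x_2 >= 0, x_3 >= 0

# The lifted variable X = x x^T turns both constraints into affine ones.
X = np.outer(x, x)
e1 = np.array([1.0, 0.0, 0.0])

assert np.isclose(np.trace(Q @ X), x @ Q @ x)   # Tr(Q X) = x^T Q x = 0
assert np.allclose(L @ X @ e1, L @ x)           # L X e1 reproduces L x
assert np.isclose(X[0, 0], 1.0)                 # X_11 = 1 since x_1 = 1
assert np.all(np.linalg.eigvalsh(X) >= -1e-9)   # X is PSD (rank one here)
```

The relaxation then forgets how *X *was built and keeps only the affine constraints together with *X *≽ 0, which is why a rank check is needed afterwards.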

### Weak Duality in SDP problems

A typical SDP problem in primal form is

min Tr(*C*·*X*)

s.t. Tr(*A*_{i}·*X*) = *b*_{i}, *i *= 1, ..., *m * (18)

*X *≽ 0.

The associated dual problem can be stated as

$\begin{array}{cl}\max & b^{T}z\\ \text{s.t.} & \sum_{i=1}^{m} A_{i} z_{i} \preccurlyeq C,\end{array} \quad (19)$

where *b *= (*b*_{1},...,*b*_{m}), and the vector **z **= (*z*_{1},...,*z*_{m}) contains the dual decision variables.

The key relationship between the primal and the dual problem is the fact that feasible solutions of one can be used to bound the values of the other problem. Indeed, let *X *and **z **be any two feasible solutions of the primal and dual problems respectively. Then we have the following inequality:

Tr(*C*·*X*) ≥ *b*^{T }**z**. (20)

This property is known as *weak duality*. Thus, we can use any feasible *X *to compute an upper bound for the optimum of *b*^{T }**z**, and we can also use any feasible **z **to compute a lower bound for the optimum of Tr(*C*·*X*).
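
Weak duality is easy to verify numerically. The following sketch uses a hand-picked 2×2 instance (all data illustrative, not from the paper): any primal-feasible *X *yields an objective value no smaller than that of any dual-feasible **z**:

```python
import numpy as np

# Illustrative instance: min Tr(C X) s.t. Tr(A1 X) = b1, X >= 0.
C  = np.diag([1.0, 3.0])
A1 = np.eye(2)
b1 = 2.0

# A (suboptimal) primal-feasible point and a dual-feasible point.
X  = np.eye(2)     # Tr(A1 X) = 2 = b1, and X is PSD
z1 = 1.0           # C - z1*A1 = diag(0, 2) is PSD, so z is dual feasible

assert np.isclose(np.trace(A1 @ X), b1)                   # primal feasibility
assert np.all(np.linalg.eigvalsh(C - z1 * A1) >= -1e-9)   # dual feasibility
# Weak duality: primal value (4) upper-bounds the dual value (2).
assert np.trace(C @ X) >= b1 * z1
```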

### Proof for regions of inconsistency

If a feasible dual solution can be found, Eqn. (8) is satisfied. As noted earlier, the matrices *Q*_{i }depend on the unknown rates *k *in an affine way. Thus, we want to disprove the consistency not just of one particular fixed value of the rate constants, but to *simultaneously *discard whole *sets *of rates. In other words, we want to find solutions (*γ*, *λ*_{i}, *r*, *S*) of the dual form (8) that work for all rate constants *k *in a given set.

Notice that the dependence of the matrices *Q*_{i }on the rate constants *k *is affine. Assume the nominal value and deviation of rate constant *j *is *k*_{j0 }± Δ*k*_{j}. Then, we can write

$Q_{i}(k) = Q_{i0} + \sum_{j} Q_{ij}\, \delta_{j}\, \Delta k_{j},$

where *δ*_{j }:= (*k*_{j }- *k*_{j0})/Δ*k*_{j}, and |*δ*_{j}| ≤ 1. To guarantee that the dual form (8) holds for all allowable values of the rate constants, we can use the following result:

**Lemma 1 ***Consider the linear matrix inequality given by:*

$A_{0} + \sum_{i=1}^{n} \delta_{i} A_{i} \succcurlyeq 0. \quad (21)$

If there exist η, W_{i}, such that

$A_{0} = \sum_{i=1}^{n} W_{i}, \qquad \begin{cases} W_{i} + \eta\, A_{i} \succcurlyeq 0\\ W_{i} - \eta\, A_{i} \succcurlyeq 0\end{cases} \quad (22)$

*for i *= 1,..., *n, then Eqn. (21) holds for all δ such that *|*δ*_{i}| ≤ *η*.

The lemma follows easily from the identity:

$A_{0} + \sum_{i=1}^{n} \delta_{i} A_{i} = \frac{1}{2\eta} \sum_{i=1}^{n} \Big[ (\eta + \delta_{i})(W_{i} + \eta A_{i}) + (\eta - \delta_{i})(W_{i} - \eta A_{i}) \Big].$

Since the expressions in (22) are affine in the unknowns *η*, *W*_{i}, we can hence directly use this lemma to obtain guaranteed regions of inconsistency.
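
The certificate of Lemma 1 can be validated numerically. The sketch below uses hand-picked matrices (illustrative, not from the paper): it first checks the conditions (22) and then confirms the conclusion (21) on a grid of admissible perturbations:

```python
import numpy as np

# Illustrative data: certify A0 + d1*A1 + d2*A2 >= 0 for all |d_i| <= eta
# via the splitting A0 = W1 + W2 with W_i +/- eta*A_i >= 0.
A0 = 2.0 * np.eye(2)
A1 = np.diag([1.0, -1.0])
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])
eta = 0.8
W1 = W2 = np.eye(2)

def psd(M):
    return np.all(np.linalg.eigvalsh(M) >= -1e-9)

# Certificate conditions (22)
assert np.allclose(W1 + W2, A0)
assert all(psd(W + s * eta * A) for W, A in [(W1, A1), (W2, A2)] for s in (1, -1))
# Conclusion (21) holds on a grid of perturbations with |d_i| <= eta
for d1 in np.linspace(-eta, eta, 9):
    for d2 in np.linspace(-eta, eta, 9):
        assert psd(A0 + d1 * A1 + d2 * A2)
```

Since the certificate is checked once but covers a continuum of perturbations, a single SDP solve can discard an entire box in parameter space.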

### Further case studies

Additional example 1:

Consider a simple two-element linear reaction, where component *A *is converted into component *B *and vice versa:

$\begin{array}{ccc}[A]& \underset{{k}_{2}}{\overset{{k}_{1}}{\rightleftarrows}}& [B].\end{array}$

We assume unimolar concentrations of species *A *and *B*, each kept within an accuracy of 25%. The two-dimensional contour plot of feasible and infeasible regions is shown in Fig. 5.

**Contour plot for additional example 1**. The feasible and infeasible regions are shown in white and black, respectively. Starting from an initial set of parameters (black cross), whole areas can be proven to be inconsistent (gray).
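
For this linear example the feasible region can be characterized in closed form: at steady state *k*_{1}·[*A*] = *k*_{2}·[*B*], so a rate pair is consistent exactly when the ratio *k*_{1}/*k*_{2 }can be matched by admissible concentrations. A minimal sketch (the helper `feasible` is ours, not from the paper):

```python
# At steady state k1*[A] = k2*[B], so (k1, k2) is consistent iff some
# [A], [B] in [0.75, 1.25] satisfy [B]/[A] = k1/k2, i.e. iff the ratio
# k1/k2 lies in [0.75/1.25, 1.25/0.75] = [0.6, 5/3].
def feasible(k1, k2, eps=0.25):
    lo, hi = 1.0 - eps, 1.0 + eps
    return lo / hi <= k1 / k2 <= hi / lo

assert feasible(1.0, 1.0)        # balanced rates are consistent
assert feasible(1.5, 1.0)        # ratio 1.5 is below 5/3
assert not feasible(2.0, 1.0)    # ratio 2 exceeds 5/3
```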

Additional example 2:

Consider another simple nonlinear system, where *mRNA *binds to a ribosome to form a protein *P*, which decays at a certain rate:

$\begin{array}{lll}[mRNA]+[rib] & \underset{k_{2}}{\overset{k_{1}}{\rightleftarrows}} & [mRNA\cdot rib]\\ [mRNA\cdot rib] & \stackrel{k_{3}}{\to} & [mRNA]+[rib]+[P]\\ [P] & \stackrel{k_{4}}{\to} & \varnothing .\end{array}$

For simplicity, the overall concentrations of *mRNA*, ribosome and *P *are all equal to 1 and have to be kept within an accuracy of *ε *= 25%. The overall parameter space is four-dimensional. Fig. 6 shows the contour plot in the *k*_{3 }- *k*_{4 }plane.
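
As a plausibility check, one consistent point can be constructed by hand; the parameter values below are illustrative and are not taken from the paper's Fig. 6:

```python
import numpy as np

# With k1 = k2 = k3 = 1, pick [mRNA] = [rib] = x such that x + x^2/2 = 1,
# so the totals [mRNA] + [mRNA.rib] and [rib] + [mRNA.rib] equal 1 exactly.
k1 = k2 = k3 = 1.0
x = np.sqrt(3.0) - 1.0            # positive root of x^2 + 2x - 2 = 0
m = r = x
c = k1 * m * r / (k2 + k3)        # complex balance at steady state
p = 1.0
k4 = k3 * c / p                   # protein balance then fixes k4

# Mass-action steady-state residuals vanish ...
assert np.isclose(k1 * m * r - (k2 + k3) * c, 0.0)   # d[mRNA.rib]/dt
assert np.isclose(k3 * c - k4 * p, 0.0)              # d[P]/dt
# ... and all measured totals sit inside the 25% band around 1.
for total in (m + c, r + c, p):
    assert 0.75 <= total <= 1.25
```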

## References

1. Kitano H. Systems biology: a brief overview. Science. 2002;295:1662–1664. doi: 10.1126/science.1069492.
2. Bailey JE. Mathematical modeling and analysis in biochemical engineering: past accomplishments and future opportunities. Biotechnol Prog. 1998;14:8–20. doi: 10.1021/bp9701269.
3. Stelling J, Sauer U, Szallasi Z, Doyle FJ, 3rd, Doyle J. Robustness of cellular functions. Cell. 2004;118:675–685. doi: 10.1016/j.cell.2004.09.008.
4. Edwards JS, Covert M, Palsson B. Metabolic modelling of microbes: the flux-balance approach. Environ Microbiol. 2002;4:133–140. doi: 10.1046/j.1462-2920.2002.00282.x.
5. Kuepfer L, Sauer U, Blank LM. Metabolic functions of duplicate genes in *Saccharomyces cerevisiae*. Genome Res. 2005;15:1421–1430. doi: 10.1101/gr.3992505.
6. Varner JD. Large-scale prediction of phenotype: concept. Biotechnol Bioeng. 2000;69:664–678. doi: 10.1002/1097-0290(20000920)69:6<664::AID-BIT11>3.0.CO;2-H.
7. Lee B, Yen J, Yang L, Liao JC. Incorporating qualitative knowledge in enzyme kinetic models using fuzzy logic. Biotechnol Bioeng. 1999;62:722–729. doi: 10.1002/(SICI)1097-0290(19990320)62:6<722::AID-BIT11>3.0.CO;2-U.
8. Bailey J, Ollis D. Biochemical Engineering Fundamentals. McGraw-Hill Chemical Engineering Series; 1986.
9. Heijnen JJ. Approximate kinetic formats used in metabolic network modeling. Biotechnol Bioeng. 2005;91:534–545. doi: 10.1002/bit.20558.
10. Schoeberl B, Eichler-Jonsson C, Gilles ED, Muller G. Computational modeling of the dynamics of the MAP kinase cascade activated by surface and internalized EGF receptors. Nat Biotechnol. 2002;20:370–375. doi: 10.1038/nbt0402-370.
11. Mishra J, Bhalla US. Simulations of inositol phosphate metabolism and its interaction with InsP(3)-mediated calcium release. Biophys J. 2002;83:1298–1316.
12. Stelling J, Gilles ED, Doyle FJ, 3rd. Robustness properties of circadian clock architectures. Proc Natl Acad Sci U S A. 2004;101:13210–13215. doi: 10.1073/pnas.0401463101.
13. Smith AE, Slepchenko BM, Schaff JC, Loew LM, Macara IG. Systems analysis of Ran transport. Science. 2002;295:488–491. doi: 10.1126/science.1064732.
14. Haunschild MD, Freisleben B, Takors R, Wiechert W. Investigating the dynamic behavior of biochemical networks using model families. Bioinformatics. 2005;21:1617–1625. doi: 10.1093/bioinformatics/bti225.
15. Barkai N, Leibler S. Robustness in simple biochemical networks. Nature. 1997;387:913–917. doi: 10.1038/43199.
16. Bentele M, Lavrik I, Ulrich M, Stosser S, Heermann DW, Kalthoff H, Krammer PH, Eils R. Mathematical modeling reveals threshold mechanism in CD95-induced apoptosis. J Cell Biol. 2004;166:839–851. doi: 10.1083/jcb.200404158.
17. Voit EO, Almeida J. Decoupling dynamical systems for pathway identification from metabolic profiles. Bioinformatics. 2004;20:1670–1681. doi: 10.1093/bioinformatics/bth140.
18. Polisetty PK, Voit EO, Gatzke EP. Identification of metabolic system parameters using global optimization methods. Theor Biol Med Model. 2006;3.
19. Moles CG, Mendes P, Banga JR. Parameter estimation in biochemical pathways: a comparison of global optimization methods. Genome Res. 2003;13:2467–2474. doi: 10.1101/gr.1262503.
20. Vandenberghe L, Boyd S. Semidefinite programming. SIAM Rev. 1996;38:49–95. doi: 10.1137/1038003.
21. Flaherty P, Jordan MI, Arkin AP. Robust design of biological experiments. Proceedings of the Neural Information Processing Symposium 2005.
22. Tau JF, Fazel M, Liu X, Otitoju T, Papachristodoulou A, Prajna S, Doyle J. Application of robust model validation using SOSTOOLS to the study of G-protein signaling in yeast. Proceedings of FOSBE (Foundations of Systems Biology and Engineering) 2005.
23. Todd MJ. Semidefinite optimization. Acta Numerica. 2001;10:515–560.
24. Boyd S, Vandenberghe L. Convex Optimization. Cambridge University Press; 2004.
25. Parrilo PA. Semidefinite programming relaxations for semialgebraic problems. Math Prog. 2003;96:293–320. doi: 10.1007/s10107-003-0387-5.
26. Kremling A, Fischer S, Gadkar K, Doyle FJ, Sauter T, Bullinger E, Allgower E, Gilles ED. A benchmark for methods in reverse engineering and model discrimination: problem formulation and solutions. Genome Res. 2004;14:1773–1785. doi: 10.1101/gr.1226004.
27. Lofberg J. YALMIP: a toolbox for modeling and optimization in MATLAB. Proceedings of the CACSD Conference. 2004.
28. Sturm JF. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software (special issue on interior point methods). 1999;11/12:625–653.
29. YALMIP. http://control.ee.ethz.ch/~joloef/yalmip.php
30. SeDuMi. http://sedumi.mcmaster.ca
