Fold / Hopf Continuation

In this page, we explain how to perform continuation of Fold / Hopf points and detect the associated bifurcations.

For this to work best, it is advised to provide an analytical expression for the jacobian, although BifurcationProblem computes it with automatic differentiation (AD) by default. See the tutorial Temperature model for more details.
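As an illustration, here is a minimal sketch of how an analytical jacobian can be passed through the J keyword of BifurcationProblem. The vector field, the parameter names μ, k and the initial guess are made up for the example; if J is omitted, the jacobian is generated with AD.

using BifurcationKit

# hypothetical two-dimensional vector field with parameters μ and k
F(x, p) = [p.μ * x[1] - x[1]^3 + p.k * x[2],
           p.k * x[1] - x[2]]

# analytical jacobian; without it, BifurcationProblem falls back to automatic differentiation
J(x, p) = [(p.μ - 3 * x[1]^2)  p.k;
            p.k               (-1.0)]

prob = BifurcationProblem(F, zeros(2), (μ = -0.1, k = 0.5), (@optic _.μ); J = J)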

A quite complete example for detection of codim 2 bifurcations of equilibria is Extended Lorenz-84 model (codim 2 + BT/ZH aBS).

List of detected codim 2 bifurcation points

Bifurcation        symbol used
Bogdanov-Takens    bt
Bautin             gh
Cusp               cusp
Zero-Hopf          zh
Hopf-Hopf          hh

In a nutshell, all you have to do (see below) is to call continuation(br, ind_bif, lens2) to continue the bifurcation point stored in br.specialpoint[ind_bif] and set proper options.
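For example, with hypothetical names (br a previously computed branch, 2 the index of the bifurcation point in br.specialpoint and β the second parameter):

# continue the second special point of `br` in the additional parameter β
branch_codim2 = continuation(br, 2, (@optic _.β))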

Fold continuation (theory)

The continuation of Fold bifurcation points is based on a Minimally Augmented[Govaerts] formulation, which is an efficient way to detect singularities. It relies on the following system

\[G(u,p) = (F(u,p), \sigma(u,p))\in\mathbb R^{n+1}\quad\quad (F_f)\]

where the test function $\sigma$ is solution of

\[\left[\begin{array}{cc} dF(u,p) & w \\ v^{\top} & 0 \end{array}\right]\left[\begin{array}{c} r \\ \sigma(u,p) \end{array}\right]=\left[\begin{array}{c}0_{n} \\1\end{array}\right]\quad\quad (M_f)\]

where $w,v$ are chosen in order to have a non-singular matrix $(M_f)$. More precisely, $v$ (resp. $w$) should be close to a null vector of $dF(u,p)$ (resp. $dF(u,p)^{\top}$). During continuation, the vectors $w,v$ are updated so that the matrix $(M_f)$ remains non-singular; this is controlled with the argument update_minaug_every_step (see below).

Note that there are very simplified calls for this, see Newton refinement below. In particular, you do not need to set up the Fold Minimally Augmented problem yourself; this is done in the background.

Linear Method

You can pass the bordered linear solver to solve $(M_f)$ using the option bdlinsolver (see below). Note that the choice bdlinsolver = BorderingBLS() can lead to singular systems. Indeed, in this case, $(M_f)$ is solved by inverting $dF(u,p)$, which is singular at Fold points.
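For small (ODE-sized) problems, a simple workaround is to use a matrix-based bordered solver; below is a sketch with illustrative names (br is a branch of equilibria and ind_fold the index of a Fold point in it):

# solve (M_f) with a dense bordered solver instead of BorderingBLS,
# thereby avoiding the inversion of the singular jacobian dF(u,p) at the Fold point
branch_fold = continuation(br, ind_fold, (@optic _.β); bdlinsolver = MatrixBLS())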

Detection of codim 2 bifurcation points

You can detect the following codim 2 bifurcation points by using the option detect_codim2_bifurcation in the method continuation. Under the hood, the detection of these bifurcations is done by using Event detection as explained in Event Handling.

  • the detection of Cusp (Cusp) is done by detecting Fold points of the curve of Folds itself, i.e. by monitoring the parameter component of the tangent.
  • the detection of Bogdanov-Takens (BT) is performed using the test function[Bindel] $\psi_{BT}(p) = \langle w(p),v(p)\rangle$
  • the detection of Zero-Hopf (ZH) is performed by monitoring the number of eigenvalues $\lambda$ such that $\Re\lambda > \min\limits_{\nu\in\Sigma(dF)}|\Re\nu|$ and $\Im\lambda > \epsilon$ where $\epsilon$ is the Newton tolerance.
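A hedged sketch of such a computation, assuming br contains a Fold point at index ind_fold and β is the second parameter:

sn_codim2 = continuation(br, ind_fold, (@optic _.β);
	detect_codim2_bifurcation = 2,  # detect Cusp / BT / ZH along the Fold curve
	compute_eigen_elements = true,  # eigen-elements are required for ZH detection
	update_minaug_every_step = 1)
# detected codim 2 points are recorded in sn_codim2.specialpoint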

Hopf continuation (theory)

The continuation of Hopf bifurcation points is based on a Minimally Augmented (see [Govaerts] p. 87) formulation, which is an efficient way to detect singularities. It relies on the following system

\[G(u,\omega,p) = (F(u,p), \Re\sigma(u,\omega,p), \Im\sigma(u,\omega,p))\in\mathbb R^{n+2}\quad\quad (F_h)\]

where the test function $\sigma$ is solution of

\[\left[\begin{array}{cc} dF(u,p)-i\omega I_n & w \\ v^{\top} & 0 \end{array}\right]\left[\begin{array}{c} r \\ \sigma(u,\omega,p) \end{array}\right]=\left[\begin{array}{c} 0_{n} \\ 1 \end{array}\right]\quad\quad (M_h)\]

where $w,v$ are chosen in order to have a non-singular matrix $(M_h)$. More precisely, $w$ (resp. $v$) should be a left (resp. right) approximate null vector of $dF(u,p)-i\omega I_n$. During continuation, the vectors $w,v$ are updated so that the matrix $(M_h)$ remains non-singular; this is controlled with the argument update_minaug_every_step (see below).

Note that there are very simplified calls for this, see Newton refinement below. In particular, you do not need to set up the Hopf Minimally Augmented problem yourself; this is done in the background.

Linear Method

You can pass the bordered linear solver to solve $(M_h)$ using the option bdlinsolver (see below). Note that the choice bdlinsolver = BorderingBLS() can lead to singular systems. Indeed, in this case, $(M_h)$ is solved by inverting $dF(u,p)-i\omega I_n$, which is singular at Hopf points.

Detection of codim 2 bifurcation points

You can detect the following codim 2 bifurcation points by using the option detect_codim2_bifurcation in the method continuation. Under the hood, the detection of these bifurcations is done by using Event detection as explained in Event Handling.

  • the detection of Bogdanov-Takens (BT) is performed using the test function[Bindel],[Blank] $\psi_{BT}(p) = \langle w(p),v(p)\rangle$
  • the detection of Bautin (GH) is based on the test function $\psi_{GH}(p) = \Re(l_1(p))$ where $l_1$ is the Lyapunov coefficient defined in Simple Hopf point.
  • the detection of Zero-Hopf (ZH) is performed by monitoring the eigenvalues.
  • the detection of Hopf-Hopf (HH) is performed by monitoring the eigenvalues.

The continuation of Hopf points is stopped at BT and when $\omega<100\epsilon$ where $\epsilon$ is the Newton tolerance.
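A sketch mirroring the Fold case above, with illustrative names (ind_hopf is the index of a Hopf point in br.specialpoint). Note that the Bautin test function relies on higher-order derivatives of $F$, which BifurcationProblem provides with AD by default.

hp_codim2 = continuation(br, ind_hopf, (@optic _.β);
	detect_codim2_bifurcation = 2,  # detect BT / GH / ZH / HH along the Hopf curve
	compute_eigen_elements = true,  # eigen-elements are required for ZH / HH detection
	update_minaug_every_step = 1)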

Setting the jacobian

In order to apply the Newton algorithm to $F_f$ or $F_h$, one needs to invert their jacobian. This is not completely trivial, as one must compute (or apply) this jacobian and then invert it. You can select among the following options for this jacobian (see below):

  • jacobian_ma = :autodiff [Default], automatic differentiation is applied to $F_f$ (or $F_h$) and the resulting matrix is inverted using the provided linear solver. In particular, the jacobian is formed. This is very well suited for small dimensions (say < 100).
  • jacobian_ma = :minaug, a specific procedure for evaluating the jacobian of $F_f$ (or $F_h$) and inverting it (without forming the jacobian!) is used. This is well suited for large dimensions and can be used with matrix-free (e.g. iterative) solvers.
  • jacobian_ma = :MinAugMatrixBased, the jacobian matrix is evaluated using analytical formulas. This makes it possible, for example, to form a sparse matrix when the underlying problem has a sparse jacobian.
  • jacobian_ma = :finiteDifferencesMF, the jacobian is applied in a matrix-free fashion using finite differences. Mainly for debugging purposes.
  • jacobian_ma = :finiteDifferences, the jacobian matrix is evaluated using finite differences.
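For instance, for a large problem one might combine the matrix-free option with an iterative bordered solver. This is only a sketch with assumed names; GMRESKrylovKit is one of the iterative linear solvers shipped with BifurcationKit.

branch_fold = continuation(br, ind_fold, (@optic _.β);
	jacobian_ma = :minaug,  # never form the jacobian of the augmented functional
	bdlinsolver = BorderingBLS(solver = GMRESKrylovKit(), check_precision = false))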

Newton refinement

Once a Fold / Hopf point has been detected after a call to br = continuation(...), it can be refined using newton iterations. Let us say that ind_bif is the index in br.specialpoint of a Fold / Hopf point. This guess can be refined as follows:

outfold = newton(br::AbstractBranchResult, ind_bif::Int;  
	normN = norm, options = br.contparams.newton_options,
	bdlinsolver = BorderingBLS(options.linsolver),
	start_with_eigen = false, kwargs...)

For the options parameters, we refer to Newton.
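For instance, assuming ind_bif points to a Fold point of br (hypothetical names):

# refine the Fold point guess stored in br.specialpoint[ind_bif]
outfold = newton(br, ind_bif)
# outfold.converged indicates success and outfold.u holds the refined point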

It is important to note that, for improved performance, a function implementing the expression of the hessian should be provided: this is by far the fastest option. By default, BifurcationProblem provides the hessian using AD.

Readers interested in this advanced usage should look at the code example/chan.jl of the tutorial Temperature model.
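As an illustration, and reusing the hypothetical F, J and parameters from the sketch at the top of this page, an analytical hessian can be passed through the d2F keyword of BifurcationProblem (by default it is generated with AD):

# bilinear form d²F(x, p)[dx1, dx2]; only the cubic term of the first component contributes
d2F(x, p, dx1, dx2) = [-6 * x[1] * dx1[1] * dx2[1], 0.0]

prob = BifurcationProblem(F, zeros(2), (μ = -0.1, k = 0.5), (@optic _.μ); J = J, d2F = d2F)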

Fold / Hopf continuation

To compute the curve of Fold / Hopf points, one can call continuation with the following options

continuation(br::BifurcationKit.AbstractBranchResult, ind_bif::Int64, lens2, options_cont::ContinuationPar = br.contparams; kwargs...)

where the options are as above, except that there is an additional parameter axis lens2 which is used to locate the bifurcation points.

See Temperature model for an example of use.

Advanced use

Here, we expose the solvers that are used to perform Newton refinement or codim 2 continuation in case the above methods fail, or when finer control over the linear solver options is needed. An example of advanced use is the continuation of Folds of periodic orbits, see Continuation of Fold of periodic orbits.

BifurcationKit.newton_fold — Function (docstring not available in this build)

BifurcationKit.newton_hopf — Function
newton_hopf(
    prob,
    hopfpointguess,
    par,
    eigenvec,
    eigenvec_ad,
    options;
    normN,
    bdlinsolver,
    usehessian,
    kwargs...
)

This function turns an initial guess for a Hopf point into a solution to the Hopf problem based on a Minimally Augmented formulation. The arguments are as follows

  • prob::AbstractBifurcationProblem where p is a set of parameters.
  • hopfpointguess initial guess (x0, p0) for the Hopf point. It should be a BorderedArray as returned by the function HopfPoint.
  • par parameters used for the vector field
  • eigenvec guess for the iω eigenvector
  • eigenvec_ad guess for the -iω eigenvector
  • options::NewtonPar options for the Newton-Krylov algorithm, see NewtonPar.

Optional arguments:

  • normN = norm
  • bdlinsolver bordered linear solver for the constraint equation
  • kwargs keyword arguments to be passed to the regular Newton-Krylov solver

Simplified call:

Simplified call to refine an initial guess for a Hopf point. More precisely, the call is as follows

newton_hopf(br::AbstractBranchResult, ind_hopf::Int; normN = norm, options = br.contparams.newton_options, kwargs...)

The parameters / options are as usual except that you have to pass the branch br from the result of a call to continuation with detection of bifurcations enabled, and ind_hopf is the index of the bifurcation point in br you want to refine. You can pass Newton parameters different from the ones stored in br by using the argument options.

Jacobian transpose

The adjoint of the jacobian J is computed internally when Jᵗ = nothing by using transpose(J) which works fine when J is an AbstractArray. In this case, do not pass the jacobian adjoint like Jᵗ = (x, p) -> transpose(d_xF(x, p)) otherwise the jacobian will be computed twice!

ODE problems

For ODE problems, it is more efficient to use the Matrix based Bordered Linear Solver passing the option bdlinsolver = MatrixBLS().

BifurcationKit.continuation_fold — Function
continuation_fold(
    prob,
    alg,
    foldpointguess,
    par,
    lens1,
    lens2,
    eigenvec,
    eigenvec_ad,
    options_cont;
    update_minaug_every_step,
    normC,
    bdlinsolver,
    bdlinsolver_adjoint,
    jacobian_ma,
    compute_eigen_elements,
    usehessian,
    kind,
    record_from_solution,
    kwargs...
)

Codim 2 continuation of Fold points. This function turns an initial guess for a Fold point into a curve of Fold points based on a Minimally Augmented formulation. The arguments are as follows

  • prob::AbstractBifurcationFunction
  • foldpointguess initial guess (x0, p10) for the Fold point. It should be a BorderedArray as returned by the function foldpoint
  • par set of parameters
  • lens1 parameter axis for parameter 1
  • lens2 parameter axis for parameter 2
  • eigenvec guess for the right null vector
  • eigenvec_ad guess for the left null vector
  • options_cont arguments to be passed to the regular continuation

Optional arguments:

  • jacobian_ma::Symbol = :autodiff, how the linear system of the Fold problem is solved. Can be :autodiff, :finiteDifferencesMF, :finiteDifferences, :minaug
  • bdlinsolver bordered linear solver for the constraint equation with top-left block J. Required in the linear solver for the Minimally Augmented Fold functional. This option can be used to pass a dedicated linear solver for example with specific preconditioner.
  • bdlinsolver_adjoint bordered linear solver for the constraint equation with top-left block J^*. Required in the linear solver for the Minimally Augmented Fold functional. This option can be used to pass a dedicated linear solver for example with specific preconditioner.
  • update_minaug_every_step update the vectors a, b in the Minimally Augmented formulation every update_minaug_every_step steps
  • compute_eigen_elements = false whether to compute eigenelements. If options_cont.detect_event>0, it allows the detection of ZH points.
  • kwargs keyword arguments to be passed to the regular continuation

Simplified call

continuation_fold(br::AbstractBranchResult, ind_fold::Int64, lens2::AllOpticTypes, options_cont::ContinuationPar ; kwargs...)

where the parameters are as above except that you have to pass the branch br from the result of a call to continuation with detection of bifurcations enabled, and ind_fold is the index of the Fold point in br that you want to continue.

Jacobian transpose

The adjoint of the jacobian J is computed internally when Jᵗ = nothing by using transpose(J) which works fine when J is an AbstractArray. In this case, do not pass the jacobian adjoint like Jᵗ = (x, p) -> transpose(d_xF(x, p)) otherwise the jacobian would be computed twice!

ODE problems

For ODE problems, it is more efficient to use the Matrix based Bordered Linear Solver passing the option bdlinsolver = MatrixBLS(). This is the default setting.

Detection of Bogdanov-Takens and Cusp bifurcations

In order to trigger the detection, pass detect_event = 1 or 2 in options_cont.

BifurcationKit.continuation_hopf — Function
continuation_hopf(
    prob_vf,
    alg,
    hopfpointguess,
    par,
    lens1,
    lens2,
    eigenvec,
    eigenvec_ad,
    options_cont;
    update_minaug_every_step,
    normC,
    linsolve_adjoint,
    bdlinsolver,
    bdlinsolver_adjoint,
    jacobian_ma,
    compute_eigen_elements,
    usehessian,
    kind,
    massmatrix,
    record_from_solution,
    kwargs...
)

Codim 2 continuation of Hopf points. This function turns an initial guess for a Hopf point into a curve of Hopf points based on a Minimally Augmented formulation. The arguments are as follows

  • prob::AbstractBifurcationProblem
  • hopfpointguess initial guess (x0, p10) for the Hopf point. It should be a Vector or a BorderedArray
  • par set of parameters
  • lens1 parameter axis for parameter 1
  • lens2 parameter axis for parameter 2
  • eigenvec guess for the iω eigenvector at p1_0
  • eigenvec_ad guess for the -iω eigenvector at p1_0
  • options_cont keyword arguments to be passed to the regular continuation

Optional arguments:

  • jacobian_ma::Symbol = :autodiff, how the linear system of the Hopf problem is solved. Can be :autodiff, :finiteDifferencesMF, :finiteDifferences, :minaug
  • linsolve_adjoint solver for (J+iω)^* ⋅sol = rhs
  • bdlinsolver bordered linear solver for the constraint equation with top-left block (J-iω). Required in the linear solver for the Minimally Augmented Hopf functional. This option can be used to pass a dedicated linear solver for example with specific preconditioner.
  • bdlinsolver_adjoint bordered linear solver for the constraint equation with top-left block (J-iω)^*. Required in the linear solver for the Minimally Augmented Hopf functional. This option can be used to pass a dedicated linear solver for example with specific preconditioner.
  • update_minaug_every_step update the vectors a, b in the Minimally Augmented formulation every update_minaug_every_step steps
  • compute_eigen_elements = false whether to compute eigenelements. If options_cont.detect_event>0, it allows the detection of ZH, HH points.
  • kwargs keyword arguments to be passed to the regular continuation

Simplified call:

continuation_hopf(br::AbstractBranchResult, ind_hopf::Int, lens2::AllOpticTypes, options_cont::ContinuationPar ;  kwargs...)

where the parameters are as above except that you have to pass the branch br from the result of a call to continuation with detection of bifurcations enabled, and ind_hopf is the index of the Hopf point in br that you want to refine.

ODE problems

For ODE problems, it is more efficient to use the Matrix based Bordered Linear Solver passing the option bdlinsolver = MatrixBLS(). This is the default setting.

Jacobian transpose

The adjoint of the jacobian J is computed internally when Jᵗ = nothing by using transpose(J) which works fine when J is an AbstractArray. In this case, do not pass the jacobian adjoint like Jᵗ = (x, p) -> transpose(d_xF(x, p)) otherwise the jacobian would be computed twice!

Detection of Bogdanov-Takens and Bautin bifurcations

In order to trigger the detection, pass detect_event = 1 or 2 in options_cont. Note that you need to provide d3F in prob.


Algorithmic details (Fold)

If we write $(s,\sigma)$ for the solution of the adjoint problem associated with $(M_f)$, one can show[Govaerts] that the differential of $\sigma$ satisfies:

\[\partial \sigma + \langle s,\partial dF \cdot r\rangle = 0\]
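In particular, taking $\partial$ to be $\partial_x$ or $\partial_p$ and writing $d^2F$ for the hessian of $F$, the entries needed in the jacobian below read

\[\partial_x\sigma(u,p)\cdot\delta x = -\langle s, d^2F(u,p)[\delta x, r]\rangle,\qquad \partial_p\sigma(u,p) = -\langle s, \partial_p dF(u,p)\cdot r\rangle.\]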

This allows one to compute the jacobian of the Fold functional used in the Newton algorithm:

\[\left[\begin{array}{cc} \partial_{u}F(u,p) & \partial_pF(u,p) \\ \partial_x\sigma(u,p) & \partial_p\sigma(u,p) \end{array}\right].\]

Algorithmic details (Hopf)

We recall that the unknowns are $(x,p,\omega)$. The jacobian of the Hopf functional to use for the Newton algorithm is

\[\left[\begin{array}{ccc} \partial_{u}F & \partial_pF & 0 \\ \partial_x\sigma_r & \partial_p\sigma_r & \partial_\omega\sigma_r\\ \partial_x\sigma_i & \partial_p\sigma_i & \partial_\omega\sigma_i \end{array}\right]\]

using a similar formula for $\partial\sigma$ as in the Fold case.

References

  • Govaerts

    Govaerts, Willy J. F. Numerical Methods for Bifurcations of Dynamical Equilibria. Philadelphia, Pa: Society for Industrial and Applied Mathematics, 2000.

  • Blank

    Blank, H. J. de, Yu. A. Kuznetsov, M. J. Pekkér, and D. W. M. Veldman. “Degenerate Bogdanov–Takens Bifurcations in a One-Dimensional Transport Model of a Fusion Plasma.” Physica D: Nonlinear Phenomena 331 (September 15, 2016): 13–26. https://doi.org/10.1016/j.physd.2016.05.008.

  • Bindel

    Bindel, D., M. Friedman, W. Govaerts, J. Hughes, and Yu.A. Kuznetsov. “Numerical Computation of Bifurcations in Large Equilibrium Systems in Matlab.” Journal of Computational and Applied Mathematics 261 (May 2014): 232–48. https://doi.org/10.1016/j.cam.2013.10.034.