Library
Parameters
BifurcationKit.NewtonPar — Type

options = NewtonPar(tol = 1e-4,...)

Returns a variable containing parameters to affect the newton algorithm when solving F(x) = 0.
Arguments (with default values):
- tol = 1e-10: absolute tolerance for F(x)
- maxIter = 50: number of Newton iterations
- verbose = false: display Newton iterations?
- linsolver = DefaultLS(): linear solver, must be <: AbstractLinearSolver
- eigsolver = DefaultEig(): eigen solver, must be <: AbstractEigenSolver
Arguments only used in newtonPALC
- linesearch = false: use the line search algorithm
- alpha = 1.0: alpha (damping) parameter for the line search algorithm
- almin = 0.001: minimal value of the damping alpha
For performance reasons, we decided to use an immutable structure to hold the parameters. One can use the package Setfield.jl to drastically simplify the mutation of different fields. See the tutorials for examples.
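For instance, a minimal sketch using Setfield's @set macro (the field names are the ones listed above; the values are illustrative):

using BifurcationKit, Setfield

opts = NewtonPar(tol = 1e-8, maxIter = 20)
# @set returns a modified copy; the immutable struct opts itself is unchanged
opts_verbose = @set opts.verbose = true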
BifurcationKit.ContinuationPar — Type

options = ContinuationPar(dsmin = 1e-4,...)

Returns a variable containing parameters to affect the continuation algorithm used to solve F(x,p) = 0.
Arguments
- dsmin, dsmax are the minimum and maximum allowed arclength values. They control the density of points in the computed branch of solutions.
- ds is the initial arclength.
- theta is a parameter in the arclength constraint. It is very important to tune it. See the docs of continuation.
- pMin, pMax allowed parameter range for p
- maxSteps maximum number of continuation steps
- newtonOptions::NewtonPar: options for the Newton algorithm
- saveToFile = false: save to file. A name is automatically generated.
- saveSolEveryNsteps::Int64 = 0 at which continuation steps do we save the current solution
- plotEveryNsteps = 3
Handling eigen elements; their computation is triggered by the argument detectBifurcation (see below)
- nev = 3 number of eigenvalues to be computed. It is automatically increased to have at least nev unstable eigenvalues. To be set for proper bifurcation detection. See Detection of bifurcation points for more information.
- saveEigEveryNsteps = 1 record eigen vectors every specified number of steps. Important for memory-limited resources, e.g. GPU.
- saveEigenvectors = true Important for memory-limited resources, e.g. GPU.
Handling bifurcation detection
- precisionStability = 1e-10 lower bound on the real part of the eigenvalues to test for stability of equilibria and periodic orbits
- detectFold = true detect Fold bifurcations? It is a useful option although the detection of Folds is cheap. Indeed, it may happen that there are a lot of Fold points and this can saturate the memory of memory-limited devices (e.g. on GPU).
- detectBifurcation::Int ∈ {0, 1, 2, 3} If set to 0, nothing is done. If set to 1, the eigen-elements are computed. If set to 2, bifurcations are detected along the continuation run, but not located precisely. If set to 3, a bisection algorithm is used to locate the bifurcations (slower). The possibility to switch off detection is a useful option. Indeed, it may happen that there are a lot of bifurcation points and this can saturate the memory of memory-limited devices (e.g. on GPU).
- dsminBisection dsmin for the bisection algorithm used to locate bifurcation points
- nInversion number of sign inversions in the bisection algorithm
- maxBisectionSteps maximum number of bisection steps
- tolBisectionEigenvalue tolerance on the real part of the eigenvalue to detect bifurcation points in the bisection steps
Handling ds adaptation (see continuation for more information)
- a = 0.5 aggressiveness factor. It is used to adapt ds in order to keep the number of Newton iterations per continuation step roughly constant. The higher a is, the more the step size ds is changed at each continuation step.
- thetaMin = 1.0e-3 minimum value of theta
- doArcLengthScaling trigger further adaptation of theta
Misc
- finDiffEps::T = 1e-9 ε used in finite differences computations
For performance reasons, we decided to use an immutable structure to hold the parameters. One can use the package Setfield.jl to drastically simplify the mutation of different fields. See tutorials for more examples.
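The same approach works for nested fields, for example the Newton options stored inside ContinuationPar (a minimal sketch; the keyword values are illustrative):

using BifurcationKit, Setfield

copts = ContinuationPar(dsmin = 1e-4, dsmax = 1e-2, ds = 1e-3)
# update a nested field in a single @set call; a modified copy is returned
copts_tight = @set copts.newtonOptions.tol = 1e-11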
Results
BifurcationKit.ContResult — Type

struct ContResult{T, Teigvals, Teigvec, Biftype, Ts, Tfunc, Tpar, Tl<:Setfield.Lens} <: BifurcationKit.BranchResult

Structure which holds the results after a call to continuation.
- branch::RecursiveArrayTools.VectorOfArray{T,2,Array{Array{T,1},1}} where T holds the low-dimensional information about the branch. More precisely, branch[:,i] contains the following information (param, printSolution(u, param), Newton iterations, ds, i) for each continuation step i.
- eig::Array{NamedTuple{(:eigenvals, :eigenvec, :step),Tuple{Teigvals,Teigvec,Int64}},1} where Teigvec where Teigvals A vector with eigen-elements at each continuation step.
- foldpoint::Array{Biftype,1} where Biftype A vector holding the set of fold points detected during the computation of the branch.
- stability::Array{Bool,1} A Vector{Bool} holding the stability of the computed solution for each continuation step. Hence, the stability stability[k] should match eig[k] which corresponds to branch[k] for a given k.
- n_imag::Array{Int64,1} A Vector{Int64} holding the number of eigenvalues with positive real part and non zero imaginary part for each continuation step (to detect Hopf bifurcation).
- n_unstable::Array{Int64,1} A Vector{Int64} holding the number of eigenvalues with positive real part for each continuation step (to detect stationary bifurcation).
- sol::Any Vector of solutions sampled along the branch. This is set by the argument saveSolEveryNsteps::Int64 (default 0) in ContinuationPar.
- contparams::ContinuationPar The parameters used for the call to continuation which produced this branch.
- type::Symbol Type of solutions computed in this branch. Default: :Equilibrium
- functional::Any Structure associated to the functional, useful for branch switching. For example, when computing periodic orbits, the functional PeriodicOrbitTrapProblem, ShootingProblem... will be saved here. Default: nothing
- params::Any Parameters passed to continuation and used in the equation F(x, par) = 0. Default: nothing
- param_lens::Setfield.Lens Parameter axis used for computing the branch
- bifpoint::Array{Biftype,1} where Biftype A vector holding the set of bifurcation points detected during the computation of the branch. Each entry of the vector contains a tuple with fields:
  - type bifurcation type, :hopf, :bp...
  - idx is the index in eig (see above) for which the bifurcation occurs
  - param parameter value at the bifurcation point
  - norm norm of the equilibrium at the bifurcation point
  - printsol = printSolution(x, param)
  - x equilibrium at the bifurcation point
  - tau tangent along the branch at the bifurcation point
  - ind_ev is the eigenvalue index responsible for the bifurcation (if applicable)
  - step is the continuation step at which the bifurcation occurs
  - status ∈ {:converged, :guess} indicates if the bisection algorithm was successful in detecting the bifurcation point
  - δ = (δr, δi) where δr indicates the change in the number of unstable eigenvalues and δi indicates the change in the number of unstable eigenvalues with nonzero imaginary part. abs(δr) is thus an estimate of the dimension of the kernel of the Jacobian at the bifurcation point.
  - precision precision in the location of the bifurcation point
Problems
BifurcationKit.DeflationOperator — Type

DeflationOperator. It is used to handle the following situation. Assume you want to solve F(x) = 0 with a Newton algorithm but you want to avoid the process returning some already known solutions $roots_i$. The deflation operator penalizes these roots; the algorithm works very well despite its simplicity. You can use DeflationOperator to define a function M(u) used to find, with Newton iterations, the zeros of the following function $F(u) \cdot \prod_i\left(dot(u - roots_i, u - roots_i)^{-p} + shift\right) := F(u) \cdot M(u)$. The fields of the struct DeflationOperator are as follows:
- power p
- dot function; this function has to be bilinear and symmetric for the linear solver to work well
- shift
- roots
The deflation operator is $M(u) = \prod_{i=1}^{n_{roots}}(shift + norm(u-roots_i)^{-p})$ where $norm(u) = dot(u,u)$.
Given defOp::DeflationOperator, one can access its roots via defOp[n], a shortcut for defOp.roots[n]. Also, one can add (resp. remove) a new root by using push!(defOp, newroot) (resp. pop!(defOp)). Finally, length(defOp) is a shortcut for length(defOp.roots).
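A hedged sketch of building and manipulating a deflation operator; the positional constructor below is assumed to follow the field order (power, dot, shift, roots) listed above:

using BifurcationKit, LinearAlgebra

# power p = 2, Euclidean dot product, shift = 1, one already known root
deflationOp = DeflationOperator(2.0, dot, 1.0, [zeros(5)])

push!(deflationOp, ones(5))   # record a second known root
length(deflationOp)           # -> 2
deflationOp[2]                # shortcut for deflationOp.roots[2]
pop!(deflationOp)             # forget the last recorded root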
BifurcationKit.DeflatedProblem — Type

pb = DeflatedProblem(F, J, M::DeflationOperator)

This creates a deflated problem $M(u) \cdot F(u) = 0$ where M is a DeflationOperator which encodes the penalization term. J is the jacobian of F. Can be used to call newton and continuation.
BifurcationKit.PeriodicOrbitTrapProblem — Type

pb = PeriodicOrbitTrapProblem(F, J, ϕ, xπ, M::Int)

This composite type implements Finite Differences based on a Trapezoidal rule to locate periodic orbits. The arguments are as follows:
- F(x, p) vector field
- J jacobian of F. Can be matrix based J(x, p) or Matrix-Free
- Jt = nothing jacobian transpose of F (optional), useful for continuation of Fold of periodic orbits. It should not be passed when the jacobian is a (sparse) matrix, as it is then computed internally and would be computed twice in that case.
- d2F = nothing second derivative of F (optional), useful for continuation of Fold of periodic orbits. It has the definition d2F(x, p, dx1, dx2).
- ϕ used to set a section for the phase constraint equation
- xπ used in the section for the phase constraint equation
- M::Int number of time slices
- linsolver = DefaultLS() linear solver for each time slice, i.e. to solve J⋅sol = rhs. This is only needed for the computation of the Floquet multipliers.
- isinplace::Bool whether F and J are inplace functions (experimental). In this case, the functions F and J must have the following definitions (o, x, p) -> F(o, x, p) and (o, x, p, dx) -> J(o, x, p, dx).
- ongpu::Bool whether the computation takes place on the GPU (experimental)
You can then call pb(orbitguess, p) to compute the functional on an orbitguess. Note that orbitguess must be of size M * N + 1 where N is the number of unknowns in the state space and orbitguess[M*N+1] is an estimate of the period of the limit cycle.
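As a hedged sketch of this size convention (the vector field, section vectors and parameter below are purely illustrative):

using BifurcationKit

N, M = 2, 51                                     # phase-space dimension, time slices
Fvdp(x, p) = [x[2], p.μ * (1 - x[1]^2) * x[2] - x[1]]      # van der Pol-like field
Jvdp(x, p) = [0.0 1.0;
              (-1.0 - 2 * p.μ * x[1] * x[2]) (p.μ * (1 - x[1]^2))]

ϕ, xπ = rand(N), zeros(N)                        # data defining the phase section
pb = PeriodicOrbitTrapProblem(Fvdp, Jvdp, ϕ, xπ, M)

orbitguess = vcat(vec(rand(N, M)), 6.28)         # size M * N + 1, last entry = period
res = pb(orbitguess, (μ = 1.0,))                 # evaluate the functional G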
The scheme is as follows. We first consider a partition of $[0,1]$ given by $0<s_0<\cdots<s_m=1$ and one looks for T = x[end] such that
$\left(x_{i} - x_{i-1}\right) - \frac{T\cdot h_i}{2} \left(F(x_{i}) + F(x_{i-1})\right) = 0,\ i=1,\cdots,m-1$
with $x_{0} := x_{m-1}$, the periodicity condition $x_{m} - x_{1} = 0$, and
where $h_i = s_i-s_{i-1}$. Finally, the phase of the periodic orbit is constrained by using a section
$\langle x[1] - x_\pi, \phi\rangle=0.$
A functional, hereby called G, encodes this problem. The following methods are available
- pb(orbitguess, p) evaluates the functional G on orbitguess
- pb(orbitguess, p, du) evaluates the jacobian dG(orbitguess).du functional at orbitguess on du
- pb(Val(:JacFullSparse), orbitguess, p) returns the sparse matrix of the jacobian dG(orbitguess) at orbitguess without the constraints. It is called A_γ in the docs.
- pb(Val(:JacFullSparseInplace), J, orbitguess, p). Same as pb(Val(:JacFullSparse), orbitguess, p) but overwrites J inplace. Note that the sparsity pattern must be the same independently of the values of the parameters or of orbitguess. In this case, this is significantly faster than pb(Val(:JacFullSparse), orbitguess, p).
- pb(Val(:JacCyclicSparse), orbitguess, p) returns the sparse cyclic matrix Jc (see the docs) of the jacobian dG(orbitguess) at orbitguess
- pb(Val(:BlockDiagSparse), orbitguess, p) returns the diagonal of the sparse matrix of the jacobian dG(orbitguess) at orbitguess. This can be used to design a Jacobi preconditioner. Use blockdiag.
For these methods to work on the GPU, for example with CuArrays in mode allowscalar(false), we face the issue that the function extractPeriodFDTrap won't be well defined because it is a scalar operation. One may have to redefine it like extractPeriodFDTrap(x::CuArray) = x[end:end] or something else. Also, note that you must pass the option ongpu = true for the functional to be evaluated efficiently on the gpu.
BifurcationKit.ShootingProblem — Type

pb = ShootingProblem(flow::Flow, ds, section; isparallel = false)

This composite type implements the Standard Simple / Parallel Multiple Standard Shooting method to locate periodic orbits. The arguments are as follows:
- flow::Flow: implements the flow of the Cauchy problem through the structure Flow.
- ds: vector of time differences for each shooting. Its length is written M. If M == 1, then the simple shooting is implemented, and the multiple one otherwise.
- section: implements a phase condition. The evaluation section(x) must return a scalar number where x is a guess for the periodic orbit. Note that the period T of the guess x is always included, either as the last component T = x[end] or as T = x.p. The type of x depends on what is passed to the newton solver.
- isparallel whether the shootings are computed in parallel (threading). Only available through the use of Flows defined by EnsembleProblem.
You can then call pb(orbitguess, par) to apply the functional to a guess. Note that orbitguess::AbstractVector must be of size M * N + 1 where N is the number of unknowns of the state space and orbitguess[M * N + 1] is an estimate of the period T of the limit cycle. This form of guess is convenient for the use of the linear solvers in IterativeSolvers.jl (for example) which accept only AbstractVectors. Another accepted guess is of the form BorderedArray(guess, T) where guess[i] is the state of the orbit at the ith time slice. This last form allows for non-vector state spaces, which can be convenient for 2d problems for example; use GMRESKrylovKit for the linear solver in this case.
A functional, hereby called G, encodes the shooting problem. For example, the following methods are available:
- pb(orbitguess, par) evaluates the functional G on orbitguess
- pb(orbitguess, par, du) evaluates the jacobian dG(orbitguess).du functional at orbitguess on du
Simplified constructors
- A simpler way to build a functional is to use
pb = ShootingProblem(F, p, prob::Union{ODEProblem, EnsembleProblem}, alg, centers::AbstractVector; kwargs...)

where F(x,p) is the vector field, p is a parameter (to be passed to the vector field and the flow), prob is an ODEProblem (resp. EnsembleProblem) which is used to create a flow using the ODE solver alg (for example Tsit5()). centers is a list of M points close to the periodic orbit, which will be used to build a constraint for the phase. isparallel = false is an option to use parallel simulations (threading) to simulate the multiple trajectories in the case of multiple shooting. This is efficient when the trajectories are relatively long to compute. Finally, the arguments kwargs are passed to the ODE solver defining the flow. Look at DifferentialEquations.jl for more information. Note that, in this case, the derivative of the flow is computed internally using Finite Differences. A sketch of this constructor is given after this list of constructors.
- Another way, with more options, is the following, where in particular one can provide one's own scalar constraint section(x)::Number for the phase:
pb = ShootingProblem(F, p, prob::Union{ODEProblem, EnsembleProblem}, alg, M::Int, section; isparallel = false, kwargs...)

or
pb = ShootingProblem(F, p, prob::Union{ODEProblem, EnsembleProblem}, alg, ds, section; isparallel = false, kwargs...)

- The next way is an elaboration of the previous one:
pb = ShootingProblem(F, p, prob1::Union{ODEProblem, EnsembleProblem}, alg1, prob2::Union{ODEProblem, EnsembleProblem}, alg2, M::Int, section; isparallel = false, kwargs...)

or
pb = ShootingProblem(F, p, prob1::Union{ODEProblem, EnsembleProblem}, alg1, prob2::Union{ODEProblem, EnsembleProblem}, alg2, ds, section; isparallel = false, kwargs...)

where we now supply two ODEProblems. The first one, prob1, is used to define the flow associated to F while the second one is a problem associated to the derivative of the flow. Hence, prob2 must implement the following vector field $\tilde F(x,y,p) = (F(x,p), dF(x,p)\cdot y)$.
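To make the first, simpler constructor concrete, here is a hedged sketch (reusing the van der Pol-like field Fvdp from the PeriodicOrbitTrapProblem sketch above; the ODE setup and the choice of centers are illustrative only):

using BifurcationKit, DifferentialEquations

par  = (μ = 1.0,)
prob = ODEProblem((u, p, t) -> Fvdp(u, p), [2.0, 0.0], (0.0, 20.0), par)

# two points near the limit cycle, used to build the phase constraint (M = 2)
centers = [[2.0, 0.0], [-2.0, 0.0]]
pbsh = ShootingProblem(Fvdp, par, prob, Tsit5(), centers; isparallel = false)

# guess of size M * N + 1 = 5, with an estimate of the period appended
orbitguess = vcat(centers..., 6.3)
res = pbsh(orbitguess, par)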
BifurcationKit.PoincareShootingProblem — Type

pb = PoincareShootingProblem(flow::Flow, M, sections; δ = 1e-8, interp_points = 50, isparallel = false)
This composite type implements the Poincaré Shooting method to locate periodic orbits by relying on Poincaré return maps. The arguments are as follows
- flow::Flow: implements the flow of the Cauchy problem through the structure Flow.
- M: the number of Poincaré sections. If M == 1, then the simple shooting is implemented, and the multiple one otherwise.
- sections: function or callable struct which implements a Poincaré section condition. The evaluation sections(x) must return a scalar number when M == 1. Otherwise, one must implement a function section(out, x) which populates out with the M sections.
- δ = 1e-8 used to compute the jacobian of the functional by finite differences. If set to 0, an analytical expression of the jacobian is used instead.
- interp_points = 50 number of interpolation points used to define the callback (to compute the hitting of the hyperplane section)
- isparallel = false whether the shootings are computed in parallel (threading). Only available through the use of Flows defined by EnsembleProblem.
Simplified constructors
- A simpler way to create a functional is
pb = PoincareShootingProblem(F, p, prob::ODEProblem, alg, section; kwargs...)
for simple shooting or
pb = PoincareShootingProblem(F, p, prob::Union{ODEProblem, EnsembleProblem}, alg, M::Int, section; kwargs...)
for multiple shooting. Here F(x,p) is the vector field, p is a parameter (to be passed to the vector field and the flow), prob is a Union{ODEProblem, EnsembleProblem} which is used to create a flow using the ODE solver alg (for example Tsit5()). Finally, the arguments kwargs are passed to the ODE solver defining the flow. We refer to DifferentialEquations.jl for more information.
- Another convenient call is
pb = PoincareShootingProblem(F, p, prob::Union{ODEProblem, EnsembleProblem}, alg, normals::AbstractVector, centers::AbstractVector; δ = 1e-8, kwargs...)
where normals (resp. centers) is a list of normals (resp. centers) which defines a list of hyperplanes $\Sigma_i$. These hyperplanes are used to define partial Poincaré return maps.
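For illustration, a hedged sketch of this last constructor (reusing Fvdp, par and prob from the shooting sketch above; the hyperplane data are arbitrary):

normals = [[1.0, 0.0], [-1.0, 0.0]]     # normals of the hyperplanes Σ_i
centers = [[0.0, 2.0], [0.0, -2.0]]     # points through which the hyperplanes pass
psh = PoincareShootingProblem(Fvdp, par, prob, Tsit5(), normals, centers; δ = 1e-8)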
Computing the functionals
A functional, hereby called G, encodes this shooting problem. You can then call pb(orbitguess, par) to apply the functional to a guess. Note that orbitguess::AbstractVector must be of size M * N where N is the number of unknowns in the state space and M is the number of Poincaré maps. Another accepted guess is such that guess[i] is the state of the orbit on the ith section. This last form allows for non-vector state spaces, which can be convenient for 2d problems for example.
- pb(orbitguess, par) evaluates the functional G on orbitguess
- pb(orbitguess, par, du) evaluates the jacobian dG(orbitguess).du functional at orbitguess on du
You can use the function getPeriod(pb, sol, par) to get the period of the solution sol for the problem with parameters par.
Misc.
BifurcationKit.PrecPartialSchurKrylovKit — Function

PrecPartialSchurKrylovKit(J, x0, nev, which = :LM; krylovdim = max(2nev, 20), verbosity = 0)

Builds a preconditioner based on deflation of nev eigenvalues chosen according to which. A partial Schur decomposition is computed (Matrix-Free), using the package KrylovKit.jl, from which a projection is built. The options are similar to the ones of EigKrylovKit().
BifurcationKit.PrecPartialSchurArnoldiMethod — Function

PrecPartialSchurArnoldiMethod(J, N, nev, which = LM(); tol = 1e-9, kwargs...)

Builds a preconditioner based on deflation of nev eigenvalues chosen according to which. A partial Schur decomposition is computed (Matrix-Free), using the package ArnoldiMethod.jl, from which a projection is built. See the package ArnoldiMethod.jl for how to pass the proper options.
BifurcationKit.Flow — Type

struct Flow{TF, Tf, Tts, Tff, Td, Tse}

- F::Any The vector field (x, p) -> F(x, p) associated to a Cauchy problem
- flow::Any The flow (or semigroup) associated to the Cauchy problem (x, p, t) -> flow(x, p, t). Only the last time point must be returned.
- flowTimeSol::Any Flow which returns the tuple (t, u(t))
- flowFull::Any The flow (or semigroup) associated to the Cauchy problem (x, p, t) -> flow(x, p, t). The whole solution on the time interval [0, t] must be returned. It is not strictly necessary to provide this.
- dflow::Any The differential dflow of the flow w.r.t. x, (x, p, dx, t) -> dflow(x, p, dx, t). One important thing is that we require dflow(x, dx, t) to return a Named Tuple (t = t, u = flow(x, p, t), du = dflow(x, p, dx, t)), the last component being the value of the derivative of the flow.
- dfserial::Any Serial version of dflow
Simplified constructors
There are some simple constructors for which you only have to pass a prob::ODEProblem or prob::EnsembleProblem (for parallel computation) from DifferentialEquations.jl and an ODE time stepper like Tsit5(). Hence, you can do for example
fl = Flow(F, prob, Tsit5(); kwargs...)

where kwargs is passed to DiffEqBase.solve. If your vector field depends on parameters p, you can define a Flow using
fl = Flow(F, p, prob, Tsit5(); kwargs...)

Finally, you can pass two ODEProblems where the second one is used to compute the variational equation:
fl = Flow(F, p, prob1::ODEProblem, alg1, prob2::ODEProblem, alg2; kwargs...)

BifurcationKit.FloquetQaDTrap — Type

floquet = FloquetQaDTrap(eigsolver::AbstractEigenSolver)

This composite type implements the computation of the eigenvalues of the monodromy matrix, also called the Floquet multipliers, in the case of periodic orbit problems based on Finite Differences (Trapeze method). The method, dubbed Quick and Dirty (QaD), is not numerically very precise for large / small Floquet exponents. It allows, nevertheless, to detect bifurcations. The arguments are as follows:
- eigsolver::AbstractEigenSolver solver used to compute the eigenvalues.
If eigsolver isa DefaultEig, then the monodromy matrix is formed and its eigenvalues are computed. Otherwise, a Matrix-Free version of the monodromy is used.
The computation of Floquet multipliers is necessary for the detection of bifurcations of periodic orbits (which is done by analyzing the Floquet exponents obtained from the Floquet multipliers). Hence, the eigensolver eigsolver needs to compute the eigenvalues with largest modulus (and not with largest real part which is their default behavior). This can be done by changing the option which = :LM of eigsolver. Nevertheless, note that for most implemented eigensolvers in the current Package, the proper option is set.
BifurcationKit.FloquetQaDShooting — Type

floquet = FloquetQaDShooting(eigsolver::AbstractEigenSolver)

This composite type implements the computation of the eigenvalues of the monodromy matrix, also called the Floquet multipliers, in the case of periodic orbit problems based on the Shooting method. The method, dubbed Quick and Dirty (QaD), is not numerically very precise for large / small Floquet exponents. It allows, nevertheless, to detect bifurcations. The arguments are as follows:
- eigsolver::AbstractEigenSolver solver used to compute the eigenvalues.
If eigsolver == DefaultEig(), then the monodromy matrix is formed and its eigenvalues are computed. Otherwise, a Matrix-Free version of the monodromy is used.
The computation of Floquet multipliers is necessary for the detection of bifurcations of periodic orbits (which is done by analyzing the Floquet exponents obtained from the Floquet multipliers). Hence, the eigensolver eigsolver needs to compute the eigenvalues with largest modulus (and not with largest real part which is their default behavior). This can be done by changing the option which = :LM of eigsolver. Nevertheless, note that for most implemented eigensolvers in the current Package, the proper option is set.
BifurcationKit.guessFromHopf — Method

guessFromHopf(br, ind_hopf, eigsolver::AbstractEigenSolver, M, amplitude; phase = 0)

This function returns several useful quantities regarding a Hopf bifurcation point. More precisely, it returns:
- the parameter value at which a Hopf bifurcation occurs
- the period of the bifurcated periodic orbit
- a guess for the bifurcated periodic orbit
- the equilibrium at the Hopf bifurcation point
- the eigenvector at the Hopf bifurcation point.
The arguments are
- br: the continuation branch which lists the Hopf bifurcation points
- ind_hopf: index of the Hopf bifurcation point, as in br.bifpoint
- eigsolver: the eigen solver used to find the eigenvectors
- M number of time slices in the periodic orbit guess
- amplitude: amplitude of the periodic orbit guess
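A hedged sketch (assuming br contains a Hopf point at index ind_hopf and opt_newton holds the eigensolver used during continuation; the five outputs follow the list above):

M, amplitude = 51, 0.1
p_hopf, period, orbitguess, hopfpt, vec_hopf =
    guessFromHopf(br, ind_hopf, opt_newton.eigsolver, M, amplitude; phase = 0.25)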
BifurcationKit.computeNormalForm — Function

computeNormalForm(F, dF, d2F, d3F, br, id_bif; δ, nev, Jt, verbose, ζs, lens, issymmetric, Teigvec)
Compute the normal form of the bifurcation point located at br.bifpoint[ind_bif].
Arguments
- F, dF, d2F, d3F vector field (x, p) -> F(x, p) and its derivatives w.r.t. x.
- br result from a call to continuation
- ind_bif index of the bifurcation point in br.bifpoint
Optional arguments
- δ used to compute ∂pF with finite differences
- nev number of eigenvalues used to compute the spectral projection. This number has to be adjusted when used with iterative methods.
- Jt = (x,p) -> ... jacobian adjoint; it should be implemented in an efficient manner. For matrix-free methods, transpose is not readily available and the user must provide a dedicated method. In the case of a sparse-based jacobian, Jt should not be passed as it is computed internally more efficiently, i.e. it avoids recomputing the jacobian as would happen if you passed Jt = (x, p) -> transpose(dF(x, p)).
- verbose whether to display information
- ζs list of vectors spanning the kernel of dF at the bifurcation point. Useful to enforce the basis for the normal form.
- lens::Lens specify which parameter to take the partial derivative ∂pF with respect to
- issymmetric whether the Jacobian is symmetric; this avoids computing the left eigenvectors.
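For instance, assuming br is a branch previously computed with continuation whose first bifurcation point is to be analysed, and F, J, d2F, d3F are the vector field and its derivatives described above (a hedged sketch):

nf = computeNormalForm(F, J, d2F, d3F, br, 1; verbose = true)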
Newton
BifurcationKit.newton — Function

newton(F, J, x0, p0, options::NewtonPar; normN = norm, callback = (x, f, J, res, iteration, itlinear, optionsN; kwargs...) -> true, kwargs...)

This is the Newton-Krylov Solver for F(x, p0) = 0 with Jacobian w.r.t. x written J(x, p0) and initial guess x0. The function normN allows one to specify a norm for the convergence criteria. It is important to set the linear solver options.linsolver properly depending on your problem. This linear solver is used to solve $J(x, p0)u = -F(x, p0)$ in the Newton step. You can for example use linsolver = DefaultLS() which is the operator backslash: it works well for Sparse / Dense matrices. See Linear solvers for more information.
Arguments:
- (x, p) -> F(x, p) functional whose zeros are looked for. In particular, it is not inplace.
- dF(x, p) = (x, p) -> J(x, p) computes the jacobian of F at x. It is then passed to options.linsolver. The Jacobian J(x, p) can be a matrix or an out-of-place function.
- x0 initial guess
- p0 set of parameters to be passed to F and J
- options variable holding the internal parameters used by the newton method
- callback function passed by the user which is called at the end of each iteration. Can be used to update a preconditioner for example. The arguments passed to the callback are as follows:
  - x current solution
  - f current residual
  - J current jacobian
  - res current norm of the residual
  - iteration current newton iteration
  - itlinear number of iterations to solve the linear system
  - optionsN a copy of the argument options passed to newton
  - kwargs kwargs arguments, containing your initial guess x0
- kwargs arguments passed to the callback. Useful when newton is called from continuation.
Output:
- solution
- history of residuals
- flag of convergence
- number of iterations
Simplified calls
When J is not passed, the jacobian matrix is then computed with finite differences (beware of large systems of equations!). The call is as follows:
newton(Fhandle, x0, p0, options::NewtonPar; kwargs...)

You can also pass functions which do not have parameters, x -> F(x), x -> J(x), as follows:
newton(F, J, x0, options::NewtonPar; kwargs...)

or
newton(F, x0, options::NewtonPar; kwargs...)

Example
julia> using BifurcationKit, LinearAlgebra, SparseArrays

julia> F(x, p) = x.^3 .- 1

julia> Jac(x, p) = spdiagm(0 => 3 .* x.^2) # sparse jacobian

julia> x0 = rand(1_000)

julia> opts = NewtonPar()

julia> sol, hist, flag, _ = newton(F, Jac, x0, nothing, opts, normN = x -> norm(x, Inf))

If you don't have parameters, you can still use newton as follows: newton((x, p) -> F(x), (x, p) -> J(x), x0, nothing, options).
Make sure that the linear solver (Matrix-Free...) corresponds to your jacobian (Matrix-Free vs. Matrix based).
function newton(F, J, x0::vectype, p0, options::NewtonPar{T}, defOp::DeflationOperator{T, Tf, vectype}, linsolver = DeflatedLinearSolver(); kwargs...) where {T, Tf, vectype}

This is the deflated version of the Krylov-Newton Solver for F(x, p0) = 0 with Jacobian J(x, p0). We refer to newton for more information. It penalises the roots saved in defOp.roots. The other arguments are as for newton. See DeflationOperator for more information.
Arguments
Compared to newton, the only different arguments are
- defOp deflation operator
- linsolver linear solver used to invert the Jacobian of the deflated functional. We have a custom solver DeflatedLinearSolver() which requires solving two linear systems J⋅x = rhs. For other linear solvers, a matrix free method is used for the deflated functional.
Output:
- solution
- history of residuals
- flag of convergence
- number of iterations
Simplified call
When J is not passed, it is then computed with finite differences. The call is as follows:
newton(F, x0, p0, options, defOp; kwargs...)

This specific Newton-Krylov method first tries to converge to a solution sol0 close to the guess x0. It then attempts to converge to the guess x1 while avoiding the previous solution sol0. This is very handy for branch switching. The method is based on a deflated Newton-Krylov solver.
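A hedged sketch of the deflated solver on an elementwise problem with two roots ±1 (identifiers are illustrative; the DeflationOperator constructor is assumed to follow the field order power, dot, shift, roots):

using BifurcationKit, LinearAlgebra

Fd(x, p) = x.^2 .- 1                       # roots are x = 1 and x = -1 (elementwise)
Jd(x, p) = diagm(0 => 2 .* x)
optsd = NewtonPar(tol = 1e-10)

x0 = 1.2 .* ones(10)
sol1, _, flag1, _ = newton(Fd, Jd, x0, nothing, optsd)              # finds ones(10)

deflationOp = DeflationOperator(2.0, dot, 1.0, [sol1])              # penalize that root
sol2, _, flag2, _ = newton(Fd, Jd, x0, nothing, optsd, deflationOp) # finds another root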
newton(F, J, br, ind_bif, par, lens; Jt, d2F, normN, options, kwargs...)
This function turns an initial guess for a Fold/Hopf point into a solution to the Fold/Hopf problem based on a Minimally Augmented formulation. The arguments are as follows
- F = (x, p) -> F(x, p) where p is a set of parameters.
- J = (x, p) -> d_xF(x, p) associated jacobian
- br results returned after a call to continuation
- ind_bif bifurcation index in br
- par parameters used for the vector field
- lens parameter axis used to locate the Fold/Hopf point.
- options::NewtonPar
Optional arguments:
- Jt = (x, p) -> transpose(d_xF(x, p)) jacobian adjoint; it should be implemented in an efficient manner. For matrix-free methods, transpose is not readily available and the user must provide a dedicated method. In the case of a sparse-based jacobian, Jt should not be passed as it is computed internally more efficiently, i.e. it avoids recomputing the jacobian as would happen if you passed Jt = (x, p) -> transpose(dF(x, p)).
- d2F = (x, p, v1, v2) -> d2F(x, p, v1, v2) a bilinear operator representing the hessian of F. It has to provide an expression for d2F(x,p)[v1,v2].
- normN = norm
- options You can pass newton parameters different from the ones stored in br by using this argument options.
- kwargs keyword arguments to be passed to the regular Newton-Krylov solver
newton(prob, orbitguess, par, options; kwargs...)
This is the Newton-Krylov Solver for computing a periodic orbit using a (Standard / Poincaré) Shooting method. Note that the linear solver has to be appropriately set up.
Arguments
Similar to newton except that prob is either a ShootingProblem or a PoincareShootingProblem. These two problems have specific options to be tuned; we refer to their link for more information and to the tutorials.
Output:
- solution
- history of residuals
- flag of convergence
- number of iterations
newton(prob, orbitguess, options, defOp; kwargs...)
This is the deflated Newton-Krylov Solver for computing a periodic orbit using a (Standard / Poincaré) Shooting method.
Arguments
Similar to newton except that prob is either a ShootingProblem or a PoincareShootingProblem.
Output:
- solution
- history of residuals
- flag of convergence
- number of iterations
Continuation
BifurcationKit.continuation — Function

continuation(F, J, x0, par, lens::Lens, contParams::ContinuationPar; plot = false, normC = norm, dotPALC = (x,y) -> dot(x,y) / length(x), printSolution = norm, plotSolution = (x, p; kwargs...)->nothing, finaliseSolution = (z, tau, step, contResult) -> true, callbackN = (x, f, J, res, iteration, itlinear, options; kwargs...) -> true, linearAlgo = BorderingBLS(), tangentAlgo = SecantPred(), verbosity = 0)

Compute the continuation curve associated to the functional F and its jacobian J.
Arguments:
- F = (x, p) -> F(x, p) where p is the set of parameters passed to F.
- J = (x, p) -> d_xF(x, p) its associated jacobian. It can be a matrix, a function or a callable struct.
- x0 initial guess
- par initial set of parameters.
- lens::Lens specifies which parameter axis among par is used for continuation. For example, if par = (α = 1.0, β = 1), we can perform continuation w.r.t. α by using lens = (@lens _.α). If you have an array par = [1.0, 2.0] and want to perform continuation w.r.t. the first variable, you can use lens = (@lens _[1]). For more information, we refer to Setfield.jl.
- contParams parameters for continuation. See ContinuationPar for more information about the options.
- plot = false whether to plot the solution while computing
- printSolution = (x, p) -> norm(x) function used to plot in the continuation curve. It is also used in the way results are saved. It could be norm or x -> x[1]. This is also useful when saving several huge vectors is not possible for memory reasons (for example on GPU...).
- plotSolution = (x, p; kwargs...) -> nothing function implementing the plot of the solution.
- finaliseSolution = (z, tau, step, contResult) -> true Function called at the end of each continuation step. Can be used to alter the continuation procedure (stop it by returning false), save personal data, plot... The notations are $z=(x,p)$, tau is the tangent at z (see below), step is the index of the current continuation step and ContResult is the current branch. Note that you can have better control over the continuation procedure by using an iterator, see Iterator Interface.
- callbackN callback for newton iterations. See docs for newton. Can be used to change preconditioners.
- tangentAlgo = SecantPred() controls the algorithm used to predict the tangents along the curve of solutions or the corrector. Can be NaturalPred, SecantPred or BorderedPred. See below for more information.
- linearAlgo = BorderingBLS(). Used to control the way the extended linear system associated to the continuation problem is solved. Can be MatrixBLS, BorderingBLS or MatrixFreeBLS.
- verbosity::Int controls the amount of information printed during the continuation process. Must belong to {0,1,2,3}.
- normC = norm norm used in the different Newton solves
- dotPALC = (x, y) -> dot(x, y) / length(x), dot product used to define the weighted dot product (resp. norm) $\|(x, p)\|^2_\theta$ in the constraint $N(x, p)$ (see below). This argument can be used to remove the factor 1/length(x), for example in problems where the dimension of the state space changes (mesh adaptation, ...).
- filename name of a file to save the computed branch during continuation. The identifier .jld2 will be appended to this filename.
Outputs:
- contres::ContResult composite type which contains the computed branch. See ContResult for more information.
- u::BorderedArray the last solution computed on the branch
In this simplified interface to continuation, the argument linearAlgo is internally overwritten to provide a valid argument to the algorithm. If you do not want this to happen, call directly continuation(F, J, x0, par, lens, contParams, linearAlgo; kwargs...).
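To make the calling convention concrete, here is a minimal, hedged sketch (a scalar pitchfork written as a 1d vector problem; all identifiers are illustrative and the options are not tuned):

using BifurcationKit, Setfield, LinearAlgebra

# pitchfork normal form F(x, μ) = μ x - x³, written elementwise on a 1d vector
F(x, p) = p.μ .* x .- x.^3
J(x, p) = diagm(0 => p.μ .- 3 .* x.^2)

x0  = zeros(1)                      # trivial equilibrium
par = (μ = -1.0,)

optnewton = NewtonPar(tol = 1e-9, maxIter = 20)
optcont   = ContinuationPar(dsmin = 1e-4, dsmax = 5e-2, ds = 1e-3,
                            pMin = -1.0, pMax = 1.0,
                            detectBifurcation = 3, nev = 1,
                            newtonOptions = optnewton)

br, = continuation(F, J, x0, par, (@lens _.μ), optcont;
                   printSolution = (x, p) -> x[1])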
Simplified call:
You can also use the following call for which the jacobian matrix (beware of large systems of equations!) is computed internally using Finite Differences
continuation(Fhandle, x0, par, lens, contParams::ContinuationPar; kwargs...)

Method
Bordered system of equations
In what follows, we abuse notation: p refers to the scalar value of the parameter we perform continuation with. Hence, it should be p = get(par, lens).
The pseudo-arclength continuation method solves the equation $F(x, p) = 0$ (of dimension N) together with the pseudo-arclength constraint $N(x, p) = \frac{\theta}{length(x)} \langle x - x_0, dx_0\rangle + (1 - \theta)\cdot(p - p_0)\cdot dp_0 - ds = 0$ and $\theta\in[0,1]$. In practice, a curve $\gamma$ of solutions is sought and is parametrised by $s$: $\gamma(s) = (x(s), p(s))$ is a curve of solutions to $F(x, p)$. This formulation allows one to pass turning points (where the implicit function theorem fails). In the previous formula, $(x_0, p_0)$ is a solution for a given $s_0$, $\tau_0\equiv(dx_0, dp_0)$ is the tangent to the curve $\gamma$ at $s_0$. Hence, to compute the curve of solutions, we need to solve an equation of dimension N+1 which is called a Bordered system.
The parameter theta in the struct ContinuationPar is very important. It should be tuned for the continuation to work properly, especially in the case of large problems where the $\langle x - x_0, dx_0\rangle$ component in the constraint might be favoured too much. Also, large thetas favour p, as the corresponding term in $N$ is weighted by $1-\theta$.
The parameter ds is adjusted internally depending on the number of Newton iterations and other factors. See the function stepSizeControl for more information. An important parameter to adjust the magnitude of this adaptation is the parameter a in the struct ContinuationPar.
Algorithm
The algorithm works as follows:
1. Start from a known solution $(x_0, p_0)$ with tangent to the curve of solutions $(dx_0, dp_0)$.
2. Predictor: set $(x_1, p_1) = (x_0, p_0) + ds\cdot (dx_0, dp_0)$.
3. Corrector: solve $F(x, p)=0,\ N(x, p)=0$ with a (Bordered) Newton Solver with initial guess $(x_1, p_1)$. If the Newton iterations did not converge, update $ds/2 \rightarrow ds$ in $N$ and go back to step 1.
4. New tangent: compute a new tangent (see below) $(dx_1, dp_1)$ and update $N$ with it. Set $(x_0, p_0, dx_0, dp_0) = (x_1, p_1, dx_1, dp_1)$ and return to step 2.
Natural continuation
We speak of natural continuation when we do not consider the constraint $N(x, p)=0$. Knowing $(x_0, p_0)$, we use $x_0$ as a guess for solving $F(x, p_1)=0$ with $p_1$ close to $p_0$. Again, this fails at Turning points but it can be faster to compute than the constrained case. This is set by the option tangentAlgo = NaturalPred() in continuation.
Tangent computation (step 4)
There are various ways to compute $(dx_1, dp_1)$. The first one is called secant and is parametrised by the option tangentAlgo = SecantPred(). It is computed by $(dx_1, dp_1) = (z_1, p_1) - (z_0, p_0)$ and normalised by the norm $\|(x, p)\|^2_\theta = \frac{\theta}{length(x)} \langle x,x\rangle + (1 - \theta)\cdot p^2$. Another method is to compute $(dx_1, dp_1)$ by solving the bordered linear system $\begin{bmatrix} F_x & F_p ; \ \frac{\theta}{length(x)}dx_0 & (1-\theta)dp_0\end{bmatrix}\begin{bmatrix}dx_1 ; dp_1\end{bmatrix} =\begin{bmatrix}0 ; 1\end{bmatrix}$; it is set by the option tangentAlgo = BorderedPred().
Bordered linear solver
When solving the Bordered system $F(x, p) = 0,\ N(x, p)=0$, one faces the issue of solving the Bordered linear system $\begin{bmatrix} J & a ; b^T & c\end{bmatrix}\begin{bmatrix}X ; y\end{bmatrix} =\begin{bmatrix}R ; n\end{bmatrix}$. This can be solved in many ways via bordering (which requires two Jacobian inverses), by forming the bordered matrix (which works well for sparse matrices) or by using a full Matrix Free formulation. The choice of method is set by the argument linearAlgo. Have a look at the struct linearBorderedSolver for more information.
Linear Algebra
Let us discuss here more about the norm and dot product. First, the option normC gives a norm that is used to evaluate the residual in the following way: $max(normC(F(x,p)), \|N(x,p)\|)<tol$. It is thus used as a stopping criterion for a Newton algorithm. The dot product (resp. norm) used in $N$ and in the (iterative) linear solvers is LinearAlgebra.dot (resp. LinearAlgebra.norm). It can be changed by importing these functions and redefining them. Note that by default, the $L^2$ norm is used. These details are important because of the constraint $N$ which incorporates the factor length. For some custom composite type implementing a Vector space, the dot product could already incorporate the length factor, in which case you should either redefine the dot product or change $\theta$.
Step size control
As explained above, each time the corrector phase fails, the step size $ds$ is halved. This has the disadvantage of wasting Newton iterations (which costs time) and imposing small steps (which can be slow as well). To prevent this, the step size is controlled internally with the idea of having a constant number of Newton iterations per point. This is in part controlled by the aggressiveness factor a in ContinuationPar. Further tuning is performed by using doArcLengthScaling = true in ContinuationPar. This adjusts internally $\theta$ so that the relative contributions of $x$ and $p$ are balanced in the constraint $N$.
This function is the analog of continuation when the first two points on the branch are passed (instead of a single one). Hence x0 is the first point on the branch (with PALC s=0) with parameter par0 and x1 is the second point with parameter set(par0, lens, p1).
continuation(F, dF, d2F, d3F, br, ind_bif, optionsCont; Jt, δ, δp, ampfactor, nev, issymmetric, usedeflation, Teigvec, kwargs...)
Automatic branch switching at branch points based on a computation of the normal form. More information is provided in Branch switching. An example of use is provided in A generalized Bratu–Gelfand problem in two dimensions.
Arguments
- F, dF, d2F, d3F: function (x,p) -> F(x,p) and its differentials (x,p,dx) -> d1F(x,p,dx), (x,p,dx1,dx2) -> d2F(x,p,dx1,dx2)...
- br branch result from a call to continuation
- ind_bif index of the bifurcation point in br from which you want to branch
- optionsCont options for the call to continuation
Optional arguments
- Jt associated jacobian transpose; it should be implemented in an efficient manner. For matrix-free methods, transpose is not readily available and the user must provide a dedicated method. In the case of a sparse-based jacobian, Jt should not be passed as it is computed internally more efficiently, i.e. it avoids recomputing the jacobian as would happen if you passed Jt = (x, p) -> transpose(dF(x, p)).
- δ used internally to compute derivatives w.r.t the parameter p.
- δp used to specify a particular guess for the parameter on the bifurcated branch which is otherwise determined by optionsCont.ds. This allows one to use a step larger than optionsCont.dsmax.
- ampfactor = 1 factor which alters the amplitude of the bifurcated solution. Useful to magnify the bifurcated solution when the bifurcated branch is very steep.
- nev number of eigenvalues to be computed to get the right eigenvector
- issymmetric whether the Jacobian is symmetric; this avoids computing the left eigenvectors.
- usedeflation = true whether to use nonlinear deflation (see Deflated problems) to help finding the guess on the bifurcated branch
- kwargs optional arguments to be passed to continuation, the regular continuation one.
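A hedged sketch of branching from the pitchfork detected in the continuation sketch above (F, J, br and optcont are reused; the directional derivatives are written by hand for that example):

# second and third order directional derivatives of F(x, μ) = μ x - x³
d2F(x, p, dx1, dx2)      = -6 .* x .* dx1 .* dx2
d3F(x, p, dx1, dx2, dx3) = -6 .* dx1 .* dx2 .* dx3

# branch from the first bifurcation point recorded in br.bifpoint
br2, = continuation(F, J, d2F, d3F, br, 1, optcont; ampfactor = 1.0, nev = 1)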
continuation(prob, orbitguess, par, lens, contParams; linearAlgo, printPeriod, kwargs...)
This is the continuation routine for computing a periodic orbit using a (Standard / Poincaré) Shooting method.
Arguments
Similar to continuation except that prob is either a ShootingProblem or a PoincareShootingProblem.

- printPeriod boolean to print the period of the solution. This is useful for prob::PoincareShootingProblem as this information is not easily available.
continuation(F, dF, d2F, d3F, br, ind_bif, _contParams, prob; Jt, δ, δp, ampfactor, usedeflation, nev, kwargs...)
Perform automatic branch switching from a Hopf bifurcation point labelled ind_bif in the list of the bifurcated points on a previously computed branch br::ContResult. It first computes a Hopf normal form.
Arguments
- F, dF, d2F, d3F: function (x,p) -> F(x,p) and its differentials (x,p,dx) -> d1F(x,p,dx), (x,p,dx1,dx2) -> d2F(x,p,dx1,dx2)... These are used to compute the Hopf normal form.
- br branch result from a call to continuation
- ind_hopf index of the bifurcation point in br
- contParams parameters for the call to continuation
- prob problem used to specify the way the periodic orbit is computed. It can be PeriodicOrbitTrapProblem, ShootingProblem or PoincareShootingProblem.
Optional arguments
- linearPO linear algorithm used for the computation of periodic orbits when prob is PeriodicOrbitTrapProblem
- Jt is the jacobian adjoint, used for computation of the eigen-elements of the jacobian adjoint, needed to compute the spectral projector for the Hopf normal form
- δ = 1e-8 used for finite differences
- δp used to specify a particular guess for the parameter on the bifurcated branch which is otherwise determined by contParams.ds. This allows one to use a step larger than contParams.dsmax.
- ampfactor = 1 factor which alters the amplitude of the bifurcated solution. Useful to magnify the bifurcated solution when the bifurcated branch is very steep.
- usedeflation = true whether to use nonlinear deflation (see Deflated problems) to help finding the guess on the bifurcated branch
You have to be careful about the options contParams.newtonOptions.linsolver. In the case of a Matrix-Free solver, you have to pass the right number of unknowns N * M + 1. Note that the options for the preconditioner are not accessible yet.
The adjoint of the jacobian J is computed internally when Jt = nothing by using transpose(J), which works fine when J is an AbstractArray. In this case, do not pass the jacobian adjoint like Jt = (x, p) -> transpose(d_xF(x, p)), otherwise the jacobian would be computed twice!
The hessian of F is computed with Finite Differences when d2F is not passed.
continuation(probPO, orbitguess, p0::Real, _contParams::ContinuationPar; linearPO = :BorderedLU, printSolution = (u,p) -> u[end], linearAlgo = BorderingBLS(), kwargs...)

This is the continuation routine for computing a periodic orbit using a functional G based on Finite Differences and a Trapezoidal rule.
Arguments
- prob::PeriodicOrbitTrapProblem encodes the functional G
- orbitguess a guess for the periodic orbit where orbitguess[end] is an estimate of the period of the orbit. It could be a vector of size N * M + 1 where M is the number of time slices and N is the dimension of the phase space. This must be compatible with the numbers N, M in prob.
- p0 set of parameters passed to the vector field
- contParams same as for the regular continuation method
- linearAlgo same as in continuation
- linearPO = :BorderedLU. Same as newton when applied to PeriodicOrbitTrapProblem. More precisely:
  - For :FullLU, we use the default linear solver on a sparse matrix representation of dG. This matrix is assembled at each newton iteration.
  - For :FullSparseInplace, this is the same as for :FullLU but the sparse matrix dG is updated inplace. This method allocates much less. In some cases, this is significantly faster than using :FullLU. Note that this method can only be used if the sparsity pattern of the jacobian is always the same.
  - For :BorderedLU, we take advantage of the bordered shape of the linear solver and use a LU decomposition to invert dG using a bordered linear solver. This is the default algorithm.
  - For :FullMatrixFree, a matrix free linear solver is used for dG: note that a preconditioner is very likely required here because of the cyclic shape of dG which affects negatively the convergence properties of GMRES.
  - For :BorderedMatrixFree, a matrix free linear solver is used but for Jc only (see docs): it means that options.linsolver is used to invert Jc. These two Matrix-Free options thus expose different parts of the jacobian dG in order to use specific preconditioners. For example, an ILU preconditioner on Jc could remove the constraints in dG and lead to poor convergence. Of course, for these last two methods, a preconditioner is likely to be required.
Note that by default, the method prints the period of the periodic orbit as a function of the parameter. This can be changed by providing your own printSolution argument.
Bifurcation diagram
BifurcationKit.bifurcationdiagram — Function

bifurcationdiagram(F, dF, d2F, d3F, x0, par0, lens, level, options; usedeflation, kwargs...)
Compute the bifurcation diagram associated with the problem F=0 recursively.
Arguments
- F, dF, d2F, d3F functional and its derivatives
- x0 initial guess
- par0 parameter values at x0
- lens lens to select the parameter axis
- level maximum branching (or recursion) level for computing the bifurcation diagram
- options = (x, p, level) -> contparams this function allows one to change the continuation options depending on the branching level. x, p is the current solution to F(x, p) = 0.
- kwargs optional arguments, as for continuation but also for the different versions listed in Continuation.
Simplified call:
We also provide the call
bifurcationdiagram(F, dF, d2F, d3F, br::ContResult, level::Int, options; usedeflation = false, kwargs...)
where br is a branch computed after a call to continuation from which we want to compute the bifurcating branches recursively.
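As an illustration, a hedged sketch reusing F, J, d2F, d3F, x0, par and optcont from the sketches above (the options function here simply returns the same ContinuationPar at every level):

diagram = bifurcationdiagram(F, J, d2F, d3F, x0, par, (@lens _.μ), 2,
                             (x, p, level) -> optcont)

getBranch(diagram, (1,))    # first bifurcated branch, see getBranch below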
BifurcationKit.bifurcationdiagram! — Function

bifurcationdiagram!(F, dF, d2F, d3F, node, level, options; code, usedeflation, kwargs...)
Same as bifurcationdiagram but you pass a previously computed bifurcation diagram node from which you want to further compute the bifurcated branches. It is usually used with node = getBranch(diagram, code) from a previously computed bifurcation diagram.
BifurcationKit.getBranch — Function

getBranch(tree, code)
Return the part of the tree (bifurcation diagram) by recursively descending the tree using the Int valued tuple code. For example getBranch(tree, (1,2,3,)) returns tree.child[1].child[2].child[3].
BifurcationKit.getBranchesFromBP — Function

getBranchesFromBP(tree, indbif)
Return the part of the tree corresponding to the indbif-th bifurcation point on the root branch.
Utils for periodic orbits
BifurcationKit.getPeriod — Function

getPeriod(sh, x)
getPeriod(sh, x, par)
Compute the period of the periodic orbit associated to x.
getPeriod(psh, x_bar, par)
Compute the period of the periodic orbit associated to x_bar.
getPeriod(prob, x, p)
Compute the period of the periodic orbit associated to x.
BifurcationKit.getAmplitude — Function

getAmplitude(prob, x, p; ratio)
Compute the amplitude of the periodic orbit associated to x. The keyword argument ratio = 1 is used as follows. If length(x) = ratio * n, the call returns the amplitude over x[1:n].
getAmplitude(prob, x, p; ratio)
Compute the amplitude of the periodic orbit associated to x. The keyword argument ratio = 1 is used as follows. If length(x) = 1 + ratio * n, the call returns the amplitude over x[1:n].
BifurcationKit.getMaximum — Function

getMaximum(prob, x, p; ratio)
Compute the maximum of the periodic orbit associated to x. The keyword argument ratio = 1 is used as follows. If length(x) = ratio * n, the call returns the maximum over x[1:n].
getMaximum(prob, x, p; ratio)
Compute the maximum of the periodic orbit associated to x. The keyword argument ratio = 1 is used as follows. If length(x) = 1 + ratio * n, the call returns the maximum over x[1:n].