Which Algorithms Are Used as Derivative-Free Optimization Techniques?


There are many optimization techniques that can be used for a variety of purposes. Truly derivative-free approaches include Monte Carlo methods and simulated annealing, while gradient-based methods such as conjugate gradient rely on derivative information. Each of these algorithms has its own advantages and disadvantages, so it is important to choose the right one for your specific needs.

Gradient Descent

Gradient descent is an optimization algorithm used to minimize some function by iteratively moving in the direction of steepest descent, as defined by the negative of the gradient. In machine learning, we use gradient descent to update the parameters of our model. Parameters are the values we adjust to minimize cost; for example, in linear regression, the parameters are the model coefficients.

We start by initializing our parameter values. Then we calculate the cost function for our current parameter values. We can think of this as how far off our predictions are from actual data points. Based on the cost function, we calculate the gradient. The gradient is a vector that points in the direction of greatest increase of the cost function. To update our parameter values, we take a small step in the direction opposite the gradient (a gradient descent step) and repeat until we arrive at a minimum.
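As a minimal sketch of this loop, here is linear regression fit by gradient descent; the data, learning rate, and iteration count are illustrative choices, not taken from any particular application:

```python
import numpy as np

# Tiny dataset: first column of ones is the intercept term.
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([2.0, 3.0, 4.0])          # lies exactly on y = 1 + x

theta = np.zeros(2)                    # initialize parameters
lr = 0.1                               # step size (learning rate)
for _ in range(2000):
    residual = X @ theta - y               # prediction error
    grad = 2 * X.T @ residual / len(y)     # gradient of mean squared error
    theta -= lr * grad                     # step opposite the gradient

print(theta)  # approaches [1.0, 1.0], i.e. y = 1 + 1*x
```

If the step size is too large the iterates diverge; too small and convergence is needlessly slow, so in practice the learning rate is tuned or adapted.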

Conjugate Gradient

Conjugate gradient (CG) is an optimization algorithm for finding the minimum of a function that has continuous first and second derivatives. It is typically applied to problems in high-dimensional spaces, where methods that store or factorize a full Hessian matrix become impractical.

CG typically converges in fewer iterations than gradient descent, although each iteration can be somewhat more expensive. CG has been used to train neural networks and other machine learning models.
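A minimal sketch of the linear CG iteration, applied to the quadratic $$f(\mathbf{x}) = \tfrac{1}{2}\mathbf{x}^{\top}A\mathbf{x} - \mathbf{b}^{\top}\mathbf{x}$$, where minimizing f is equivalent to solving $$A\mathbf{x} = \mathbf{b}$$; the matrix and vector below are illustrative:

```python
import numpy as np

# Symmetric positive-definite system: minimizing f solves A x = b.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.zeros(2)
r = b - A @ x                 # residual = negative gradient of f
d = r.copy()                  # first search direction
for _ in range(len(b)):       # exact answer in at most n steps
    alpha = (r @ r) / (d @ A @ d)      # exact line search along d
    x += alpha * d
    r_new = r - alpha * (A @ d)
    beta = (r_new @ r_new) / (r @ r)   # coefficient keeping directions conjugate
    d = r_new + beta * d               # new direction, A-conjugate to the old
    r = r_new

print(x)  # solves A x = b
```

The key property is that successive directions are A-conjugate, so the method never undoes progress made along earlier directions.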

Newton’s Method

Newton’s method, also called the Newton-Raphson method, is a powerful technique for solving equations numerically. Given a function f(x) whose derivatives f′(x) and f″(x) exist, it proceeds iteratively to find better approximations to the roots (or zeroes) of the function.
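As a sketch of the iteration $$x_{k+1} = x_k - f(x_k)/f'(x_k)$$ (the function and starting point below are illustrative):

```python
# Newton-Raphson root finding: find a root of f(x) = x^2 - 2,
# i.e. approximate the square root of 2.
def f(x):
    return x * x - 2.0

def f_prime(x):
    return 2.0 * x

x = 1.0                          # initial guess
for _ in range(10):
    x = x - f(x) / f_prime(x)    # Newton step

print(x)  # converges to 1.4142135..., the square root of 2
```

Near a simple root the number of correct digits roughly doubles each step, which is why so few iterations are needed here.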

Quasi-Newton Method

Quasi-Newton methods are a class of numerical optimization algorithms that do not require second derivatives (the Hessian) to be computed explicitly; instead, they build up an approximation to it from successive gradient evaluations. These methods are iterative in nature, meaning they move from one guess of the solution to another, hopefully getting closer to a minimum with each step. Quasi-Newton methods combine features of both Newton’s method and steepest descent.


The Davidon–Fletcher–Powell (DFP) update is a means of updating the inverse Hessian approximation in order to converge more rapidly on a minimum of a twice differentiable function. The update was proposed by Davidon and later refined by Fletcher and Powell, making it the first quasi-Newton method. The Broyden–Fletcher–Goldfarb–Shanno (BFGS) update, developed afterwards, is generally preferred today because it tends to be more robust in practice.
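In the notation commonly used for quasi-Newton methods, with step $$\mathbf{s}_k = \mathbf{x}_{k+1} - \mathbf{x}_k$$ and gradient difference $$\mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k)$$, the DFP update of the inverse Hessian approximation $$\mathbf{H}_k$$ can be written as:

$$ \mathbf{H}_{k+1} = \mathbf{H}_k - \frac{\mathbf{H}_k \mathbf{y}_k \mathbf{y}_k^{\top} \mathbf{H}_k}{\mathbf{y}_k^{\top} \mathbf{H}_k \mathbf{y}_k} + \frac{\mathbf{s}_k \mathbf{s}_k^{\top}}{\mathbf{y}_k^{\top} \mathbf{s}_k} $$

Provided the curvature condition $$\mathbf{y}_k^{\top} \mathbf{s}_k > 0$$ holds, this update keeps the approximation symmetric and positive definite.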


In mathematical optimization, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an iterative method for solving unconstrained optimization problems. It is a quasi-Newton method that maintains an approximation of the Hessian matrix (or its inverse) of the objective function, refined at each step by a rank-two update built from successive gradient differences.
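A minimal BFGS sketch on a convex quadratic $$f(\mathbf{x}) = \tfrac{1}{2}\mathbf{x}^{\top}A\mathbf{x} - \mathbf{b}^{\top}\mathbf{x}$$, where an exact line search has a closed form; A, b, and the starting point are illustrative, and a production solver would use a general line search instead:

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, 1.0])

def grad(x):
    return A @ x - b

x = np.zeros(2)
H = np.eye(2)                            # inverse-Hessian approximation
g = grad(x)
for _ in range(10):
    if np.linalg.norm(g) < 1e-12:
        break
    p = -H @ g                           # quasi-Newton search direction
    alpha = -(g @ p) / (p @ A @ p)       # exact line search for a quadratic
    s = alpha * p
    x_new = x + s
    g_new = grad(x_new)
    y = g_new - g
    rho = 1.0 / (y @ s)
    I = np.eye(2)
    # BFGS rank-two update of the inverse Hessian approximation
    H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
        + rho * np.outer(s, s)
    x, g = x_new, g_new

print(x)  # the minimizer, i.e. the solution of A x = b
```

On a quadratic with exact line searches the iterates terminate in at most n steps; on general smooth functions BFGS typically achieves fast superlinear local convergence.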

Nonlinear Conjugate Gradient


The Fletcher-Reeves algorithm is a nonlinear conjugate gradient technique used to find the minimum of a differentiable function. It extends the linear conjugate gradient method, originally developed by Hestenes and Stiefel for solving linear systems, to general nonlinear objectives, and is named after R. Fletcher and C. M. Reeves. On smooth problems it is more efficient than steepest descent and has well-studied convergence properties.


Polak-Ribière is a nonlinear conjugate gradient algorithm used as an optimization technique for differentiable functions. The algorithm is named after the two mathematicians who developed it, Elijah Polak and Gérard Ribière. It is an improvement on the much older method of steepest descent, which dates back to Cauchy.

The Polak-Ribière algorithm is based on a formula for the update direction $$\mathbf{d}_{k+1}$$, the vector along which the next line search is conducted:

$$ \mathbf{d}_{k+1} = -\nabla f(\mathbf{x}_{k+1}) + \beta \mathbf{d}_k $$

where $$\beta$$ is a parameter that determines how much of the previous search direction is carried over into the current one. In the Polak-Ribière method it is computed from two successive gradients:

$$ \beta = \frac{\nabla f(\mathbf{x}_{k+1})^{\top}\left(\nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k)\right)}{\left\|\nabla f(\mathbf{x}_k)\right\|^{2}} $$

If $$\beta$$ is zero, the update direction is exactly the steepest descent direction. If $$\beta$$ is positive, the update direction is a combination of the steepest descent direction and the previous search direction, which lets the algorithm retain useful curvature information from earlier steps. When the formula yields a negative value, $$\beta$$ is commonly reset to zero, which restarts the search along the steepest descent direction.

The Polak-Ribière algorithm has several important properties that make it attractive for use in derivative free optimization:

  • It converges more rapidly than steepest descent for smooth objective functions.
  • It requires only gradient evaluations; no Hessian matrix needs to be stored or inverted.
  • Resetting $$\beta$$ to zero when the formula yields a negative value provides an automatic restart that improves robustness in practice.
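A sketch of the Polak-Ribière iteration (with the reset to zero described above) on a convex quadratic, where the exact line search has a closed form; A, b, and the starting point are illustrative:

```python
import numpy as np

A = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric positive definite
b = np.array([1.0, 1.0])

def grad(x):                             # gradient of 0.5 x^T A x - b^T x
    return A @ x - b

x = np.zeros(2)
g = grad(x)
d = -g                                   # first direction: steepest descent
for _ in range(20):
    if np.linalg.norm(g) < 1e-12:
        break
    alpha = -(g @ d) / (d @ A @ d)       # exact line search for a quadratic
    x = x + alpha * d
    g_new = grad(x)
    # Polak-Ribiere coefficient, reset to zero if negative ("PR+")
    beta = max(0.0, (g_new @ (g_new - g)) / (g @ g))
    d = -g_new + beta * d                # mix in the previous direction
    g = g_new

print(x)  # the minimizer, i.e. the solution of A x = b
```

On a quadratic this reduces to linear CG and terminates in at most n steps; the same loop applies to general smooth functions once the exact line search is replaced by a numerical one.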
Stochastic Gradient Descent

Stochastic Gradient Descent (SGD) is a simple yet very efficient approach to fitting linear models. SGD has been successful in large-scale machine learning problems for many years, and recent advances in SGD for non-linear optimization problems have shown that SGD is also competitive for high-dimensional problems with a large number of observations.
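A minimal SGD sketch for least-squares linear regression, where each update uses the gradient at a single randomly chosen observation; the synthetic data, learning rate, and epoch count are illustrative:

```python
import numpy as np

# Synthetic, noise-free data so the recovered weights have a clean target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
true_w = np.array([2.0, -1.0])
y = X @ true_w

w = np.zeros(2)
lr = 0.02                                # learning rate
for epoch in range(100):
    for i in rng.permutation(len(y)):    # reshuffle samples each epoch
        err = X[i] @ w - y[i]            # error on one sample
        w -= lr * err * X[i]             # single-sample gradient step

print(w)  # approaches [2.0, -1.0]
```

Because each step costs O(number of features) regardless of dataset size, SGD scales to datasets far too large for full-batch gradients.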
Coordinate Descent

Coordinate descent is an algorithm that solves a minimization problem by iteratively choosing a coordinate (or variable) and minimizing the corresponding univariate function over that coordinate, holding the other coordinates fixed. This process is repeated, cycling through the coordinates, until no coordinate update improves the objective and a local minimum is found.
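As a sketch of this cycle on a convex quadratic $$f(\mathbf{x}) = \tfrac{1}{2}\mathbf{x}^{\top}A\mathbf{x} - \mathbf{b}^{\top}\mathbf{x}$$, where each one-dimensional minimization can be solved exactly; A, b, and the sweep count are illustrative:

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, 1.0])

x = np.zeros(2)
for _ in range(100):                     # full sweeps over the coordinates
    for i in range(len(x)):
        # df/dx_i = A[i] @ x - b[i] = 0; solve for the minimizing x_i
        # while holding the other coordinates fixed.
        x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]

print(x)  # approaches the minimizer, i.e. the solution of A x = b
```

On this quadratic the sweep is exactly the Gauss-Seidel iteration, which converges for any symmetric positive definite A.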
