
5. Optimization

Optimization techniques play a crucial role in engineering design and decision-making processes. In this field, we often encounter complex systems in which multiple components must be fine-tuned to achieve the best possible performance. This article provides an overview of optimization techniques relevant to engineering, including linear and nonlinear optimization, constraint handling, and optimization algorithms such as gradient descent and genetic algorithms. We will also explore practical applications of these techniques in mechatronics, along with a brief introduction to using Python’s scipy.optimize module for solving optimization problems.

5.1. Linear and Nonlinear Optimization

Linear optimization, also known as linear programming, involves solving problems where the objective function and constraints are linear. In contrast, nonlinear optimization deals with problems where either the objective function, the constraints, or both are nonlinear.

In mechatronics engineering, linear optimization can be applied to problems such as resource allocation, production planning, and routing, whereas nonlinear optimization finds applications in system identification, parameter estimation, and trajectory optimization.

Consider a simple linear optimization problem where the objective is to minimize the cost of producing two products, x and y:

\[\min_{x, y} C = c_x x + c_y y\]

Subject to the constraints:

\[\begin{aligned} a_{11} x + a_{12} y &\geq b_1 \\ a_{21} x + a_{22} y &\geq b_2 \\ x &\geq 0, \quad y \geq 0 \end{aligned}\]
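
As a minimal sketch, such an LP can be solved with scipy.optimize.linprog; the cost coefficients and constraint data below are assumed for illustration, and the \(\geq\) constraints are negated because linprog expects the form \(A_{ub} x \leq b_{ub}\):

from scipy.optimize import linprog

# Assumed data: costs c_x = 2, c_y = 3; constraints x + 2y >= 4, 3x + y >= 6
c = [2, 3]
A_ub = [[-1, -2], [-3, -1]]   # negated to convert >= into <=
b_ub = [-4, -6]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("x, y =", result.x, "minimum cost =", result.fun)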

Nonlinear optimization problems can be more complex. For instance, in a mechatronics system, we might want to minimize the error between a desired and actual trajectory, which could involve nonlinear equations.
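
As a hedged illustration (the one-parameter model below is hypothetical), such a problem could be posed as a least-squares fit of a model trajectory to a desired one:

import numpy as np
from scipy.optimize import minimize

# Hypothetical example: estimate the frequency w of a model trajectory
# sin(w*t) that best matches a desired trajectory sin(1.5*t)
t = np.linspace(0, 2 * np.pi, 100)
desired = np.sin(1.5 * t)

def trajectory_error(w):
    actual = np.sin(w[0] * t)
    return np.sum((desired - actual)**2)

result = minimize(trajectory_error, x0=[1.0])
print("Estimated frequency:", result.x[0])  # should approach 1.5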

5.2. Constraint Handling

In engineering optimization problems, we often have constraints that must be satisfied for a solution to be feasible. Constraints can be equality constraints, where a particular function must equal a constant value, or inequality constraints, where a function must satisfy a bound, for example being greater than or equal to a constant value.

In mechatronics, constraints can arise due to physical limitations, such as actuator saturation, or design requirements, such as maximum allowable stress.

For example, in a robotic arm design problem, the constraints might include (a sketch of how these might be encoded follows the list):

  • Maximum allowable torque for each joint

  • Workspace limitations

  • Collision avoidance with obstacles
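
As a sketch of how such constraints might be encoded for scipy.optimize.minimize, assuming a hypothetical static-torque model and joint limits:

import numpy as np

TAU_MAX = 50.0  # hypothetical joint torque limit [N m]

def required_torque(q):
    # Placeholder gravity-load model for a two-link arm (assumed for illustration)
    return 40.0 * abs(np.cos(q[0])) + 15.0 * abs(np.cos(q[0] + q[1]))

def torque_constraint(q):
    # Feasible (>= 0) when the required torque stays below the limit
    return TAU_MAX - required_torque(q)

# Joint limits become bounds; the torque limit becomes an inequality constraint
bounds = [(-np.pi, np.pi), (-np.pi / 2, np.pi / 2)]
constraints = [{'type': 'ineq', 'fun': torque_constraint}]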

5.3. Optimization Algorithms

5.3.1. Gradient Descent

Gradient descent is an iterative optimization algorithm that can be used to find the minimum of a differentiable function. The basic idea is to update the parameters in the negative direction of the gradient:

\[\mathbf{x}_{k+1} = \mathbf{x}_k - \alpha \nabla f(\mathbf{x}_k)\]

Where \(\mathbf{x}_k\) is the current parameter vector, \(\alpha\) is the step size, and \(\nabla f(\mathbf{x}_k)\) is the gradient of the objective function at the current point.

In mechatronics, gradient descent can be applied to problems such as system identification, where we want to find the parameters of a model that minimize the error between the model’s output and the measured output of a real system.
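
A minimal sketch of the update rule, applied to a simple quadratic objective assumed here for illustration:

import numpy as np

def f(x):
    # Simple quadratic objective with its minimum at (1, -2)
    return (x[0] - 1)**2 + (x[1] + 2)**2

def grad_f(x):
    return np.array([2 * (x[0] - 1), 2 * (x[1] + 2)])

x = np.array([5.0, 5.0])  # initial guess
alpha = 0.1               # step size
for _ in range(100):
    x = x - alpha * grad_f(x)

print("Minimum found near:", x)  # should approach (1, -2)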

5.3.2. Genetic Algorithms

Genetic algorithms are optimization techniques inspired by the process of natural selection. They involve a population of candidate solutions that evolve over time to find an optimal or near-optimal solution. The main components of a genetic algorithm include selection, crossover, and mutation.

In mechatronics, genetic algorithms can be applied to problems such as multi-objective optimization, where we want to find the best trade-off between conflicting objectives, like maximizing performance and minimizing energy consumption. For instance, in the design of an energy-efficient robotic system, a genetic algorithm could be used to find the best combination of actuator sizing, control strategy, and trajectory planning.

5.4. Python and scipy.optimize for Engineering Optimization

Python’s scipy.optimize module provides a wide range of optimization algorithms and functions that can be employed to solve various optimization problems in mechatronics engineering. The module includes solvers for linear programming (linprog), nonlinear optimization (minimize), and constrained optimization (minimize with the constraints parameter), among others.

Consider the example of tuning the gains of a PID controller to minimize overshoot and settling time in a control system. We can use the minimize function from scipy.optimize to find the optimal PID gains. First, we define an objective function that combines the overshoot and settling time obtained for a given set of PID gains:

import numpy as np

def objective_function(gains, system):
    Kp, Ki, Kd = gains
    # Simulate the system with the given PID gains and
    # compute the resulting overshoot and settling time
    overshoot, settling_time = simulate_system(system, Kp, Ki, Kd)
    # Equal weighting of the two criteria; weights could be added to trade them off
    return overshoot + settling_time

Next, we can use the minimize function to find the optimal PID gains:

from scipy.optimize import minimize

initial_gains = np.array([1.0, 1.0, 1.0])
result = minimize(objective_function, initial_gains, args=(system,))
optimal_gains = result.x

In this example, system represents the control system to be optimized, and simulate_system is a function that simulates the system response given the PID gains and computes the overshoot and settling time.
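
The text leaves simulate_system unspecified. A minimal sketch, assuming a second-order plant \(G(s) = 1/(s^2 + s)\) under unity feedback (the system argument is ignored here), could use scipy.signal:

import numpy as np
from scipy import signal

def simulate_system(system, Kp, Ki, Kd):
    # PID controller C(s) = (Kd s^2 + Kp s + Ki)/s with plant G(s) = 1/(s^2 + s)
    # gives the closed-loop transfer function
    # T(s) = (Kd s^2 + Kp s + Ki) / (s^3 + (1 + Kd) s^2 + Kp s + Ki).
    # The 'system' argument is ignored in this sketch (the plant is hard-coded).
    tf = signal.TransferFunction([Kd, Kp, Ki], [1, 1 + Kd, Kp, Ki])
    t, y = signal.step(tf, T=np.linspace(0, 20, 2000))
    overshoot = max(0.0, y.max() - 1.0)
    # Settling time: last instant the step response leaves the 2% band
    outside = np.where(np.abs(y - 1.0) > 0.02)[0]
    settling_time = t[outside[-1]] if outside.size else 0.0
    return overshoot, settling_time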

5.5. Exercises

Example 1

Formulate a linear optimization problem to minimize the total power consumption of a mechatronics system with two motors, given their respective power consumptions and constraints on their operating speeds.

Solution:

Let \(P_1(x_1)\) and \(P_2(x_2)\) be the power consumption functions for motor 1 and motor 2, respectively. We want to minimize the total power consumption:

\[\min_{x_1, x_2} P(x_1, x_2) = P_1(x_1) + P_2(x_2)\]

Subject to the constraints on the operating speeds, for example:

\[\begin{aligned} x_{1,\min} &\leq x_1 \leq x_{1,\max} \\ x_{2,\min} &\leq x_2 \leq x_{2,\max} \end{aligned}\]

from scipy.optimize import linprog

# Sample power consumption functions: P1(x1) = c1 * x1, P2(x2) = c2 * x2
c1, c2 = 10, 20

# Operating speed constraints
x1_min, x1_max = 0, 100
x2_min, x2_max = 0, 50

# Define the coefficients for the objective function
c = [c1, c2]

# Define the inequality constraints matrix and vector
A = [[1, 0], [0, 1], [-1, 0], [0, -1]]
b = [x1_max, x2_max, -x1_min, -x2_min]

# Solve the linear programming problem
result = linprog(c, A_ub=A, b_ub=b)

# Extract the optimal solution
optimal_solution = result.x

print("Optimal solution:", optimal_solution)

This code defines a simple power consumption model for two motors (\(P_1(x_1) = c_1 x_1\) and \(P_2(x_2) = c_2 x_2\)) and solves the linear programming problem using the linprog function from the scipy.optimize module. Note that because both cost coefficients are positive, the optimum is simply the lower speed bounds (\(x_1 = x_2 = 0\)); an additional demand constraint makes the trade-off nontrivial, as sketched below.
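
Continuing from the code above, a sketch with a hypothetical combined-speed demand \(x_1 + x_2 \geq 60\) (negated for linprog's \(\leq\) form) forces a nontrivial allocation:

# Hypothetical demand constraint: x1 + x2 >= 60  ->  -x1 - x2 <= -60
A_demand = A + [[-1, -1]]
b_demand = b + [-60]

result = linprog(c, A_ub=A_demand, b_ub=b_demand)
print("Optimal speeds with demand:", result.x)  # the cheaper motor 1 covers the demand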

Example 2

Implement a Python function to compute the gradient of a nonlinear objective function for a mechatronics system, given the system’s parameters.

Solution:
import numpy as np

def objective_function(x):
    # Example: Rosenbrock function
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def gradient(obj_func, x, h=1e-6):
    # Forward-difference approximation of the gradient
    x = np.asarray(x, dtype=float)  # cast to float so the step h is not truncated
    n = len(x)
    grad = np.zeros(n)
    f_x = obj_func(x)
    for i in range(n):
        x_plus_h = x.copy()
        x_plus_h[i] += h
        grad[i] = (obj_func(x_plus_h) - f_x) / h
    return grad

# Test the gradient at the minimum (1, 1), where the gradient should be ~0
x_test = np.array([1.0, 1.0])
grad_test = gradient(objective_function, x_test)
print("Gradient:", grad_test)

Example 3

Write a Python function to implement a simple genetic algorithm for optimizing the design of a mechatronics system with conflicting objectives.

Solution:
import random
import numpy as np

# Reuses objective_function (the Rosenbrock function) defined in Example 2

def create_random_individual():
    return np.random.uniform(-5, 5, size=2)

def selection_func(population, fitness_values):
    return random.choices(population, weights=fitness_values, k=len(population))

def crossover_func(parent1, parent2):
    alpha = random.random()
    child1 = alpha * parent1 + (1 - alpha) * parent2
    child2 = (1 - alpha) * parent1 + alpha * parent2
    return child1, child2

def mutation_func(individual):
    mutation_rate = 0.1
    mutated_individual = individual.copy()
    for i in range(len(individual)):
        if random.random() < mutation_rate:
            mutated_individual[i] += random.gauss(0, 1)
    return mutated_individual

def fitness_func(individual):
    # random.choices requires non-negative weights, so map the minimization
    # objective to a positive fitness instead of simply negating it
    return 1.0 / (1.0 + objective_function(individual))

def simple_genetic_algorithm(population_size, num_generations, selection_func, crossover_func, mutation_func, fitness_func):
    # Initialize population
    population = [create_random_individual() for _ in range(population_size)]

    # Run the genetic algorithm for the specified number of generations
    for gen in range(num_generations):
        # Compute fitness for each individual
        fitness_values = [fitness_func(individual) for individual in population]

        # Perform selection
        selected_individuals = selection_func(population, fitness_values)

        # Apply crossover
        offspring = []
        for i in range(population_size // 2):
            parent1, parent2 = selected_individuals[2 * i], selected_individuals[2 * i + 1]
            child1, child2 = crossover_func(parent1, parent2)
            offspring.extend([child1, child2])

        # Apply mutation
        mutated_offspring = [mutation_func(individual) for individual in offspring]

        # Replace the current population with the new offspring
        population = mutated_offspring

    # Return the best solution found
    best_individual = max(population, key=fitness_func)
    return best_individual

# Test the genetic algorithm
best_solution = simple_genetic_algorithm(50, 100, selection_func, crossover_func, mutation_func, fitness_func)
print("Best solution:", best_solution)

Example 4

Use the Scipy.optimize module to solve a constrained optimization problem in mechatronics, involving equality and inequality constraints.

Solution:

In this example, we will minimize the Rosenbrock function subject to an equality constraint and an inequality constraint.

from scipy.optimize import minimize

def objective_function(x):
    # Example: Rosenbrock function
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

def constraint1(x):
    # Example: x0 + x1 - 3 = 0 (equality constraint)
    return x[0] + x[1] - 3

def constraint2(x):
    # Example: x1 - x0**2 >= 0 (inequality constraint)
    return x[1] - x[0]**2

# Define the constraints as a list of dictionaries
constraints = [{'type': 'eq', 'fun': constraint1},
               {'type': 'ineq', 'fun': constraint2}]

# Choose an initial point
initial_point = [0, 0]

# Perform the optimization (SciPy selects the SLSQP method when constraints are given)
result = minimize(objective_function, initial_point, constraints=constraints)

# Extract the optimal solution
optimal_solution = result.x

print("Optimal solution:", optimal_solution)

This code defines the Rosenbrock function as the objective function and adds two constraints, an equality constraint (\(x_0 + x_1 = 3\)) and an inequality constraint (\(x_1 \ge x_0^2\)). The problem is solved using the minimize function from the scipy.optimize module, and the optimal solution is printed.


