Distributed Optimization Algorithms for Networked Systems
[Abstract]

Distributed optimization methods allow us to decompose an optimization problem into smaller, more manageable subproblems that are solved in parallel. For this reason, they are widely used to solve large-scale problems arising in areas as diverse as wireless communications, optimal control, machine learning, artificial intelligence, computational biology, finance, and statistics, to name a few. Moreover, distributed algorithms avoid the cost and fragility associated with centralized coordination, and provide better privacy for the autonomous decision makers. These are desirable properties, especially in applications involving networked robotics, communication or sensor networks, and power distribution systems.

In this thesis we propose the Accelerated Distributed Augmented Lagrangians (ADAL) algorithm, a novel decomposition method for convex optimization problems with certain separability structure. The method is based on the augmented Lagrangian framework and addresses problems that involve multiple agents optimizing a separable convex objective function subject to convex local constraints and linear coupling constraints. We establish the convergence of ADAL and also show that it has a worst-case O(1/k) convergence rate, where k denotes the number of iterations.
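
To make the problem class concrete, the following display is a minimal sketch in generic notation; the symbols f_i, X_i, A_i, b, and the penalty parameter rho are illustrative and are not taken from the thesis itself.

    \min_{x_1,\dots,x_N} \ \sum_{i=1}^{N} f_i(x_i)
    \quad \text{s.t.} \quad \sum_{i=1}^{N} A_i x_i = b, \qquad x_i \in X_i, \ i = 1,\dots,N,

    L_\rho(x,\lambda) = \sum_{i=1}^{N} f_i(x_i)
    + \lambda^\top \Big( \sum_{i=1}^{N} A_i x_i - b \Big)
    + \frac{\rho}{2} \Big\| \sum_{i=1}^{N} A_i x_i - b \Big\|^2 .

Here each agent i keeps its own block x_i and the agents are coupled only through the linear constraint, which is the separability structure that augmented Lagrangian decomposition methods exploit.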

Moreover, we show that ADAL converges to a local minimum of the problem for cases with non-convex objective functions. This is the first published work that formally establishes the convergence of a distributed augmented Lagrangian method for non-convex optimization problems. An alternative way to select the stepsizes used in the algorithm is also discussed. These two contributions are independent of each other, meaning that convergence of the non-convex ADAL method can still be shown using the stepsizes from the convex case, and, similarly, convergence of the convex ADAL method can be shown using the stepsizes proposed in the non-convex proof.

Furthermore, we consider cases where the distributed algorithm needs to operate in the presence of uncertainty and noise, and show that the generated sequences of primal and dual variables converge to their respective optimal sets almost surely. In particular, we are concerned with scenarios where: i) the local computation steps are inexact or are performed in the presence of uncertainty, and ii) the message exchanges between agents are corrupted by noise. In this case, the proposed scheme can be classified as a distributed stochastic approximation method. Compared to the existing literature in this area, our work is the first to utilize the augmented Lagrangian framework. Moreover, the method allows us to solve a richer class of problems than existing distributed stochastic approximation methods, which consider only consensus constraints.
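
As a rough illustration of this stochastic setting, the Python sketch below runs a generic noisy primal-dual loop: the coupling residual exchanged between agents is perturbed by additive noise, the local gradient steps are inexact, and a diminishing stepsize averages the noise out. It is not the ADAL update itself; the quadratic objectives, noise levels, and variable names are illustrative assumptions.

    import numpy as np

    # Minimal sketch of a distributed stochastic-approximation style loop
    # (illustrative only, not the ADAL update): agents hold local blocks x_i
    # coupled through sum_i A_i x_i = b, local steps use noisy gradients, and
    # the residual "message" is corrupted by noise before it is used.
    rng = np.random.default_rng(0)

    N, n, m = 3, 4, 2                                    # agents, local dim, coupling constraints
    A = [rng.standard_normal((m, n)) for _ in range(N)]  # coupling matrices (synthetic data)
    b = rng.standard_normal(m)
    Q = [np.eye(n) for _ in range(N)]                    # local objectives f_i(x) = 0.5 * x' Q_i x

    x = [np.zeros(n) for _ in range(N)]                  # local primal variables
    lam = np.zeros(m)                                    # dual variable of the coupling constraint
    rho = 1.0                                            # penalty parameter

    for k in range(1, 2001):
        alpha = 1.0 / k                                  # diminishing stepsize
        residual = sum(A[i] @ x[i] for i in range(N)) - b
        noisy_residual = residual + 0.05 * rng.standard_normal(m)  # noisy message exchange

        for i in range(N):
            # Inexact local step: gradient of the local augmented Lagrangian term,
            # perturbed by noise to model uncertainty in the local computation.
            grad = Q[i] @ x[i] + A[i].T @ (lam + rho * noisy_residual)
            grad = grad + 0.05 * rng.standard_normal(n)
            x[i] = x[i] - alpha * grad

        lam = lam + alpha * rho * noisy_residual         # dual ascent on the (noisy) residual

    print("final constraint violation:",
          np.linalg.norm(sum(A[i] @ x[i] for i in range(N)) - b))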

Extensive numerical experiments have been carried out to validate the effectiveness of the proposed method in all the areas of the aforementioned theoretical contributions. We examine problems in convex, non-convex, and stochastic settings where uncertainties and noise affect the execution of the algorithm. For the convex cases, we present applications of ADAL to certain popular network optimization problems, as well as to a two-stage stochastic optimization problem. The simulation results suggest that the proposed method outperforms the state-of-the-art distributed augmented Lagrangian methods known in the literature. For the non-convex cases, we perform simulations on certain simple non-convex problems to establish that ADAL indeed converges to non-trivial local solutions of the problems; in comparison, a straightforward implementation of the other distributed augmented Lagrangian methods on the same problems does not lead to convergence. For the stochastic setting, we present simulation results of ADAL applied to network optimization problems and examine the effect that noise and uncertainties have on the convergence behavior of the method.

As an extended and more involved application, we also consider the problem of relay cooperative beamforming in wireless communication systems. Specifically, we study the scenario of a multi-cluster network in which each cluster contains multiple single-antenna source-destination pairs that communicate simultaneously over the same channel. The communications are supported by cooperating amplify-and-forward relays, which perform beamforming. Since the resulting problem is non-convex, we propose an approximate convex reformulation. Based on ADAL, we also discuss two different ways to obtain a distributed solution that allows for autonomous computation of the optimal beamforming decisions by each cluster, while taking into account intra- and inter-cluster interference effects.

Our goal in this thesis is to advance the state of the art in distributed optimization by proposing methods that combine fast convergence, wide applicability, ease of implementation, and low computational complexity, and that are robust to delays, uncertainty in the problem parameters, noise corruption in the message exchanges, and inexact computations.

[Subject classification] Operations research