Distributed Computing (2008)
as a single event. Also, whether the message multicast is counted as
a single message or as multiple messages needs to be clarified. This would
depend on whether or not hardware multicasting is used by the lower layers
of the network protocol stack.
For shared memory systems, the message complexity is not an issue if
the shared memory is not being provided by the distributed shared memory
abstraction over a message-passing system. The following additional
changes in the emphasis on the usual complexity measures would need to be
considered:
• The size of shared memory, as opposed to the size of local memory, is important. The justification is that shared memory is expensive, local memory is not.
• The number of synchronization operations using synchronization variables is a useful metric because it affects the time complexity.
5.4 Program structure
Hoare, who pioneered programming language support for concurrent processes, designed communicating sequential processes (CSP), which allows communicating processes to synchronize efficiently. The typical program structure for any process in a distributed application is based on CSP's repetitive command over the alternative command on multiple guarded commands, and is as follows:
∗[ G1 −→ CL1 || G2 −→ CL2 || · · · || Gk −→ CLk ]
The repetitive command (denoted by "∗") denotes an infinite loop. Inside the repetitive command is the alternative command over guarded commands. The alternative command, denoted by a sequence of "||" separating guarded commands, specifies execution of exactly one of its constituent guarded commands. The guarded command has the syntax "G −→ CL", where the guard G is a boolean expression and CL is a list of commands that are only executed if G is true. The guard expression may contain a term to check if a message from a/any other process has arrived. The alternative command over the
guarded commands fails if all the guards fail; if more than one guard is true,
one of those successful guarded commands is nondeterministically chosen for
execution. When a guarded command Gm −→ CLm does get executed, the execution of CLm is atomic with the execution of Gm.
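To make this control structure concrete, the following Python sketch (purely illustrative; the names repetitive_command, inbox_has, and handle_query are hypothetical and are not part of CSP or of this book's notation) mimics a repetitive command over guarded commands: all guards are evaluated, the construct terminates when every guard is false, and otherwise one true guard is chosen nondeterministically and its command list is run.

import random

def repetitive_command(guarded_commands):
    # guarded_commands: a list of (guard, command_list) pairs, where guard()
    # returns a boolean and command_list() runs the commands CL for that guard.
    while True:                                  # the repetitive command "*"
        enabled = [cl for g, cl in guarded_commands if g()]
        if not enabled:                          # the alternative command fails
            break                                # when all guards are false
        random.choice(enabled)()                 # nondeterministic choice among
                                                 # the successful guarded commands
        # (in CSP the chosen guard and its CL execute atomically; this
        # single-threaded sketch approximates that)

# Hypothetical usage: a guard testing whether a message of a given type arrived.
# repetitive_command([(lambda: inbox_has("QUERY"), lambda: handle_query())])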
The structure of distributed programs has similar semantics to that of CSP
although the syntax has evolved to something very different. The format for
the pseudo-code used in this book is as indicated below. Algorithm 5.2 serves
to illustrate this format.
1. The process-local variables whose scope is global to the process, and
message types, are declared first.
2. Shared variables, if any (for distributed shared memory systems), are explicitly labeled as such.
3. This is followed by any initialization code.
4. The repetitive and the alternative commands are not explicitly shown.
5. The guarded commands are shown as explicit modules or procedures (e.g.,
lines 1–4 in Algorithm 5.2). The guard usually checks for the arrival of
a message of a certain type, perhaps with additional conditions on some
parameter values and other local variables.
6. The body of the procedure gives the list of commands to be executed if
the guard evaluates to true.
7. Process termination may be explicitly stated in the body of any procedure(s).
8. The symbol ⊥ is used to denote an undefined value. When used in a comparison, its value is −∞.
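As a rough illustration of conventions 1–8 above, the following Python sketch shows how a process written in this style maps onto a set of message handlers (the class and method names are hypothetical and this is not Algorithm 5.2; it only mirrors the conventions, assuming some runtime dispatches each arriving message to the matching handler):

BOTTOM = float("-inf")          # convention 8: the undefined value ⊥, which
                                # compares as −∞

class Process:
    def __init__(self, pid, neighbors):
        # convention 1: process-local variables, global in scope to the process
        self.pid = pid
        self.neighbors = neighbors
        # convention 3: initialization code (no shared variables here, so
        # convention 2 does not arise)
        self.parent = BOTTOM
        self.terminated = False  # convention 7: termination can be set anywhere

    # conventions 4-6: the repetitive and alternative commands stay implicit;
    # each guarded command becomes a procedure whose guard is the arrival of a
    # message of a certain type, possibly with extra conditions on local state
    def on_query(self, sender):
        if self.parent == BOTTOM:    # guard condition on a local variable
            self.parent = sender     # body: commands run when the guard holds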
5.5 Elementary graph algorithms
This section examines elementary distributed algorithms on graphs. The reader
is assumed to be familiar with the centralized algorithms to solve these basic
graph problems. The distributed algorithms here introduce the reader to the
difficulty of designing distributed algorithms wherein each node has only a partial view of the graph (system), which is confined to its immediate neighbors. Further, a node can communicate with only its immediate neighbors along the incident edges. Unless otherwise specified, we assume unweighted undirected edges, and asynchronous execution by the processors. Communication is by message-passing on the edges.
The first algorithm is a synchronous spanning tree algorithm. The next three are asynchronous algorithms to construct spanning trees. These elementary algorithms are important both theoretically and from a practical perspective, because spanning trees are a very efficient form of information distribution and collection in distributed systems.
5.5.1 Synchronous single-initiator spanning tree algorithm using flooding
The code for all processes is not only symmetrical, but also proceeds
in rounds. This algorithm assumes a designated root node, root, which
initiates the algorithm. The pseudo-code for each process Pi is shown in
Algorithm 5.1. The root initiates a flooding of QUERY messages in the graph
to identify tree edges. The parent of a node is that node from which a QUERY
is first received; if multiple QUERYs are received in the same round, one of
the senders is randomly chosen as the parent. Exercise 5.1 asks you to modify
(local variables)
int visited, depth ← 0
int parent ← ⊥
set of int Neighbors ← set of neighbors
(message types)
QUERY

(1) if i = root then
(2)      visited ← 1;
(3)      depth ← 0;
(4)      send QUERY to Neighbors;
(5) for round = 1 to diameter do
(6)      if visited = 0 then
(7)           if any QUERY messages arrive then
(8)                parent ← randomly select a node from which QUERY was received;
(9)                visited ← 1;
(10)               depth ← round;
(11)               send QUERY to Neighbors \ {senders of QUERYs received in this round};
(12)     delete any QUERY messages that arrived in this round.

Algorithm 5.1 Spanning tree algorithm: the synchronous breadth-first search (BFS) spanning tree algorithm. The code shown is for processor Pi, 1 ≤ i ≤ n.
the algorithm so that each node identifies not only its parent node but also all
its children nodes.
Example Figure 5.2 shows an example execution of the algorithm with node
A as initiator. The resulting tree is shown in boldface, and the round numbers
in which the QUERY messages are sent are indicated next to the messages.
The reader should trace through this example for clarity. For example, at the
end of round 2, E receives a QUERY from B and F and randomly chooses
F as the parent. A total of nine QUERY messages are sent in the network, which has eight links.
Figure 5.2 Example execution of the synchronous BFS spanning tree algorithm (Algorithm 5.1). [Figure: the graph on nodes A–F with the resulting tree edges shown in boldface; each QUERY message is labeled with the round number, (1) to (3), in which it is sent.]
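The synchronous behaviour of Algorithm 5.1 can be simulated round by round in a single thread. The Python sketch below is only an illustration (the function name, the adjacency-list representation, and the explicit diameter argument are assumptions, not the book's code); it delivers the QUERYs sent in one round at the start of the next round and breaks ties randomly, as in line (8).

import random

def sync_bfs_spanning_tree(adj, root, diameter):
    # adj: dict mapping each node to the list of its neighbors (undirected graph)
    visited = {v: (v == root) for v in adj}     # lines (1)-(2): only root starts visited
    parent = {v: None for v in adj}             # None plays the role of ⊥
    depth = {v: 0 for v in adj}                 # line (3)
    outbox = {v: [] for v in adj}               # QUERYs to deliver next round
    outbox[root] = list(adj[root])              # line (4): the root floods QUERY

    for rnd in range(1, diameter + 1):          # line (5)
        inbox = {v: [] for v in adj}
        for sender, targets in outbox.items():  # deliver this round's QUERYs
            for t in targets:
                inbox[t].append(sender)
        outbox = {v: [] for v in adj}
        for v in adj:
            if not visited[v] and inbox[v]:     # lines (6)-(7)
                parent[v] = random.choice(inbox[v])  # line (8): random tie-break
                visited[v] = True                    # line (9)
                depth[v] = rnd                       # line (10)
                outbox[v] = [w for w in adj[v]       # line (11): all neighbors
                             if w not in inbox[v]]   # except this round's senders
        # line (12) is implicit: QUERYs not read this round are simply discarded
    return parent, depth

For example, calling sync_bfs_spanning_tree(adj, "A", 3) on an adjacency-list encoding of the graph in Figure 5.2 reproduces the example above (nine QUERYs over the eight links), with E's parent chosen randomly between B and F.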
Termination
The algorithm terminates after all the rounds are executed. It is straightforward
to modify the algorithm so that a process exits after the round in which it sets
its parent variable (see Exercise 5.1).
Complexity
• The local space complexity at a node is of the order of the degree of edge incidence.
• The local time complexity at a node is of the order of (diameter + degree of edge incidence).
• The global space complexity is the sum of the local space complexities.
• This algorithm sends at least one message per edge, and at most two messages per edge. Thus the number of messages is between l and 2l.
• The message time complexity is d rounds or message hops.
The spanning tree obtained is a breadth-first search (BFS) tree. Although the code is the same for all processes, the predesignated root executes different logic to begin with. Hence, in the strictest sense, the algorithm is asymmetric.
5.5.2 Asynchronous single-initiator spanning tree algorithm using flooding
This algorithm assumes a designated root node which initiates the algorithm.
The pseudo-code for each process Pi is shown in Algorithm 5.2. The
root initiates a flooding of QUERY messages in the graph to identify tree
edges. The parent of a node is that node from which a QUERY is first
received; an ACCEPT message is sent in response to such a QUERY. Other
QUERY messages received are replied