Parallel Programming: Vector Clocks

Groups of nodes act together and can send messages (multicast) to a group, and the messages are received by all of the nodes in that group (Sandén, 2011).  In a broadcast, every node in the system receives the same message.  In a multicast, the messages can reach the nodes in a group in different orderings: First-In-First-Out (FIFO) order, causal order, or total order (an atomic multicast, if it is also reliable).

Per Sandén (2011), a causal-order multicast can occur if the source is a member of the group, but it cannot span across groups. Two-phase, total-order multicast systems can look like vector clocks, but they are not; each message send or receive increments the logical clock by one as the nodes communicate.
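
To make the bookkeeping concrete, here is a minimal vector clock sketch in Python. The class and method names are illustrative assumptions, not from Sandén (2011): each node increments its own entry when it sends, and merges element-wise maxima (then counts the receive) when it receives.

    class VectorClock:
        """Minimal vector clock for a fixed group of num_nodes nodes."""

        def __init__(self, node_id, num_nodes):
            self.node_id = node_id
            self.clock = [0] * num_nodes

        def send(self):
            # Sending is a local event: tick our own entry and attach
            # a copy of the clock to the outgoing message.
            self.clock[self.node_id] += 1
            return list(self.clock)

        def receive(self, msg_clock):
            # Merge element-wise maxima, then count the receive event.
            self.clock = [max(a, b) for a, b in zip(self.clock, msg_clock)]
            self.clock[self.node_id] += 1

For example, if node 0 sends its first message, its clock becomes [1, 0]; when node 1 receives it, node 1's clock becomes [1, 1], which is how vector clocks capture the happened-before relation.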

Below is an example of such a two-phase exchange, with suggested and commit times for two messages, m1 and m2:

To G1

  • m1 suggested time at 6 and m2 suggested time at 11
  • m1 commit time at 7 and m2 commit time at 12

To G2

  • m1 suggested time at 7 and m2 suggested time at 12
  • m1 commit time at 8 and m2 commit time at 13

[Figure: vector clock]
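
A minimal sketch of this two-phase exchange in Python follows. The names (Member, two_phase_multicast) and the single-process simulation are assumptions for illustration; the clocks follow the rule above of one tick per message received.

    class Member:
        """One group member with a simple logical clock."""

        def __init__(self, clock=0):
            self.clock = clock

        def on_message(self):
            # Phase 1: receiving the multicast counts as one tick;
            # the new value is this member's suggested time.
            self.clock += 1
            return self.clock

        def on_commit(self):
            # Phase 2: receiving the commit counts as one more tick;
            # the new value is this member's commit time.
            self.clock += 1
            return self.clock

    def two_phase_multicast(members):
        # The sender gathers suggested times, agrees on the maximum
        # as the delivery position, then sends the commit round.
        suggested = [m.on_message() for m in members]
        agreed = max(suggested)
        committed = [m.on_commit() for m in members]
        return agreed, committed

With starting clocks of 5 for G1 and 6 for G2, the sketch reproduces the m1 row above: suggested times 6 and 7, then commit times 7 and 8.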

Reference

Sandén, B. I. (2011). Design of multithreaded software: The entity-life modeling approach. Hoboken, NJ: Wiley.

Parallel Programming: State diagram of Maekawa’s voting algorithm

Sandén (2011) defines state diagrams as a way to show the possible states an object can be in.  He also defines events as action verbs that appear on an arrow between two states (if an event does not change the state, it can be listed inside the state).  An action, in turn, can have conditions attached to it.

Thus, a state diagram shows the transitions from state to state as events occur.  An event usually has many occurrences, and each occurrence is instantaneous.  A super-state can encompass multiple states (Sandén, 2011).  An activity is an operation that takes time and is marked with the keyword “do /”.

The goal of this post was to make a state diagram of Maekawa’s voting algorithm, from the “Maekawa’s algorithm” topic within the “Distributed mutual exclusion” set. This can be done in various ways. One option is the following 5 states:

  • Released and not voted
  • Released and voted
  • Wanted and not voted
  • Wanted and voted
  • Held and voted (for self)

Events are:

  • request_received, etc., for messages arriving from other nodes
  • acquire for when the local node wants the lock
  • release for when the local node gives up the lock.

A possible solution is shown below:

[Figure: state diagram of Maekawa’s voting algorithm]
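
As a cross-check of the diagram, here is a minimal state-machine sketch in Python. The transition details (the deferred-request queue and the all_votes_received event) are assumptions of one simplified reading of Maekawa’s algorithm, not a full implementation:

    from enum import Enum, auto

    class State(Enum):
        RELEASED_NOT_VOTED = auto()
        RELEASED_VOTED = auto()
        WANTED_NOT_VOTED = auto()
        WANTED_VOTED = auto()
        HELD_VOTED = auto()  # holding the lock, voted for self

    class Node:
        """One voter in Maekawa's algorithm (simplified sketch)."""

        def __init__(self):
            self.state = State.RELEASED_NOT_VOTED
            self.deferred = []  # requests queued while our vote is out

        def acquire(self):
            # The local node wants the lock; voting status is unchanged.
            if self.state is State.RELEASED_NOT_VOTED:
                self.state = State.WANTED_NOT_VOTED
            elif self.state is State.RELEASED_VOTED:
                self.state = State.WANTED_VOTED

        def request_received(self, requester):
            # Cast our single vote if it is free; otherwise defer.
            if self.state is State.RELEASED_NOT_VOTED:
                self.state = State.RELEASED_VOTED
            elif self.state is State.WANTED_NOT_VOTED:
                self.state = State.WANTED_VOTED
            else:
                self.deferred.append(requester)

        def all_votes_received(self):
            # Assumed event: the whole voting set (self included) has
            # voted for us, so we enter the critical section.
            if self.state in (State.WANTED_NOT_VOTED, State.WANTED_VOTED):
                self.state = State.HELD_VOTED

        def release(self):
            # Give up the lock; our vote goes to a deferred request,
            # if any, otherwise it becomes free again.
            if self.state is State.HELD_VOTED:
                self.state = (State.RELEASED_VOTED if self.deferred
                              else State.RELEASED_NOT_VOTED)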

Reference

Sandén, B. I. (2011). Design of multithreaded software: The entity-life modeling approach. Hoboken, NJ: Wiley.