
Distributed algorithmic mechanism design


Distributed algorithmic mechanism design (DAMD) is an extension of algorithmic mechanism design.

DAMD differs from algorithmic mechanism design in that the algorithm is computed in a distributed manner rather than by a central authority. This can greatly reduce computation time, since the burden is shared by all agents within a network.

One major obstacle in DAMD is ensuring that agents reveal their true costs or preferences related to a given scenario. Agents will often lie if doing so improves their own utility. DAMD poses new challenges, since one can no longer assume an obedient networking and mechanism infrastructure: rational players control the message paths and the mechanism computation.

Game-theoretic model

Game theory and distributed computing both deal with systems of many agents, in which the agents may pursue different goals. However, they have different focuses. For instance, one concern of distributed computing is proving the correctness of algorithms that tolerate faulty agents and concurrency among agents, whereas in game theory the focus is on devising a strategy that leads to an equilibrium in the system.[1]

Nash equilibrium

Nash equilibrium is the most commonly used notion of equilibrium in game theory. However, it does not deal with faulty or unexpected behavior. A protocol that reaches a Nash equilibrium is guaranteed to execute correctly in the face of rational agents, with no agent being able to improve its utility by deviating from the protocol.[2]
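
To make the notion concrete, the following minimal Python sketch (not from the article; the payoffs are a standard prisoner's dilemma, chosen purely for illustration) enumerates the pure-strategy Nash equilibria of a two-player game given as payoff matrices:

    # Minimal sketch: enumerate pure-strategy Nash equilibria of a
    # two-player game. Payoff numbers are illustrative.

    def pure_nash_equilibria(payoff_a, payoff_b):
        """Return all (row, col) cells where neither player can gain
        by unilaterally switching to another pure strategy."""
        rows, cols = len(payoff_a), len(payoff_a[0])
        equilibria = []
        for r in range(rows):
            for c in range(cols):
                # Row player cannot improve by changing its row.
                row_best = all(payoff_a[r][c] >= payoff_a[r2][c] for r2 in range(rows))
                # Column player cannot improve by changing its column.
                col_best = all(payoff_b[r][c] >= payoff_b[r][c2] for c2 in range(cols))
                if row_best and col_best:
                    equilibria.append((r, c))
        return equilibria

    # Prisoner's dilemma: strategy 0 = cooperate, 1 = defect.
    A = [[-1, -3], [0, -2]]   # row player's payoffs
    B = [[-1, 0], [-3, -2]]   # column player's payoffs
    print(pure_nash_equilibria(A, B))   # [(1, 1)]: mutual defection

The equilibrium found is exactly the profile from which no player can profitably deviate alone; nothing in the definition protects against a player that is faulty rather than rational.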

Solution preference

Unlike in AMD, there is no trusted center, so the mechanism must be computed by the agents themselves. However, there is a central enforcer whose responsibility is to force the agents to decide on an outcome. The enforcer can impose severe penalties if the agents do not agree on an outcome or if the algorithm fails. In other words, as Yehuda Afek et al. put it, “agents cannot gain if the algorithm fails”.[3] As a result, though agents have preferences over outcomes, they have no incentive to make the algorithm fail.
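
As a rough illustration (a minimal sketch with made-up utility numbers, not taken from the cited paper), the incentive argument is simply that the penalty for failure is worse than any agent's least-preferred agreed outcome:

    # Minimal sketch of the enforcer's incentive argument.
    # FAIL_PENALTY and all utility numbers are assumptions.

    FAIL_PENALTY = -100   # enforcer's penalty for a failed run

    # Each agent's utility for each possible agreed outcome.
    preferences = {
        "agent1": {"outcome_x": 5, "outcome_y": 1},
        "agent2": {"outcome_x": 2, "outcome_y": 6},
    }

    for agent, prefs in preferences.items():
        # Even the least-preferred agreed outcome beats causing a
        # failure, so no agent has an incentive to fail the algorithm.
        assert min(prefs.values()) > FAIL_PENALTY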

Truthfulness

A mechanism is considered truthful if agents gain nothing by lying about their own or other agents’ values. A good example is a leader election algorithm that selects a computation server within a network. The algorithm specifies that agents should send their total computational power to each other, after which the most powerful agent (the leader) is chosen to complete the task. In this algorithm agents may lie about their true computational power, because they risk being tasked with CPU-intensive jobs that take power away from their own local jobs. This problem can be overcome with truthful mechanisms, which, without any a priori knowledge of the existing data and the inputs of each agent, cause each agent to respond truthfully to requests.[4]
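
To see why the naive scheme is manipulable, consider this minimal Python sketch (agent names and power values are illustrative): the reported power alone determines who is tasked, so an agent can avoid work simply by under-reporting.

    # Minimal sketch: naive election of a compute server by
    # self-reported power. Names and numbers are illustrative.

    def elect_compute_server(reports):
        """reports: dict mapping agent -> self-reported power.
        Returns the agent with the highest report."""
        return max(reports, key=reports.get)

    true_power = {"a": 8, "b": 5, "c": 3}
    print(elect_compute_server(true_power))   # 'a' is tasked

    # Agent 'a' dislikes being tasked, so it under-reports:
    lied = dict(true_power, a=1)
    print(elect_compute_server(lied))         # now 'b' is tasked

Nothing in this protocol detects or punishes the lie, so truthful reporting is not an equilibrium here.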

A well-known truthful mechanism in game theory is the Vickrey auction.
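
As a concrete illustration (a minimal sketch, with made-up bidders and valuations), the following Python code runs a sealed-bid second-price auction: the highest bidder wins but pays only the second-highest bid, which is why reporting one's true valuation is a dominant strategy.

    # Minimal sketch of a sealed-bid second-price (Vickrey) auction.
    # Bidder names and valuations are illustrative.

    def vickrey_auction(bids):
        """bids: dict mapping bidder -> reported bid. Returns
        (winner, price), where price is the second-highest bid."""
        ranked = sorted(bids, key=bids.get, reverse=True)
        return ranked[0], bids[ranked[1]]

    true_values = {"alice": 10, "bob": 7, "carol": 4}
    winner, price = vickrey_auction(true_values)
    print(winner, price)   # alice wins and pays 7 (bob's bid)

Alice's utility is her value minus the price, 10 - 7 = 3, whether she bids 10 or 100; bidding below 7 would only make her lose. Hence no bid other than her true value can ever do better.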

Classic distributed computing problems

Leader election (completely connected network, synchronous case)

Leader election is a fundamental problem in distributed computing, and there are numerous protocols to solve it. System agents are assumed to be rational, and therefore prefer having a leader to not having one. The agents may also have different preferences regarding who becomes the leader (an agent may prefer to become the leader itself). Standard protocols may choose leaders based on the lowest or highest ID of the system agents. However, since agents have an incentive to lie about their ID in order to improve their utility, such protocols are rendered useless.
A protocol for leader election in the presence of rational agents, introduced by Abraham et al., is the following:

  • At round 1, each agent i sends its id to every other agent;
  • At round 2, agent i sends each other agent j the set of ids it has received (including its own). If the sets received by agent i are not all identical, or if i does not receive an id from some agent, then i sets its output to Null and leader election fails. Otherwise, let n be the cardinality of the set of ids.
  • At round 3, each agent i chooses a random number N_i in {0, ..., n-1} and sends it to all the other agents.
  • Each agent i then computes N = (N_1 + N_2 + ... + N_n) mod n and takes the agent with the N-th highest id in the set (counting from zero) to be the leader. (If some agent j does not send i a random number, then i sets its output to Null.)

This protocol correctly elects a leader and reaches an equilibrium: it is truthful, since no agent can benefit by lying about its input.[5] A minimal simulation of the protocol is sketched below.
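
The following Python sketch simulates the protocol among honest agents on a fully connected synchronous network (the ids are illustrative, and rounds 1-2 are abstracted into a common id set):

    import random

    # Minimal sketch of the protocol above; assumes rounds 1 and 2
    # succeeded, i.e. all agents hold the same set of ids.

    def elect_leader(ids, rng=random):
        """ids: the common set of agent ids. Returns the leader's id."""
        ranked = sorted(ids, reverse=True)   # ids from highest to lowest
        n = len(ranked)
        # Round 3: each agent i draws N_i uniformly from {0, ..., n-1}.
        contributions = [rng.randrange(n) for _ in range(n)]
        # Round 4: N = (N_1 + ... + N_n) mod n; the agent with the
        # N-th highest id (counting from zero) becomes the leader.
        return ranked[sum(contributions) % n]

    print(elect_leader({17, 42, 5, 23}))   # prints one of the four ids

Intuitively, the sum modulo n is uniform whenever at least one contribution is uniform and independent of the others, which is what removes any single agent's ability to steer the election.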


References

  1. ^ Halpern, Joseph Y. (2008). "Computer science and game theory: A brief survey". The New Palgrave Dictionary of Economics.
  2. ^ Osborne, Martin J.; Rubinstein, Ariel (1994). A Course in Game Theory. MIT Press.
  3. ^ Afek, Yehuda; Ginzberg, Yehonatan; Feibish, Shir Landau; Sulamy, Moshe (2014). "Distributed computing building blocks for rational agents". PODC '14 Proceedings of the 2014 ACM symposium on Principles of distributed computing.
  4. ^ Shneidman, Jeffrey; Parkes, David (2004). "Specification faithfulness in networks with rational nodes". Twenty-third Annual ACM Symposium on Principles of Distributed Computing (PODC).
  5. ^ Abraham, Ittai; Dolev, Danny (2013). "Distributed Protocols for Leader Election: a Game-Theoretic Perspective". DISC: 61–75.
  • Distributed Algorithmic Mechanism Design: Recent Results and Future Directions
  • Distributed algorithmic mechanism design and network security
  • Service Allocation in Selfish Mobile Ad Hoc Networks Using Vickrey Auction