Distributed algorithmic mechanism design


Distributed algorithmic mechanism design (DAMD) is an extension of algorithmic mechanism design.

DAMD differs from algorithmic mechanism design in that the algorithm is computed in a distributed manner rather than by a central authority. This can greatly reduce computation time, since the burden is shared by all agents within a network.

One major obstacle in DAMD is ensuring that agents reveal their true costs or preferences in a given scenario; agents will often lie if doing so improves their own utility. DAMD poses new challenges, since one can no longer assume an obedient networking and mechanism infrastructure: rational players control the message paths and the mechanism computation.

Game-theoretic model

Distributed computing and game theory are interested in much the same problem: systems with many agents, where the agents face uncertainty and may have different goals. However, in distributed computing the focus has been on problems such as fault tolerance, asynchrony, and proving the correctness of algorithms, while in game theory the focus has been on strategic concerns.[1]

Nash equilibrium

The Nash equilibrium is the most commonly used notion of equilibrium in game theory. However, a Nash equilibrium does not deal with faulty or unexpected behavior. A protocol that reaches Nash equilibrium is guaranteed to execute correctly in the face of rational agents, since no agent can improve its utility by deviating from the protocol.
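
To make the deviation condition concrete, the following Python sketch enumerates the strategy profiles of a two-player game and keeps those from which neither player can profitably deviate; the Prisoner's Dilemma payoffs used here are a standard textbook illustration, not taken from this article.

    def pure_nash_equilibria(payoff_a, payoff_b):
        """Return the (row, col) profiles where neither player can gain
        by unilaterally deviating."""
        rows, cols = len(payoff_a), len(payoff_a[0])
        equilibria = []
        for r in range(rows):
            for c in range(cols):
                # Row player cannot improve by switching rows...
                row_best = all(payoff_a[r][c] >= payoff_a[r2][c] for r2 in range(rows))
                # ...and column player cannot improve by switching columns.
                col_best = all(payoff_b[r][c] >= payoff_b[r][c2] for c2 in range(cols))
                if row_best and col_best:
                    equilibria.append((r, c))
        return equilibria

    # Prisoner's Dilemma: strategy 0 = cooperate, 1 = defect.
    A = [[-1, -3], [0, -2]]   # row player's payoffs
    B = [[-1, 0], [-3, -2]]   # column player's payoffs
    print(pure_nash_equilibria(A, B))  # [(1, 1)]: mutual defection

Mutual defection is the unique pure equilibrium here even though both players would prefer mutual cooperation, illustrating that equilibrium play need not be efficient.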

Solution preference

Unlike in AMD, there is no trusted center, so the mechanism must be computed by the agents themselves. However, the existence of a central enforcer is presumed, whose responsibility is to implement the outcome the agents decide on. The enforcer can impose severe penalties if the agents do not agree on an outcome or cause the algorithm to fail. In other words, agents cannot gain from the algorithm's failure: although they have preferences over outcomes, they have no incentive to make the algorithm fail.
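
As a minimal sketch of this argument (the utility model and the penalty value are illustrative assumptions, not from the article), a penalty that exceeds any possible gain makes failure unprofitable for every agent:

    FAILURE_PENALTY = 1_000  # assumed enforcer penalty, larger than any possible gain

    def utility(outcome_value, algorithm_succeeded):
        """An agent's utility: its value for the agreed outcome, minus the
        enforcer's penalty if the agents caused the algorithm to fail."""
        return outcome_value - (0 if algorithm_succeeded else FAILURE_PENALTY)

    # Even an agent that values every feasible outcome at 0 prefers
    # reaching agreement over causing a failure:
    assert utility(0, algorithm_succeeded=True) > utility(0, algorithm_succeeded=False)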

Truthfulness

Imagine that a designer specifies a leader election algorithm to select a computation server in a network of distributed nodes. The elected leader is responsible for running some CPU-intensive task. The designer wants the most powerful node to be selected, so the algorithm requires each node to submit its true computation power, after which the nodes reach a distributed consensus on which node should lead. In this election problem, nodes may not want to participate faithfully: by truthfully revealing its computational power and following the election protocol, a node risks being assigned a CPU-intensive chore that takes resources away from local jobs. This shows that even if every agent satisfies the solution preference, this alone is not enough to implement protocols in the game-theoretic model. To overcome this, mechanisms are designed to be truthful, so that agents have no incentive to lie about their input, regardless of any a priori knowledge they may have of the inputs or strategies of the other players.
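
One standard remedy, in the spirit of the Vickrey-auction approach cited in the references, is to compensate the elected node with a second-price payment so that truthful reporting becomes a dominant strategy. The following Python sketch is illustrative; the cost model, node names, and the elect_leader helper are assumptions, not part of the article.

    def elect_leader(reported_costs):
        """reported_costs maps node id -> the node's reported cost of serving
        as leader. The cheapest node wins and is paid the second-lowest
        reported cost (a Vickrey-style payment)."""
        ranked = sorted(reported_costs, key=reported_costs.get)
        leader = ranked[0]
        payment = reported_costs[ranked[1]]  # second-lowest report
        return leader, payment

    reports = {"n1": 10, "n2": 4, "n3": 7}
    leader, payment = elect_leader(reports)
    print(leader, payment)  # n2 wins and is paid 7, for a surplus of 7 - 4 = 3

Because the winner's payment depends only on the other nodes' reports, misreporting can change whether a node wins but never the price it is paid, so no node can profit by lying about its true cost.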

References

  1. ^ Halpern, Joseph Y. (2008). "Computer science and game theory: A brief survey". The New Palgrave Dictionary of Economics.
  • Distributed Algorithmic Mechanism Design: Recent Results and Future Directions
  • Distributed algorithmic mechanism design and network security
  • Service Allocation in Selfish Mobile Ad Hoc Networks Using Vickrey Auction