Bayesian persuasion

In economics and game theory, Bayesian persuasion is a form of mechanism design in which one participant (the sender) wants to persuade the other (the receiver) to take a certain course of action. The sender chooses what evidence to provide to the receiver in order to maximize the sender's own expected utility, under the assumption that the receiver will revise their beliefs about the state of the world using Bayes' rule. Bayesian persuasion was introduced by Kamenica and Gentzkow.[1]

Bayesian persuasion is a special case of a principal–agent problem: the principal is the sender and the agent is the receiver. It can also be viewed as a communication protocol comparable to signaling games,[2] in which the sender must decide what signal to reveal to the receiver in order to maximize the sender's expected utility, or as a form of cheap talk.[3]

Example

Kamenica and Gentzkow[1] use the following example. The sender is a medicine company, and the receiver is the medical regulator. The company produces a new medicine, and needs the approval of the regulator. There are two possible states of the world: the medicine can be either "good" or "bad". The company and the regulator do not know the true state. However, the company can run an experiment and report the results to the regulator. The question is what experiment the company should run in order to get the best outcome for themselves. The assumptions are:

  • The company gains utility if and only if the medicine is approved.
  • The regulator gains utility if and only if it makes the correct decision (approving a good medicine or rejecting a bad one).
  • Both company and regulator know the prior probability that the medicine is good.
  • Both parties agree on the design of the experiment and the reporting of the results (so there is no element of deception).

For example, suppose the prior probability that the medicine is good is 1/3 and that the company has a choice of three actions:

  1. Conduct a thorough experiment that always detects whether the medicine is good or bad, and truthfully report the results to the regulator. In this case, the regulator will approve the medicine with probability 1/3, so the expected utility of the company is 1/3.
  2. Don't conduct any experiment; always say "the medicine is good". In this case, the signal gives no information to the regulator. Since the regulator believes that the medicine is good only with probability 1/3, rejecting yields expected utility 2/3 while approving yields only 1/3, so the expectation-maximizing action is to always reject. Therefore, the expected utility of the company is 0.
  3. Conduct an experiment that, if the medicine is good, always reports "good", and if the medicine is bad, reports "good" or "bad" each with probability 1/2. Here, the regulator applies Bayes' rule: given the signal "good", the probability that the medicine is good is (1/3) / (1/3 + 2/3 · 1/2) = 1/2, so the regulator approves it. Given the signal "bad", the probability that the medicine is good is 0, so the regulator rejects it. All in all, the regulator approves the medicine with probability 2/3, so the expected utility of the company is 2/3.

In this case, the third policy is optimal for the sender, since it has the highest expected utility of the available options. By committing to an experiment and relying on the receiver's Bayesian updating, the sender persuades the receiver to act in a way that is favorable to the sender.
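The arithmetic of the example can be replayed in a few lines of code. The following is a minimal sketch (not taken from the cited paper): it assumes the regulator approves exactly when its posterior belief that the medicine is good is at least 1/2, with indifference resolved in favor of approval as in the example.

```python
PRIOR_GOOD = 1 / 3
PRIOR_BAD = 1 - PRIOR_GOOD

def approval_probability(p_good_reports_good, p_bad_reports_good):
    """Probability that the regulator approves, for an experiment described by the
    chance of reporting "good" in each true state of the world."""
    # Total probability of the signal "good", and the posterior it induces (Bayes' rule).
    p_signal_good = PRIOR_GOOD * p_good_reports_good + PRIOR_BAD * p_bad_reports_good
    posterior_if_good = PRIOR_GOOD * p_good_reports_good / p_signal_good if p_signal_good else 0.0
    # Same for the signal "bad".
    p_signal_bad = 1 - p_signal_good
    posterior_if_bad = PRIOR_GOOD * (1 - p_good_reports_good) / p_signal_bad if p_signal_bad else 0.0
    # The regulator approves after a signal exactly when the posterior belief that
    # the medicine is good is at least 1/2 (indifference resolved towards approval).
    approve = 0.0
    if posterior_if_good >= 0.5:
        approve += p_signal_good
    if posterior_if_bad >= 0.5:
        approve += p_signal_bad
    return approve

print(approval_probability(1.0, 0.0))  # 1. fully informative experiment: 1/3
print(approval_probability(1.0, 1.0))  # 2. uninformative experiment: 0
print(approval_probability(1.0, 0.5))  # 3. optimal experiment: 2/3
```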

Generalized model

The basic model has been generalized in a number of ways, including:

  • The receiver may have private information not shared with the sender.[4][5][6]
  • The sender and receiver may have a different prior on the state of the world.[7]
  • There may be multiple senders, each of whom sends a signal simultaneously; the receiver observes all of the signals before acting.[8][9]
  • There may be multiple receivers, including cases where each receives their own signal, the same signal, or signals which are correlated in some way, and where each receiver may factor in the actions of other receivers.[10]
  • A series of signals may be sent over time.[11]

Practical application

The applicability of the model has been assessed in a number of real-world contexts, including:

  • stress tests and information disclosure by financial regulators;[12]
  • grading standards and education quality;[13]
  • motivating agents through information design;[14]
  • creating suspense and surprise in entertainment.[15]

Computational approach

Algorithmic techniques have been developed to compute the optimal signaling scheme in practice. The optimal scheme can be found in time polynomial in the number of the receiver's actions and pseudo-polynomial in the number of states of the world.[3] Algorithms with lower computational complexity are possible under stronger assumptions.
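When the prior, the utilities, and the finite sets of states and actions are all given explicitly, the problem can be written as a linear program over joint probabilities of (state, recommended action) pairs, with "obedience" constraints ensuring the receiver is willing to follow each recommendation. The Python sketch below illustrates this textbook formulation using SciPy's linprog; it is an assumed illustration, not the algorithm of [3], and the function name optimal_persuasion is invented for the example.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_persuasion(prior, u_sender, u_receiver):
    """prior: length-S array; u_sender, u_receiver: S x A arrays of utilities.
    Returns the sender's optimal expected utility and the joint distribution
    x[s, a] = P(state s and recommendation a)."""
    S, A = u_sender.shape
    n = S * A
    idx = lambda s, a: s * A + a

    # Objective: maximize the sender's expected utility (linprog minimizes).
    c = np.zeros(n)
    for s in range(S):
        for a in range(A):
            c[idx(s, a)] = -u_sender[s, a]

    # Consistency with the prior: sum_a x[s, a] = prior[s] for every state s.
    A_eq = np.zeros((S, n))
    for s in range(S):
        for a in range(A):
            A_eq[s, idx(s, a)] = 1.0
    b_eq = np.asarray(prior, dtype=float)

    # Obedience: when a is recommended, the receiver must not prefer any other a2.
    rows = []
    for a in range(A):
        for a2 in range(A):
            if a2 == a:
                continue
            row = np.zeros(n)
            for s in range(S):
                # linprog expects "<= 0" constraints, hence the flipped sign.
                row[idx(s, a)] = u_receiver[s, a2] - u_receiver[s, a]
            rows.append(row)

    res = linprog(c, A_ub=np.array(rows), b_ub=np.zeros(len(rows)),
                  A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return -res.fun, res.x.reshape(S, A)

# The medicine example: states (good, bad), actions (approve, reject).
prior = [1 / 3, 2 / 3]
u_sender = np.array([[1.0, 0.0], [1.0, 0.0]])    # the company only values approval
u_receiver = np.array([[1.0, 0.0], [0.0, 1.0]])  # the regulator values the correct decision
value, scheme = optimal_persuasion(prior, u_sender, u_receiver)
print(value)   # approximately 2/3, matching the example above
```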

The online case, where multiple signals are sent over time, can be solved efficiently as a regret minimization problem.[16]
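As an illustration of the regret-minimization viewpoint, the sender can maintain exponential weights over a finite menu of candidate signaling schemes and reweight them each round by the utility each scheme would have earned. The sketch below is a generic full-feedback Hedge algorithm under that assumption, not the specific algorithm of [16]; the function hedge_persuasion and its reward interface are invented for the example.

```python
import math
import random

def hedge_persuasion(schemes, reward, rounds, eta=0.1):
    """schemes: finite menu of candidate signaling schemes (arbitrary objects);
    reward(scheme, t): sender utility in [0, 1] of using `scheme` in round t."""
    weights = [1.0] * len(schemes)
    total = 0.0
    for t in range(rounds):
        # Sample a scheme with probability proportional to its current weight.
        z = sum(weights)
        i = random.choices(range(len(schemes)), [w / z for w in weights])[0]
        total += reward(schemes[i], t)
        # Full-feedback update: every scheme's counterfactual reward is observed.
        weights = [w * math.exp(eta * reward(s, t)) for w, s in zip(weights, schemes)]
    return total / rounds   # average per-round sender utility
```

With full feedback, the average utility of this scheme approaches that of the best fixed scheme in the menu as the number of rounds grows, which is the regret guarantee the reduction relies on.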

References

  1. ^ a b Kamenica, Emir; Gentzkow, Matthew (2011-10-01). "Bayesian Persuasion". American Economic Review. 101 (6): 2590–2615. doi:10.1257/aer.101.6.2590. ISSN 0002-8282.
  2. ^ Kamenica, Emir (2019-05-13). "Bayesian Persuasion and Information Design". Annual Review of Economics. doi:10.1146/annurev-economics-080218-025739.
  3. ^ a b Dughmi, Shaddin; Xu, Haifeng (June 2016). "Algorithmic Bayesian persuasion". STOC '16: Proceedings of the forty-eighth annual ACM symposium on Theory of Computing: 412–425. doi:10.1145/2897518.2897583.
  4. ^ Hedlund, Jonas (2017-01-01). "Bayesian persuasion by a privately informed sender". Journal of Economic Theory. doi:10.1016/j.jet.2016.11.003.
  5. ^ Kolotilin, Anton (2018-05-29). "Optimal information disclosure: A linear programming approach". Theoretical Economics. doi:10.3982/TE1805.
  6. ^ Rayo, Luis; Segal, Ilya (2010-10-01). "Optimal Information Disclosure". Journal of Political Economy. doi:10.1086/657922.
  7. ^ Camara, Modibo K.; Hartline, Jason D.; Johnsen, Aleck (2020-11-01). "Mechanisms for a No-Regret Agent: Beyond the Common Prior". 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS). IEEE. doi:10.1109/focs46700.2020.00033.
  8. ^ Gentzkow, Matthew; Kamenica, Emir (2016-10-18). "Competition in Persuasion". The Review of Economic Studies. 84: 300–322. doi:10.1093/restud/rdw052.
  9. ^ Gentzkow, Matthew; Shapiro, Jesse M. (2008). "Competition and Trust in the Market for News". Journal of Economic Perspectives. 22 (2): 133–154. doi:10.1257/jep.22.2.133.
  10. ^ Bergemann, Dirk; Morris, Stephen (2019-03-01). "Information Design: A Unified Perspective". Journal of Economic Literature. 57. doi:10.1257/jel.20181489.
  11. ^ Ely, Jeffrey C. (January 2017). "Beeps". American Economic Review. 107 (1): 31–53. doi:10.1257/aer.20150218.
  12. ^ Goldstein, Itay; Leitner, Yaron (September 2018). "Stress tests and information disclosure". Journal of Economic Theory. 177: 34–69. doi:10.1016/j.jet.2018.05.013.
  13. ^ Boleslavsky, Raphael; Cotton, Christopher (May 2015). "Grading Standards and Education Quality". American Economic Journal: Microeconomics. 7 (2): 248–279. doi:10.1257/mic.20130080.
  14. ^ Habibi, Amir (January 2020). "Motivation and information design". Journal of Economic Behavior & Organization. 169: 1–18. doi:10.1016/j.jebo.2019.10.015.
  15. ^ Ely, Jeffrey; Frankel, Alexander; Kamenica, Emir (February 2015). "Suspense and Surprise". Journal of Political Economy. doi:10.1086/677350.
  16. ^ Bernasconi, Martino; Castiglioni, Matteo (2023). "Optimal Rates and Efficient Algorithms for Online Bayesian Persuasion". Proceedings of Machine Learning Research. 202: 2164–2183.