
Simplicity theory

From Wikipedia, the free encyclopedia

Simplicity theory is a cognitive theory that seeks to explain the attractiveness of situations or events to human minds. It draws on work by behavioural scientist Nick Chater,[1] computer scientist Paul Vitányi,[2] psychologist Jacob Feldman,[3] and artificial intelligence researchers Jean-Louis Dessalles[4][5] and Jürgen Schmidhuber.[6] It claims that interesting situations appear simpler than expected to the observer.

Overview


Technically, simplicity corresponds to a drop in Kolmogorov complexity, which means that, for an observer, the shortest description of the situation is shorter than anticipated. For instance, the description of a consecutive lottery draw, such as 22-23-24-25-26-27, is significantly shorter than that of a typical one, such as 12-22-27-37-38-42. The former requires only one instantiation (the choice of the first lottery number), whereas the latter requires six instantiations.
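
The contrast can be made concrete with a short calculation. The following sketch is purely illustrative and is not taken from the cited sources: it assumes a hypothetical 49-number pool, so each instantiated number costs about log2(49) ≈ 5.6 bits, and the helper name description_bits is an invention for this example.

```python
import math

# Illustrative sketch: description length in bits for a 6-number draw
# from a hypothetical 1-49 pool (an assumption, not part of the theory).
POOL_SIZE = 49
BITS_PER_NUMBER = math.log2(POOL_SIZE)  # cost of one instantiation

def description_bits(draw):
    """Shortest description among the (very few) strategies tried here."""
    # Strategy 1: list the six numbers literally (six instantiations).
    literal = 6 * BITS_PER_NUMBER
    # Strategy 2: "consecutive run starting at n" (one instantiation;
    # the small constant cost of naming the rule is ignored).
    if all(b - a == 1 for a, b in zip(draw, draw[1:])):
        return min(literal, BITS_PER_NUMBER)
    return literal

print(description_bits([22, 23, 24, 25, 26, 27]))  # ~5.6 bits (one instantiation)
print(description_bits([12, 22, 27, 37, 38, 42]))  # ~33.7 bits (six instantiations)
```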

Simplicity theory makes several quantitative predictions concerning the way atypicality,[7] distance, recency or prominence (places, individuals)[5] influence interestingness.

Formalization


The basic concept of simplicity theory is unexpectedness, defined as the difference between expected complexity and observed complexity:

    U = C_exp − C_obs

This definition extends the notion of randomness deficiency.[7] In most contexts, the expected complexity corresponds to generation or causal complexity, which is the length of the smallest description of all parameters that must be set in the "world" for the situation to exist. In the lottery example, generation complexity is identical for a consecutive draw and a typical draw (as long as no cheating is imagined) and amounts to six instantiations.
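
Continuing the illustrative sketch above (same hypothetical 49-number pool and description_bits helper), unexpectedness for a draw can then be computed as the gap between generation complexity and description complexity:

```python
# Illustrative continuation of the sketch above: unexpectedness as the drop
# between generation complexity (what the "world" had to instantiate) and
# the observed description complexity.
def unexpectedness(draw):
    generation_bits = 6 * BITS_PER_NUMBER  # six numbers are set by the draw
    return generation_bits - description_bits(draw)

print(unexpectedness([22, 23, 24, 25, 26, 27]))  # ~28.1 bits: highly unexpected
print(unexpectedness([12, 22, 27, 37, 38, 42]))  # 0.0 bits: nothing remarkable
```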

Simplicity theory avoids most criticisms addressed at Kolmogorov complexity by considering only descriptions that are available to a given observer (instead of any imaginable description). This makes complexity, and thus unexpectedness, observer-dependent. For instance, the typical draw 12-22-27-37-38-42 will appear very simple, even simpler than the consecutive one, to the person who played that combination.
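
Observer-dependence can be sketched, again only as an illustration, by letting description length depend on what a given observer already holds in memory; the description_bits_for helper and the POINTER_BITS constant below are assumptions made for this example, not part of the theory's formal apparatus.

```python
# Illustrative sketch of observer-dependence: an observer who already stores a
# combination (e.g. the ticket they played) can refer to it with a short label,
# so for that observer its description complexity collapses.
POINTER_BITS = 2.0  # assumed cost of pointing at an item already in memory

def description_bits_for(draw, observer_memory):
    baseline = description_bits(draw)
    if tuple(draw) in observer_memory:
        return min(baseline, POINTER_BITS)
    return baseline

my_ticket = {(12, 22, 27, 37, 38, 42)}
print(description_bits_for([12, 22, 27, 37, 38, 42], my_ticket))  # ~2 bits for the player
print(description_bits_for([12, 22, 27, 37, 38, 42], set()))      # ~33.7 bits for anyone else
```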

Connection with probability


Algorithmic probability is defined based on Kolmogorov complexity:[8] complex objects are less probable than simple ones. The link between complexity and probability is reversed when probability measures surprise[7] and unexpectedness:[5] simple events appear less probable than complex ones. Unexpectedness is linked to subjective probability as

    p = 2^(−U)

The advantage of this formula is that subjective probability can be assessed without necessarily knowing the alternatives. Classical approaches to (objective) probability consider sets of events, since fully instantiated individual events have virtually zero probability of having occurred and of occurring again in the world. Subjective probability concerns individual events. Simplicity theory measures it through randomness deficiency, or complexity drop. This notion of subjective probability does not refer to the event itself, but to what makes the event unique.
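
In the illustrative sketch developed in the previous section, this conversion is direct: subjective probability falls off exponentially with unexpectedness, with no need to enumerate a set of alternative outcomes.

```python
# Illustrative continuation: subjective probability as p = 2^(-U), computed
# without enumerating any set of alternative outcomes.
def subjective_probability(draw):
    return 2 ** (-unexpectedness(draw))

print(subjective_probability([22, 23, 24, 25, 26, 27]))  # tiny: "too orderly to be chance"
print(subjective_probability([12, 22, 27, 37, 38, 42]))  # 1.0: unsurprising to most observers
```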

References

  1. ^ Chater, N. (1999). "The search for simplicity: A fundamental cognitive principle?" The Quarterly Journal of Experimental Psychology, 52 (A), 273–302.
  2. ^ Chater, N. & Vitányi, P. (2003). "Simplicity: a unifying principle in cognitive science?". Trends in Cognitive Sciences, 7 (1), 19–22.
  3. ^ Feldman, J. (2004). "How surprising is a simple pattern? Quantifying 'Eureka!'". Cognition, 93, 199–224.
  4. ^ Dessalles, Jean-Louis (2008). La pertinence et ses origines cognitives. Paris: Hermes-Science Publications. ISBN 978-2-7462-2087-4.
  5. ^ Dessalles, J.-L. (2013). "Algorithmic simplicity and relevance". In D. L. Dowe (Ed.), Algorithmic Probability and Friends, LNAI 7070, 119–130. Berlin: Springer Verlag.
  6. ^ Schmidhuber, J. (1997). "What’s interesting?" Lugano, CH: Technical Report IDSIA-35-97.
  7. ^ Maguire, P., Moser, P. & Maguire, R. (2019). "Seeing patterns in randomness: a computational model of surprise". Topics in Cognitive Science, 11 (1), 103–118.
  8. ^ Solomonoff, R. J. (1964). "A Formal Theory of Inductive Inference". Information and Control, 7 (1), 1–22.