Perception

  • top down vs bottom up (optical illusions)
  • same data, different representations given context
  • AlphaGo's structure has parallels with system-1,2 thinking
    • the Monte Carlo tree search is system-2: has all the logic, slow, deliberate
    • however, you can't just exhaustively search the tree, so you need a neural network to gain some intuition/heuristics about where to go in the tree, and that's system-1 (see the sketch after this list)
  • GPT-3 ≠ system-1: real system-1 heuristics are borne out of forming representations/models that are internally coherent
    • i.e., GPT-3 finds patterns, but it doesn't actually have an internally coherent model of the world
    • hence the absurd mistakes: 2-dimensional data, object permanence, etc.
  • system-1,2:
    • framed as two agents mainly for a general audience, since people intuitively understand agents
    • in fact, they're better thought of as categorizations of mental processes, some slower and more deliberate than others
  • the example of simple shapes moving around: clearly we're ascribing agency and bringing all this model baggage even to such simple data
  • system-1 is what happens ~95% of the time, until you hit something surprising or incoherent, at which point system-2 is triggered.
  • system-2 can be thought of as an editor (a filter) for system-1 (see the sketch after this list).
  • distinction between doubt and surprise:
    • instead of continuously predicting what happens next, you see what happens and then make sense of it
    • the idea is that system-1 just keeps running, continuously checking the data in a straightforward way; once something weird happens, your attention is drawn
    • much more economical (?)
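
A rough sketch of the AlphaGo parallel above: a Monte Carlo tree search (the slow, deliberate "system-2") whose exploration is steered by a cheap learned prior (the "system-1" intuition). This is not AlphaGo's actual code; the toy game, the `policy_prior` heuristic, and all constants below are illustrative assumptions.

```python
import math
import random

def legal_moves(state):
    # Toy game: a state is the list of moves played so far; 3 moves per turn, depth 4.
    return [0, 1, 2] if len(state) < 4 else []

def rollout_result(state):
    # Toy terminal value in [0, 1]; stands in for "did this line of play win?".
    return sum(state) / (2 * len(state)) if state else 0.5

def policy_prior(state):
    # Stand-in for the policy network ("system-1"): a cheap heuristic guess
    # about which moves look promising, normalized into a distribution.
    moves = legal_moves(state)
    scores = [1.0 + m for m in moves]
    total = sum(scores)
    return {m: s / total for m, s in zip(moves, scores)}

class Node:
    def __init__(self, state, prior):
        self.state, self.prior = state, prior
        self.visits, self.value_sum, self.children = 0, 0.0, {}

    def value(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    # PUCT-style selection: exploit the running value estimate, but let the
    # prior steer exploration toward moves the "intuition" already likes.
    def score(child):
        u = c_puct * child.prior * math.sqrt(node.visits) / (1 + child.visits)
        return child.value() + u
    return max(node.children.values(), key=score)

def mcts(root_state, simulations=200):
    root = Node(root_state, prior=1.0)
    for _ in range(simulations):
        node, path = root, [root]
        # Selection ("system-2"): deliberately walk down the tree.
        while node.children:
            node = select_child(node)
            path.append(node)
        # Expansion: attach children weighted by the heuristic prior.
        for move, p in policy_prior(node.state).items():
            node.children[move] = Node(node.state + [move], p)
        # Evaluation: a random rollout here; AlphaGo uses value nets + rollouts.
        state = node.state
        while legal_moves(state):
            state = state + [random.choice(legal_moves(state))]
        leaf_value = rollout_result(state)
        # Backup: propagate the evaluation along the visited path.
        for n in path:
            n.visits += 1
            n.value_sum += leaf_value
    # Act by visit count: the move the search spent the most effort on.
    return max(root.children, key=lambda m: root.children[m].visits)

print("chosen move:", mcts([]))
```

Without the prior, the simulation budget gets spread evenly over every branch; with it, most of the deliberate search goes where the heuristic already points.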
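
A toy sketch (my own framing, not from the talk) of the surprise-gated control flow in the last few bullets: a cheap "system-1" check runs on every observation, and only a surprising one draws attention to a slower "system-2" editor. The running-average expectation, `SURPRISE_THRESHOLD`, and `system2_edit`'s accept/discard rule are all made-up illustrations.

```python
SURPRISE_THRESHOLD = 3.0  # how far off the expectation before attention is drawn

def system1_expectation(history):
    # Cheap and always-on: a straightforward running average of what's been seen.
    return sum(history) / len(history) if history else 0.0

def system2_edit(observation, expectation):
    # Slow and deliberate, invoked only on surprise. Acting as an editor/filter,
    # it decides whether the outlier should update the model or be discarded.
    print(f"  system-2 engaged: saw {observation}, expected ~{expectation:.2f}")
    return observation < 2 * expectation + SURPRISE_THRESHOLD  # keep only mild outliers

def perceive(stream):
    history = []
    for observation in stream:
        expectation = system1_expectation(history)
        surprised = abs(observation - expectation) > SURPRISE_THRESHOLD
        # Short-circuit: system-2 only runs when system-1 is surprised.
        if not surprised or system2_edit(observation, expectation):
            history.append(observation)
    return system1_expectation(history)

# Mostly unsurprising data with one anomaly that triggers the slow path.
print("final expectation:", perceive([1.0, 1.2, 0.9, 1.1, 9.0, 1.0]))
```

The "much more economical" point is the `surprised` check: the expensive path only runs when the cheap check fails, instead of deliberating over every observation up front.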
