[PREV - SUSPENDED]    [TOP]

ARTIFICIAL_SIMON


                                             January 31, 2022


At the outset of "The Sciences of the Artificial",
Herbert A. Simon offers the insight that you can
often understand the behavior of a large system
without any detailed understanding of its components.


    "We knew a great deal about the gross physical and chemical
    behavior of matter before we had a knowledge of molecules, a great
    deal about molecular chemistry before we had an atomic theory, and
    a great deal about atoms before we had any theory of elementary
    particles if indeed we have such a theory today."

    "This skyhook-skyscraper construction of science from the roof
    down to the yet unconstructed foundations was possible because the
    behavior of the system at each level depended on only a very
    approximate, simplified, abstracted characterization of the system
    at the level next beneath."

                               -- Herbert A. Simon
                                  "The Sciences of the Artifical"

There's a footnote here where he quotes
Bertrand Russell from _Principia Mathematica_:
                                                     Simon presents this
   "'... the chief reason in favour of any           Russell quote as
    theory on the principles of mathematics must     saying something
    always be inductive, i.e., it must lie in        similar to his own--
    the fact that the theory in question enables     one also builds
    us to deduce ordinary mathematics. In            mathematics "from the
    mathematics, the greatest degree of              roof down", or so
    self-evidence is usually not to be found         Russell seems to
    quite at the beginning, but at some later        suggest here.
    point; hence the early deductions, until
    they reach this point, give reasons rather       I interpret this a little
    for believing the premises because true          differently:
    consequences follow from them, than for
    believing the consequences because they          THE_DOCTRINE_OF_POSTULATES
    follow from the premises.'"
                                                            EPISTEMS

    Simon comments:

    "Contemporary preferences for deductive
    formalisms frequently blind us to this
    important fact, which is no less true
    today than it was in 1910."                     By the way, note that the
                                                    phrase "formalism": here
                                                    has at least a touch of a
         In the case of the sciences,               negative connotation.
         knowing more about the "lower"
         level, like the nature of                  Does it have a negative
         atoms, *also* improves your                connotation in general?
         understanding of the "higher"
         level-- we begin with chemistry                     FORMALIST
         and use that knowledge to infer
         something about atoms, and then
         knowing something about atoms
         then improves our understanding
         of chemistry.

         It is not at all clear to me
         that we actually *get* anything
         out of the theories of the
         ultimate "foundations" of
         mathematics.   Once you know
         something of the foundations,
         can you build something you
         couldn't build before?


Herbert A. Simon often seems
to have a naive 1950s
optimism about, say, the
efficacy of computer            But then I haven't looked at the new chapter
simulation techniques:          on complexity sciences he added in 1981 for
                                the third edition.

    "Thus we might expect simulation to be a powerful        COMPLEX_SIMON
    technique for deriving, from our knowledge of the
    mechanisms governing the behavior of gases, a
    theory of the weather and a means of weather
    prediction. Indeed, as many people are aware,
    attempts have been under way for some years to
    apply this technique.  Greatly oversimplified,
    the idea is that we already know the correct
    basic assumptions, the local atmospheric
    equations, but we need the computer to work out
    the implications of the interactions of vast
    numbers of variables starting from complicated
    initial conditions. This is simply an
    extrapolation to the scale of modern computers of
    the idea we use when we solve two simultaneous
    equations by algebra."  (p.15)

  Weather has since become the classic example of a
  system whose complexity causes it to elude much more
  than short-term prediction-- really, we understand
  the *components* (gas molecules) extremely well, but
  the sheer number of them pushes us into the realm of
  chaotic indeterminacy.
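
  A minimal sketch of that point (a toy example of my
  own, nothing from the book): the logistic map, a
  standard one-line model of chaos.  Two trajectories
  that start a millionth apart become effectively
  uncorrelated within a few dozen steps, even though
  the update rule is known exactly.

      # Logistic map x -> r*x*(1-x) in its chaotic regime (r = 4).
      # Two nearly identical starting points diverge rapidly, even
      # though the governing rule is simple and known exactly.
      r = 4.0
      a, b = 0.400000, 0.400001   # initial conditions differing by 1e-6
      for step in range(1, 61):
          a = r * a * (1.0 - a)
          b = r * b * (1.0 - b)
          if step % 10 == 0:
              print(step, round(a, 6), round(b, 6), round(abs(a - b), 6))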

        Simon argues for the *possibility*
        of working out rules governing
        the high level behavior of a         It also doesn't mean you
        system, but can't really             *couldn't* do the converse: work
        guarantee it's achievable in         out a general theory from the
        advance of actually doing it.        bottom up, by first understanding
                                             the components.



    "The natural laws governing relays are very well known,
    while the natural laws governing neurons are known most
    imperfectly. But that does not matter, for all that is
    relevant for the theory is that the components have the
    specified level of unreliability and be interconnected
    in the specified way."

      Yes, it could be that the behavior you're interested
      in can be understood and predicted without regard to
      the nature of the components-- but it also could be
      that it can't be understood or predicted *at all*, no
      matter how well you understand its components.
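
      To make the quoted point concrete (a toy sketch of
      my own, not from the book): for a 2-of-3 majority
      vote, the system's failure rate follows from the
      per-component failure probability and the wiring
      alone-- the components could be relays, neurons,
      or coin flips.

          # Majority voting among three unreliable components.
          # The only inputs the "theory" needs are the per-component
          # failure probability p and the interconnection (a 2-of-3
          # vote); the physical nature of the components never enters.
          import random

          def majority_fails(p, trials=100_000):
              # estimate how often a 2-of-3 vote gives the wrong answer
              bad = 0
              for _ in range(trials):
                  failures = sum(random.random() < p for _ in range(3))
                  bad += failures >= 2
              return bad / trials

          p = 0.05   # made-up per-component failure probability
          print("component failure rate:   ", p)
          print("simulated system failures:", majority_fails(p))
          print("predicted 3p^2 - 2p^3:    ", 3*p**2 - 2*p**3)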



And-- to pick a different nit entirely--

    In the case of computers, because they're
    artificial systems, we intentionally use
    different technologies to implement similar
    behaviors.  The programmer doesn't care if the
    chips are TTL or CMOS or something else
    entirely, because the circuit designer creates
    a virtual entity (a "microprocessor") with a
    particular external behavior.

    Similarly, the Assembly language programmer
    can implement an operating system that works
    in a certain way, irrespective of the various
    quirks of a particular microprocessor architecture.

    So, if there's some independence of the behavior
    of the "upper level" from the nature of its
    components, it's because it was designed in--
    we implement things that way on purpose.
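
    A small sketch of that designed-in independence (my
    own toy example in Python, nothing from the book):
    the "upper level" function is written against an
    interface, and either implementation-- different
    "technology", same external behavior-- can sit
    underneath it without the caller noticing.

        # Two interchangeable "technologies" behind one interface.
        from abc import ABC, abstractmethod

        class Adder(ABC):
            @abstractmethod
            def add(self, a: int, b: int) -> int: ...

        class LookupTableAdder(Adder):
            # one implementation: a precomputed table for small operands
            def __init__(self):
                self.table = {(a, b): a + b
                              for a in range(16) for b in range(16)}
            def add(self, a, b):
                return self.table[(a, b)]

        class ArithmeticAdder(Adder):
            # a different implementation, same external behavior
            def add(self, a, b):
                return a + b

        def double(adder: Adder, x: int) -> int:
            # "upper level" code: indifferent to which adder is underneath
            return adder.add(x, x)

        print(double(LookupTableAdder(), 7), double(ArithmeticAdder(), 7))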






--------
[NEXT - SIMON_SAYS_MORE]