[PREV - BAY_WINDOW]    [TOP]

MORATECH


                                             August    25, 2019
                                             September 16, 2019

Kentaro Toyama had a piece up at slate                 https://slate.com/technology/2019/08/ban-facial-recognition-emerging-technologies-drones.html
"We Need to Ban More Emerging Technologies".           https://www.reddit.com/r/Futurology/comments/cuz6h8/we_need_to_ban_more_emerging_technologies/
It isn't brilliant, but it hits some subjects
of interest to me...  I posted it to reddit's
"Futurology" groups because I hoped it might
annoy them productively.  (I got it half right.)


  Kentaro Toyama is essentially a former tech evangelist:
  he worked for Microsoft, trying to get computers into
  schools in India.

  He eventually soured on "technological optimism",
  concluding that technology tends to amplify conditions
  already present: if a society is corrupt and the people
are impoverished, dumping a technology on them is unlikely
  to fix the central problems.

  He makes the point that in recent decades in the
  United States the various information technology
  "revolutions" have made no dent in the poverty level,     He's done some
  which remains rather high for an "advanced" country.      TED/TEDx talks
                                                            on this.



Kentaro Toyama's piece at Slate argues:

    "To tame this onrushing tide, society needs dams and dikes.
    Just as has begun to happen with facial recognition, it’s
    time to consider legal bans and moratoriums on other emerging
    technologies. These need not be permanent or absolute, but
    innovation is not an unmitigated good. The more powerful a
    technology is, the more care it requires to safely operate."

He was suggesting some sort of social evaluation process
should be applied before new technologies are rolled out:

    "Technology has its benefits. But slowing the
    pace of its advance would give society more time       DRAG
    to think through the consequences and debate
    which aspects of new technologies are desirable,
    and which should be outlawed."

In particular, he applauded
recent bans of facial                        Banning facial recognition
recognition technologies, and                technology doesn't sound
argues we should consider more               like a bad move to me.
such bans.
                                                 That anyone would even
It would be an understatement to say             consider doing something
that the people at /r/Futurology were            like that is remarkable in
critical of this piece-- they were               the "post 9/11" era, which
only barely capable of reading it.               often seems hellbent on a
It violates the local religious precepts.        1984 police state approach
                                                 to every problem.

In particular, on every point where Toyama
mentioned a potential negative feature of a
technology, "mhornberger" made the point that
it also has positive aspects, as though this     Reasonably, I think you'd
somehow refuted Toyama in some way-- but         need some way of estimating
Toyama doesn't ever say there are *no*           the relative likelihood of
positive aspects to new technologies.            upside and downside, to get
                                                 some idea of whether a
                                                 proposed new set of
  Toyama: "New technologies always               restrictions on technological
          have unintended consequences--         development would be more
          often negative"                        likely to do more harm than
                                                 good.
  mhornberger:
                                                   But then: just as the
          "Doing nothing also                      costs and benefits a new
          has consequences."                       technology are difficult
                                                   to evaluate, it's also
  That would be a relevant comment                 difficult to evaluate a
  if Toyama was recommending                       new social process.
  zero innovation-- he's literally
  recommending *slower* innovation.                Innovations in social
                                                   institutions are
  That could be argued against                     essentially just another
  in various ways, but you'd                       type of "technology".
  have to be capable of hearing
  what's actually being said.
                                   Fighting the
                                   last war?
                                                  FLORID


Quoting bits of this discussion,
starting in the middle
(I'm "doomvox", of course.):
                                        https://www.reddit.com/r/Futurology/comments/cuz6h8/we_need_to_ban_more_emerging_technologies/ey1p0sk/

mhornberger wrote:

    This point of his:

        it’s time to consider legal bans and
        moratoriums on other emerging technologies.

    Is untenable. You can mention specific technologies if
    you like, such as facial recognition, but it cannot be
    applied in a general sense.

doomvox wrote:

    Perhaps not, but you hadn't really made that point. You
    keep making the point there's good effects as well as bad,
    but it's not like Toyama says there aren't.


mhornberger wrote:

    doomvox wrote:

        he wants to see some sort of social evaluation process
        applied before new technologies are rolled out.

    "Good luck with that. With many things people's initial assessment
    is dismissive, and it's only over time that they come to like and
    depend on it. And only through adoption does the price come down
    and it become economically viable. And also, the ill effects are
    generally not known for some time. I mean, not many would predict
    that WhatsApp messages could lead to vigilante murders, but it
    happened. People have warned of the dangers of technology
    forever, whether it be writing, the printing press, television,
    etc. All of which have caused some harm in the world. No
    technology is an unmitigated good. That alone is not enough of a
    reason to block any specific technology."

    "You also have to decide how much harm is allowable, and how to
    balance that against the benefits you're foregoing. Self-driving
    cars might cause some problems down the road, but will also pose
    considerable benefits. You can't know ahead of time how things
    will pan out."


doomvox wrote:

    It's not at all clear Kentaro Toyama is trying to call
    for a moratorium on all emerging technologies. I think
    literally he's just saying we should consider it for
    some others.

    If we were to look at the more extreme claim, though, I
    think you're correct that it's difficult to imagine what
    sort of social institutions we could set up that would be
    up to the task of evaluating the costs and benefits of a
    technological advance before it's rolled out.

    Taking it from the other side, though, the idea that
    every new product should be regarded as innocent until
    proven guilty seems more than a little dubious. Technical
    fads can spread remarkably fast these days, and if the
    benefits take some time to become clear, that's also true
    of the potential downsides. You can argue we've been
    rolling the dice repeatedly and thus far we've just
    (mostly) lucked out.

        "And also, the ill effects are
        generally not known for some time."

    Sure. And the situation can
    remain murky for decades
    afterwards. Consider, say, the
    rapid adoption of the car after
    WWII. Did the benefits outweigh     This, though, may be a bad example:
    the damage?
                                           The idea that "new technology" is
    What about, say, just the use          *just* putting a product on the
    of leaded gas? There's a               market is completely wrong:
    tenable theory this caused             suburbia wasn't *just* a market
    tremendous human damage, e.g.          phenomena, it was enabled by policy
    the crime wave of the 60s-90s.         decisions favoring highways and low
                                           density zoning, and abandoning much
      "You also have to                    of public transit.
      decide how much harm is
      allowable, and how to                    It's *always* like this: An
      balance that against                     "uber" isn't a market phenomenon
      the benefits you're                      that exists in isolation: it
      foregoing."                              rests on legal judgements that
                                               they're not cabs operated by
    That's indeed the kind of                  employees.
    problems we'd be up against.  (A toy sketch below.)


                                                (And the complex of changes
                                                that led to the explosion of
                                                suburban sprawl may have been
                                                full of mistakes, but *now*
                                                what do we do about it?)

                                                      (But: I don't know why
                                                       I thought that question
                                                       was relevant here.)

    (mhornberger repeatedly argues along the
    lines "it would be difficult to do this,
    therefore we shouldn't try"-- you could just
    as easily say "we might need to do this, so
    we should start thinking about how we might
    do it".  He's not so pessimistic about
    research programs into more conventional
    "technological" ideas...)

        "Self-driving cars might
        cause some problems down the
        road, but will also pose
        considerable benefits. You
        can't know ahead of time how
        things will pan out."

    It's hard to say what will happen, so
    let's roll the dice again?

        "He swept up a lot of other things in that
        net too.  He complained about kids' faces
        being stuck to screens as if that itself is
        one of the problems he's railing against."


The faces glued to screens strike me as
prima facie evidence that we're looking
at addictive behavior.                         A recent study shows that
                                               Facebook addiction has
   It might not be a problem in itself         some negative psychological
   (though I'd be surprised if it doesn't      impact on young users,
   correlate with some problems, if only       but perhaps surprisingly
   lack of physical exercise).                 there's no similar effect
                                               from computer games.

                                                  (Find the link.)


Consider the case of drug approvals:
We don't just approve all new drugs      Only a very unusual extremist
by default and go after specific         would try to argue that we don't
ones if we suspect they might have       need the drug approvals process,
problems.                                and that they'd be cheaper if we
                                         just relied on liability law to
It is perhaps a little peculiar          go after companies only after
that we would worry about a new          they've screwed up.
addictive drug (even if it were
"merely" psychologically                    This is a case where the general
addictive), but we presume                  sentiment is that we can't rely on
addictive electronic toys are               industry to self-regulate.
benign.
                                               A similar case is food safety.


  Or consider the cellphone craze of the 90s:
  Microwave repeaters everywhere and half of
  the populace suddenly holding transmitters
  against their skulls for hours a day-- as it
  happens, the (belated) fears that there
  might be radiation exposure issues with this
  were wrong, but I would make the points:

    (1) there was no particular reason to
    believe that in advance

    (2) the population of cellphone users
    seemed completely uninterested in the   They were similarly uninterested
    possibility--                           in any evidence that they were
                                            killing themselves trying to drive
  It seemed pretty clear to me that the     while speaking on the phone.
  technology was *literally* addictive
  (that's not just a hyperbolic                 Eventually some legislation
  analogy)-- but there was no need to           started getting passed that
  seek approval from anything like an           pretended to address these
  FDA because this was an electronic toy        issues, e.g. outlawing
  not a drug: this seems like a rather          driving while using a phone
  shallow distinction to base public            unless it was hands-free,
  policy on...                                  ignoring that the central
                                                trouble was mental distraction.

                                                     Cellphone companies
                                                     liked that one, because
                                                     people would have to buy
                                                     new equipment.

                                              Kentaro Toyama makes the
                                              point that there's a ratchet
                                              effect in technological
                                              products that makes it difficult
                                              for the populace to give them up.

                                              Cellphones are a decent example,
                                              I think-- you *could* take it
                                              the other way though.  People
                                              gravitated to different
                                              technologies (like "texting")
                                              that ameliorated some of the
                                              problems, and they developed
                                              some new social customs that
                                              fixed some of the annoyances
                                              with the new technology.


                    (Sep 18, 2017)

So, should we ignore the potential
effects of a technology, secure in          https://slashdot.org/comments.pl?sid=11125847&cid=55218513
the belief that further
technological progress will always
solve any problems?

Allow me to suggest another
possibility: at present, we have no
sensible method of evaluating what
a new product is going to do to us,
and we end up swept along by the
enthusiasm of faddish people who         Everyone likes it, it must be good!
can't even imagine a down-side to        But what's bad may be that everyone
their latest obsessions.                 likes it.


Might there not be a set of alternative social
institutions we could use to attempt to
anticipate (and perhaps "regulate"?)
potential problems before we encounter them?



The technological optimists are
literally that-- they have trouble
believing in the reality of possible     And I used to be one
downsides, and insist that the           of these guys, hence      RAT3
potential rewards justify making         my interest in arguing
almost any experiment.                   with them now.

There's an implicit presumption                     I was reacting to the
that new tech has a very low                        anti-technology
probability of results that go                      movement of the 70s
net-negative.                                       which was particularly
                                                    crazy, but we need not
                                                    veer from that to an
                                                    automatic approval of
                                                    every new gadget...


It might be worth considering
the analogy to military actions:

  It's extremely difficult to sum up the likely
  risks and benefits of something like a military
  engagement, and yet I don't think there's any
  question that it needs to be done in some fashion.

  Few of us would say it's reasonable to just go--
  "Oh, well there's no way to know what's going to     Though that is
  happen here.  What the fuck, let's just invade       indeed the way
  and figure it out later."                            the US reacted
                                                       circa the
                                                       invasion of Iraq.



    If I might play "futurologist" for a moment:
    I envision a future where we will look back
    on the grip that FANG has on our lives, and
    wonder why we didn't impose on them the various        FANG
    regulatory checks and balances that seem so
    obvious in our far future present day.

    In the world of the future, the Information
    Technology Administration will strictly limit
    electronic privacy invasions, and take steps
    to require full disclosure of conflicts of
    interest in online communications.

    REGULATING_NEWTECH


--------
[NEXT - REGULATING_NEWTECH]