On Fri, Apr 28, 2017 at 3:45 PM, mike.valk@gmail.com
<mike.valk@gmail.com> wrote:
> 2017-04-28 16:17 GMT+02:00 Luke Kenneth Casson Leighton <lkcl@lkcl.net>:
>>
>>
>> the rest of the article makes a really good point, which has me
>> deeply concerned now that there are fuckwits out there making
>> "driverless" cars, toying with people's lives in the process. you
>> have *no idea* what unexpected decisions are being made, what has been
>> "optimised out".
>
>
> That's no different from regular "human" programming.
it's *massively* different. a human will follow their training, deploy
known algorithms, and have an *understanding* of the code and what it
does.
with monte-carlo-generated iterative algorithms you *literally* have no
idea what they do or how they do it. the only guarantee you have
is that *for the set of inputs CURRENTLY tested to date* you get
"known behaviour".
but for the cases you haven't catered for, you *literally* have
no way of knowing how the code is going to react.
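to make that concrete, here's a deliberately-toy python sketch (my own
illustration, nothing to do with any real driverless-car stack): a
random search "evolves" a polynomial until it matches the desired
behaviour on the tested inputs. it passes every test, and on an input
the tests never covered its output is effectively arbitrary:

    import random

    # toy "monte-carlo" search: mutate polynomial coefficients at random
    # until the candidate matches the target on every *tested* input.
    TESTED_INPUTS = [0, 1, 2, 3]        # the only inputs the "test suite" covers

    def target(x):
        return 2 * x + 1                # desired behaviour on those inputs

    def evaluate(coeffs, x):
        return sum(c * x**i for i, c in enumerate(coeffs))

    def error(coeffs):
        # measured over the tested inputs ONLY: nothing constrains any other input
        return sum(abs(evaluate(coeffs, x) - target(x)) for x in TESTED_INPUTS)

    random.seed(0)
    best = [random.uniform(-5, 5) for _ in range(6)]    # a degree-5 polynomial
    best_err = error(best)
    for _ in range(200000):
        cand = [c + random.gauss(0, 0.1) for c in best]
        e = error(cand)
        if e < best_err:
            best, best_err = cand, e

    print([round(evaluate(best, x), 2) for x in TESTED_INPUTS])  # ~[1, 3, 5, 7]: "passes"
    print(round(evaluate(best, 10), 2))   # untested input: effectively arbitrary

the result "works" by every measure the tests apply, yet the leftover
high-order coefficients make its behaviour off the tested range
unpredictable, and nothing in the process tells you why.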
now this sounds very very similar to the human case: yes you would
expect human-written code to also have to pass test suites.
but the real difference is highlighted by the following question:
when a previously-undiscovered bug turns up, how the heck are you
supposed to "fix" it when you have *LITERALLY* no idea how the code
even works?
and that's what it really boils down to:
(a) in unanticipated circumstances you have literally no idea what
the code will do. it could do something incredibly dangerous.
(b) in unanticipated circumstances the chances of *fixing* the bug in
the genetically-derived code are precisely zero. the only option is to
run the generator again with a new set of criteria, producing an
entirely new algorithm which is *again* in the same (dangerous)
category.
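to hammer (b) home, here's the same toy search (again, purely
illustrative) run with two different seeds: each run "passes" all the
tests, yet each produces a completely different set of coefficients.
neither run is a "fix" of the other; each is a brand-new unknown:

    import random

    TESTED_INPUTS = [0, 1, 2, 3]

    def target(x):
        return 2 * x + 1

    def evaluate(coeffs, x):
        return sum(c * x**i for i, c in enumerate(coeffs))

    def evolve(seed, steps=200000):
        # same hill-climbing search as above, seeded differently each time
        rng = random.Random(seed)
        best = [rng.uniform(-5, 5) for _ in range(6)]
        best_err = sum(abs(evaluate(best, x) - target(x)) for x in TESTED_INPUTS)
        for _ in range(steps):
            cand = [c + rng.gauss(0, 0.1) for c in best]
            err = sum(abs(evaluate(cand, x) - target(x)) for x in TESTED_INPUTS)
            if err < best_err:
                best, best_err = cand, err
        return best

    for seed in (1, 2):
        coeffs = evolve(seed)
        print(seed, [round(c, 2) for c in coeffs])  # different "algorithm" each run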
l.
_______________________________________________
arm-netbook mailing list arm-netbook@lists.phcomp.co.uk
http://lists.phcomp.co.uk/mailman/listinfo/arm-netbook
Send large attachments to arm-netbook@files.phcomp.co.uk