Dagens ord


Ansvar väger tyngre än frihet - Responsibility trumps liberty

Nov 4, 2017

Is AI a logical impossibility - and does it matter?


Should we be afraid of AI? (AI is both logically possible and utterly implausible)


Daniel Dennett shared this text by Floridi on Twitter yesterday, with the comment "Bullseye!", claiming that it deals a knockout blow to the singularity.



This surprises me greatly.

Floridi declines to position himself as either a "Singularitarian" or an "AItheist". He concedes that (strong) AI is a logical possibility, but argues that in practice it is highly implausible and improbable, and that we should therefore occupy ourselves with more important things - such as managing the problems of ever smarter (faster, bigger, more capable) computers.

There is a logic to the above. (Read it again!) And Floridi points out many very real problems that are certainly worth more attention.

But Floridi does not believe what he himself writes - that strong AI is a logical possibility. He says so only (in a single place) to gain credibility. In this way he tries to set himself apart from (and above) those AItheists who, like Dreyfus, Searle and others, appear increasingly detached from reality in their appeals to mysticism.

Floridi describes these AItheists pithily:
”AI is just computers, computers are just Turing Machines, Turing Machines are merely syntactic engines, and syntactic engines cannot think, cannot know, cannot be conscious. End of story.”

And then he comes out as precisely such an AItheist!

The reasons he offers are not sophisticated. Floridi's clearest argument against "real" machine intelligence is the mathematical-logical limits of computability. Which naturally leaves the attentive reader wondering by what supernatural means human brains transcend those limits.

As if he had already forgotten his dissociation from naive AItheists, he describes the real problem:
...technologies can replace us without having any understanding, mental states, intentions, interpretations, emotional states, semantic skills, consciousness, self-awareness or flexible intelligence.

It is astonishing how Floridi can miss how the boundary between syntax and semantics has been continually pushed back over the past 60 years of philosophical debate about AI - not least since he himself joins those ranks by trivializing AlphaGo.

OK, let us grant Floridi his anthropocentric hubris. Let him reserve words like "understanding" etc. for us humans. (But then he can no longer claim his unique and enlightened position.) What does this really mean for his own argument? That we should not worry about ever more powerful computers getting up to mischief on their own - deliberately? Are the problems Floridi actually enumerates not enough, together with the malicious or ill-considered decisions that humans build into the systems? Or does it mean that it would be far worse for a conscious AI to lay waste to the Earth than for a non-conscious one to do so? Or that only a sentient AI could come up with something truly malicious?

I do not know what it is that Dennett appreciates in this text, which is moreover (deliberately) vague. Probably the (implicit) message that even evolutionary algorithms need very special circumstances and time spans to even come close to something matching or surpassing the human brain. This, too, surprises me somewhat - considering that Dennett's entire career has been about showing that this is the only reasonable way to explain all forms of mental life - but it is at least a defensible position.

---

From the ensuing Facebook debate:


Me

I find it petty to brush aside AI and x-risk as irresponsible indulgences. The resources and attention they get, and want, are nothing compared to, e.g., the so-called "debate" on climate change, environmental degradation, etc. It's hardly the case that we're so grown up as to prioritize the (only) right thing, at the right time, with the appropriate measures. Once (if) we get close, as a global society, then maybe we can start to set tougher priorities.

If Dennett and Floridi had said: "Look, we've known for a hundred years that exploitative fossil-driven capitalism is bringing us towards environmental and social collapse. Let's do something about it, once and for all, and if we succeed, then let's move on to more 'luxurious' worries" - then, maybe, I would have accepted the argument.


Thore

Why so certain? We all assign some very small probability ε > 0 to the event “Apocalyptic AI happens within a timeframe in which it makes sense to think about it.” We also all assign some very small probability δ > 0 to the event “x-riskers speculating about AI seriously hurts their credibility, to the point of hurting climate change activism.”

We may differ on whether ε < δ. But it’s a discussion about the relationship between two extremely ill-defined and ill-understood numbers, whose only commonality is that they are very small.

Of course, a different discussion can be had about A, the effect of apocalyptic AI, and C, the effect of climate change. Many people (but not all) would agree that A > C. (Of course, you could discount that for proximity to the event to make it even more precise. But let me stop this exercise here.)

So is εA > δC or εA < δC? Serious rationalists must know the answer to this, because their moral certitude in their activism depends on this tradeoff. Get it wrong, and you’re responsible for the death. Of. Civilisation!

So, given the ingredients of (i) extreme uncertainty about ε, δ, and maybe A, and (ii) the deep moral repercussions of being wrong, we’d expect these debates to end in tribalism, name-calling, etc. We need to actively prevent ourselves and each other from falling into that trap.
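[To make Thore's point concrete, here is a minimal sketch - with entirely invented priors - of how undecidable the εA vs. δC comparison is. It draws ε and δ from the same log-uniform "very small number" prior; the bounds 10^-9 to 10^-2 and the A/C ratios below are my illustrative assumptions, nothing from the debate itself:

```python
import math
import random

def log_uniform(lo: float, hi: float) -> float:
    """Sample from a log-uniform distribution on [lo, hi]."""
    return math.exp(random.uniform(math.log(lo), math.log(hi)))

def p_ai_risk_dominates(ratio_a_c: float, n: int = 100_000) -> float:
    """Estimate P(eps*A > delta*C) when eps and delta are drawn from the
    same vague 'very small probability' prior. ratio_a_c is the assumed
    harm ratio A/C; the prior bounds below are illustrative guesses."""
    hits = 0
    for _ in range(n):
        eps = log_uniform(1e-9, 1e-2)    # P(apocalyptic AI matters in time)
        delta = log_uniform(1e-9, 1e-2)  # P(AI talk hurts climate activism)
        # eps*A > delta*C  is equivalent to  eps*(A/C) > delta
        if eps * ratio_a_c > delta:
            hits += 1
    return hits / n

if __name__ == "__main__":
    for ratio in (1.0, 10.0, 1000.0):
        print(f"A/C = {ratio:6.0f}: P(epsA > deltaC) ~ {p_ai_risk_dominates(ratio):.2f}")
```

Under these made-up priors, assuming A = C leaves the estimate at about 0.5, and even assuming the AI harm A is a thousand times the climate harm C only pushes it to roughly 0.84. The sign of εA - δC really is anybody's guess, which is exactly the point about two ill-understood small numbers.]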


Me

Thore, I feel that there is something fishy about your argument; several things, actually, but I'm not sure I can pin them down (I suspect that at least one of them is connected to my understanding of what Olle refers to as vulgo-popperianism). I do appreciate your input and I find this discussion interesting. I hope you do too. So let's take another spin.

I think I can make a strong case for the following:

δ < τ; δ < μ; δ < ρ; δ < θ ...

...where τ, μ, ρ, θ etc. stand for any number of things - both actions/states and inactions/non-states - that are more detrimental to the mitigation of climate change than raising the problem of other x-risks.

I can definitely (with logical certainty) make the case that:

δ < τ + μ + ρ + θ + ...

...since this already follows from δ < τ alone, the remaining terms being non-negative.

As regards moral certitude and "serious rationalism", I don't feel guilty about not being able to produce an ironclad proof that δ < ε. That kind of standard can only lead to inertia. Very rarely are there opportunities, in real life, to hold to such a standard. It would be more morally suspect of me not to act or speak out of my deepest concern.

(It would be great to live in a Leibnizian world where everyone agreed on the truth and the right course of action. History is a tug-of-war and I don't see the point of not participating.)

If I am to be held to such a standard, then everyone else should too. In that case, I could easily conjure up situations where my opponents would need to provide impossible certainties in order for me to validate their point of view - just by reframing the situation.

I could, for instance, introduce the number ɣ for the very small but non-zero probability that your not eating your veggies every day might bring us closer to catastrophe (environmental or whatever), and then insist that since you cannot know the value of ɣ, you would be morally culpable for prioritizing any other (unknown) quantity.

Logic aside, there is an informal fallacy being committed here, namely a false dichotomy.

Furthermore, Floridi himself would need to show that δ > ε in order to pass your test. He doesn't even argue for it! He doesn't even really argue that ε is small, or that δ is large.

And Floridi most certainly doesn't argue that τ, μ, ρ, θ, ... are small. To him, history is a blank slate, and current technological developments are all there is - to effect either good or bad.

But let's not forget: Even disregarding risks altogether; disregarding also the almost non-existent arguments for why we shouldn't be concerned with AI; Floridi's article is very poorly argued. First, he purports not to be an AItheist but then makes it abundantly clear that he is. Second, he naively thinks that consciousness (or whatever) is what would make a "super-intelligence" truly dangerous; more dangerous than the merely "smart" machines he does warn us about. Third (as I already hinted above), he lazily reiterates dusty old quips to the effect that "of course" we cannot expect to find truly human-like qualities in any machine - relying on equally lazy readers' anthropocentric intuitions.


Thore

Hm… you must at least be able to faithfully represent the arguments against Apocalyptic AI scenarios.

(I know at least two: (i) it makes us worry about the wrong thing regarding the very real dangers of nonfictional AI, and (ii) it taints x-risk in the public eye, in particular by trivialising the importance of actual scientific evidence for climate change.)


You may discount these arguments. But they exist. The epistemological step from taking these arguments Very Seriously to taking Bostromian extrapolations Very Seriously is small, as far as I can see.


Me

To your first point: I agree that I should, and that those are valid arguments. I wish Floridi had put it like that!

Your second point I can also provisionally accept, in isolation.


But we still do not operate in a vacuum, where every other concern fades away. Again, if Floridi had offered a top-ten list of things, other than powerful computing, as major concerns not to be forgotten - and then just focused on the immediate danger of current computing technology (without even mentioning the singularity, or at least not his pet problems with it) - well, then he could have made a decent point.


---

