Why AI Doomsayers Are Like Sceptical Theists and Why It Matters

John Danaher. Forthcoming in Minds and Machines.

Ethics of Artificial Intelligence and Robotics

Artificial intelligence (AI) and robotics are digital technologies that will have a significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. Many such concerns turn out to be rather quaint (trains are too fast for souls); some are predictably wrong when they suggest that the technology will fundamentally change humans (telephones will destroy personal communication, writing will destroy memory, video cassettes will make going out redundant); some are broadly correct but moderately relevant (digital technology will destroy industries that make photographic film, cassette tapes, or vinyl records); but some are broadly correct and deeply relevant (cars will kill children and fundamentally change the landscape).


Why AI Doomsayers are Like Sceptical Theists and Why it Matters

Several research institutes have been set up to address those risks, and there is an increasing number of academic publications analysing and evaluating their seriousness. In this article, I argue that in defending the credibility of AI risk, Nick Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the existence of God. And while this analogy is interesting in its own right, what is more interesting are its potential implications.

It has been repeatedly argued that sceptical theism has devastating effects on our beliefs and practices.

Could it be that AI-doomsaying has similar effects? I argue that it could. Specifically, and somewhat paradoxically, I argue that it could amount either to a reductio of the doomsayers' position or to an important additional reason to join their cause.

I use this paradox to suggest that the modal standards for argument in the superintelligence debate need to be addressed.

Notes

Here I appeal to two theses defended by Bostrom in his book Superintelligence (Bostrom 2014): the strategic advantage thesis and the orthogonality thesis.

The latter thesis is particularly important for the doomsday scenario discussed in the text. It maintains that pretty much any level of intelligence is compatible with pretty much any final goal. The thesis has been defended elsewhere as well (Bostrom 2012; Armstrong 2013).

Only the latter institute dedicates itself entirely to the topic of AI risk; the other institutes address other potential risks as well.

The standard presentation is that of Rowe; for a more detailed overview, see Trakakis.

The idea was originally introduced by Wykstra.

The summary is based on the discussion of sceptical theism in Bergmann.

This is the orthogonality thesis as defended in Bostrom and Armstrong. This orthogonality thesis could be criticised.

Some would argue that intelligence and benevolence go hand in hand, i.e. that the more intelligent a being is, the more benevolent it will be. I have some sympathy for this view. I believe that if there are genuine, objectively verifiable moral truths, then the more intelligent an agent is, the more likely it is to discover and act upon those truths.

Indeed, this view is popular among some theists. For instance, Richard Swinburne has argued that omniscience may imply omnibenevolence. I am indebted to an anonymous reviewer for urging me to clarify this point.

This is the instrumental convergence thesis. See Bostrom (2012, 2014).

The leading critics in the academic literature are probably Ben Goertzel and Richard Loosemore; online, Alexander Kruel maintains a regularly updated blog critiquing the doomsday scenario.

This is how Bostrom describes it (Bostrom 2014): "But one should not be too confident that this is so. Instead, the AI might calculate that if it is terminated, the programmers who built it will develop a new and somewhat different AI architecture, but one that will be given a similar utility function."

To be clear, this does not mean that an infinitesimal probability of an existential risk should be taken seriously. But, say, a 0.
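The expected-value reasoning behind this point can be illustrated with a simple calculation; the figures below are hypothetical placeholders, not numbers taken from the paper:

$$\mathbb{E}[\text{loss}] = p \cdot V, \qquad \text{e.g.}\quad p = 10^{-3},\ V = 10^{10}\ \text{lives} \;\Rightarrow\; \mathbb{E}[\text{loss}] = 10^{7}\ \text{lives}.$$

On this reasoning, a small but non-infinitesimal probability of an existential catastrophe can still dominate practical deliberation, because the disvalue $V$ is so large.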

The one exception here might be beliefs about logical or mathematical truths, though there are theists who claim that those truths are dependent on God as well.

Note how the focus here is limited to how the treacherous turn affects the inductive inferences we make about artificial intelligences.

It does not affect all inductive inferences. This is unlike the situation with respect to sceptical theism.

Richard Loosemore has made these complaints.

This is the view of the Machine Intelligence Research Institute and some of its affiliated scholars.

References

Almeida, M. Sceptical theism and evidential arguments from evil. Australasian Journal of Philosophy, 81.
Anderson, D. Skeptical theism and value judgments. International Journal for the Philosophy of Religion, 72, 27–.
Armstrong, S. (2013). General purpose intelligence: Arguing the orthogonality thesis. Analysis and Metaphysics, 12, 68–.
Barrat, J. Our final invention: Artificial intelligence and the end of the human era. New York: St. Martin's Press.
Bergmann, M. (2001). Skeptical theism and Rowe's new evidential argument from evil. Noûs, 35.
Bergmann, M. Skeptical theism and the problem of evil. In T. Flint & M. Rea (Eds.), The Oxford Handbook of Philosophical Theology. Oxford: OUP.
Bergmann, M. In defence of skeptical theism: A reply to Almeida and Oppy.
Bostrom, N. (2012). The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines, 22(2), 71–85.
Bostrom, N. (2013). Existential risk prevention as a global priority. Global Policy, 4, 15–31.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford: OUP.
Bringsjord, S. Belief in the singularity is fideistic. In A. Eden, J. Moor, J. Søraker, & E. Steinhart (Eds.), Singularity hypotheses. Dordrecht: Springer.
Danaher, J. Skeptical theism and divine permission: A reply to Anderson. International Journal for Philosophy of Religion, 75(2).
Doctorow, C. The rapture of the nerds. New York: Tor Books.
Dougherty, T. Recent work on the problem of evil. Analysis, 71.
Dougherty, T. Skeptical theism: New essays.
Eden, A. Singularity hypotheses: A scientific and philosophical assessment. Dordrecht: Springer.
Hasker, W. All too skeptical theism. International Journal for Philosophy of Religion, 68, 15–.
Loosemore, R. The fallacy of dumb superintelligence.
Lovering, R. On what god would do. International Journal for the Philosophy of Religion, 66(2), 87–.
Maitzen, S. The moral skepticism objection to skeptical theism. In McBrayer & Howard-Snyder (Eds.), The Blackwell Companion to the Problem of Evil. Oxford: Wiley.
McBrayer, J.


Challenges to the Omohundro–Bostrom framework for AI motivations

This paper aims to contribute to the futurology of a possible artificial intelligence (AI) breakthrough by reexamining the Omohundro–Bostrom theory of instrumental versus final AI goals. Does that theory, along with its predictions for what a superintelligent AI would be motivated to do, hold water? The two cornerstones of Omohundro–Bostrom theory, the orthogonality thesis and the instrumental convergence thesis, are both open to various criticisms that question their validity and scope. These criticisms are, however, far from conclusive: while they do suggest that predictions derived from the theory call for a reasonable amount of caution and epistemic humility, further work will be needed to clarify the theory's scope and to put it on more rigorous foundations. The practical value of being able to predict AI goals and motivations under various circumstances cannot be overstated: the future of humanity may depend on it.
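For readers who want a compact statement, the two theses can be glossed semi-formally as follows; the notation is an illustrative paraphrase, not the authors' own formalism:

$$\textbf{Orthogonality:}\quad \forall i \in I,\ \forall g \in G:\ \Diamond\,\mathrm{Agent}(i,g)$$

$$\textbf{Instrumental convergence:}\quad \text{for a wide range of } g \in G:\ \mathrm{Pursues}(a,g) \Rightarrow \mathrm{Adopts}(a,s)\ \text{for each } s \in S$$

Here $I$ is a set of intelligence levels, $G$ a set of possible final goals, $\Diamond$ possibility, and $S$ a set of convergent instrumental subgoals (e.g. self-preservation, goal-content integrity, resource acquisition).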


LI Shuai examines the threat posed by artificial intelligence from a philosophical and logical point of view so as to construct an argument from abduction, contending that the current argumentation for the AI-threat hypothesis is based on induction and does not have the necessary reliability. Keywords: artificial intelligence, threat, abduction, Pascal's wager.
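The Pascal's-wager framing signalled in the keywords can be made concrete with a standard expected-utility comparison; the symbols are illustrative rather than drawn from the paper, and the sketch assumes, for simplicity, that precautions fully avert the catastrophe:

$$\mathrm{EU}(\text{take precautions}) = -c, \qquad \mathrm{EU}(\text{do nothing}) = -p \cdot V$$

where $c$ is the cost of precautions, $p$ the probability that the AI threat is real, and $V$ the disvalue of an unmitigated catastrophe. Precautions are favoured whenever $c < p \cdot V$, which holds even for small $p$ when $V$ is sufficiently large.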


