AI Safety and General Collective Intelligence


Date

2020-12-31

Abstract

For both current narrow AI and any Artificial General Intelligence (AGI) that might be implemented in the future, there are two categories of mechanisms by which such systems might be made safe for the human beings that interact with them: mechanisms internal to the system, and mechanisms external to it. In either case, the complexity of the behaviours such systems are capable of can rise to the point at which those measures cannot be reliably implemented. However, General Collective Intelligence (GCI) can exponentially increase the general problem-solving ability of groups, and therefore their ability to manage complexity. This paper explores the specific cases in which AI or AGI safety cannot be reliably assured without GCI.

Keywords

AGI safety, AI safety, General Collective Intelligence, Human-Centric Functional Modeling
