AI Safety and General Collective Intelligence

dc.contributor.author: Williams, Andy
dc.date.accessioned: 2024-03-18T14:07:49Z
dc.date.available: 2024-03-18T14:07:49Z
dc.date.issued: 2020-12-31
dc.description.abstract: Considering both current narrow AI and any Artificial General Intelligence (AGI) that might be implemented in the future, there are two categories of mechanisms by which such systems might be made safe for the human beings that interact with them: mechanisms internal to the system, and mechanisms external to it. In either case, the complexity of the behaviours such systems are capable of can rise to the point at which these measures cannot be reliably implemented. However, General Collective Intelligence (GCI) can exponentially increase the general problem-solving ability of groups, and therefore their ability to manage complexity. This paper explores the specific cases in which AI or AGI safety cannot be reliably assured without GCI.
dc.identifier.doi: https://doi.org/10.31730/osf.io/gw3ks
dc.identifier.uri: https://africarxiv.ubuntunet.net/handle/1/874
dc.identifier.uri: https://doi.org/10.60763/africarxiv/827
dc.subject: AGI safety
dc.subject: AI safety
dc.subject: General Collective Intelligence
dc.subject: Human-Centric Functional Modeling
dc.title: AI Safety and General Collective Intelligence

Files

Original bundle
Name: AI Safety and General Collective Intelligence -unformatted v5.pdf
Size: 521.15 KB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.72 KB
Format: Item-specific license agreed to upon submission