AI Safety and General Collective Intelligence
dc.contributor.author | Williams, Andy | |
dc.date.accessioned | 2024-03-18T14:07:49Z | |
dc.date.available | 2024-03-18T14:07:49Z | |
dc.date.issued | 2020-12-31 | |
dc.description.abstract | Considering both current narrow AI and any Artificial General Intelligence (AGI) that might be implemented in the future, there are two categories of mechanisms by which such systems might be made safe for the humans who interact with them: mechanisms internal to the system, and mechanisms external to it. In either case, the complexity of the behaviours such systems are capable of can rise to the point at which these measures cannot be reliably implemented. However, General Collective Intelligence (GCI) can exponentially increase the general problem-solving ability of groups, and therefore their ability to manage complexity. This paper explores the specific cases in which AI or AGI safety cannot be reliably assured without GCI. | |
dc.identifier.doi | https://doi.org/10.31730/osf.io/gw3ks | |
dc.identifier.uri | https://africarxiv.ubuntunet.net/handle/1/874 | |
dc.identifier.uri | https://doi.org/10.60763/africarxiv/827 | |
dc.subject | AGI safety | |
dc.subject | AI safety | |
dc.subject | General Collective Intelligence | |
dc.subject | Human-Centric Functional Modeling | |
dc.title | AI Safety and General Collective Intelligence |
Files
Original bundle
- Name: AI Safety and General Collective Intelligence -unformatted v5.pdf
- Size: 521.15 KB
- Format: Adobe Portable Document Format
License bundle
- Name: license.txt
- Size: 1.72 KB
- Format: Item-specific license agreed to upon submission