AI Safety and General Collective Intelligence

Williams, Andy
Date deposited: 2024-03-18
Date issued: 2020-12-31

https://doi.org/10.31730/osf.io/gw3ks
https://africarxiv.ubuntunet.net/handle/1/874
https://doi.org/10.60763/africarxiv/827

Abstract: Considering both current narrow AI and any Artificial General Intelligence (AGI) that might be implemented in the future, there are two categories of ways such systems might be made safe for the human beings that interact with them. One category consists of mechanisms that are internal to the system, and the other consists of mechanisms that are external to it. In either case, the complexity of the behaviours such systems are capable of can rise to the point at which these measures cannot be reliably implemented. However, General Collective Intelligence (GCI) can exponentially increase the general problem-solving ability of groups, and therefore their ability to manage complexity. This paper explores the specific cases in which AI or AGI safety cannot be reliably assured without GCI.

Keywords: AGI safety; AI safety; General Collective Intelligence; Human-Centric Functional Modeling