Ethics and Governance in AI

Addressing the moral implications, fairness, transparency, and regulation of AI systems to ensure they align with societal values and human rights.

Recent Submissions

  • Item
    The Case for General Collective Intelligence Rather than Artificial General Intelligence being the Most Important Human Innovation in the History and Future of Mankind
    (2020-04-17) Williams, Andy
    Artificial General Intelligence (AGI), that is, an Artificial Intelligence with the ability to redesign itself and other technology on its own, has been called “mankind’s last invention”, since it may not only remove the necessity of any human invention afterwards, but might also design solutions far too complex for human beings to contribute to. Because of this, it has been argued by many that if and when AGI is ever invented, it will be the most important innovation in the history of mankind up to that point. Just as nature’s invention of human intelligence has transformed the entire planet and generated a greater economic impact than any other innovation in the history of the planet, AGI has been suggested to have the potential for an economic impact larger than that of any other innovation in the history of mankind. This paper explores the case for General Collective Intelligence (GCI) being a far more important innovation than AGI. General Collective Intelligence has been defined as a solution with the capacity to organize groups of human or artificial intelligences into a single collective intelligence with vastly greater general problem-solving ability. A recently proposed model of GCI not only outlines a model for cognition that might also enable AGI, but also identifies hidden patterns in collective outcomes for groups that might make GCI necessary in order to reliably achieve the benefits of AGI while avoiding its potentially catastrophic costs.
  • Item
    Trustworthy Machine Information Behaviour and Open Research Repositories
    (2023-06-14) Simango, Samuel
    The perception regarding the nature of repository users is changing as a result of technological advancements. This is reflected in arguments contending that machines should be recognised as users of library services. Such arguments are based on the view that library collections such as open repositories should be considered data that can be used by artificially intelligent machine users. These arguments raise questions regarding the concept of trust, as they do not actually address the attributes that machine users must possess in order to be considered trustworthy. This research develops a conceptual framework for understanding the parameters within which trustworthy machine information behaviour can emerge. This outcome is achieved by applying machine ethics to a modified version of Wilson's general theory of information behaviour that incorporates elements of machine learning. The results indicate that the level of trust placed in machine users depends on the algorithms and software used to program such AI systems, as well as on the actions of the humans who make use of such machine users. For any semblance of trust in machine users of open repositories to exist, the machine information behaviour employed by those users should adhere to certain ethical principles.