Artificial Intelligence
The study of Artificial Intelligence (AI) can be divided into several key categories, each addressing a different aspect of the field. Together, these categories give researchers, practitioners, and students a framework for exploring AI from multiple perspectives, covering both its technical aspects and its broader societal impacts.
Here is an overview of the items in this collection:
Browsing Artificial Intelligence by Title (17 items)
Item: A Pre-Trained Model for Driver Drowsiness Detection (2023-06-22). Ahmed, Amira.
Drowsiness is among the important factors that cause traffic accidents; therefore, a monitoring system is necessary to detect the state of a driver's drowsiness. Driver monitoring systems usually detect three types of information: biometric information, vehicle behavior, and the driver's graphic information. Drowsiness detection methods based on these three types of information are discussed. A prospect for arousal-level detection and estimation technology for autonomous driving is also presented. The technology will not be used to detect and estimate wakefulness for accident prevention; rather, it can be used to ensure that the driver has enough sleep to arrive comfortably at the destination. In this paper, we propose a ResNet-50 pre-trained model for driver drowsiness detection that achieves robust results and reaches 98% accuracy.
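The abstract above describes fine-tuning a pre-trained ResNet-50 for drowsiness detection but gives no implementation details. As a minimal sketch of that general transfer-learning recipe (not the authors' code; the dataset path, folder layout, and hyperparameters are assumptions), the following loads torchvision's pre-trained ResNet-50 and swaps its final layer for a two-class drowsy/alert head:

```python
# Hypothetical sketch: fine-tuning a pre-trained ResNet-50 for drowsy/alert classification.
# Dataset layout (data/drowsiness/train/{drowsy,alert}/...) and hyperparameters are
# assumptions, not details taken from the paper.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/drowsiness/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Load ImageNet weights and replace the classification head with a 2-class layer.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)  # drowsy vs. alert
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # small number of epochs, for illustration only
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```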
Item: A Technical Evaluation of the Performance of Classical Artificial Intelligence (AI) and Methods Based on Computational Intelligence (CI), i.e. Supervised Learning, Unsupervised Learning and Ensemble Algorithms, in Intrusion Detection Systems (2016-11). Zvarevashe, Kudakwashe; Mapanga, Innocent; Kadebu, Prudence.
The emergence of new technologies in this dynamic information era has caused a tremendous increase in the rate at which data is generated through interactive applications, thereby increasing the movement of information and data on communication networks as individuals, organizations and businesses interact on a daily basis. Big Data is flooding our networks and storage devices, raising concerns about the processing, storage, access and security of large blocks of data in most networks. The facilitation of online research services is always at risk from intruders and malicious activity. Most techniques used in today's Intrusion Detection Systems are not able to deal with the dynamic and complex nature of cyber-attacks on computer networks. Over the years, researchers have developed various methods to detect intrusions aimed at networks as well as standalone devices, based on machine learning algorithms, neural networks, statistical methods and other approaches. In this paper, we study several such schemes and compare their performance. The experiments are done using WEKA (Waikato Environment for Knowledge Analysis) and one of the most popular Intrusion Detection System datasets, NSL-KDD99, so as to analyse the consistency of each algorithm. We divide the schemes into methods based on classical artificial intelligence (AI) and methods based on computational intelligence (CI), i.e. supervised learning, unsupervised learning, ensemble and immune algorithms. We explain how various characteristics of CI techniques can be used to build efficient IDS. The paper further evaluates the performance of the algorithms using the following parameters: accuracy, detection rate and false alarm rate.
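The experiments in the paper above are run in WEKA on NSL-KDD; purely as an illustration of the same style of comparison in Python (an assumption, not the authors' setup), the sketch below trains two classifiers on a hypothetical pre-processed intrusion dataset and reports the three parameters the paper names: accuracy, detection rate, and false alarm rate.

```python
# Illustrative sketch only: comparing two classifiers on a labelled intrusion-detection
# dataset and reporting accuracy, detection rate and false alarm rate.
# The CSV path and column names are assumptions; the paper itself uses WEKA and NSL-KDD.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

df = pd.read_csv("nsl_kdd_preprocessed.csv")   # hypothetical pre-processed, numeric file
X = df.drop(columns=["label"])                 # feature columns
y = (df["label"] != "normal").astype(int)      # 1 = attack, 0 = normal traffic

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Random Forest", RandomForestClassifier(n_estimators=100, random_state=0))]:
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
    detection_rate = tp / (tp + fn)   # attacks correctly flagged
    false_alarm = fp / (fp + tn)      # normal traffic wrongly flagged
    print(f"{name}: accuracy={accuracy_score(y_test, pred):.3f} "
          f"detection_rate={detection_rate:.3f} false_alarm_rate={false_alarm:.3f}")
```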
Item: Achieving Smart Resource Management for Better Disaster Management using Space-based Technology in Lower Shire Basin, Malawi (2015-11). Chilonga, Donnex.
Advancements in new geo-spatial technologies across the globe are seen as a way to advance the decision-making process of first responders before and after a disaster. Unstructured disaster information, and an infrastructure for accurate disaster information, may now be accessed and retrieved successfully through new and important tools. Such tools could help improve the performance of disaster prediction in any country across the world. This paper illustrates a method, based on the incorporation of space and terrestrial technologies, that aims at providing vital information to first responders for the smart management of floods in the Lower Shire Basin, Malawi.

Item: Artificial General Intelligence (AGI) for Medical Education and Training (2023-10-20). Lema, Kennedy.
Artificial General Intelligence (AGI) has garnered worldwide attention as a transformative technology, thanks to the emergence of groundbreaking Large AI Models (LAMs), including Large Language Models, Large Vision Models, and Large Multi-Modal Models. AGI represents an ambitious endeavor to replicate human intelligence within computer systems, making it a pivotal technology poised to revolutionize medical training. Fueled by recent advancements in large pre-trained models, AGI signifies a remarkable stride in empowering machines to perform tasks demanding human-level intelligence. These tasks encompass reasoning, problem-solving, decision-making, and even the comprehension of human emotions and social interactions. This work conducts a comprehensive exploration of AGI, elucidating its fundamental concepts, capabilities, scope, and transformative potential in the realm of medical education and training. It specifically delves into Medical Simulation Environments, Interactive Virtual Labs, Humanoid Robots in Medical Education, Continuing Medical Education (CME), Personalized Learning Pathways, Intelligent Tutoring Systems, Natural Language Processing for Medical Texts, Clinical Decision Support, and Automated Assessment Tools. The examination encompasses a thorough analysis of the prospective advantages, challenges, limitations, risks, and ethical considerations that AGI poses to medical education and training programs, as well as its implications for medical educators. The development of AGI necessitates fostering interdisciplinary collaboration between educators and AI engineers to propel research and application endeavors in this transformative field.

Item: Artificial Intelligence: The Game Changer in Scientific Research (2024-08-08). Ilegbusi, Paul.
Artificial Intelligence (AI) has revolutionised scientific research by enhancing data analysis, accelerating research processes, and improving accuracy. AI's applications span various fields, including biomedicine, environmental science, physics, and materials science. This paper explores AI's transformative impact on scientific research, highlighting its role, applications, challenges, and future prospects. AI tools, such as Explain Paper, Paper Digest, and Chatdoc, facilitate research by summarizing papers, explaining complex concepts, and assisting with literature reviews. Despite AI's benefits, challenges persist, including data privacy and security concerns, bias, and transparency issues. To address these challenges, the paper emphasizes the need for ethical guidelines, robust security measures, and interpretable AI models. The future of AI in scientific research holds promise, with emerging trends and technologies, interdisciplinary innovations, and collaborative platforms driving progress. The paper concludes by highlighting the importance of addressing AI's challenges to ensure its beneficial impact on science and society.
Item: Defining Functional Models of Artificial Intelligence Solutions to Create a Library that an Artificial General Intelligence can use to Increase General Problem Solving Ability (2020-04-27). Williams, Andy.
The AI industry continues to enjoy robust growth. With the growing number of AI algorithms, the question becomes how to leverage all these models intelligently in a way that reliably converges on AGI. One approach is to gather all these models into a single library that a system of artificial intelligence might use to increase its general problem-solving ability. This paper explores the requirements for building such a library, for making that library searchable for AI algorithms that might significantly increase impact on any given problem, and for the use of that library to reliably converge on AGI. The paper also explores the importance to such an effort of defining a common set of semantic functional building blocks in terms of which AI models can be represented: in particular, how that functional decomposition might be used to organize large-scale cooperation to create such an AI library, where such cooperation has not yet proved possible otherwise, and how both that collaboration and the library itself might significantly increase the impact of each AI and AGI researcher's work.

Item: Exploring the Potential of Artificial Intelligence for Supporting Indigenous Language Journalism Pedagogy in Nigeria (2023-06-14). Iyinolakan, Olayinka.
The African continent has more than 2,100 indigenous languages, but many of them are not well represented in the media. Artificial intelligence (AI) technology offers an opportunity to digitally incorporate these languages into news media and enable journalism pedagogy that emphasizes their use. However, there is limited research on how to integrate AI into journalism training in Africa, especially for indigenous languages. This study evaluates the benefits and challenges of integrating AI tools into journalism training in Nigeria to promote productivity and the inclusion of indigenous communities in media content. A mixed research design was used, collecting data through in-depth interviews at journalism schools in Nigeria, a semi-structured survey of current journalists, and secondary data available via AI tools. The findings suggest that using AI tools in journalism education can improve the quality of journalism and equip journalists with the skills needed to succeed in the digital age. However, there is no immediate urgency to integrate native-language journalism beyond entry level. A bureaucracy-free, dynamic curriculum is needed to train budding journalists and retrain veteran practitioners, with funding for recent tools. Future research should broaden the scope and sample size to produce comprehensive and generalizable results for other AI contexts within and beyond Nigeria.

Item: Increasing Discovery in Research, Design, and Other Processes with Artificial General Intelligence and General Collective Intelligence (2020-12-17). Williams, Andy.
Any system with repeatable behavior can potentially be defined by the minimal set of functions that might be composed to represent the entirety of that behavior. The states accessible through these functions then form a "functional state space" through which the system moves. Since functional state spaces can be used to represent every problem domain, from physics to communications to business operations to human cognition itself, a general approach to research, design, and all other processes of discovery that is applicable to all domains can potentially be defined, radically increasing the capacity for discovery in each domain.
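To make the idea of a functional state space more concrete, here is a toy sketch (a construction for illustration only, not taken from the paper above) in which a system is defined by a small set of composable functions, and discovery becomes a search for a composition of those functions that reaches a target state:

```python
# Toy illustration of a "functional state space": the states reachable by composing a small
# set of functions, plus a breadth-first search for a composition that reaches a goal state.
# The example system (integer states, increment/double functions) is purely hypothetical.
from collections import deque

functions = {
    "increment": lambda s: s + 1,
    "double": lambda s: s * 2,
}

def find_path(start, goal, limit=1000):
    """Return a sequence of function names whose composition maps start to goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == goal:
            return path
        for name, fn in functions.items():
            nxt = fn(state)
            if nxt not in seen and nxt <= limit:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None

print(find_path(1, 10))  # ['increment', 'double', 'increment', 'double']: 1 -> 2 -> 4 -> 5 -> 10
```

Here the problem domain is trivially small; the paper's claim is that the same framing scales to domains such as physics, business operations, and cognition.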
Item: Individualization of Products and Services with Artificial General Intelligence and General Collective Intelligence (2020-12-15). Williams, Andy.
INTRODUCTION: With advances in big data techniques having already led to search results and advertising being customized to the individual user, the concept of an online education designed solely for an individual, or of online news, entertainment media, or any other virtual service being designed uniquely for each individual, no longer seems far-fetched. However, designing services that maximize user outcomes, as opposed to services that maximize outcomes for the corporation owning them, requires modeling user processes and the outcomes they target. OBJECTIVES: To explore the use of Human-Centric Functional Modeling (HCFM) to define functional state spaces within which human processes are well-defined paths, and within which products and services solve specific navigation problems, so that by considering all of a given individual's desired paths through a given state space, it is possible to automate the customization of those products and services for that individual or for groups of individuals. METHODS: An analysis is performed to assess how and whether intelligent agents based on some subset of the functionality required for Artificial General Intelligence (AGI) might be used to optimize for the individual user, and whether, and if so how, General Collective Intelligence (GCI) might be used to optimize across all users. RESULTS: AGI and GCI create the possibility of individualizing products and services, even shared services such as the Internet or news services, so that every individual sees a different version. CONCLUSION: The conceptual example of customizing a news media website for two individual users of opposite political persuasions suggests that, while customizing such services might result in massively increased storage and processing overhead, within a network of cooperating services in which this customization reliably creates value it is potentially a significant opportunity.

Item: Leveraging Artificial Intelligence for Advancements in the Pharmaceutical Field: A Comprehensive Review (2023-09-13). Hesham, Mostafa.
The pharmaceutical industry has witnessed a paradigm shift with the integration of artificial intelligence (AI) into various aspects of drug discovery, development, and healthcare delivery. This paper provides a comprehensive review of the impact of AI in the pharmaceutical field, highlighting its contributions, challenges, and prospects. We explore AI applications in drug discovery, clinical trials, personalized medicine, and healthcare management, emphasizing the potential benefits and ethical considerations associated with this transformative technology.

Item: On the Preservation of Africa's Cultural Heritage in the Age of Artificial Intelligence (2024-03-08). Louadi, Mohamed.
In this paper we delve into the historical evolution of data as a fundamental element in communication and knowledge transmission. The paper traces the stages of knowledge dissemination from oral traditions to the digital era, highlighting the significance of languages and cultural diversity in this progression. It also explores the impact of digital technologies on memory, communication, and cultural preservation, emphasizing the need to promote a culture of the digital (rather than a digital culture) in Africa and beyond. Additionally, it discusses the challenges and opportunities presented by data biases in AI development, underscoring the importance of creating diverse datasets for equitable representation. We advocate for investing in data as a crucial raw material for fostering digital literacy, economic development, and, above all, cultural preservation in the digital age.

Item: Question Banks: A Tool for Improving Higher Education Assessment Across National Resource Networks: The Polytechnic of Malawi Case Study (2015-11). Chilivumbo, Chifundo.
Question banks are used to increase access to quality material for assessing students in institutions of higher learning. A good question bank, in line with the learning-oriented assessment framework, should facilitate learning-oriented assessment tasks, develop evaluative expertise, and aid student engagement with feedback. This paper seeks to create a solution that allows these properties to be streamlined by an information system for higher-education institutions, delivered over national resource networks, with one of the University of Malawi's constituent colleges, the Polytechnic, as a case study. The paper documents work done with the Department of Mathematics and Statistics and the Department of Language and Communication, in the Faculties of Applied Sciences and of Education and Media Studies respectively. Information about assessment creation and the storage of assessment artifacts was gathered from these two departments through the study of existing literature, observation of processes, and a self-administered questionnaire given to participants from the two departments. A first version of the software was created, with ongoing work to improve the system and ensure it efficiently aids the assessment process. Although the current process is paper based, the system tracks it electronically, with the aim of aiding the creation and research of questions and, possibly in the future, the electronic delivery of assessments. The system also takes into account the interoperability of the new system with the University's existing systems that support virtual learning and student information management.
Item: Strengthening the Emerging Digital Technologies Ecosystem in Kenya (2023-09). Ogot, Madara; Muthee, Margaret; Muriuki, Rita; Njunguna, Samuel.
The rapid increase in devices (mobile phones, computers, sensors, etc.) connected to the Internet (and thus to databases) has resulted in exponential growth in data generation and in associated emerging digital technologies (EDTs) that can "identify patterns in observed data, build explanatory models, and make predictions quicker and with more accuracy than humans" (Pawelke et al., 2017). EDT/x-data-based applications and algorithms are mainly created in developed countries and often lack transparency arising from intellectual property rights, thus hindering realization of the enormous potential such applications have in addressing the socio-economic challenges faced by developing countries, including Kenya. Where applications exist, they are often not broadly accessible, especially for persons with disabilities, areas with slow internet connections, or members of underrepresented groups. In this policy brief, the generic term "big data" is unpacked into four overlapping categories of data: big data, open data, user-generated data and real-time data, collectively referred to as "x-data". EDTs are taken to include artificial intelligence (AI), blockchain, geographic information systems (GIS), the Internet of Things (IoT), and big data analytics; these methodologies are often used together. Gaps in the support systems needed to develop EDT/x-data-based applications have created new digital divides between developing and developed countries. Further, barriers persist to the use and take-up of x-data by decision-makers, including competing data sources, data quality, limited awareness of data existence, and inadequate transformation of data into useful information or tailoring to match decision-makers' needs.

Item: The Case for General Collective Intelligence Rather than Artificial General Intelligence being the Most Important Human Innovation in the History and Future of Mankind (2020-04-17). Williams, Andy.
Artificial General Intelligence, that is, an Artificial Intelligence with the ability to redesign itself and other technology on its own, has been called "mankind's last invention", since it may not only remove the necessity of any human invention afterwards, but might also design solutions far too complex for human beings to be able to contribute to in any case. Because of this, it has been argued by many that if and when AGI is invented, it will be the most important innovation in the history of mankind up to that point. Just as nature's invention of human intelligence might have transformed the entire planet and generated a greater economic impact than any other innovation in the history of the planet, AGI has been suggested to have the potential for an economic impact larger than that of any other innovation in the history of mankind. This paper explores the case for General Collective Intelligence being a far more important innovation than AGI. General Collective Intelligence has been defined as a solution with the capacity to organize groups of human or artificial intelligences into a single collective intelligence with vastly greater general problem-solving ability. A recently proposed model of GCI not only outlines a model for cognition that might also enable AGI, but also identifies hidden patterns in collective outcomes for groups that might make GCI necessary in order to reliably achieve the benefits of AGI while reliably avoiding its potentially catastrophic costs.
Item: The Integration of Artificial Intelligence (AI) in Literature Review and its Potential to Revolutionize Scientific Knowledge Acquisition (2024-04-28). Ilegbusi, Paul.
This presentation discusses the role of artificial intelligence (AI) in enhancing the literature review process and its potential to transform scientific knowledge acquisition. It highlights the importance of the literature review in research and the challenges associated with the traditional manual approach, and it emphasizes that integrating AI into literature review can significantly improve efficiency and accuracy and reduce bias. AI-powered tools can automate various aspects of the literature review process, including the search, selection, analysis, and synthesis of relevant literature. The benefits of AI in literature review include increased efficiency, improved coverage of the literature, and the ability to identify gaps in knowledge and uncover new research questions. The presentation also provides a comprehensive list of AI tools that can be used in literature review, such as Cramly.ai, Quillbot, GPT-minus 1, ChatGPT, Samwell.ai, and many others. These tools offer functionalities such as rewriting, paraphrasing, summarizing, understanding literature, and extracting key information from articles. The future of AI in literature review is promising, with emerging trends such as deep learning models and knowledge graphs that have the potential to enhance the accuracy and comprehensiveness of literature reviews. In conclusion, the integration of AI in literature review has the potential to revolutionize scientific knowledge acquisition by improving the efficiency, accuracy, and coverage of literature reviews. By combining AI with human expertise, researchers can unlock new insights and accelerate scientific progress in various fields.

Item: Trustworthy Machine Information Behaviour and Open Research Repositories (2023-06-14). Simango, Samuel.
The perception of the nature of repository users is changing as a result of technological advancements. This is reflected in arguments contending that machines should be recognised as users of library services. Such arguments are based on the view that library collections, such as open repositories, should be considered as data that can be used by artificially intelligent machine users. These arguments raise questions regarding the concept of trust, as they do not actually address the attributes that machine users must possess in order to be considered trustworthy. This research develops a conceptual framework for understanding the parameters within which trustworthy machine information behaviour can emerge. This outcome is achieved by applying machine ethics to a modified version of Wilson's general theory of information behaviour that incorporates elements of machine learning. The results indicate that the level of trust placed in machine users depends on the algorithms and software used to program such AI systems, as well as on the actions of the humans who make use of such machine users. For any semblance of trust in machine users of open repositories to exist, the information behaviour of those machine users should adhere to certain ethical principles.
Item: Visualising Multi-Sensor Predictions from a Rice Disease Classifier (2022-12-03). Muhia, Brian.
The Microsoft Rice Disease Classification Challenge introduced a dataset comprising RGB and RGNIR (RG-near-infrared) images. This second image type increased the difficulty of the challenge, such that all of the winning models worked with RGB only. In this challenge we applied a res2next50 encoder, first pre-trained with self-supervised learning through the SwAV algorithm, to represent each RGB image and its corresponding RGNIR image with the same weights. The encoder was then fine-tuned and self-distilled to classify the images, which produced a public test set score of 0.228678639 and a private score of 0.183386940. K-fold cross-validation was not used for this challenge result. To better understand the impact of self-supervised pre-training on the problem of classifying each image type, we apply t-distributed Stochastic Neighbour Embedding (t-SNE) to the logits (the predictions before applying softmax). We show how this method graphically provides some of the value of a confusion matrix by locating some incorrect predictions. We then render the visualisation by overlaying the raw images on each data point, and note that, to this model, the RGNIR images do not appear to be inherently more difficult to categorise. We make no comparisons through sweeps, RGB-only models or RGNIR-only models; this is left to future work.
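As a rough sketch of the visualisation step described above (not the authors' code; the logits and labels here are random placeholders standing in for real model outputs), the snippet below projects per-image logits to two dimensions with scikit-learn's t-SNE and colours each point by whether the prediction was correct:

```python
# Illustrative sketch: t-SNE on classifier logits, colouring points by prediction correctness.
# `logits` (N x num_classes) and `labels` (N,) are placeholders standing in for an existing
# model's validation-set outputs; they are not data from the paper.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
logits = rng.normal(size=(500, 3))       # placeholder: replace with real model logits
labels = rng.integers(0, 3, size=500)    # placeholder: replace with true class labels

preds = logits.argmax(axis=1)
correct = preds == labels

# Project the logit vectors to 2-D for plotting.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(logits)

plt.scatter(embedding[correct, 0], embedding[correct, 1], s=8, label="correct")
plt.scatter(embedding[~correct, 0], embedding[~correct, 1], s=8, label="incorrect")
plt.legend()
plt.title("t-SNE of classifier logits")
plt.savefig("tsne_logits.png", dpi=150)
```

Clusters of "incorrect" points in such a plot play a role similar to off-diagonal cells of a confusion matrix, which is the effect the abstract describes; overlaying thumbnails of the raw images on the points is a straightforward extension.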