This post discusses the salient aspects of my latest article, “Artificial intelligence governance: Ethical considerations and implications for social responsibility”, published in Wiley’s Expert Systems.
This contribution “raises awareness of the importance of responsible artificial intelligence (AI) governance in data science research, as more individuals and organizations are utilizing AI systems, including machine learning (ML) and deep learning (DL) algorithms, among other disruptive innovations, for different applications”.
Significance: “Today, online users can utilize AI algorithms before making strategic decisions. These automated technologies are helping them improve their performance in various contexts. For example, service businesses rely on conversational technologies such as generative pre-trained transformers (GPT) to interact via text, images or speech. AI-driven chatbots’ dialogue formats enable them to respond to consumer inquiries. In addition, several companies are using ML/DL algorithms for business process automation (BPA), fraud prevention, malware detection, spam filtering, predictive maintenance, and recommender systems, among other purposes. In this light, some businesses are already availing themselves of facial recognition technologies.
Advanced systems are equipped to provide fast and effective responses to customers. Other ML/DL applications relate to business intelligence (BI) and analytics, as algorithms can identify important information in datasets and reveal patterns, trends, cycles and anomalies in big data as well as in small data. ML/DL may also be used in human resources information databases to identify the best candidates for an open position, and for other business purposes.
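To make the anomaly-detection use case above concrete, here is a minimal, illustrative sketch (not taken from the article): a simple z-score test of the kind a BI pipeline might apply to flag unusual values in a dataset. The function name and the sample data are my own assumptions.

```python
# Illustrative sketch: flag values that deviate from the mean by more
# than `threshold` standard deviations (a simple z-score anomaly test).
from statistics import mean, stdev

def flag_anomalies(values, threshold=3.0):
    """Return the values whose z-score exceeds `threshold`."""
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []  # all values identical: nothing to flag
    return [v for v in values if abs(v - mu) / sigma > threshold]

# Example: daily transaction totals with one clear outlier.
daily_totals = [102, 98, 101, 97, 103, 99, 100, 500]
print(flag_anomalies(daily_totals, threshold=2.0))  # → [500]
```

Real deployments would use more robust methods (e.g. isolation forests or seasonal decomposition), but the principle of scoring deviation from expected behavior is the same.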
DL algorithms enable computers and their artificial neural networks to collect and process data in a manner inspired by the human brain. They can recognize complex patterns in text, images, audio and video, and can provide reliable insights and predictions. Deep learning architectures, including deep belief networks, deep neural networks, deep reinforcement learning, convolutional neural networks, recurrent neural networks and transformers, are applied in various fields, including bioinformatics, computer vision, machine translation, material inspection, natural language processing, and speech recognition, among other areas. Frequently, DL algorithms yield results that are comparable to, and in some cases even surpass, human experts’ performance.
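The core computation shared by all the architectures listed above can be sketched in a few lines: each layer multiplies its inputs by learned weights, adds a bias, and applies a non-linear activation. The weights below are made-up values for illustration only; real networks learn them from data.

```python
# Illustrative sketch of a tiny two-layer neural network forward pass.
# All weights and biases are arbitrary illustrative values.

def relu(xs):
    """Rectified linear unit: the most common non-linear activation."""
    return [max(0.0, x) for x in xs]

def dense(inputs, weights, biases):
    """One fully connected layer: out_j = sum_i inputs[i]*weights[j][i] + biases[j]."""
    return [sum(i * w for i, w in zip(inputs, row)) + b
            for row, b in zip(weights, biases)]

def forward(x):
    # Layer 1: 2 inputs -> 3 hidden units, with ReLU activation.
    h = relu(dense(x, [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1, -0.1]))
    # Layer 2: 3 hidden units -> 1 output score.
    return dense(h, [[0.7, -0.5, 0.2]], [0.05])

print(forward([1.0, 2.0]))
```

"Deep" learning simply stacks many such layers, which is what lets the networks build up the complex representations the paragraph describes.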
Arguably, these disruptive AI technologies may be used in an irresponsible manner and/or for malicious purposes. Several academic colleagues have identified some of the risks of AI (Li et al., 2021; Magas & Kiritsis, 2022). In many cases, they warned that AI could disseminate misinformation, foster prejudice, bias and discrimination, raise privacy concerns, and lead to the loss of jobs (Butcher & Beridze, 2019). A few commentators argue about the ‘singularity’, the moment when machine learning technologies could even surpass human intelligence (Huang & Rust, 2022). They predict that a critical shift could occur if humans are no longer in a position to control AI. Hence, policymakers and academia, among other stakeholders, are concerned about the well-being of AI users.
This article sheds light on substantive regulations, as well as on reflexive principles and guidelines, that are intended for practitioners who are researching, testing, developing and implementing AI models. It explains how institutions, non-governmental organizations and technology conglomerates are introducing protocols (including self-regulations) to prevent contingencies arising from inappropriate AI governance”.
This research “indicates that the latest AI developments call for responsible governance and corporate digital responsibility to ensure that humanity can easily access and benefit from advanced data systems in a protected, safe and secure environment. It reports that various governments and international organizations are stepping in with their commitment to protect citizens’ and businesses’ interests. As a result, several regulatory authorities are outlining governance principles and guidelines intended to support practitioners in the development of AI, ML and DL technologies, with the aim of mitigating the associated risks. AI governance is intended to minimize risks including violations of privacy, misuse of personal information, bias, discrimination, and the like”.
The implications of these findings resonate in both theory and practice: “(i) this discursive paper sheds light on the latest developments in regulatory instruments, rules and principles on AI governance that apply to practitioners who are creating, testing and implementing AI models; (ii) it describes the findings from a rigorous review of high-impact articles focused on ‘AI governance’ and on the intersection of ‘AI’ and ‘corporate social responsibility’ (CSR); and (iii) it raises awareness about the importance and timeliness of formalizing responsible AI governance protocols to ensure that ML and DL systems are reliable, dependable and safe for business and society at large”.
This paper contributes to the discourse on the interplay between Sustainable Development and Data Science, “as it reports that, for the time being, there are few articles focused on the responsible governance of AI data science frameworks that provide substantive (outcome-based) and reflexive (process-based) guidelines for practitioners who are developing AI innovations. This research addresses this knowledge gap. Specifically, it puts forward an AI governance framework that is intended to promote sustainable development through accountable, transparent, explainable/interpretable/reproducible, fair, inclusive and secure AI solutions. In sum, it clarifies the meanings of the essential elements that are required for the governance of AI data science, in order to prevent unnecessary risks from affecting any parties. In conclusion, it discusses managerial implications for AI practitioners and policymakers”.
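One way to operationalize the framework's elements is as a pre-deployment checklist. The sketch below is purely illustrative (the class and field names are my own assumptions, not the article's framework); the field names simply mirror the principles listed above.

```python
# Illustrative sketch: tracking the governance principles named in the
# framework (accountability, transparency, explainability, fairness,
# inclusivity, security) as a simple pre-deployment checklist.
from dataclasses import dataclass, fields

@dataclass
class GovernanceChecklist:
    accountable: bool = False
    transparent: bool = False
    explainable: bool = False
    fair: bool = False
    inclusive: bool = False
    secure: bool = False

    def gaps(self):
        """Return the principles not yet satisfied for this AI system."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

check = GovernanceChecklist(accountable=True, transparent=True)
print(check.gaps())  # → ['explainable', 'fair', 'inclusive', 'secure']
```

In practice each item would be backed by evidence (audit logs, model cards, bias evaluations) rather than a boolean, but a structure like this makes the gaps visible before deployment.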
The open access article is available here: https://onlinelibrary.wiley.com/doi/10.1111/exsy.13406