This is an excerpt from one of my contributions on the use of responsive chatbots by service businesses. The content was adapted for this blogpost.
Suggested citation: Camilleri, M.A. & Troise, C. (2022). Live support by chatbots with artificial intelligence: A future research agenda. Service Business, https://doi.org/10.1007/s11628-022-00513-9
Chatbots are usually considered automated conversational systems that are capable of mimicking humanlike conversations. Previous research suggested that, at times, human beings treat computers as social beings (Nass and Moon 2000; Nass et al. 1994; Rha and Lee 2022), although they are well aware that dialogue programs do not possess emotions, feelings and identities. Individuals may still perceive that service chatbots have some sort of social presence when they interact with them (Leung and Wen 2020; McLean et al. 2020; Pantano and Scarpi 2022; Schuetzler et al. 2020), even though these technologies are capable of responding to thousands of potential users at once (Caldarini et al. 2022).
Currently, few academic contributions use theoretical bases like the social presence theory (Grimes et al. 2020; Schuetzler et al. 2020) and/or the social response theory (Adam et al. 2021; Huang and Lin 2011) to explore human-computer interactions and/or the utility of dialogue systems like chatbots, albeit with a few exceptions. A few commentators made specific reference to related theories to describe the characteristics of chatbots or of conversational agents that are primarily used for consumer engagement purposes (Cheng and Jiang 2020; Kull et al. 2021; Mostafa and Kasamani 2021; Nuruzzaman and Hussain 2020).
The human-machine communication theory was formulated in response to the growing number of technologies, like AI and robotics, that are designed to function as message sources rather than as message channels (Flavián et al. 2021). Lewis et al. (2019) contended that social bots, and even a few chatbots, have pushed into the realm of what was thought to be a purely human role. Wilkinson et al.’s (2021) study shed light on human beings’ perceptions of conversational recommender systems. In this case, the authors went on to suggest that experienced users trusted these disruptive technologies and had higher expectations of them.
Other researchers examined the online users’ trust toward chatbots in various settings (Balakrishnan and Dwivedi 2021; Borau et al. 2021; Cheng and Jiang 2020; De Cicco et al. 2020; Hildebrand and Bergner 2021; Kushwaha et al. 2021; Mozafari et al. 2021; Nuruzzaman and Hussain 2020; Pillai and Sivathanu 2020). Eren (2021) confirmed that the users’ performance perceptions regarding the use of chatbots positively affected their customer satisfaction levels in the banking sector. This finding is in line with the expectancy violation theory, as individuals form expectations following their interactions with information systems (Chopra 2019; Neuburger et al. 2018).
The individuals’ social expectations from conversational technologies are especially pronounced when they incorporate cues of humanness (Adam et al. 2021; Pfeuffer et al. 2019), that are not present in traditional systems like websites, mobile applications, and databases (Belanche et al. 2021). The anthropomorphic features of AI dialogue systems make it easier for humans to connect with them (Adam et al. 2021; Becker et al. 2022; Forgas-Coll et al. 2022; Van Pinxteren et al. 2020).
Many quantitative researchers have investigated online users’ perceptions and attitudes toward these interactive technologies. Very often, they relied on valid measures that were tried and tested in academia. Some utilized the theory of reasoned action (Huang and Kao 2021), the theory of planned behavior (Brachten et al. 2021; Ciechanowski et al. 2019), the behavioral reasoning theory (Lalicic and Weismayer 2021), the technology acceptance model (Kasilingam 2020) or the unified theory of acceptance and use of technology (Mostafa and Kasamani 2021), as they sought to investigate the individuals’ utilitarian motivations to use chatbot technologies to resolve their consumer issues. Others examined the users’ gratifications (Cheng and Jiang 2020; Rese et al. 2020), perceived enjoyment (De Cicco et al. 2020; Kushwaha et al. 2021; Rese et al. 2020), emotional factors (Crolic et al. 2021; Lou et al. 2021; Schepers et al. 2022; Wei et al. 2021), and/or intrinsic motivations (Jiménez-Barreto et al. 2021), to determine whether these factors were (or were not) affecting their intentions to use them.
Featuring a few snippets from one of my latest co-authored papers on the use of sustainable technologies in different industry sectors. A few sections have been adapted to be presented as a blog post.
Suggested citation: Varriale, V., Camilleri, M. A., Cammarano, A., Michelino, F., Müller, J., & Strazzullo, S. (2024). Unleashing digital transformation to achieve the sustainable development goals across multiple sectors. Sustainable Development, https://doi.org/10.1002/sd.3139
Abstract: Digital technologies have the potential to support the achievement of the Sustainable Development Goals (SDGs). Existing scientific literature lacks a comprehensive analysis of the triple link: “digital technologies – different industry sectors – SDGs”. By systematically analyzing extant literature, 1098 sustainable business practices are identified from 578 papers. The researchers noted that 11 digital technologies are employed across 17 industries to achieve the 17 SDGs. They report that artificial intelligence can be used to achieve affordable and clean energy (SDG 7), responsible consumption and production (SDG 12) as well as to address climate change (SDG 13). Further, geospatial technologies may be applied in the agricultural industry to reduce hunger in various domains (SDG 2), to foster good health and well‐being (SDG 3), to improve the availability of clean water and sanitation facilities (SDG 6), raise awareness on responsible consumption and production (SDG 12), and to safeguard life on land (SDG 15), among other insights.
Literature review: The integration of digital technologies has emerged as a transformative force in advancing sustainability objectives across diverse sectors and industries. Digital technologies offer unprecedented opportunities to enhance resource efficiency, optimize processes, and foster innovation, thereby facilitating progress toward the attainment of the SDGs (Birkel & Müller, 2021; Camilleri et al., 2023; Cricelli et al., 2024). Table 1 sheds light on digital technologies that can be used to achieve the Sustainable Development Goals.
Table 2 provides a list of digital technologies (Perano et al., 2023). These disruptive innovations were used as keywords in the search string through Scopus.
Table 3 identifies sectors and industries based on the SIC code classification (United Kingdom Government, 2024).
Theoretical implications: This article offers a comprehensive overview of the intersection between digitalization and sustainability across various industry sectors, while considering their distinctive characteristics. The research analyzed 578 articles and identified 1098 sustainable business practices (SBPs), which were categorized into a three-dimensional framework connecting digital technologies, sectors & industries, and SDGs. This approach provides a new and innovative perspective on combining sustainability and digitalization by highlighting both promising and established areas of digital technology implementation. Theoretically, this study presents a clear and comprehensive picture of how digital technologies are adopted in different industries to achieve the SDGs. It classifies SBPs into three dimensions: (a) digital technology, (b) sectors & industries, and (c) SDGs. The goal is to present an up-to-date and thorough representation of digital technologies used to achieve the SDGs, based on information from scientific articles.
This contribution sheds light on key opportunities for the application of digital technologies. It identifies specific areas where they can be most effective. Unlike other research studies, this study uses a database of SBPs that can be applied across different industry sectors, to explain how practitioners can enhance their sustainability performance and achieve the SDGs. The three-dimensional framework illustrated in this article allows stakeholders to better understand how to adapt their business strategies and day-to-day operations to increase their sustainability credentials and to reduce their environmental impacts.
Managerial and policy implications: This research provides a comprehensive overview of the implementation of digital technologies across various industries and sectors. It raises awareness on how they can be utilized to achieve the SDGs. It highlights established applications of technologies and also identifies new ones. The proposed framework associates various digital technologies with specific industry sectors and clearly explains how they can be employed to achieve the SDGs. Hence, this research and its findings should benefit practitioners, managers, and policy-makers.
The rationale behind this contribution is to build a robust knowledge base about the use of sustainable technologies among stakeholders. This way, they will be in a better position to improve their corporate responsibility credentials. Managers can use this study’s proposed framework to gain a deeper understanding of SBPs at three levels. In a nutshell, this research posits that SBPs can support practitioners in their strategic and operational decisions while minimizing the risks associated with adopting technologies that are less effective in addressing sustainability challenges. Additionally, this paper offers valuable insights for policymakers. It implies that research funds ought to be allocated toward specific sustainable technologies. This way, they can support various industry sectors in a targeted manner, and foster the development of digital transformation for the achievement of different SDGs.
Featuring an excerpt and a few snippets from one of my latest articles related to Generative Artificial Intelligence (AI).
Suggested Citation: Camilleri, M.A. (2024). Factors affecting performance expectancy and intentions to use ChatGPT: Using SmartPLS to advance an information technology acceptance framework, Technological Forecasting and Social Change, https://doi.org/10.1016/j.techfore.2024.123247
The introduction
Artificial intelligence (AI) chatbots utilize algorithms that are trained to process and analyze vast amounts of data, using techniques ranging from rule-based approaches to statistical models and deep learning, in order to generate natural text and respond to online users based on the input they receive (OECD, 2023). For instance, OpenAI‘s Chat Generative Pre-Trained Transformer (ChatGPT) is one of the most popular AI-powered chatbots. The company claims that ChatGPT “is designed to assist with a wide range of tasks, from answering questions to generating text in various styles and formats” (OpenAI, 2023a). OpenAI clarifies that its GPT-3.5 is a free-to-use language model that was optimized for dialogue by using Reinforcement Learning from Human Feedback (RLHF) – a method that relies on human demonstrations and preference comparisons to guide the model toward desired behaviors. Its models are trained on vast amounts of data, including conversations that were created by humans (such content is accessed through the Internet). The responses it provides appear to be as human-like as possible (Jiang et al., 2023).
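To make the preference-comparison idea behind RLHF concrete, here is a minimal sketch of a toy reward-model update: given two candidate responses to the same prompt, the model learns to score the human-preferred one higher, and that reward signal is later used to fine-tune the language model. The network, dimensions and tensors below are hypothetical placeholders, not OpenAI’s implementation.

```python
# Toy reward-model update from pairwise human preferences (illustrative only).
import torch
import torch.nn as nn

# A stand-in reward model that scores a response embedding (hypothetical sizes).
reward_model = nn.Sequential(nn.Linear(768, 128), nn.ReLU(), nn.Linear(128, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

# Hypothetical embeddings of two candidate responses to the same prompts;
# human annotators judged `chosen` to be the better answer in each pair.
chosen = torch.randn(16, 768)
rejected = torch.randn(16, 768)

# Bradley-Terry style loss: the preferred response should score higher.
r_chosen, r_rejected = reward_model(chosen), reward_model(rejected)
loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

optimizer.zero_grad()
loss.backward()
optimizer.step()
```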
GPT-3.5’s training data was last updated in September 2021. However, the GPT-4 version, which comes with a paid plan, is more creative than GPT-3.5; it can accept images as inputs, and it can generate captions, classifications and analyses (Qureshi et al., 2023). Its developers assert that GPT-4 can create better content, including extended conversations, as well as document search and analysis (Takefuji, 2023). Recently, its proponents noted that ChatGPT can be utilized for academic purposes, including research. It can extract and paraphrase information, translate text, grade tests, and/or it may be used for conversation purposes (MIT, 2023). Various stakeholders in education noted that this large language model (LLM) tool may be able to provide quick and easy answers to questions.
However, earlier this year, several higher education institutions issued statements that warned students against using ChatGPT for academic purposes. In a similar vein, a number of schools banned ChatGPT from their networks and devices (Rudolph et al., 2023). Evidently, policy makers were concerned that this text-generating AI system could disseminate misinformation and even promote plagiarism. Some commentators argue that it can affect the students’ critical-thinking and problem-solving abilities. Such skill sets are essential for their academic and lifelong successes (Liebrenz et al., 2023; Thorp, 2023). Nevertheless, a number of jurisdictions are reversing their decisions that prevent students from using this technology (Reuters, 2023). In many cases, educational leaders are realizing that their students could benefit from this innovation, if they are properly taught how to adopt it as a tool for their learning journey.
Academic colleagues are increasingly raising awareness on different uses of AI dialogue systems like service chatbots and/or virtual assistants (Baabdullah et al., 2022; Balakrishnan et al., 2022; Brachten et al., 2021; Hari et al., 2022; Li et al., 2021; Lou et al., 2022; Malodia et al., 2021; Sharma et al., 2022). Some of them are evaluating their strengths and weaknesses, including those of OpenAI’s ChatGPT (Farrokhnia et al., 2023; Kasneci et al., 2023). Very often, they argue that there may be instances where the chatbots’ responses are not completely accurate and/or may not fully address the questions that are asked of them (Gill et al., 2024). This may be due to different reasons. For example, GPT-3.5’s responses are based on the data that were uploaded before a knowledge cut-off date (i.e. September 2021). This can have a negative effect on the quality of its replies, as the algorithm is not up to date with the latest developments. At the moment, there is a knowledge gap and a few grey areas on the use of AI chatbots that rely on natural language processing to create humanlike conversational dialogue. Only a few contributions have critically evaluated their pros and cons, and even fewer studies have investigated the factors affecting the individuals’ engagement levels with ChatGPT.
This empirical research builds on theoretical underpinnings related to information technology adoption in order to examine the online users’ perceptions and intentions to use AI chatbots. Specifically, it integrates a perceived interactivity construct (Baabdullah et al., 2022; McMillan and Hwang, 2002) with information quality and source trustworthiness measures (Leong et al., 2021; Sussman and Siegal, 2003) from the Information Adoption Model (IAM), and with the performance expectancy, effort expectancy and social influences constructs (Venkatesh et al., 2003; Venkatesh et al., 2012) from the Unified Theory of Acceptance and Use of Technology (UTAUT1/UTAUT2), to determine which factors are influencing the individuals’ intentions to use AI text generation systems like ChatGPT. This study’s focused research questions are:
RQ1
How and to what extent are information quality and source trustworthiness influencing the online users’ performance expectancy from ChatGPT?
RQ2
How and to what extent are their perceptions about ChatGPT’s interactivity, performance expectancy, effort expectancy, as well as their social influences affecting their intentions to continue using their large language models?
RQ3
How and to what degree is the performance expectancy construct mediating the relationship between effort expectancy and intentions to use these interactive AI technologies?
This study hypothesizes that information quality and source trustworthiness are significant antecedents of performance expectancy. It presumes that this latter construct, together with effort expectancy, social influences as well as perceived interactivity affect the online users’ acceptance and usage of generative pre-trained AI chatbots like GPT-3.5 or GPT-4.
Notwithstanding, for the time being, there is still scant research that is focused on AI-powered LLMs, like ChatGPT, that are capable of generating human-like text based on previous contexts and drawn from past conversations. This timely study raises awareness on the individuals’ perceptions about the utilitarian value of such interactive technologies in an academic (higher education) context. It clearly identifies the factors that are influencing the individuals’ intentions to continue using them in the future.
From the literature review
Table 1 features a summary of the most popular theoretical frameworks that sought to identify the antecedents and the extent to which they may affect the individuals’ intentions to use information technologies.
Table 1. A non-exhaustive list of theoretical frameworks focused on (information) technology adoption behaviors
Figure 1 features the conceptual framework that investigates information technology adoption factors. It represents a visual illustration of the hypotheses of this study. In sum, this empirical research presumes that information quality and source trustworthiness (from the Information Adoption Model) precede performance expectancy. The latter construct, together with effort expectancy, social influences (from the Unified Theory of Acceptance and Use of Technology) as well as the perceived interactivity construct, are significant antecedents of the individuals’ intentions to use ChatGPT.
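These hypothesized paths lend themselves to structural equation modeling. The article used SmartPLS (a commercial PLS-SEM tool); as an illustrative open-source alternative, the sketch below specifies the same measurement and structural relations in semopy’s lavaan-style syntax. All item names (iq1, pe1, …) and the data file are hypothetical stand-ins for the survey data, so this is an approximation rather than the article’s actual procedure.

```python
# A minimal covariance-based SEM sketch of the hypothesized model (illustrative).
import pandas as pd
from semopy import Model

model_desc = """
IQ =~ iq1 + iq2 + iq3
ST =~ st1 + st2 + st3
PE =~ pe1 + pe2 + pe3
EE =~ ee1 + ee2 + ee3
SI =~ si1 + si2 + si3
INT =~ int1 + int2 + int3
BI =~ bi1 + bi2 + bi3
PE ~ IQ + ST + EE
BI ~ PE + EE + SI + INT
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical Likert-scale items
model = Model(model_desc)
model.fit(data)
print(model.inspect())  # path coefficients, standard errors and p-values
```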
The survey instrument
The respondents were instructed to answer all survey questions that were presented to them about information quality, source trustworthiness, performance expectancy, effort expectancy, social influences, perceived interactivity and on their behavioral intentions to continue using this technology (otherwise, they could not submit the questionnaire). Table 2 features the list of measures as well as their corresponding items that were utilized in this study. It also provides a definition of the constructs used in the proposed information technology acceptance framework.
Table 2. The list of measures and the corresponding items used in this research.
Theoretical implications
This research sought to explore the factors that are affecting the individuals’ intentions to use ChatGPT. It examined the online users’ effort and performance expectancy, social influences as well as their perceptions about the information quality, source trustworthiness and interactivity of generative text AI chatbots. The empirical investigation hypothesized that performance expectancy, effort expectancy and social influences from Venkatesh et al.’s (2003) UTAUT, together with a perceived interactivity construct (McMillan and Hwang, 2002), were significant antecedents of their intentions to revisit ChatGPT’s website and/or to use its app. Moreover, it presumed that information quality and source trustworthiness measures from Sussman and Siegal’s (2003) IAM were the precursors of performance expectancy.
The results from this study report that source trustworthiness-performance expectancy is the most significant path in this research model. They confirm that online users believed there is a connection between the source’s trustworthiness, in terms of its dependability, and the degree to which using such a generative AI system will help them improve their job performance. Similar effects were also evidenced in previous IAM theoretical frameworks (Kang and Namkung, 2019; Onofrei et al., 2022), as well as in a number of studies related to TAM (Assaker, 2020; Chen and Aklikokou, 2020; Shahzad et al., 2018) and/or to UTAUT/UTAUT2 (Lallmahomed et al., 2017).
In addition, this research also reports that the users’ perceptions about information quality significantly affect their performance expectancy from ChatGPT. Yet, this link was weaker than the former, thus implying that the respondents’ perceptions about the usefulness of this text-generative technology were clearly influenced by the peripheral cues of communication (Cacioppo and Petty, 1981; Shi et al., 2018; Sussman and Siegal, 2003; Tien et al., 2019).
Very often, academic colleagues noted that individuals would probably rely on the information that is presented to them, if they perceive that the sources and/or their content are trustworthy (Bingham et al., 2019; John and De’Villiers, 2020; Winter, 2020). Frequently, they indicated that source trustworthiness would likely affect the users’ beliefs about the usefulness of information technologies, as they enable them to enhance their performance. Conversely, some commentators argued that there may be users who could be skeptical and wary about using new technologies, especially if they are unfamiliar with them (Shankar et al., 2021). They noted that such individuals may be concerned about the reliability and trustworthiness of the latest technologies.
The findings suggest that the individuals’ perceptions about the interactivity of ChatGPT are a precursor of their intentions to use it. This link is also highly significant. Therefore, the online users appear to appreciate this information technology’s responsiveness to their prompts (in terms of its computer-human communications). Evidently, ChatGPT’s interactivity attributes are having an impact on the individuals’ readiness to engage with it, and to seek answers to their questions. Similar results were reported in other studies that analyzed how the interactivity and anthropomorphic features of dialogue systems like live support chatbots, or virtual assistants, can influence the online users’ willingness to continue utilizing them in the future (Baabdullah et al., 2022; Balakrishnan et al., 2022; Brachten et al., 2021; Liew et al., 2017).
There are a number of academic contributions that sought to explore how, why, where and when individuals are lured by interactive communication technologies (e.g. Hari et al., 2022; Li et al., 2021; Lou et al., 2022). Generally, these researchers posited that users are accustomed to information systems that are programmed to engage with them in a dynamic and responsive manner. Very often, they indicated that many individuals are favorably disposed to use dialogue systems that are capable of providing them with instant feedback and personalized content. Several colleagues suggest that positive user experiences, as well as high satisfaction levels and enjoyment, could enhance their connection with information technologies, and will probably motivate them to continue using them in the future (Ashfaq et al., 2020; Camilleri and Falzon, 2021; Huang and Chueh, 2021; Wolfinbarger and Gilly, 2003).
Another important finding from this research is that the individuals’ social influences (from family, friends or colleagues) are affecting their interactions with ChatGPT. Again, this causal path is also very significant. Similar results were also reported in UTAUT/UTAUT2 studies that are focused on the link between social influences and intentional behaviors to use technologies (Gursoy et al., 2019; Patil et al., 2020). In addition, TPB/TRA researchers found that subjective norms also predict behavioral intentions (Driediger and Bhatiasevi, 2019; Sohn and Kwon, 2020). This is in stark contrast with other studies that reported that there was no significant relationship between social influences/subjective norms and behavioral intentions (Ho et al., 2020; Kamble et al., 2019).
Interestingly, the results report a highly significant relationship between effort expectancy (i.e. the ease of use of the generative AI technology) and performance expectancy (i.e. its perceived usefulness). Many scholars posit that perceived ease of use is a significant driver of perceived usefulness of technology (Bressolles et al., 2014; Davis, 1989; Davis et al., 1989; Kamble et al., 2019; Yoo and Donthu, 2001). Furthermore, there are significant causal paths between performance expectancy-intentions to use ChatGPT and even between effort expectancy-intentions to use ChatGPT, albeit to a lesser extent. Yet, this research indicates that performance expectancy partially mediates effort expectancy-intentions to use ChatGPT. In this case, this link is highly significant.
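To make the mediation test concrete, the sketch below bootstraps the indirect effect of effort expectancy (EE) on behavioral intentions (BI) through performance expectancy (PE), using composite scores. The column names, the data file and the OLS-based estimation are assumptions for illustration only; they are not the article’s SmartPLS procedure.

```python
# A bootstrap sketch of the EE -> PE -> BI indirect effect (illustrative).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("composite_scores.csv")  # hypothetical EE, PE, BI columns

indirect = []
for _ in range(2000):
    sample = df.sample(len(df), replace=True)
    a = smf.ols("PE ~ EE", sample).fit().params["EE"]       # path a: EE -> PE
    b = smf.ols("BI ~ PE + EE", sample).fit().params["PE"]  # path b: PE -> BI
    indirect.append(a * b)

lo, hi = np.percentile(indirect, [2.5, 97.5])
# A confidence interval excluding zero supports mediation.
print(f"Indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```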
In sum, this contribution validates key information technology measures, specifically, performance expectancy, effort expectancy, social influences and behavioral intentions from UTAUT/UTAUT2, as well as information quality and source trustworthiness from ELM/IAM and integrates them with a perceived interactivity factor. It builds on previous theoretical underpinnings. Yet, it differentiates itself from previous studies. To date, there are no other empirical investigations that have combined the same constructs that are presented in this article. Notwithstanding, this research puts forward a robust Information Technology Acceptance Framework. The results confirm the reliability and validity of the measures. They clearly outline the relative strength and significance of the causal paths that are predicting the individuals’ intentions to use ChatGPT.
Managerial implications
This empirical study provides a snapshot of the online users’ perceptions about ChatGPT’s responses to verbal queries, and sheds light on their dispositions to avail themselves of its natural language processing. It explores their performance expectations about the usefulness of these information technologies and their effort expectations related to their ease of use, and it investigates whether they are affected by colleagues or by other social influences to use such dialogue systems. Moreover, it examines their insights about the content quality, source trustworthiness, as well as the interactivity features of these text-generative AI models.
Generally, the results suggest that the research participants felt that these algorithms are easy to use. The findings indicate that they consider them to be useful too, specifically when the information they generate is trustworthy and dependable. The respondents suggest that they are concerned about the quality and accuracy of the content that is featured in the AI chatbots’ answers. This contingent issue can have a negative effect on the use of the information that is created by online dialogue systems.
OpenAI’s ChatGPT is a case in point. Its app is freely available in many countries, via desktop and mobile technologies including iOS and Android. The company admits that its GPT-3.5 outputs may be inaccurate, untruthful, and misleading at times. It clarifies that its algorithm is not connected to the internet, and that it can occasionally produce incorrect answers (OpenAI, 2023a). It posits that GPT-3.5 has limited knowledge of the world and events after 2021, and may also occasionally produce harmful instructions or biased content. OpenAI recommends checking whether its chatbot’s responses are accurate or not, and letting them know when and if it answers in an incorrect manner, by using the “Thumbs Down” button. Its Help Center even declares that ChatGPT can occasionally make up facts or “hallucinate” outputs (OpenAI, 2023a,b).
OpenAI reports that its ChatGPT Plus subscribers can access safer and more useful responses. In this case, users can avail themselves of a number of beta plugins and resources that can offer a wide range of capabilities, including text-to-speech applications as well as web browsing features through Bing. Yet again, OpenAI (2023b) indicates that its GPT-4 still has many known limitations that the company is working to address, such as “social biases and adversarial prompts” (at the time of writing this article). Evidently, works are still in progress at OpenAI. The company needs to resolve these serious issues, considering that its Content Policy and Terms clearly stipulate that OpenAI’s consumers are the owners of the output that is created by ChatGPT. Hence, ChatGPT’s users have the right to reprint, sell, and merchandise the content that is generated for them through OpenAI’s platforms, regardless of whether the output (its response) was provided via a free or a paid plan.
Various commentators are increasingly raising awareness about the corporate digital responsibilities of those involved in the research, development and maintenance of such dialogue systems. A number of stakeholders, particularly the regulatory ones, are concerned about possible risks and perils arising from AI algorithms, including interactive chatbots. In many cases, they are warning that disruptive chatbots could disseminate misinformation, foster prejudice, bias and discrimination, raise privacy concerns, and could lead to the loss of jobs. Arguably, one has to bear in mind that, in many cases, governments are outpaced by the proliferation of technological innovations (as their development happens before the enactment of legislation). As a result, they tend to be reactive in the implementation of substantive regulatory interventions. This research reported that the development of ChatGPT has resulted in mixed reactions among different stakeholders in society, especially during the first months after its official launch. At the moment, there are just a few jurisdictions that have formalized policies and governance frameworks that are meant to protect and safeguard individuals and entities from possible risks and dangers of AI technologies (Camilleri, 2023). Of course, voluntary principles and guidelines are a step in the right direction. However, policy makers are expected by various stakeholders to step up their commitment by introducing quasi-regulations and legislation.
Currently, a number of technology conglomerates, including Microsoft-backed OpenAI, Apple and IBM, among others, have anticipated the governments’ regulations by joining forces in a non-profit organization entitled the “Partnership on AI”, which aims to advance safe, responsible AI that is rooted in open innovation. In addition, IBM has also teamed up with Meta and other companies, startups, universities, research and government organizations, as well as non-profit foundations, to form an “AI Alliance” that is intended to foster innovations across all aspects of AI technology, applications and governance.
This is an excerpt from my latest contribution on responsible artificial intelligence (AI).
Suggested citation: Camilleri, M. A. (2023). Artificial intelligence governance: Ethical considerations and implications for social responsibility. Expert Systems, e13406. https://doi.org/10.1111/exsy.13406
The term “artificial intelligence governance” or “AI governance” integrates the notions of “AI” and “corporate governance”. AI governance is based on formal rules (including legislative acts and binding regulations) as well as on voluntary principles that are intended to guide practitioners in their research, development and maintenance of AI systems (Butcher & Beridze, 2019; Gonzalez et al., 2020). Essentially, it represents a regulatory framework that can support AI practitioners in their strategy formulation and in day-to-day operations (Erdélyi & Goldsmith, 2022; Mullins et al., 2021; Schneider et al., 2022). The rationale behind responsible AI governance is to ensure that automated systems, including ML/DL technologies, are supporting individuals and organizations in achieving their long-term objectives, whilst safeguarding the interests of all stakeholders (Corea et al., 2023; Hickok et al., 2022).
AI governance requires that organizational leaders comply with relevant legislation, hard laws and regulations (Mäntymäki et al., 2022). Moreover, they are expected to follow ethical norms, values and standards (Koniakou, 2023). Practitioners ought to be trustworthy, diligent and accountable in how they handle their intellectual capital and other resources, including their information technologies, finances as well as members of staff, in order to overcome challenges and minimize uncertainties, risks and any negative repercussions (e.g., decreased human oversight in decision making, among others) (Agbese et al., 2023; Smuha, 2019).
Procedural governance mechanisms ought to be in place to ensure that AI technologies and ML/DL models are operating in a responsible manner. Figure 1 features some of the key elements that are required for the responsible governance of artificial intelligence. The following principles aim to provide guidelines for the modus operandi of AI practitioners (including ML/DL developers).
Figure 1. A Responsible Artificial Intelligence Governance Framework
Accountability and transparency
“Accountability” refers to the stakeholders’ expectations about the proper functioning of AI systems, in all stages, including in the design, creation, testing or deployment, in accordance with relevant regulatory frameworks. It is imperative that AI developers are held accountable for the smooth operation of AI systems throughout their lifecycle (Raji et al., 2020). Stakeholders expect them to be accountable by keeping a track record of their AI development processes (Mäntymäki et al., 2022).
The transparency notion refers to the extent to which end-users could be in a position to understand how AI systems work (Andrada et al., 2020; Hollanek, 2020). AI transparency is associated with the degree of comprehension about algorithmic models in terms of “simulatability” (an understanding of AI functioning), “decomposability” (related to how individual components work), and algorithmic transparency (this is associated to the algorithms’ visibility).
In reality, it is difficult to understand how AI systems, including deep learning models and their neural networks, are learning (as they acquire, process and store data) during training phases. They are often considered black box models. It may prove hard to algorithmically translate derived concepts into human-understandable terms, even though developers may use certain jargon to explain their models’ attributes and features. Many legislators are striving to pressure AI actors to describe the algorithms they use in automated decision-making, yet the publication of algorithms is useless if outsiders cannot access the data of the AI model.
Explainability and interpretability
Explainability is the concept that sheds light on how AI models work, in a way that is comprehensible to a human being. Arguably, the explainability of AI systems could improve their transparency, trustworthiness and accountability. At the same time, it can reduce bias and unfairness. The explainability of artificial intelligence systems could clarify how they reached their decisions (Arya et al., 2019; Keller & Drake, 2021). For instance, AI could explain how and why autonomous cars decide to stop or to slow down when there are pedestrians or other vehicles in front of them.
Explainable AI systems might improve consumer trust and may enable engineers to develop other AI models, as they are in a position to track the provenance of every process, to ensure reproducibility, and to enable checks and balances (Schneider et al., 2022). Similarly, interpretability refers to the level of accuracy of machine learning programs in terms of linking causes to effects (John-Mathews, 2022).
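To illustrate what post-hoc explainability can look like in practice, the sketch below uses the open-source SHAP library to attribute a model’s predictions to its input features in human-readable terms. The model and dataset are generic stand-ins, not examples from the article.

```python
# Post-hoc feature attribution with SHAP (illustrative sketch).
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

# A stand-in tabular classifier on a public dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier().fit(X, y)

explainer = shap.Explainer(model, X)
shap_values = explainer(X.iloc[:100])

# Visualize which features drove one individual prediction.
shap.plots.waterfall(shap_values[0])
```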
Fairness and inclusiveness
The responsible AI’s fairness dimension refers to the practitioners’ attempts to correct algorithmic biases that may possibly (voluntarily or involuntarily) be included in their automation processes (Bellamy et al., 2019; Mäntymäki et al., 2022). AI systems can be affected by their developers’ biases, which could include preferences or antipathies toward specific demographic variables like genders, age groups and ethnicities, among others (Madaio et al., 2020). Currently, there is no universal definition of AI fairness.
However, many multinational corporations have recently developed instruments that are intended to detect bias and to reduce it as much as possible (John-Mathews et al., 2022). In many cases, AI systems learn from the data that is fed to them. If the data are skewed and/or incorporate implicit bias, they may result in inappropriate outputs.
Fair AI systems rely on unbiased data (Wu et al., 2020). For this reason, many companies, including Facebook, Google, IBM and Microsoft, among others, are striving to involve members of staff hailing from diverse backgrounds. These technology conglomerates are trying to become as inclusive and as culturally aware as possible, in order to prevent bias from affecting their AI processes. Previous research reported that AI’s bias may result in inequality, discrimination and in the loss of jobs (Butcher & Beridze, 2019).
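As one illustration of such bias-detection instruments, the sketch below uses the open-source Fairlearn library to compare a model’s selection rates across demographic groups. The dataset, column names and the gender attribute are hypothetical placeholders.

```python
# A minimal algorithmic bias check with Fairlearn (illustrative sketch).
import pandas as pd
from fairlearn.metrics import (MetricFrame, selection_rate,
                               demographic_parity_difference)

df = pd.read_csv("loan_decisions.csv")  # hypothetical scored dataset
y_true, y_pred, sensitive = df["approved"], df["predicted"], df["gender"]

# Selection rate (share of positive decisions) per demographic group.
mf = MetricFrame(metrics=selection_rate, y_true=y_true, y_pred=y_pred,
                 sensitive_features=sensitive)
print(mf.by_group)

# 0 means parity; larger gaps flag potential bias to investigate further.
print(demographic_parity_difference(y_true, y_pred,
                                    sensitive_features=sensitive))
```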
Privacy and safety for consumers
Consumers are increasingly concerned about the privacy of their data. They have a right to control who has access to their personal information. Data that is collected or used by third parties, without the authorization or voluntary consent of individuals, results in violations of their privacy (Zhu et al., 2020; Wu et al., 2022).
AI-enabled products, including dialogue systems like chatbots and virtual assistants, digital assistants (e.g., Siri, Alexa or Cortana), and/or wearable technologies such as smart watches and sensorial smart socks, among others, are increasingly capturing and storing large quantities of consumer information. The benefits that these interactive technologies deliver may be offset by a number of challenges. The technology businesses that developed these products are responsible for protecting their consumers’ personal data (Rodríguez-Barroso et al., 2020). Their devices are capable of holding a wide variety of information on their users. They are continuously gathering textual, visual, audio, verbal, and other sensory data from consumers. In many cases, the customers are not aware that they are sharing personal information with them.
For example, facial recognition technologies are increasingly being used in different contexts. They may be used by individuals to access websites and social media, in a secure manner and to even authorize their payments through banking and financial services applications. Employers may rely on such systems to track and monitor their employees’ attendance. Marketers can utilize such technologies to target digital advertisements to specific customers. Police and security departments may use them for their surveillance systems and to investigate criminal cases. The adoption of these technologies has often raised concerns about privacy and security issues. According to several data privacy laws that have been enacted in different jurisdictions, organizations are bound to inform users that they are gathering and storing their biometric data. The businesses that employ such technologies are not authorized to use their consumers’ data without their consent.
Companies are expected to communicate their data privacy policies to their target audiences (Wong, 2020). They have to reassure consumers that the data they collect with consent is protected, and they are bound to inform them that they may use their information to improve customized services. The technology giants can reward consumers for sharing sensitive information. They could offer them improved personalized services, among other incentives, in return for their data. In addition, consumers may be allowed to access their own information and could be provided with more control (or other reasonable options) over how to manage their personal details.
The security and robustness of AI systems
AI algorithms are vulnerable to cyberattacks by malicious actors. Therefore, it is in the interest of AI developers to secure their automated systems and to ensure that they are robust enough against any risks and attempts to hack them (Gehr et al., 2018; Li et al., 2020).
Access to AI models ought to be monitored at all times during their development and deployment (Bertino et al., 2021). There may be instances when AI models could encounter incidental adversities, leading to the corruption of data. Alternatively, they might encounter intentional adversities when they experience sabotage from hackers. In both cases, the AI model will be compromised, and this can result in system malfunctions (Papagiannidis et al., 2023).
AI developers have to prevent such contingent issues from happening. Their responsibility is to improve the robustness of their automated systems, and to make them as secure as possible, to reduce the chances of threats, including inadvertent irregularities and information leakages, as well as privacy violations like data breaches, contamination and poisoning by malicious actors (Agbese et al., 2023; Hamon et al., 2020).
AI developers should have preventive policies and measures related to the monitoring and control of their data. They ought to invest in security technologies including authentication and/or access systems with encryption software as well as firewalls for their protection against cyberattacks. Routine testing can increase data protection, improve security levels and minimize the risks of incidents.
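As one concrete example of such routine testing, the sketch below applies the fast gradient sign method (FGSM), a standard robustness probe that checks whether tiny input perturbations flip a model’s predictions. The toy model, input and perturbation budget are placeholders for illustration.

```python
# FGSM robustness probe (illustrative sketch with placeholder model/data).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input image
y = torch.tensor([3])                             # its assumed true label

loss = loss_fn(model(x), y)
loss.backward()

epsilon = 0.05  # perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# If the prediction changes under an imperceptible perturbation,
# the model needs hardening (e.g., adversarial training).
print(model(x).argmax(1), model(x_adv).argmax(1))
```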
Conclusions
This review indicates that more academics, as well as practitioners, are increasingly devoting their attention to AI as they elaborate on its potential uses, opportunities and threats. It reported that its proponents are raising awareness on the benefits of AI systems for individuals as well as for organizations. At the same time, it suggests that a number of scholars and other stakeholders, including policy makers, are raising their concerns about its possible perils (e.g. Berente et al., 2021; Gonzalez et al., 2020; Zhang & Lu, 2021).
Many researchers identified some of the risks of AI (Li et al., 2021; Magas & Kiritsis, 2022). In many cases, they warned that AI could disseminate misinformation, foster prejudice, bias and discrimination, raise privacy concerns, and could lead to the loss of jobs (Butcher & Beridze, 2019). A few commentators argue about the “singularity”, or the moment when machine learning technologies could even surpass human intelligence (Huang & Rust, 2022). They predict that a critical shift could occur if humans are no longer in a position to control AI.
In this light, this article sought to explore the governance of AI. It sheds light on substantive regulations, as well as on reflexive principles and guidelines, that are intended for practitioners who are researching, testing, developing and implementing AI models. It clearly explains how institutions, non-governmental organizations and technology conglomerates are introducing protocols (including self-regulations) to prevent contingencies arising from inappropriate AI governance.
Debatably, the voluntary or involuntary mishandling of automated systems can expose practitioners to operational disruptions and to significant risks, including to their corporate image and reputation (Watts & Adriano, 2021). The nature of AI requires practitioners to develop guardrails to ensure that their algorithms work as they should (Bauer, 2022). It is imperative that businesses comply with relevant legislation and follow ethical practices (Buhmann & Fieseler, 2023). Ultimately, it is in their interest to operate their companies in a responsible manner, and to implement AI governance procedures. This way, they can minimize unnecessary risks and safeguard the well-being of all stakeholders.
This contribution has addressed its underlying research objectives. Firstly, it raised awareness on AI governance frameworks that were developed by policy makers and other organizations, including by the businesses themselves. Secondly, it scrutinized the extant academic literature focused on AI governance and on the intersection of AI and CSR. Thirdly, it discussed essential elements for the promotion of socially responsible behaviors and ethical dispositions of AI developers. In conclusion, it put forward an AI governance conceptual model for practitioners.
This research made reference to regulatory instruments that are intended to govern AI expert systems. It reported that, at the moment, only a few jurisdictions have formalized their AI policies and governance frameworks. Hence, this article urges laggard governments to plan, organize, design and implement regulatory instruments that ensure that individuals and entities are safe when they utilize AI systems for personal, educational and/or commercial purposes.
Arguably, one has to bear in mind that, in many cases, policy makers have to face a “pacing problem” as the proliferation of innovation is much quicker than legislation. As a result, governments tend to be reactive in the implementation of regulatory interventions relating to innovations. They may be unwilling to hold back the development of disruptive technologies from their societies. Notwithstanding, they may face criticism by a wide array of stakeholders in this regard, as they may have conflicting objectives and expectations.
Governments regulate business and industry in order to establish technical, safety and quality standards, as well as to monitor compliance. Yet, they may consider introducing different forms of regulation other than the traditional “command and control” mechanisms. They may opt for performance-based and/or market-based incentive approaches, co-regulation and self-regulation schemes, among others (Hepburn, 2009), in order to foster technological innovations.
This research has shown that a number of technology giants, including IBM and Microsoft, among others, are anticipating the regulatory interventions of different governments where they operate their businesses. It reported that they are communicating about their responsible AI governance initiatives as they share information on their policies and practices that are meant to certify, explain and audit their AI developments. Evidently, these companies, among others, are voluntarily self-regulating themselves as they promote accountability, fairness, privacy and robust AI systems. These two organizations, in particular, are raising awareness about their AI governance frameworks to increase their CSR credentials with stakeholders.
Likewise, AI developers who work for other businesses are expected to forge relationships with external stakeholders, including policy makers, as well as with other actors, such as individuals and organizations who share similar interests in AI. Innovative clusters and network developments may result in better AI systems and can also decrease the chances of possible risks. Indeed, practitioners can be in a better position if they cooperate with stakeholders for the development of trustworthy AI, and if they increase their human capacity to improve the quality of their intellectual properties (Camilleri et al., 2023). This way, they can enhance their competitiveness and growth prospects (Troise & Camilleri, 2021). Arguably, it is in their interest to continuously engage with internal stakeholders (and employees), and to educate them about AI governance dimensions that are intended to promote accountable, transparent, explainable, interpretable, reproducible, fair, inclusive and secure AI solutions. Hence, they could maximize AI benefits and minimize their risks as well as the associated costs.
Future research directions
Academic colleagues are invited to raise more awareness on AI governance mechanisms as well as on verification and monitoring instruments. They can investigate what, how, when and where protocols could be used to protect and safeguard individuals and entities from possible risks and dangers of AI.
The “what” question involves the identification of AI research and development processes that require regulatory or quasi regulatory instruments (in the absence of relevant legislation) and/or necessitate revisions in existing statutory frameworks.
The “how” question is related to the substance and form of AI regulations, in terms of their completeness, relevance, and accuracy. This argumentation is analogous to the true and fair view concept applied in the accounting standards of financial statements.
The “when” question is concerned with the timeliness of the regulatory intervention. Policy makers ought to ensure that stringent rules do not hinder or delay the advancement of technological innovations.
The “where” question is meant to identify the context where mandatory regulations or the introduction of soft laws, including non-legally binding principles and guidelines are/are not required.
Future researchers are expected to investigate these four questions in more depth and breadth. This research indicated that most contributions on AI governance were discursive in nature and/or involved literature reviews. Hence, there is scope for academic colleagues to conduct primary research activities and to utilize different research designs, methodologies and sampling frames to better understand the implications of planning, organizing, implementing and monitoring AI governance frameworks in diverse contexts.
This is an excerpt from one of my latest contributions on the use of responsive chatbots by service businesses. The content was adapted for this blogpost.
Suggested citation: Camilleri, M.A. & Troise, C. (2022). Live support by chatbots with artificial intelligence: A future research agenda. Service Business, https://doi.org/10.1007/s11628-022-00513-9
The benefits of using chatbots for online customer services
Frequently, consumers are engaging with chatbot systems without even knowing, as machines (rather than human agents) are responding to online queries (Li et al. 2021; Pantano and Pizzi 2020; Seering et al. 2018; Stoeckli et al. 2020). Whilst 13% of online consumer queries require human intervention (as they may involve complex queries and complaints), more than 87% of online consumer queries are handled by chatbots (Ngai et al., 2021).
Several studies reported that there are many advantages of using conversational chatbots for customer services. Their functional benefits include increased convenience to customers, enhanced operational efficiencies, reduced labor costs, and time-saving opportunities.
Consumers are increasingly availing themselves of these interactive technologies to retrieve detailed information from their product recommendation systems and/or to request their assistance to help them resolve technical issues. Alternatively, they use them to scrutinize their personal data. Hence, in many cases, customers are willing to share their sensitive information in exchange for a better service.
Although these interactive technologies are less engaging than human agents, they can possibly elicit more disclosures from consumers. They are in a position to process the consumers’ personal data and to compare it with prior knowledge, without any human instruction. Chatbots can learn in a proactive manner from new sources of information to enrich their databases.
Whilst human customer service agents may usually handle complex queries including complaints, service chatbots can improve the handling of routine consumer queries. They are capable of interacting with online users in two-way communications (to a certain extent). Their interactions may result in significant effects on consumer trust, satisfaction, and repurchase intentions, as well as on positive word-of-mouth publicity.
Many researchers reported that consumers are intrigued to communicate with anthropomorphized technologies, as they invoke social responses and norms of reciprocity. Such conversational agents are programmed with certain cues, features and attributes that are normally associated with humans.
The findings from this review clearly indicate that individuals feel comfortable using chatbots that simulate human interactions, particularly with those that have enhanced anthropomorphic designs. Many authors noted that the more chatbots respond to users in a natural, humanlike way, the easier it is for the business to convert visitors into customers, particularly if they improve their online experiences. This research indicates that there is scope for businesses to use conversational technologies to personalize interactions with online users, to build better relationships with them, to enhance consumer satisfaction levels, to generate leads as well as sales conversions.
The costs of using chatbots for online customer services
Despite the latest advances in the delivery of electronic services, there are still individuals who hold negative perceptions and attitudes towards the use of interactive technologies. Although AI technologies have been specifically created to foster co-creation between the service provider and the customer, there are a number of challenges (like authenticity issues, cognition challenges, affective issues, functionality issues and integration conflicts) that may result in a failed service interaction and in dissatisfied customers. There are consumers, particularly the older ones, who do not feel comfortable interacting with artificially intelligent technologies like chatbots, or who may not want to comply with their requests, for different reasons. For example, they could be wary about cyber-security issues and/or may simply refuse to engage in conversations with a robot.
A few commentators contended that consumers should be informed when they are interacting with a machine. In many cases, online users may not be aware that they are engaging with elaborate AI systems that use cues such as names, avatars, and typing indicators that are intended to mimic human traits. Many researchers pointed out that consumers may or may not want to be serviced by chatbots.
A number of researchers argued that some chatbots are still not capable of communicative behaviors that are intended to enhance relational outcomes. For the time being, there are chatbot technologies that are not programmed to answer all of their customers’ queries (if they do not recognize the keywords that are used by the customers), or may not be quick enough to deal with multiple questions at the same time. Therefore, the quality of their conversations may be limited, as illustrated in the sketch below. Such automated technologies may not always be in a position to engage in non-linear conversations, especially when they have to go back and forth on a topic with online users.
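To illustrate why keyword-driven chatbots hit these limits, here is a deliberately simple rule-based responder: any phrasing that misses its keywords cannot be answered and must be escalated to a human agent. The rules and replies are invented for illustration.

```python
# A minimal keyword-matching chatbot (illustrative; rules are hypothetical).
RULES = {
    "refund": "You can request a refund within 30 days of purchase.",
    "delivery": "Standard delivery takes 3-5 working days.",
    "password": "Use the 'Forgot password' link on the login page.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    # No keyword matched: the bot cannot help and must escalate.
    return "Sorry, I did not understand. Let me connect you to an agent."

print(reply("How long does delivery take?"))    # keyword matched
print(reply("My parcel still hasn't arrived"))  # same intent, but escalated
```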
Theoretical and practical implications
This contribution confirms that, recently, there is a growing interest among academics as well as practitioners in research that is focused on the use of chatbots that can improve the businesses’ customer-centric services. It clarifies that various academic researchers have often relied on different theories, including the expectancy theory, the expectancy violation theory, the human-computer interaction theory/human-machine communication theory, the social presence theory, and/or the social response theory, among others.
Currently, there are limited publications that integrated well-established conceptual bases (like those featured in the literature review), or that presented discursive contributions on this topic. Moreover, there are just a few review articles that capture, scrutinize and interpret the findings from previous theoretical underpinnings, about the use of responsive chatbots in service business settings. Therefore, this systematic review paper addresses this knowledge gap in the academic literature.
It clearly differentiates itself from mainstream research as it scrutinizes and synthesizes the findings from recent, high-impact articles on this topic. It clearly identifies the most popular articles from Scopus and Web of Science, and advances definitions of anthropomorphic chatbots, artificial intelligence chatbots (or AI chatbots), conversational chatbot agents (or conversational entities, conversational interfaces, conversational recommender systems or dialogue systems), customer experience with chatbots, chatbot customer service, customer satisfaction with chatbots, customer value (or the customers’ perceived value) of chatbots, and service robots (robot advisors). It discusses the different attributes of conversational chatbots and sheds light on the benefits and costs of using interactive technologies to respond to online users’ queries.
In sum, the findings from this research reveal that there is a business case for online service providers to utilize AI chatbots. These conversational technologies could offer technical support to consumers and prospects, on various aspects, in real time, round the clock. Hence, service businesses could be in a position to reduce their labor costs, as they would require fewer human agents to respond to their customers. Moreover, the use of interactive chatbot technologies could improve the efficiency and responsiveness of service delivery. Businesses could utilize AI dialogue systems to enhance their customer-centric services and to improve online experiences. These service technologies can reduce the workload of human agents. The latter can dedicate their energies to resolving serious matters, including the handling of complaints and time-consuming cases.
On the other hand, this paper also discusses potential pitfalls. Currently, there are consumers who for some reason or another, are not comfortable interacting with automated chatbots. They may be reluctant to engage with advanced anthropomorphic systems that use avatars, even though, at times, they can mimic human communications relatively well. Such individuals may still appreciate a human presence to resolve their service issues. They may perceive that interactive service technologies are emotionless and lack a sense of empathy.
Presently, chatbots can only respond to questions, keywords and phrases that they were programmed to answer. Although they are useful in solving basic queries, their interactions with consumers are still limited. Their dialogue systems require periodic maintenance. Unlike human agents, they cannot engage in in-depth conversations or deal with multiple queries, particularly if they are expected to go back and forth on a topic.
Most probably, these technical issues will be dealt with over time, as more advanced chatbots enter the market in the foreseeable future. It is likely that these AI technologies will possess improved capabilities and will be programmed with up-to-date information, to better serve future customers and to exceed their expectations.
Limitations and future research avenues
This research suggests that this area of study is gaining traction in academic circles, particularly in the last few years. In fact, it clarifies that there were four hundred and twenty-one (421) publications on chatbots in business-related journals up to December 2021. Four hundred and fifteen (415) of them were published in the last five years.
The systematic analysis that was presented in this research was focused on “chatbot(s)” or “chatterbot(s)”. Other academics may refer to them by using different synonyms like “artificial conversational entity (entities)”, “bot(s)”, “conversational avatar(s)”, “conversational interface agent”, “interactive agent(s)”, “talkbot(s)”, “virtual agent(s)”, and/or “virtual assistant(s)”, among others. Therefore, future researchers may also consider using these keywords when they are exploring the academic and non-academic literature on conversational chatbots that are being used for customer-centric services.
Nevertheless, this bibliographic study has identified some of the most popular research areas relating to the use of responsive chatbots in online customer service settings. The findings confirmed that many authors are focusing on the chatbots’ anthropomorphic designs, AI capabilities and on their dialogue systems. This research suggests that there are still knowledge gaps in the academic literature. The following table clearly specifies that there are untapped opportunities for further empirical research in this promising field of study.
The full article is forthcoming. A prepublication version will be available through ResearchGate.