Category Archives: chatbots

The use of chatbots for online customer services

This is an excerpt from one of my contributions on the use of responsive chatbots by service businesses. The content was adapted for this blogpost.


Suggested citation: Camilleri, M.A. & Troise, C. (2022). Live support by chatbots with artificial intelligence: A future research agenda. Service Business. https://doi.org/10.1007/s11628-022-00513-9

Chatbots are usually considered automated conversational systems that are capable of mimicking humanlike conversations. Previous research suggests that, at times, human beings treat computers as social beings (Nass and Moon 2000; Nass et al. 1994; Rha and Lee 2022), although they are well aware that dialogue programs do not possess emotions, feelings or identities. Individuals may still perceive that service chatbots have some sort of social presence when they interact with them (Leung and Wen 2020; McLean et al. 2020; Pantano and Scarpi 2022; Schuetzler et al. 2020), even though these technologies are capable of responding to thousands of potential users at once (Caldarini et al. 2022).

Currently, few academic contributions use theoretical bases like social presence theory (Grimes et al. 2020; Schuetzler et al. 2020) and/or social response theory (Adam et al. 2021; Huang and Lin 2011) to explore human-computer interactions and/or the utility of dialogue systems like chatbots. A few commentators made specific reference to related theories to describe the characteristics of chatbots or conversational agents that are primarily used for consumer engagement purposes (Cheng and Jiang 2020; Kull et al. 2021; Mostafa and Kasamani 2021; Nuruzzaman and Hussain 2020).

Human-machine communication theory was formulated in response to the growing number of technologies, like AI and robotics, that are designed to function as message sources rather than as message channels (Flavián et al. 2021). Lewis et al. (2019) contended that social bots, and even a few chatbots, have pushed into the realm of what was thought to be a purely human role. Wilkinson et al.’s (2021) study shed light on human beings’ perceptions of conversational recommender systems. In this case, the authors went on to suggest that experienced users trusted these disruptive technologies and had higher expectations of them.

Other researchers examined the online users’ trust toward chatbots in various settings (Balakrishnan and Dwivedi 2021; Borau et al. 2021; Cheng and Jiang 2020; De Cicco et al. 2020; Hildebrand and Bergner 2021; Kushwaha et al. 2021; Mozafari et al. 2021; Nuruzzaman and Hussain 2020; Pillai and Sivathanu 2020). Eren (2021) confirmed that the users’ performance perceptions regarding the use of chatbots positively affected their customer satisfaction levels in the banking sector. This finding is in line with the expectancy violation theory, as individuals form expectations following their interactions with information systems (Chopra 2019; Neuburger et al. 2018).

The individuals’ social expectations from conversational technologies are especially pronounced when they incorporate cues of humanness (Adam et al. 2021; Pfeuffer et al. 2019), that are not present in traditional systems like websites, mobile applications, and databases (Belanche et al. 2021). The anthropomorphic features of AI dialogue systems make it easier for humans to connect with them (Adam et al. 2021; Becker et al. 2022; Forgas-Coll et al. 2022; Van Pinxteren et al. 2020).

A number of quantitative researchers have investigated online users’ perceptions and attitudes toward these interactive technologies. Very often, they relied on valid measures that were tried and tested in academia. Some utilized the theory of reasoned action (Huang and Kao 2021), the theory of planned behavior (Brachten et al. 2021; Ciechanowski et al. 2019), the behavioral reasoning theory (Lalicic and Weismayer 2021), the technology acceptance model (Kasilingam 2020) or the unified theory of acceptance and use of technology (Mostafa and Kasamani 2021), as they sought to investigate the individuals’ utilitarian motivations to use chatbot technologies to resolve their consumer issues. Others examined the users’ gratifications (Cheng and Jiang 2020; Rese et al. 2020), perceived enjoyment (De Cicco et al. 2020; Kushwaha et al. 2021; Rese et al. 2020), emotional factors (Crolic et al. 2021; Lou et al. 2021; Schepers et al. 2022; Wei et al. 2021), and/or intrinsic motivations (Jiménez-Barreto et al. 2021), to determine whether they were (or were not) affecting their intentions to use them.

The full paper can be downloaded via: Academia, OAR, Repec, ResearchGate, Springer, and SSRN.


Filed under artificial intelligence, Business, chatbots

Why are people using generative AI like ChatGPT?

The following text is an excerpt from one of my latest articles. I am sharing the managerial implications of my contribution published through Technological Forecasting and Social Change.

This empirical study provides a snapshot of the online users’ perceptions about Chat Generative Pre-Trained Transformer (ChatGPT)’s responses to verbal queries, and sheds light on their dispositions to avail themselves of ChatGPT’s natural language processing.

It explores their performance expectations about the usefulness of these information technologies and their effort expectations related to their ease of use, and investigates whether they are affected by colleagues or by other social influences to use such dialogue systems. Moreover, it examines their insights about the content quality, source trustworthiness and interactivity features of these text-generative AI models.

Generally, the results suggest that the research participants felt that these algorithms are easy to use. The findings indicate that they consider them to be useful too, specifically when the information they generate is trustworthy and dependable.

The respondents suggest that they are concerned about the quality and accuracy of the content that is featured in the AI chatbots’ answers. This contingent issue can have a negative effect on the use of the information that is created by online dialogue systems.

OpenAI’s ChatGPT is a case in point. Its app is freely available in many countries, via desktop and mobile technologies including iOS and Android. The company admits that its GPT-3.5 outputs may be inaccurate, untruthful, and misleading at times. It clarifies that its algorithm is not connected to the internet, and that it can occasionally produce incorrect answers (OpenAI, 2023a). It posits that GPT-3.5 has limited knowledge of the world and events after 2021 and may also occasionally produce harmful instructions or biased content.

OpenAI recommends checking whether its chatbot’s responses are accurate, and letting the company know when it answers in an incorrect manner, by using the “Thumbs Down” button. Its Help Center even declares that ChatGPT can occasionally make up facts or “hallucinate” outputs (OpenAI, 2023a; OpenAI, 2023b).

OpenAI reports that its ChatGPT Plus subscribers can access safer and more useful responses. These users can avail themselves of a number of beta plugins and resources that offer a wide range of capabilities, including text-to-speech applications as well as web browsing features through Bing.

Yet again, OpenAI (2023b) indicates that its GPT-4 still has many known limitations that the company is working to address, such as “social biases and adversarial prompts” (at the time of writing this article). Evidently, works are still in progress at OpenAI.

The company needs to resolve these serious issues, considering that its Content Policy and Terms clearly stipulate that OpenAI’s consumers are the owners of the output that is created by ChatGPT. Hence, ChatGPT’s users have the right to reprint, sell, and merchandise the content that is generated for them through OpenAI’s platforms, regardless of whether the output (its response) was provided via a free or a paid plan.

Various commentators are increasingly raising awareness about the corporate digital responsibilities of those involved in the research, development and maintenance of such dialogue systems. A number of stakeholders, particularly regulatory ones, are concerned about possible risks and perils arising from AI algorithms, including interactive chatbots.

In many cases, they are warning that disruptive chatbots could disseminate misinformation, foster prejudice, bias and discrimination, raise privacy concerns, and could lead to the loss of jobs. Arguably, one has to bear in mind that many governments are outpaced by the proliferation of technological innovations (as their development happens before the enactment of legislation).

As a result, they tend to be reactive in the implementation of substantive regulatory interventions. This research reported that the development of ChatGPT has resulted in mixed reactions among different stakeholders in society, especially during the first months after its official launch.

At the moment, there are just a few jurisdictions that have formalized policies and governance frameworks that are meant to protect and safeguard individuals and entities from possible risks and dangers of AI technologies (Camilleri, 2023). Of course, voluntary principles and guidelines are a step in the right direction. However, policy makers are expected by various stakeholders to step up their commitment by introducing quasi-regulations and legislation.

Currently, a number of technology conglomerates, including Microsoft-backed OpenAI, Apple and IBM, among others, have anticipated the governments’ regulations by joining forces in a non-profit organization entitled “Partnership on AI” that aims to advance safe, responsible AI that is rooted in open innovation.

In addition, IBM has also teamed up with Meta and other companies, startups, universities, research and government organizations, as well as non-profit foundations, to form an “AI Alliance” that is intended to foster innovations across all aspects of AI technology, applications and governance.

The full list of references is available here: https://www.sciencedirect.com/science/article/pii/S004016252400043X?via%3Dihub

Suggested citation: Camilleri, M. A. (2024). Factors affecting performance expectancy and intentions to use ChatGPT: Using SmartPLS to advance an information technology acceptance framework. Technological Forecasting and Social Change, 201. https://doi.org/10.1016/j.techfore.2024.123247


Filed under academia, chatbots, ChatGPT, Generative AI

Users’ perceptions and expectations of ChatGPT

Featuring an excerpt and a few snippets from one of my latest articles related to Generative Artificial Intelligence (AI).

Suggested Citation: Camilleri, M.A. (2024). Factors affecting performance expectancy and intentions to use ChatGPT: Using SmartPLS to advance an information technology acceptance framework. Technological Forecasting and Social Change, 201. https://doi.org/10.1016/j.techfore.2024.123247


The introduction

Artificial intelligence (AI) chatbots utilize algorithms that are trained to process and analyze vast amounts of data, using techniques ranging from rule-based approaches to statistical models and deep learning, to generate natural text and to respond to online users based on the input they receive (OECD, 2023). For instance, OpenAI‘s Chat Generative Pre-Trained Transformer (ChatGPT) is one of the most popular AI-powered chatbots. The company claims that ChatGPT “is designed to assist with a wide range of tasks, from answering questions to generating text in various styles and formats” (OpenAI, 2023a). OpenAI clarifies that its GPT-3.5 is a free-to-use language model that was optimized for dialogue by using Reinforcement Learning from Human Feedback (RLHF) – a method that relies on human demonstrations and preference comparisons to guide the model toward desired behaviors. Its models are trained on vast amounts of data, including conversations that were created by humans (such content is accessed through the Internet). The responses it provides appear to be as human-like as possible (Jiang et al., 2023).
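
As a rough illustration of the preference-comparison idea behind RLHF, a reward model can be trained to score a human-preferred response above a rejected one for the same prompt. The minimal numpy sketch below implements the standard pairwise loss; the toy reward values are invented for this example:

```python
import numpy as np

def reward_model_loss(r_chosen: np.ndarray, r_rejected: np.ndarray) -> float:
    """Mean pairwise preference loss, -log(sigmoid(r_chosen - r_rejected)).

    Minimizing this pushes a reward model to score the human-preferred
    response above the rejected one for the same prompt.
    """
    margin = r_chosen - r_rejected
    # log(1 + exp(-margin)), computed stably via logaddexp
    return float(np.mean(np.logaddexp(0.0, -margin)))

# Toy example: scalar rewards the model assigned to three preference pairs
chosen = np.array([2.1, 0.3, 1.5])
rejected = np.array([0.4, -0.2, 1.7])
print(reward_model_loss(chosen, rejected))  # lower is better
```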

GPT-3.5’s database was last updated in September 2021. The GPT-4 version, which comes with a paid plan, is more creative than GPT-3.5; it can accept images as inputs, and can generate captions, classifications and analyses (Qureshi et al., 2023). Its developers assert that GPT-4 can create better content, including extended conversations, as well as document search and analysis (Takefuji, 2023). Recently, its proponents noted that ChatGPT can be utilized for academic purposes, including research. It can extract and paraphrase information, translate text, grade tests, and/or it may be used for conversation purposes (MIT, 2023). Various stakeholders in education noted that this LLM tool may be able to provide quick and easy answers to questions.
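
For readers who wish to experiment, a minimal sketch of querying these models through OpenAI’s Python client follows. Model identifiers, plans and availability change over time, so treat the names here as illustrative rather than definitive:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",  # a GPT-3.5-class model identifier can be substituted here
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the limits of your training data."},
    ],
)
print(response.choices[0].message.content)
```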

However, earlier this year, several higher education institutions issued statements that warned students against using ChatGPT for academic purposes. In a similar vein, a number of schools banned ChatGPT from their networks and devices (Rudolph et al., 2023). Evidently, policy makers were concerned that this text-generating AI system could disseminate misinformation and even promote plagiarism. Some commentators argue that it can affect the students’ critical-thinking and problem-solving abilities. Such skill sets are essential for their academic and lifelong successes (Liebrenz et al., 2023; Thorp, 2023). Nevertheless, a number of jurisdictions are reversing their decisions that impede students from using this technology (Reuters, 2023). In many cases, educational leaders are realizing that their students could benefit from this innovation, if they are properly taught how to adopt it as a tool for their learning journey.

Academic colleagues are increasingly raising awareness on different uses of AI dialogue systems like service chatbots and/or virtual assistants (Baabdullah et al., 2022; Balakrishnan et al., 2022; Brachten et al., 2021; Hari et al., 2022; Li et al., 2021; Lou et al., 2022; Malodia et al., 2021; Sharma et al., 2022). Some of them are evaluating their strengths and weaknesses, including those of OpenAI’s ChatGPT (Farrokhnia et al., 2023; Kasneci et al., 2023). Very often, they argue that there may be instances where the chatbots’ responses are not completely accurate and/or may not fully address the questions that are asked of them (Gill et al., 2024). This may be due to different reasons. For example, GPT-3.5’s responses are based on the data that were uploaded before a knowledge cut-off date (i.e. September 2021). This can have a negative effect on the quality of its replies, as the algorithm is not up to date with the latest developments. Although there is a knowledge gap and a few grey areas on the use of AI chatbots that rely on natural language processing to create humanlike conversational dialogue, there are still only a few contributions that have critically evaluated their pros and cons, and even fewer studies have investigated the factors affecting the individuals’ engagement levels with ChatGPT.

This empirical research builds on theoretical underpinnings related to information technology adoption in order to examine the online users’ perceptions and intentions to use AI chatbots. Specifically, it integrates a perceived interactivity construct (Baabdullah et al., 2022; McMillan and Hwang, 2002) with information quality and source trustworthiness measures (Leong et al., 2021; Sussman and Siegal, 2003) from the Information Adoption Model (IAM), and with performance expectancy, effort expectancy and social influences constructs (Venkatesh et al., 2003; Venkatesh et al., 2012) from the Unified Theory of Acceptance and Use of Technology (UTAUT1/UTAUT2), to determine which factors are influencing the individuals’ intentions to use AI text generation systems like ChatGPT. This study’s research questions are:

RQ1. How and to what extent are information quality and source trustworthiness influencing the online users’ performance expectancy from ChatGPT?

RQ2. How and to what extent are their perceptions about ChatGPT’s interactivity, performance expectancy, effort expectancy, as well as their social influences, affecting their intentions to continue using these large language models?

RQ3. How and to what degree is the performance expectancy construct mediating the relationship between effort expectancy and intentions to use these interactive AI technologies?

This study hypothesizes that information quality and source trustworthiness are significant antecedents of performance expectancy. It presumes that this latter construct, together with effort expectancy, social influences and perceived interactivity, affects the online users’ acceptance and usage of generative pre-trained AI chatbots like GPT-3.5 or GPT-4.
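
The article itself estimates these hypothesized paths with PLS-SEM in SmartPLS. As a simplified, hypothetical illustration of the same structure, the sketch below approximates the two structural equations with ordinary least squares in Python; the construct column names and the CSV file are invented for this example:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical construct scores (e.g., item averages per respondent).
# Invented columns: IQ = information quality, ST = source trustworthiness,
# EE = effort expectancy, SI = social influences, PI = perceived
# interactivity, PE = performance expectancy, BI = behavioral intentions.
df = pd.read_csv("construct_scores.csv")

# Hypothesized antecedents of performance expectancy
pe_model = smf.ols("PE ~ IQ + ST + EE", data=df).fit()

# Hypothesized antecedents of intentions to use ChatGPT
bi_model = smf.ols("BI ~ PE + EE + SI + PI", data=df).fit()

print(pe_model.summary())
print(bi_model.summary())
```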

Many academic researchers sought to explore the individuals’ behavioral intentions to use a wide array of technologies (Alalwan, 2020; Alam et al., 2020; Al-Saedi et al., 2020; Raza et al., 2021; Tam et al., 2020). Very often, they utilized measures from the Theory of Reasoned Action (TRA) (Fishbein and Ajzen, 1975), the Theory of Planned Behavior (TPB) (Ajzen, 1991), the Technology Acceptance Model (TAM) (Davis, 1989; Davis et al., 1989), TAM2 (Venkatesh and Davis, 2000), TAM3 (Venkatesh and Bala, 2008), UTAUT (Venkatesh et al., 2003) or UTAUT2 (Venkatesh et al., 2012). Few scholars have integrated constructs like UTAUT/UTAUT2’s performance expectancy, effort expectancy, social influences and intentions to use technologies with information quality and source trust measures from the Elaboration Likelihood Model (ELM) and IAM. Currently, there is still limited research that incorporates a perceived interactivity factor within information technology frameworks. Therefore, this contribution addresses this deficit in academic knowledge.

Notwithstanding, for the time being, there is still scant research that is focused on AI-powered LLMs, like ChatGPT, that are capable of generating human-like text based on previous contexts and drawn from past conversations. This timely study raises awareness on the individuals’ perceptions about the utilitarian value of such interactive technologies in an academic (higher education) context. It clearly identifies the factors that are influencing the individuals’ intentions to continue using them in the future.


From the literature review

Table 1 features a summary of the most popular theoretical frameworks that sought to identify the antecedents and the extent to which they may affect the individuals’ intentions to use information technologies.

Table 1. A non-exhaustive list of theoretical frameworks focused on (information) technology adoption behaviors

Figure 1 features the conceptual framework that investigates information technology adoption factors. It represents a visual illustration of the hypotheses of this study. In sum, this empirical research presumes that information quality and source trustworthiness (from the Information Adoption Model) precede performance expectancy. The latter construct, together with effort expectancy, social influences (from the Unified Theory of Acceptance and Use of Technology) as well as the perceived interactivity construct, are significant antecedents of the individuals’ intentions to use ChatGPT.


The survey instrument

The respondents were instructed to answer all survey questions that were presented to them about information quality, source trustworthiness, performance expectancy, effort expectancy, social influences, perceived interactivity, and their behavioral intentions to continue using this technology (otherwise, they could not submit the questionnaire). Table 2 features the list of measures as well as their corresponding items that were utilized in this study. It also provides a definition of the constructs used in the proposed information technology acceptance framework.

Table 2. The list of measures and the corresponding items used in this research.
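
Before estimating a model, multi-item constructs like these are typically screened for internal consistency. A minimal sketch of Cronbach’s alpha over hypothetical Likert responses (the toy scores below are invented) might look like this:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for one construct.

    items: respondents x items matrix of Likert scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 5 respondents answering the 3 items of one construct
scores = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]])
print(round(cronbach_alpha(scores), 3))
```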


Theoretical implications

This research sought to explore the factors that are affecting the individuals’ intentions to use ChatGPT. It examined the online users’ effort and performance expectancy, social influences, as well as their perceptions about the information quality, source trustworthiness and interactivity of generative text AI chatbots. The empirical investigation hypothesized that performance expectancy, effort expectancy and social influences from Venkatesh et al.’s (2003) UTAUT, together with a perceived interactivity construct (McMillan and Hwang, 2002), were significant antecedents of their intentions to revisit ChatGPT’s website and/or to use its app. Moreover, it presumed that information quality and source trustworthiness measures from Sussman and Siegal’s (2003) IAM were precursors of performance expectancy.

The results from this study report that the path from source trustworthiness to performance expectancy is the most significant in this research model. They confirm that online users perceived a connection between the source’s trustworthiness, in terms of its dependability, and the degree to which they believe that using such an AI generative system will help them improve their job performance. Similar effects were also evidenced in previous IAM theoretical frameworks (Kang and Namkung, 2019; Onofrei et al., 2022), as well as in a number of studies related to TAM (Assaker, 2020; Chen and Aklikokou, 2020; Shahzad et al., 2018) and/or to UTAUT/UTAUT2 (Lallmahomed et al., 2017).

In addition, this research reports that the users’ perceptions about information quality significantly affect their performance expectancy from ChatGPT. Yet, this link was weaker than the former, implying that the respondents’ perceptions about the usefulness of this text-generative technology were influenced by the peripheral cues of communication (Cacioppo and Petty, 1981; Shi et al., 2018; Sussman and Siegal, 2003; Tien et al., 2019).

Very often, academic colleagues noted that individuals would probably rely on the information that is presented to them, if they perceive that the sources and/or their content are trustworthy (Bingham et al., 2019; John and De’Villiers, 2020; Winter, 2020). Frequently, they indicated that source trustworthiness would likely affect their beliefs about the usefulness of information technologies, as they enable them to enhance their performance. Conversely, some commentators argued that there may be users who are skeptical and wary about using new technologies, especially if they are unfamiliar with them (Shankar et al., 2021). They noted that such individuals may be concerned about the reliability and trustworthiness of the latest technologies.

The findings suggest that the individuals’ perceptions about the interactivity of ChatGPT are a precursor of their intentions to use it. This link is also highly significant. Therefore, the online users evidently appreciated this information technology’s responsiveness to their prompts (in terms of its computer-human communications). ChatGPT’s interactivity attributes are having an impact on the individuals’ readiness to engage with it, and to seek answers to their questions. Similar results were reported in other studies that analyzed how the interactivity and anthropomorphic features of dialogue systems, like live support chatbots or virtual assistants, can influence the online users’ willingness to continue utilizing them in the future (Baabdullah et al., 2022; Balakrishnan et al., 2022; Brachten et al., 2021; Liew et al., 2017).

There are a number of academic contributions that sought to explore how, why, where and when individuals are lured by interactive communication technologies (e.g. Hari et al., 2022; Li et al., 2021; Lou et al., 2022). Generally, these researchers posited that users are habituated with information systems that are programmed to engage with them in a dynamic and responsive manner. Very often, they indicated that many individuals are favorably disposed to use dialogue systems that are capable of providing them with instant feedback and personalized content. Several colleagues suggest that positive user experiences, as well as high satisfaction levels and enjoyment, could enhance their connection with information technologies, and will probably motivate them to continue using them in the future (Ashfaq et al., 2020; Camilleri and Falzon, 2021; Huang and Chueh, 2021; Wolfinbarger and Gilly, 2003).

Another important finding from this research is that the individuals’ social influences (from family, friends or colleagues) are affecting their interactions with ChatGPT. Again, this causal path is also very significant. Similar results were reported in UTAUT/UTAUT2 studies that focused on the link between social influences and intentional behaviors to use technologies (Gursoy et al., 2019; Patil et al., 2020). In addition, TPB/TRA researchers found that subjective norms also predict behavioral intentions (Driediger and Bhatiasevi, 2019; Sohn and Kwon, 2020). This is in stark contrast with other studies that reported no significant relationship between social influences/subjective norms and behavioral intentions (Ho et al., 2020; Kamble et al., 2019).

Interestingly, the results report a highly significant effect between effort expectancy (i.e. the ease of use of the generative AI technology) and performance expectancy (i.e. its perceived usefulness). Many scholars posit that perceived ease of use is a significant driver of the perceived usefulness of technology (Bressolles et al., 2014; Davis, 1989; Davis et al., 1989; Kamble et al., 2019; Yoo and Donthu, 2001). Furthermore, there are significant causal paths between performance expectancy and intentions to use ChatGPT, and even between effort expectancy and intentions to use ChatGPT, albeit to a lesser extent. Yet, this research indicates that performance expectancy partially mediates the relationship between effort expectancy and intentions to use ChatGPT. This mediated link is highly significant.
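
Partial mediation claims of this kind are commonly assessed by bootstrapping the indirect effect, i.e. the product of the two constituent path coefficients. A hypothetical sketch, reusing the invented EE/PE/BI construct columns from the earlier sketch, could look as follows:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def bootstrap_indirect_effect(df: pd.DataFrame, n_boot: int = 1000, seed: int = 0):
    """Bootstrap the EE -> PE -> BI indirect effect (product of paths a*b).

    EE, PE and BI are invented stand-in columns for effort expectancy,
    performance expectancy and behavioral intentions.
    """
    rng = np.random.default_rng(seed)
    effects = []
    for _ in range(n_boot):
        sample = df.sample(len(df), replace=True,
                           random_state=int(rng.integers(2**31)))
        a = smf.ols("PE ~ EE", data=sample).fit().params["EE"]       # path a
        b = smf.ols("BI ~ PE + EE", data=sample).fit().params["PE"]  # path b
        effects.append(a * b)
    lo, hi = np.percentile(effects, [2.5, 97.5])
    return float(np.mean(effects)), (float(lo), float(hi))

# Partial mediation is supported when the confidence interval excludes zero
# while the direct EE -> BI path also remains significant.
```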

In sum, this contribution validates key information technology measures, specifically, performance expectancy, effort expectancy, social influences and behavioral intentions from UTAUT/UTAUT2, as well as information quality and source trustworthiness from ELM/IAM and integrates them with a perceived interactivity factor. It builds on previous theoretical underpinnings. Yet, it differentiates itself from previous studies. To date, there are no other empirical investigations that have combined the same constructs that are presented in this article. Notwithstanding, this research puts forward a robust Information Technology Acceptance Framework. The results confirm the reliability and validity of the measures. They clearly outline the relative strength and significance of the causal paths that are predicting the individuals’ intentions to use ChatGPT.


Managerial implications

This empirical study provides a snapshot of the online users’ perceptions about ChatGPT’s responses to verbal queries, and sheds light on their dispositions to avail themselves of its natural language processing. It explores their performance expectations about the usefulness of these information technologies and their effort expectations related to their ease of use, and investigates whether they are affected by colleagues or by other social influences to use such dialogue systems. Moreover, it examines their insights about the content quality, source trustworthiness and interactivity features of these text-generative AI models.

Generally, the results suggest that the research participants felt that these algorithms are easy to use. The findings indicate that they consider them to be useful too, specifically when the information they generate is trustworthy and dependable. The respondents suggest that they are concerned about the quality and accuracy of the content that is featured in the AI chatbots’ answers. This contingent issue can have a negative effect on the use of the information that is created by online dialogue systems.

OpenAI’s ChatGPT is a case in point. Its app is freely available in many countries, via desktop and mobile technologies including iOS and Android. The company admits that its GPT-3.5 outputs may be inaccurate, untruthful, and misleading at times. It clarifies that its algorithm is not connected to the internet, and that it can occasionally produce incorrect answers (OpenAI, 2023a). It posits that GPT-3.5 has limited knowledge of the world and events after 2021 and may also occasionally produce harmful instructions or biased content. OpenAI recommends checking whether its chatbot’s responses are accurate, and letting the company know when it answers in an incorrect manner, by using the “Thumbs Down” button. Its Help Center even declares that ChatGPT can occasionally make up facts or “hallucinate” outputs (OpenAI, 2023a, 2023b).

OpenAI reports that its ChatGPT Plus subscribers can access safer and more useful responses. These users can avail themselves of a number of beta plugins and resources that offer a wide range of capabilities, including text-to-speech applications as well as web browsing features through Bing. Yet again, OpenAI (2023b) indicates that its GPT-4 still has many known limitations that the company is working to address, such as “social biases and adversarial prompts” (at the time of writing this article). Evidently, works are still in progress at OpenAI. The company needs to resolve these serious issues, considering that its Content Policy and Terms clearly stipulate that OpenAI’s consumers are the owners of the output that is created by ChatGPT. Hence, ChatGPT’s users have the right to reprint, sell, and merchandise the content that is generated for them through OpenAI’s platforms, regardless of whether the output (its response) was provided via a free or a paid plan.

Various commentators are increasingly raising awareness about the corporate digital responsibilities of those involved in the research, development and maintenance of such dialogue systems. A number of stakeholders, particularly regulatory ones, are concerned about possible risks and perils arising from AI algorithms, including interactive chatbots. In many cases, they are warning that disruptive chatbots could disseminate misinformation, foster prejudice, bias and discrimination, raise privacy concerns, and could lead to the loss of jobs. Arguably, one has to bear in mind that many governments are outpaced by the proliferation of technological innovations (as their development happens before the enactment of legislation). As a result, they tend to be reactive in the implementation of substantive regulatory interventions. This research reported that the development of ChatGPT has resulted in mixed reactions among different stakeholders in society, especially during the first months after its official launch. At the moment, there are just a few jurisdictions that have formalized policies and governance frameworks that are meant to protect and safeguard individuals and entities from possible risks and dangers of AI technologies (Camilleri, 2023). Of course, voluntary principles and guidelines are a step in the right direction. However, policy makers are expected by various stakeholders to step up their commitment by introducing quasi-regulations and legislation.

Currently, a number of technology conglomerates, including Microsoft-backed OpenAI, Apple and IBM, among others, have anticipated the governments’ regulations by joining forces in a non-profit organization entitled “Partnership on AI” that aims to advance safe, responsible AI that is rooted in open innovation. In addition, IBM has also teamed up with Meta and other companies, startups, universities, research and government organizations, as well as non-profit foundations, to form an “AI Alliance” that is intended to foster innovations across all aspects of AI technology, applications and governance.



Filed under artificial intelligence, chatbots, ChatGPT, digital media, Generative AI, Marketing

An artificial intelligence governance framework

This is an excerpt from my latest contribution on responsible artificial intelligence (AI).

Suggested citation: Camilleri, M. A. (2023). Artificial intelligence governance: Ethical considerations and implications for social responsibility. Expert Systems, e13406. https://doi.org/10.1111/exsy.13406

The term “artificial intelligence governance” or “AI governance” integrates the notions of “AI” and “corporate governance”. AI governance is based on formal rules (including legislative acts and binding regulations) as well as on voluntary principles that are intended to guide practitioners in their research, development and maintenance of AI systems (Butcher & Beridze, 2019; Gonzalez et al., 2020). Essentially, it represents a regulatory framework that can support AI practitioners in their strategy formulation and in day-to-day operations (Erdélyi & Goldsmith, 2022; Mullins et al., 2021; Schneider et al., 2022). The rationale behind responsible AI governance is to ensure that automated systems, including ML/DL technologies, are supporting individuals and organizations in achieving their long-term objectives, whilst safeguarding the interests of all stakeholders (Corea et al., 2023; Hickok et al., 2022).

AI governance requires that the organizational leaders comply with relevant legislation, hard laws and regulations (Mäntymäki et al., 2022). Moreover, they are expected to follow ethical norms, values and standards (Koniakou, 2023). Practitioners ought to be trustworthy, diligent and accountable in how they handle their intellectual capital and other resources, including their information technologies, finances and members of staff, in order to overcome challenges and to minimize uncertainties, risks and any negative repercussions (e.g. decreased human oversight in decision-making, among others) (Agbese et al., 2023; Smuha, 2019).

Procedural governance mechanisms ought to be in place to ensure that AI technologies and ML/DL models are operating in a responsible manner. Figure 1 features some of the key elements that are required for the responsible governance of artificial intelligence. The following principles aim to provide guidelines for the modus operandi of AI practitioners (including ML/DL developers).

Figure 1. A Responsible Artificial Intelligence Governance Framework

Accountability and transparency

“Accountability” refers to the stakeholders’ expectations about the proper functioning of AI systems, in all stages, including in the design, creation, testing or deployment, in accordance with relevant regulatory frameworks. It is imperative that AI developers are held accountable for the smooth operation of AI systems throughout their lifecycle (Raji et al., 2020). Stakeholders expect them to be accountable by keeping a track record of their AI development processes (Mäntymäki et al., 2022).

The transparency notion refers to the extent to which end-users could be in a position to understand how AI systems work (Andrada et al., 2020; Hollanek, 2020). AI transparency is associated with the degree of comprehension about algorithmic models in terms of “simulatability” (an understanding of AI functioning), “decomposability” (related to how individual components work), and algorithmic transparency (this is associated with the algorithms’ visibility).

In reality, it is difficult to understand how AI systems, including deep learning models and their neural networks, are learning (as they acquire, process and store data) during training phases. They are often considered black box models. It may prove hard to translate algorithmically derived concepts into human-understandable terms, even though developers may use certain jargon to explain their models’ attributes and features. Many legislators are striving to pressure AI actors to describe the algorithms they use in automated decision-making, yet the publication of algorithms is of little use if outsiders cannot access the data of the AI model.

Explainability and interpretability

Explainability is the concept that sheds light on how AI models work, in a way that is comprehensible to a human being. Arguably, the explainability of AI systems could improve their transparency, trustworthiness and accountability. At the same time, it can reduce bias and unfairness. The explainability of artificial intelligence systems could clarify how they reached their decisions (Arya et al., 2019; Keller & Drake, 2021). For instance, AI could explain how and why autonomous cars decide to stop or slow down when there are pedestrians or other vehicles in front of them.

Explainable AI systems might improve consumer trust and may enable engineers to develop other AI models, as they are in a position to track the provenance of every process, to ensure reproducibility, and to enable checks and balances (Schneider et al., 2022). Similarly, interpretability refers to the level of accuracy of machine learning programs in terms of linking causes to effects (John-Mathews, 2022).
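
One widely used, model-agnostic way of approximating such explanations is permutation feature importance: shuffle one input at a time and measure how much the model’s score degrades. The sketch below illustrates the general idea on synthetic data with scikit-learn; it is an illustration of the technique, not the method used by the cited authors:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train an opaque ("black box") model on synthetic data
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in test accuracy,
# a simple model-agnostic explanation of which inputs matter most
result = permutation_importance(model, X_test, y_test, n_repeats=20,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f}")
```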

Fairness and inclusiveness

The responsible AI’s fairness dimension refers to the practitioners’ attempts to correct algorithmic biases that may possibly (voluntarily or involuntarily) be included in their automation processes (Bellamy et al., 2019; Mäntymäki et al., 2022). AI systems can be affected by their developers’ biases, which could include preferences or antipathies toward specific demographic variables like genders, age groups and ethnicities, among others (Madaio et al., 2020). Currently, there is no universal definition of AI fairness.

However, many multinational corporations have recently developed instruments that are intended to detect bias and to reduce it as much as possible (John-Mathews et al., 2022). In many cases, AI systems learn from the data that are fed to them. If the data are skewed and/or contain implicit biases, they may result in inappropriate outputs.

Fair AI systems rely on unbiased data (Wu et al., 2020). For this reason, many companies, including Facebook, Google, IBM and Microsoft, among others, are striving to involve members of staff hailing from diverse backgrounds. These technology conglomerates are trying to become as inclusive and as culturally aware as possible, in order to prevent bias from affecting their AI processes. Previous research reported that AI bias may result in inequality, discrimination and the loss of jobs (Butcher & Beridze, 2019).
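
Bias-detection instruments of this kind often begin with simple group-level metrics. The following minimal sketch computes one such metric, the demographic parity difference, on invented toy decisions:

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Difference in positive-outcome rates between two groups.

    y_pred: binary model decisions (e.g., loan approved = 1)
    group:  binary protected attribute (two demographic groups)
    A value near 0 suggests parity; a large gap flags potential bias.
    """
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy audit: decisions for 8 applicants, 4 in each group
decisions = np.array([1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(decisions, groups))  # 0.5 -> worth investigating
```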

Privacy and safety for consumers

Consumers are increasingly concerned about the privacy of their data. They have a right to control who has access to their personal information. Data that are collected or used by third parties, without the authorization or voluntary consent of individuals, result in violations of their privacy (Zhu et al., 2020; Wu et al., 2022).

AI-enabled products, including dialogue systems like chatbots and virtual assistants, as well as digital assistants (e.g. Siri, Alexa or Cortana), and/or wearable technologies such as smart watches and sensorial smart socks, among others, are increasingly capturing and storing large quantities of consumer information. The benefits that these interactive technologies deliver may be offset by a number of challenges. The technology businesses that developed these products are responsible for protecting their consumers’ personal data (Rodríguez-Barroso et al., 2020). Their devices are capable of holding a wide variety of information on their users. They are continuously gathering textual, visual, audio, verbal, and other sensory data from consumers. In many cases, the customers are not aware that they are sharing personal information with them.

For example, facial recognition technologies are increasingly being used in different contexts. Individuals may use them to access websites and social media in a secure manner, and even to authorize their payments through banking and financial services applications. Employers may rely on such systems to track and monitor their employees’ attendance. Marketers can utilize such technologies to target digital advertisements to specific customers. Police and security departments may use them in their surveillance systems and to investigate criminal cases. The adoption of these technologies has often raised concerns about privacy and security issues. According to several data privacy laws that have been enacted in different jurisdictions, organizations are bound to inform users that they are gathering and storing their biometric data. The businesses that employ such technologies are not authorized to use their consumers’ data without their consent.

Companies are expected to communicate their data privacy policies to their target audiences (Wong, 2020). They have to reassure consumers that the consented data they collect are protected, and they are bound to inform them that they may use their information to improve the customized services they offer. The technology giants can reward their consumers for sharing sensitive information. They could offer them improved personalized services, among other incentives, in return for their data. In addition, consumers may be allowed to access their own information and could be provided with more control (or other reasonable options) over how to manage their personal details.

The security and robustness of AI systems

AI algorithms are vulnerable to cyberattacks by malicious actors. Therefore, it is in the interest of AI developers to secure their automated systems and to ensure that they are robust enough against any risks and attempts to hack them (Gehr et al., 2018; Li et al., 2020).

Access to AI models ought to be monitored at all times during their development and deployment (Bertino et al., 2021). There may be instances when AI models encounter incidental adversities, leading to the corruption of data. Alternatively, they might encounter intentional adversities when they experience sabotage from hackers. In both cases, the AI model will be compromised, which can result in system malfunctions (Papagiannidis et al., 2023).

Developers have to prevent such contingent issues from happening. Their responsibilities are to improve the robustness of their automated systems, and to make them as secure as possible, to reduce the chances of threats, including inadvertent irregularities, information leakages, as well as privacy violations like data breaches, contamination and poisoning by malicious actors (Agbese et al., 2023; Hamon et al., 2020).

AI developers should have preventive policies and measures related to the monitoring and control of their data. They ought to invest in security technologies, including authentication and/or access systems with encryption software, as well as firewalls, for protection against cyberattacks. Routine testing can increase data protection, improve security levels and minimize the risks of incidents.
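
As one concrete illustration of the authentication controls mentioned above, the sketch below stores and verifies passwords as salted, slow PBKDF2 hashes using only Python’s standard library (the iteration count and salt size are illustrative choices):

```python
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Store passwords as salted PBKDF2 hashes instead of plain text."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess", salt, digest))                         # False
```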

Conclusions

This review indicates that more academics, as well as practitioners, are increasingly devoting their attention to AI as they elaborate on its potential uses, as well as on its opportunities and threats. It reported that its proponents are raising awareness on the benefits of AI systems for individuals as well as for organizations. At the same time, it suggests that a number of scholars and other stakeholders, including policy makers, are raising their concerns about its possible perils (e.g. Berente et al., 2021; Gonzalez et al., 2020; Zhang & Lu, 2021).

Many researchers identified some of the risks of AI (Li et al., 2021; Magas & Kiritsis, 2022). In many cases, they warned that AI could disseminate misinformation, foster prejudice, bias and discrimination, raise privacy concerns, and could lead to the loss of jobs (Butcher & Beridze, 2019). A few commentators argue about the “singularity”, or the moment when machine learning technologies could even surpass human intelligence (Huang & Rust, 2022). They predict that a critical shift could occur if humans are no longer in a position to control AI.

In this light, this article sought to explore the governance of AI. It sheds light on substantive regulations, as well as on reflexive principles and guidelines, that are intended for practitioners who are researching, testing, developing and implementing AI models. It clearly explains how institutions, non-governmental organizations and technology conglomerates are introducing protocols (including self-regulations) to prevent contingencies arising from inappropriate AI governance.

Debatably, the voluntary or involuntary mishandling of automated systems can expose practitioners to operational disruptions and to significant risks, including to their corporate image and reputation (Watts & Adriano, 2021). The nature of AI requires practitioners to develop guardrails to ensure that their algorithms work as they should (Bauer, 2022). It is imperative that businesses comply with relevant legislation and follow ethical practices (Buhmann & Fieseler, 2023). Ultimately, it is in their interest to operate their companies in a responsible manner, and to implement AI governance procedures. This way they can minimize unnecessary risks and safeguard the well-being of all stakeholders.

This contribution has addressed its underlying research objectives. Firstly, it raised awareness on AI governance frameworks that were developed by policy makers and other organizations, including by the businesses themselves. Secondly, it scrutinized the extant academic literature focused on AI governance and on the intersection of AI and CSR. Thirdly, it discussed essential elements for the promotion of socially responsible behaviors and ethical dispositions of AI developers. In conclusion, it put forward an AI governance conceptual model for practitioners.

This research made reference to regulatory instruments that are intended to govern AI expert systems. It reported that, at the moment, only a few jurisdictions have formalized their AI policies and governance frameworks. Hence, this article urges laggard governments to plan, organize, design and implement regulatory instruments that ensure that individuals and entities are safe when they utilize AI systems for personal, educational and/or commercial purposes.

Arguably, one has to bear in mind that, in many cases, policy makers have to face a “pacing problem” as the proliferation of innovation is much quicker than legislation. As a result, governments tend to be reactive in the implementation of regulatory interventions relating to innovations. They may be unwilling to hold back the development of disruptive technologies from their societies. Notwithstanding, they may face criticism by a wide array of stakeholders in this regard, as they may have conflicting objectives and expectations.

Governments’ policy is to regulate business and industry, to establish technical, safety and quality standards, and to monitor compliance. Yet, they may consider introducing different forms of regulation other than the traditional “command and control” mechanisms. They may opt for performance-based and/or market-based incentive approaches, co-regulation and self-regulation schemes, among others (Hepburn, 2009), in order to foster technological innovations.

This research has shown that a number of technology giants, including IBM and Microsoft, among others, are anticipating the regulatory interventions of different governments where they operate their businesses. It reported that they are communicating about their responsible AI governance initiatives as they share information on their policies and practices that are meant to certify, explain and audit their AI developments. Evidently, these companies, among others, are voluntarily self-regulating as they promote accountability, fairness, privacy and robust AI systems. These two organizations, in particular, are raising awareness about their AI governance frameworks to increase their CSR credentials with stakeholders.

Likewise, AI developers who work for other businesses are expected to forge relationships with external stakeholders, including policy makers as well as individuals and organizations who share similar interests in AI. Innovative clusters and network developments may result in better AI systems and can also decrease the chances of possible risks. Indeed, practitioners can be in a better position if they cooperate with stakeholders for the development of trustworthy AI, and if they increase their human capacity to improve the quality of their intellectual properties (Camilleri et al., 2023). This way, they can enhance their competitiveness and growth prospects (Troise & Camilleri, 2021). Arguably, it is in their interest to continuously engage with internal stakeholders (and employees), and to educate them about AI governance dimensions that are intended to promote accountable, transparent, explainable, interpretable, reproducible, fair, inclusive and secure AI solutions. Hence, they could maximize AI benefits and minimize their risks as well as associated costs.

Future research directions

Academic colleagues are invited to raise more awareness on AI governance mechanisms as well as on verification and monitoring instruments. They can investigate what, how, when and where protocols could be used to protect and safeguard individuals and entities from possible risks and dangers of AI.

The “what” question involves the identification of AI research and development processes that require regulatory or quasi regulatory instruments (in the absence of relevant legislation) and/or necessitate revisions in existing statutory frameworks.

The “how” question is related to the substance and form of AI regulations, in terms of their completeness, relevance, and accuracy. This argumentation is synonymous with the true and fair view concept applied in the accounting standards of financial statements.

The “when” question is concerned with the timeliness of the regulatory intervention. Policy makers ought to ensure that stringent rules do not hinder or delay the advancement of technological innovations.

The “where” question is meant to identify the context where mandatory regulations or the introduction of soft laws, including non-legally binding principles and guidelines are/are not required.

Future researchers are expected to investigate these four questions in more depth and breadth. This research indicated that most contributions on AI governance were discursive in nature and/or involved literature reviews. Hence, there is scope for academic colleagues to conduct primary research and to utilize different research designs, methodologies and sampling frames to better understand the implications of planning, organizing, implementing and monitoring AI governance frameworks in diverse contexts.

The full article is also available here: https://www.researchgate.net/publication/372412209_Artificial_intelligence_governance_Ethical_considerations_and_implications_for_social_responsibility


Filed under artificial intelligence, chatbots, Corporate Social Responsibility, internet technologies, internet technologies and society

The functionality and usability features of mobile apps

This is an excerpt from one of my latest publications.

Suggested citation: Camilleri, M.A., Troise, C. & Kozak, M. (2023). Functionality and usability features of ubiquitous mobile technologies: The acceptance of interactive travel apps. Journal of Hospitality and Tourism Technology. https://doi.org/10.1108/JHTT-12-2021-0345 Available from: https://www.researchgate.net/publication/366633583_Functionality_and_usability_features_of_ubiquitous_mobile_technologies_The_acceptance_of_interactive_travel_apps


Prior studies relied on specific theoretical frameworks like the Interactive Technology Adoption Model – ITAM (Camilleri and Kozak, 2022), elaboration likelihood model (ELM), information adoption model (IAM) and/or technology acceptance model (TAM), among others, to better understand which factors are having an impact on the individuals’ engagement with digital media or information technologies.

In this case, this research identifies the factors that are influencing the adoption of travel apps in the aftermath of COVID-19. It examines the effects of information quality and source credibility (measures drawn from the IAM framework), as well as of technical functionality relating to electronic service quality (eSERVQUAL), on the individuals’ perceptions about the usefulness of these mobile technologies and on their intentions to continue using them on a habitual basis (the latter two factors are used in TAM models). In doing so, it sheds light on the consumers’ beliefs about their usability and functionality features.

This study suggests that consumers value the quality of the digital content that is presented to them through these mobile technologies. Apparently, they perceive that the sources (who curate the content) are knowledgeable and proficient in the upkeep and maintenance of their apps. Moreover, they appreciate their functional attributes, including their instrumental utility and appealing designs. Evidently, these factors influence their intentions to use travel apps in the future. They may even lead them to purchase travel and hospitality services. Furthermore, they can have an impact on their social facilitation behaviors like positive publicity (via electronic word of mouth such as online reviews, as well as in person/offline), among other outcomes.

This contribution implies that there is scope for future researchers to incorporate a functionality factor, in addition to ITAM, IAM and/or TAM ‘usability’ constructs, to investigate the individuals’ dispositions to utilize technological innovations and to adopt their information. It confirms that functionality features, including ease of use, responsiveness, organized layout and technical capabilities, can trigger users to increase their app engagement on a habitual basis.

Practical recommendations

The results from this study reveal that the respondents hold positive perceptions toward interactive travel apps. In the main, they indicate that these mobile technologies feature high quality content, are organized, work well, offer a good selection of products and are easy to use.

This research posits that mobile users appreciate the quality of information that is presented to them through the travel apps, in terms of its completeness, accuracy and timeliness. Yet, the findings show that there is room for improvement. There is scope for service providers (and for the curators of their travel apps) to increase their credentials on source trustworthiness and expertise among consumers.

The results suggest that information quality had a more significant effect on the respondents’ perceived usefulness of travel apps than source credibility. Moreover, they also suggest that consumers are willing to engage with travel apps as they believe that they offer seamless functionality features, including customization capabilities and fast loading screens. Most probably, the respondents are cognizant that they offer differentiated pricing options on flights, hotels and cars, from various service providers. They may be aware that many travel apps also enable their users to access their itineraries even when they are offline and allow them to keep a track record of their reward points (e.g. of frequent flyer programs) on every booking.

In this day and age, consumers can utilize mobile devices to access asynchronous content in webpages, including detailed information on tourism service providers, transportation services, tours to attractions, the provision of amenities in tourist destinations, frequently asked questions, efficient booking engines with high-resolution images and videos, quick loading and navigation, detailed maps, as well as qualitative reviews and quantitative ratings. Very often, such content can even be accessed in different languages.

A number of travel apps allow their users to log in through secure authentication methods (such as randomly generated passwords), to keep a record of their credit card details and past transactions. Most of them also send price alerts as well as push notifications that remind consumers about their past searches. These services add value to the electronic service quality, as opposed to unsolicited promotional messages that are not always related to consumers’ interests.
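The price-alert mechanism described above can be reduced to a very simple loop. The sketch below is a hypothetical illustration (fetch_current_price and send_push are stand-ins for real fare APIs and notification services); it also reflects the design choice of alerting only on meaningful fare drops, rather than pushing unsolicited promotions.

```python
# Hypothetical sketch of a price-alert job; fetch_current_price() and
# send_push() are stand-ins for real fare APIs and notification services.
from dataclasses import dataclass

@dataclass
class SavedSearch:
    user_id: str
    route: str            # e.g. "MLA-LHR"
    last_seen_price: float

def fetch_current_price(route: str) -> float:
    return 89.0           # stub: a real app would query a fare API here

def send_push(user_id: str, message: str) -> None:
    print(f"push -> {user_id}: {message}")  # stub: a real push-service call

def check_price_alerts(searches: list[SavedSearch], drop: float = 0.05) -> None:
    """Alert a user only when a saved fare falls by at least `drop` (5%),
    so notifications stay relevant to the consumer's own past searches."""
    for s in searches:
        price = fetch_current_price(s.route)
        if price <= s.last_seen_price * (1 - drop):
            send_push(s.user_id, f"Fares on {s.route} dropped to {price:.2f}")
            s.last_seen_price = price

check_price_alerts([SavedSearch("traveller42", "MLA-LHR", 110.0)])
```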

Generally, customers expect travel and tourism service providers to respond to their online queries instantaneously. They increasingly demand web chat services that resolve queries as soon as possible, preferably in real time.

Tourism and hospitality service providers are already using augmented reality (AR) and virtual reality (VR) software to improve their consumers’ online experiences and to emphasize their brand positioning as high-quality service providers. In the foreseeable future, it is very likely that practitioners could avail themselves of Metaverse technologies that teleport consumers into cyberspace, to entice them to book flights, stays, car rentals or tours. Online (and mobile) users may use electronic personas, called avatars, to move around virtual spaces and to engage with other users in the Metaverse.

This interactive technology is poised to enhance its users’ immersive experiences, in terms of their sensory inputs, definitions of space and points of access to information, particularly when they wear VR headsets. Hence, travel and hospitality businesses could avail themselves of such interactive technologies to gain a competitive advantage.

You can access this paper in its entirety, via: https://www.researchgate.net/publication/366633583_Functionality_and_usability_features_of_ubiquitous_mobile_technologies_The_acceptance_of_interactive_travel_apps


Filed under Business, chatbots, corporate communication, customer service, digital media, Marketing, online, Small Business, Travel, Web

Factors affecting intentions to use interactive technologies

This is an excerpt from one of our latest academic articles (that was accepted by the Journal of Services Marketing).

Theoretical implications

Previous studies reported that interactive websites ought to be accessible, appealing, convenient, functional, secure and responsive to their users (Crolic et al., 2021; Hoyer et al., 2020; Kabadayi et al., 2020; Klaus and Zaichkowsky, 2020; Rosenmayer et al., 2018; Sheehan et al., 2020; Valtakoski, 2019). Online service providers are expected to deliver a personalized customer service experience and to exceed their consumers’ expectations at all times, to encourage repeat business and loyal behaviors (Li et al., 2017; Tong et al., 2020; Zeithaml et al., 2002).

Many service marketing researchers have investigated the individuals’ perceptions about price comparison sites, interactive websites, e-commerce/online marketplaces, electronic banking, and social media, among other virtual domains (Donthu et al., 2021; Kabadayi et al., 2020; Klaus and Zaichkowsky, 2020; Rosenbaum and Russell-Bennett, 2020; Rosenmayer et al., 2018; Valtakoski, 2019; Zaki, 2019). Very often, they relied on measures drawn from electronic service quality (e-SQ or e-SERVQUAL), electronic retail quality (eTailQ), transaction process-based approaches for capturing service quality (eTransQual), net quality (NETQual), perceived electronic service quality (PeSQ), site quality (SITEQUAL) and website quality (webQual), among others.

Technology adoption researchers often adapted TAM measures, including perceived usefulness and behavioral intentions constructs, or relied on psychological theories like the Theory of Reasoned Action (Fishbein and Ajzen, 1975) and the Theory of Planned Behavior (Ajzen, 1991), among others, to explore individuals’ acceptance and use of different service technologies in various contexts (Park et al., 2007; Chen and Chang, 2018). Alternatively, they utilized IAM’s theoretical framework to investigate online users’ perceptions about the usefulness of information or online content. Very often, they examined the effects of information usefulness on information adoption (Erkan and Evans, 2016; Liu et al., 2017).

A review of the relevant literature suggests that good quality content (in terms of its understandability, completeness, timeliness and accuracy), as well as the sources’ credibility (with regard to their trustworthiness and expertise), can increase individuals’ expectations regarding a business and its products or services (Cheung et al., 2008; Li et al., 2017; Liu et al., 2017). ELM researchers suggest that a high level of message elaboration (i.e., argument quality), as well as peripheral cues like the credibility of the sources and their appealing content, can have a positive impact on individuals’ attitudes toward the conveyors of information (Allison et al., 2017; Chen and Chang, 2018; Petty et al., 1983), could affect their intentions to (re)visit the businesses’ websites (Salehi-Esfahani et al., 2016), and may even influence their purchase intentions (Chen and Chang, 2018; Erkan and Evans, 2016).

This contribution differentiates itself from previous research, as the researchers adapted key measures from ELM/IAM, namely ‘information quality’ (Filieri and McLeay, 2014; Salehi-Esfahani et al., 2016; Shu and Scott, 2013; Tseng and Wang, 2016) and ‘source credibility’ (Ayeh, 2015; Leong et al., 2019; Wang and Scheinbaum, 2018), and integrated them with an ‘interactive engagement’ construct (McMillan and Hwang, 2002), to better understand individuals’ utilitarian motivations to use service businesses’ interactive websites. The researchers hypothesized that these three constructs were plausible antecedents of TAM’s ‘perceived usefulness’ and ‘intentions to use the technology’. Specifically, this research examines the direct effects of information quality, source credibility and interactive engagement on individuals’ perceived usefulness of interactive websites, as well as their indirect effects on intentions to continue using these service technologies.
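For readers who want to see what testing such an indirect effect can look like in practice, below is a minimal, self-contained sketch (not the authors’ actual procedure) that bootstraps an indirect effect as the product of two regression slopes, on synthetic data; the variable names are assumptions.

```python
# Illustrative sketch with synthetic data; not the study's actual analysis.
# The indirect effect of X on Y via a mediator M is approximated as the
# product of the two path slopes (a simplification of full SEM mediation).
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 300
iq = rng.normal(5.0, 1.0, n)                    # information quality (X)
pu = 0.6 * iq + rng.normal(0.0, 1.0, n)         # perceived usefulness (M)
ci = 0.5 * pu + rng.normal(0.0, 1.0, n)         # continued-use intention (Y)
df = pd.DataFrame({"iq": iq, "pu": pu, "ci": ci})

def indirect_effect(d: pd.DataFrame) -> float:
    a = np.polyfit(d["iq"], d["pu"], 1)[0]      # path a: X -> M
    b = np.polyfit(d["pu"], d["ci"], 1)[0]      # path b: M -> Y (ignores X, for brevity)
    return a * b

boot = [indirect_effect(df.sample(frac=1, replace=True, random_state=i))
        for i in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect ~ {np.mean(boot):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```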

To the best of the researchers’ knowledge, no other academic study has included an interactive engagement construct in addition to ELM/IAM and TAM measures. This contribution addresses this gap in the literature. The engagement construct was used to better understand the respondents’ perceptions about the ease of use of interactive websites, to ascertain whether these websites captivate their users’ attention by offering a variety of content, and, more importantly, to determine whether users consider them responsive technologies.

Managerial implications

This study sheds light on the travel websites’ interactive capabilities during an unprecedented crisis, when businesses received higher volumes of inquiries through different channels (to change bookings, cancel itineraries and/or submit refund requests). At the same time, it identified the most significant factors that affected the respondents’ perceptions and motivations to continue using interactive service technologies in the future.

In sum, this research confirmed that the respondents evaluated the quality of information featured in interactive websites. The findings indicated that they were well acquainted with the websites’ content (e.g. news feeds, product information, differentiated pricing options, images, video clips and/or web chat facilities). The researchers presumed that the respondents were well aware of the latest developments: during COVID-19, a number of travel websites eased their terms and conditions relating to cancellations and refund policies (EU, 2020), to accommodate their customers. Online businesses were expected to communicate with their customers and to clarify any changes in their service delivery in a timely manner.

The contribution clarified that online users were somewhat influenced by the asynchronous content that is featured in webpages. Therefore, service businesses ought to publish quality information to satisfy their customers’ expectations. They may invest in service technologies like a frequently asked questions (FAQ) widget in their websites, to enhance their online customer services and to support online users during and after sales transactions. Service businesses could also integrate event calendars, maps, multilingual accessibility options, online reviews and ratings, and high-resolution images and/or videos in their interactive websites, to entertain their visitors (Cao and Yang, 2016; Bastida and Huan, 2014).

This research underlines the importance for service providers to consistently engage in concurrent, online conversations with customers and prospects, in real time (Buhalis and Sinarta, 2019; Chattaraman et al., 2019; Rihova et al., 2018; Harrigan et al., 2017). Recently, more researchers are raising awareness of the provision of live chat facilities through interactive websites or via SNSs like WhatsApp or Messenger (Camilleri and Troise, 2022). Service businesses are expected to respond to consumer queries, and to address their concerns, as quickly as possible (McLean and Osei-Frimpong, 2019), in order to minimize complaints.

AI chatbot technologies enable service businesses to handle far more interactions with online users than telephone conversations with human customer service representatives would allow (Adam et al., 2021; Hoyer et al., 2020; Luo et al., 2019; McLean and Osei-Frimpong, 2019; Van Pinxteren et al., 2019). The most advanced dialogue systems are equipped with features like omnichannel messaging support, no-code deployment, fallback options, as well as sentiment analysis. These service technologies are designed to improve consumers’ experiences by delivering automated smart responses in an efficient manner. Hence, online businesses will be in a better position to meet and exceed their customers’ service expectations. Indeed, service businesses can benefit from a responsive website, as these interactive technologies enable them to improve their positioning among customers and to generate positive word-of-mouth publicity.
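To illustrate what a ‘fallback option’ combined with sentiment analysis might look like, here is a minimal sketch; the word list and routing labels are assumptions, and production systems would use trained sentiment models rather than a hand-picked lexicon.

```python
# Minimal, hypothetical sketch of sentiment-aware routing; real deployments
# use trained sentiment models, not a hand-picked word list.
import string

NEGATIVE_WORDS = {"angry", "terrible", "refund", "complaint", "useless", "worst"}

def sentiment_score(message: str) -> int:
    """Naive lexicon score: each negative word lowers the score by one."""
    words = message.lower().translate(str.maketrans("", "", string.punctuation)).split()
    return -sum(word in NEGATIVE_WORDS for word in words)

def route_message(message: str) -> str:
    """Fallback option: dissatisfied-sounding users reach a human agent."""
    return "ESCALATE_TO_HUMAN" if sentiment_score(message) < 0 else "AUTOMATED_REPLY"

print(route_message("I want a refund, this is useless"))  # -> ESCALATE_TO_HUMAN
print(route_message("What time is check-in?"))            # -> AUTOMATED_REPLY
```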

Limitations and future research avenues

This study has included a perceived interactivity dimension, namely an ‘interactive engagement’ construct, within an information adoption model. The findings revealed that the websites’ engaging content was a significant antecedent of the respondents’ perceptions about the usefulness of interactive websites. This study also reported that the interactive engagement construct indirectly affected the individuals’ intentions to revisit them.

In conclusion, the authors recommend that future researchers validate this study’s measures in other contexts, to determine the effects of interactive engagement on information adoption and/or on the acceptance and usage of online technologies. Further research is required to better understand which attributes and features of interactive websites are appreciated by online users. Recent contributions suggest that there are many benefits for service businesses in using conversational chatbots to deliver online customer services. These interactive technologies can offer increased convenience to consumers and prospects (Thomaz et al., 2020), improved operational efficiencies (Pantano and Pizzi, 2020), reduced labor costs (Belanche et al., 2020), as well as time-saving opportunities for customers and service providers (Adam et al., 2021).

Prospective empirical research may consider different constructs from other theoretical frameworks to examine the individuals’ perceptions and/or attitudes toward interactive websites and their service technologies. Academic researchers are increasingly relying on the expectancy theory/expectancy violation theory (Crolic et al., 2021), the human computer interaction theory/human machine communication theory (Wilkinson et al., 2021), the social presence theory (Tsai et al., 2021), and/or the social response theory (Adam et al., 2021), among others, to investigate the customers’ engagement with service technologies.

Notwithstanding, different methodologies and sampling frames could be used to capture and analyze primary data. For instance, inductive studies may investigate the consumers’ in-depth opinions and beliefs on this topic. Interpretative studies may reveal important insights on how to improve the efficacy and/or the perceived usefulness of interactive service technologies.

The full paper is available here: https://www.researchgate.net/publication/366055918_Utilitarian_motivations_to_engage_with_travel_websites_An_interactive_technology_adoption_model


Filed under Business, chatbots, corporate communication, customer service, digital media, ecommerce, Marketing, online, Small Business, tourism, Travel

Live support by chatbots with artificial intelligence: A future research agenda

This is an excerpt from one of my latest contributions on the use of responsive chatbots by service businesses. The content was adapted for this blogpost.

Suggested citation: Camilleri, M.A. & Troise, C. (2022). Live support by chatbots with artificial intelligence: A future research agenda. Service Business, https://doi.org/10.1007/s11628-022-00513-9

(Credit: Chatbots Magazine)

The benefits of using chatbots for online customer services

Frequently, consumers engage with chatbot systems without even knowing it, as machines (rather than human agents) respond to their online queries (Li et al. 2021; Pantano and Pizzi 2020; Seering et al. 2018; Stoeckli et al. 2020). Whilst around 13% of online consumer queries require human intervention (as they may involve complex issues and complaints), the remaining 87% of online consumer queries can be handled by chatbots (Ngai et al., 2021).

Several studies reported that there are many advantages of using conversational chatbots for customer services. Their functional benefits include increased convenience to customers, enhanced operational efficiencies, reduced labor costs, and time-saving opportunities.

Consumers are increasingly availing themselves of these interactive technologies to retrieve detailed information from their product recommendation systems and/or to request assistance in resolving technical issues. Alternatively, they use them to review their personal data. Hence, in many cases, customers are willing to share sensitive information in exchange for a better service.

Although these interactive technologies are less engaging than human agents, they can possibly elicit more disclosures from consumers. They are in a position to process the consumers’ personal data and to compare it with prior knowledge, without any human instruction. Chatbots can learn proactively from new sources of information to enrich their knowledge bases.

Whilst human customer service agents usually handle complex queries, including complaints, service chatbots can improve the handling of routine consumer queries. They are capable of interacting with online users in two-way communications (to a certain extent). Their interactions may have significant effects on consumer trust, satisfaction and repurchase intentions, as well as on positive word-of-mouth publicity.

Many researchers reported that consumers are intrigued to communicate with anthropomorphized technologies, as they invoke social responses and norms of reciprocity. Such conversational agents are programmed with certain cues, features and attributes that are normally associated with humans.

The findings from this review clearly indicate that individuals feel comfortable using chatbots that simulate human interactions, particularly those with enhanced anthropomorphic designs. Many authors noted that the more chatbots respond to users in a natural, humanlike way, the easier it is for a business to convert visitors into customers, particularly if they improve their online experiences. This research indicates that there is scope for businesses to use conversational technologies to personalize interactions with online users, to build better relationships with them, to enhance consumer satisfaction levels, and to generate leads as well as sales conversions.

The costs of using chatbots for online customer services

Despite the latest advances in the delivery of electronic services, there are still individuals who hold negative perceptions and attitudes toward the use of interactive technologies. Although AI technologies have been specifically created to foster co-creation between the service provider and the customer, a number of challenges (like authenticity issues, cognition challenges, affective issues, functionality issues and integration conflicts) may result in a failed service interaction and in dissatisfied customers.

There are consumers, particularly older ones, who do not feel comfortable interacting with artificially intelligent technologies like chatbots, or who may not want to comply with their requests, for different reasons. For example, they could be wary about cyber-security issues and/or may simply refuse to engage in conversations with a robot.

A few commentators contended that consumers should be informed when they are interacting with a machine. In many cases, online users may not be aware that they are engaging with elaborate AI systems that use cues such as names, avatars and typing indicators, which are intended to mimic human traits. Many researchers pointed out that consumers may or may not want to be serviced by chatbots.

A number of researchers argued that some chatbots are still not capable of the communicative behaviors that are intended to enhance relational outcomes. For the time being, there are chatbot technologies that are not programmed to answer all of their customers’ queries (if they do not recognize the keywords that customers use), or that may not be quick enough to deal with multiple questions at the same time. Therefore, the quality of their conversations may be limited. Such automated technologies may not always be in a position to engage in non-linear conversations, especially when they have to go back and forth on a topic with online users.
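The keyword-matching limitation described above can be demonstrated in a few lines. The sketch below is purely illustrative (the intents and canned answers are made up): the bot replies only when it spots a known keyword, and everything else falls through to a handoff message.

```python
# Purely illustrative rule-based bot; intents and answers are made up.
INTENTS = {
    "opening hours": "We are open daily from 09:00 to 17:00.",
    "cancellation": "Bookings can be cancelled free of charge up to 24 hours before.",
    "refund": "Refunds are processed within 5 to 7 working days.",
}
FALLBACK = "Sorry, I did not understand that. Let me connect you to an agent."

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in INTENTS.items():
        if keyword in text:
            return answer
    return FALLBACK  # unrecognized keywords end the automated dialogue

print(reply("What are your opening hours?"))                   # matched intent
print(reply("My flight was rebooked twice, nobody told me!"))  # -> FALLBACK
```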

Theoretical and practical implications

This contribution confirms that, in recent years, there has been a growing interest among academics as well as practitioners in research focused on the use of chatbots to improve businesses’ customer-centric services. It clarifies that various academic researchers have often relied on different theories, including the expectancy theory or the expectancy violation theory, the human computer interaction theory/human machine communication theory, the social presence theory, and/or the social response theory, among others.

Currently, there are limited publications that integrate well-established conceptual bases (like those featured in the literature review), or that present discursive contributions on this topic. Moreover, there are just a few review articles that capture, scrutinize and interpret the findings from previous theoretical underpinnings about the use of responsive chatbots in service business settings. Therefore, this systematic review addresses this knowledge gap in the academic literature.

It clearly differentiates itself from mainstream research, as it scrutinizes and synthesizes the findings from recent, high-impact articles on this topic. It identifies the most popular articles from Scopus and Web of Science, and advances definitions of anthropomorphic chatbots, artificial intelligence chatbots (or AI chatbots), conversational chatbot agents (or conversational entities, conversational interfaces, conversational recommender systems or dialogue systems), customer experience with chatbots, chatbot customer service, customer satisfaction with chatbots, customer value (or the customers’ perceived value) of chatbots, and service robots (robot advisors). It discusses the different attributes of conversational chatbots and sheds light on the benefits and costs of using interactive technologies to respond to online users’ queries.

In sum, the findings from this research reveal that there is a business case for online service providers to utilize AI chatbots. These conversational technologies could offer technical support to consumers and prospects, on various aspects, in real time, round the clock. Hence, service businesses could be in a position to reduce their labor costs, as they would require fewer human agents to respond to their customers. Moreover, the use of interactive chatbot technologies could improve the efficiency and responsiveness of service delivery. Businesses could utilize AI dialogue systems to enhance their customer-centric services and to improve online experiences. These service technologies can reduce the workload of human agents, who can then dedicate their energies to resolving serious matters, including the handling of complaints and time-consuming cases.

On the other hand, this paper also discusses potential pitfalls. Currently, there are consumers who, for one reason or another, are not comfortable interacting with automated chatbots. They may be reluctant to engage with advanced anthropomorphic systems that use avatars, even though, at times, these can mimic human communications relatively well. Such individuals may still appreciate a human presence to resolve their service issues. They may perceive that interactive service technologies are emotionless and lack a sense of empathy.

Presently, chatbots can only respond to the questions, keywords and phrases that they were programmed to answer. Although they are useful in solving basic queries, their interactions with consumers are still limited. Their dialogue systems require periodic maintenance. Unlike human agents, they cannot engage in in-depth conversations or deal with multiple queries, particularly if they are expected to go back and forth on a topic.

Most probably, these technical issues will be addressed over time, as more advanced chatbots enter the market in the foreseeable future. It is likely that these AI technologies will possess improved capabilities and will be programmed with up-to-date information, to better serve future customers and to exceed their expectations.

Limitations and future research avenues

This research suggests that this area of study has been gaining traction in academic circles, particularly in the last few years. In fact, it clarifies that there were 421 publications on chatbots in business-related journals up to December 2021, and that 415 of them were published in the last five years.

The systematic analysis that was presented in this research was focused on “chatbot(s)” or “chatterbot(s)”. Other academics may refer to them by using different synonyms like “artificial conversational entity (entities)”, “bot(s)”, “conversational avatar(s)”, “conversational interface agent(s)”, “interactive agent(s)”, “talkbot(s)”, “virtual agent(s)”, and/or “virtual assistant(s)”, among others. Therefore, future researchers may also consider using these keywords when they are exploring the academic and non-academic literature on conversational chatbots that are being used for customer-centric services.
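By way of illustration, the suggested synonyms could be combined into a single database query. The Scopus-style string below is a hypothetical example; field codes and wildcard behavior should be verified against each database’s own syntax.

```python
# Hypothetical Scopus-style search string combining the synonyms above;
# verify field codes and wildcard behavior against each database's syntax.
SEARCH_QUERY = (
    'TITLE-ABS-KEY("chatbot*" OR "chatterbot*" OR "artificial conversational entit*" '
    'OR "bot*" OR "conversational avatar*" OR "conversational interface agent*" '
    'OR "interactive agent*" OR "talkbot*" OR "virtual agent*" OR "virtual assistant*")'
)
print(SEARCH_QUERY)
```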

Nevertheless, this bibliographic study has identified some of the most popular research areas relating to the use of responsive chatbots in online customer service settings. The findings confirmed that many authors are focusing on the chatbots’ anthropomorphic designs, AI capabilities and dialogue systems. This research suggests that there are still knowledge gaps in the academic literature; a table in the full paper specifies untapped opportunities for further empirical research in this promising field of study.

The full article is forthcoming. A prepublication version will be available through Researchgate.


Filed under artificial intelligence, Business, chatbots, customer service, Marketing