Tag Archives: AI

The use of Generative AI for travel and tourism planning

📣📣📣 Published via Technological Forecasting and Social Change.

👉 Very pleased to share this timely article that examines the antecedents of users’ trust in Generative AI’s recommendations related to travel and tourism planning.

🙏 I would like to thank my colleagues (and co-authors), namely, Hari Babu Singu, Debarun Chakraborty, Ciro Troise and Stefano Bresciani, for involving me in this meaningful research collaboration. It’s been a real pleasure working with you on this topic!

https://doi.org/10.1016/j.techfore.2025.124407

Highlights

  • The study focused on the enablers and the inhibitors of generative AI usage
  • It adopted 2 experimental studies with a 2 × 2 between-subjects factorial design
  • The impact of the cognitive load produced mixed results
  • Personalized recommendations explained each responsible AI system construct
  • Perceived controllability was a significant moderator

Abstract

Generative AI models are increasingly adopted in tourism marketing to generate text, image, video, and code content tailored to the needs of users. The potential uses of generative AI are promising; nonetheless, it also raises ethical concerns that affect various stakeholders. Therefore, this research, which comprises two experimental studies, aims to investigate the enablers and the inhibitors of generative AI usage. Study 1 (n = 403 participants) and Study 2 (n = 379 participants) applied a 2 × 2 between-subjects factorial design in which cognitive load, personalized recommendations, and perceived controllability were independently manipulated. The first study examined the effect of reducing (versus increasing) the cognitive load associated with the manual search for tourism information. The second study considered the effect of receiving personalized recommendations through generative AI features on tourism websites. Perceived controllability was treated as a moderator in each study. The impact of cognitive load produced mixed results (i.e., predicting perceived fairness and environmental well-being), with no responsible AI system constructs explaining trust in Study 1. In Study 2, personalized recommendations explained each responsible AI system construct, though only perceived fairness and environmental well-being significantly explained trust in generative AI. Perceived controllability was a significant moderator in all relationships in Study 2. Hence, to design and execute generative AI systems in the tourism domain, professionals should incorporate ethical concerns and user-empowerment strategies to build trust, thereby supporting the responsible and ethical use of AI that aligns with users and society. From a practical standpoint, the research provides recommendations on increasing user trust through the incorporation of controllability and transparency features in AI-powered platforms within tourism. From a theoretical perspective, it enriches the Technology Threat Avoidance Theory by incorporating ethical design considerations as fundamental factors influencing threat appraisal and trust.

Introduction

Information and communication technologies have been playing a key role in enhancing the tourism experience (Asif and Fazel, 2024; Salamzadeh et al., 2022). The tourism industry has evolved into a content-centric industry (Chuang, 2023), meaning that the growth of the tourism sector is attributed to the creation, distribution, and strategic use of information. The shift from the traditional demand-driven model to the content-centric model represents a transformation in user behaviour (Yamagishi et al., 2023; Hosseini et al., 2024). Modern travellers increasingly depend on user-generated content to inform their choices and travel planning (Yamagishi et al., 2023; Rahaman et al., 2024). The content-focused marketing approach in tourism emphasizes the role of digital tools and storytelling in creating a holistic experience (Xiao et al., 2022; Jiang and Phoong, 2023). From planning a trip to sharing cherished memories, content adds value for travellers and tourism businesses (Su et al., 2023). For example, MakeMyTrip (MMT) integrated a generative AI trip-planning assistant that facilitates conversational bookings, assisting users with destination exploration, in-trip needs, personalized travel recommendations, summaries of hotel reviews based on user content, and voice navigation support, making MMT’s platform more inclusive for its users. The content marketing landscape is changing due to the introduction of generative AI models that help generate text, images, videos, and code for users (Wach et al., 2023; Salamzadeh et al., 2025). These models express language, creativity, and aesthetics much as humans do, and enhance user experience in various industries, including travel and tourism (Binh Nguyen et al., 2023; Chan and Choi, 2025; Tussyadiah, 2014).

Gen AI enhances the natural flow of interactions by offering personalized experiences that align with consumer profiles and preferences (Blanco-Moreno et al., 2024). Gen AI is gaining significant momentum for its transformative impact within the tourism sector, revolutionizing marketing, operations, design, and destination management (Duong et al., 2024; Rayat et al., 2025). Accordingly, empirical studies suggest that Generative AI has the potential to transform tourists’ decision-making process at every stage of their journey, demonstrating a significant disruption to conventional tourism models (Florido-Benítez, 2024). Nonetheless, concerns have been raised about the potential implications of generative AI models, as their generated content might contain inaccurate or deceptive information that could adversely impact consumer decision-making (Kim et al., 2025a, Kim et al., 2025b). In its report titled “Navigating the future: How Generative Artificial Intelligence (AI) is Transforming the Travel Industry”, Amadeus highlighted key concerns and challenges in implementing Gen AI, such as data security concerns (35%), lack of expertise and training in Gen AI (34%), data quality and inadequate infrastructure (33%), ROI concerns and lack of clear use cases (30%), and difficulty in connecting with partners or vendors (29%). Therefore, the present study argues that, with intuitive design, travel agents could tackle the lack of expertise and of clear use cases for Gen AI. The study suggests that for travel and tourism companies to build trust in Gen AI, they must tackle the root causes of user apprehension. This means addressing what makes users fear the unknown, ensuring they understand the system’s purpose, and fixing problems with biased or poor data. Previous studies have also highlighted how the integration of Gen AI and tourism raises certain issues, such as misinformation and hallucinations, data privacy and security, human disconnection, and inherent algorithmic biases (Christensen et al., 2025; Luu et al., 2025). Moreover, if Gen AI provides biased recommendations, the implications are adverse. If users perceive that the recommendations are biased, they avoid using them, leading to high churn and platform abandonment (Singh et al., 2023). Users’ satisfaction will decline, replaced by frustration and anger, as biased output undermines the promise of personalized services. This damages brand reputation and erodes significant competitive advantage (Wu and Yang, 2023). Such scenarios will likely lead to stricter regulations, mandatory algorithmic audits, and new consumer protection laws, forcing the industry to prioritize fairness as well as explainability to avoid serious consequences. Interestingly, research studies draw attention to an intriguing paradox: consumers rely heavily on AI-generated travel itineraries even when they are aware of Gen AI’s occasional inaccuracies (Osadchaya et al., 2024). This reliance might stem from a belief in AI’s perceived objectivity and capacity for personalized recommendations, indicating a significant transformation of trust between human and non-human agents in the travel decision-making process (Kim et al., 2023a, Kim et al., 2023b). Empirical findings indicate that AI implementation in travel planning contributes to the objectivity of the results, effectively mitigates cognitive load, and supports higher levels of personalization aligned with user preferences (Kim et al., 2023a, Kim et al., 2023b).
Despite the growing body of literature explaining the role of trust in Gen AI acceptance and its influence on travellers’ decision making and behavioural intentions, the potential biases in AI-generated content continue to pose challenges to users’ confidence (Kim et al., 2021a, Kim et al., 2021b). Therefore, this research aims to examine the influence of generative AI in tourism on consumers’ trust in AI technologies, particularly the balance between technological progress and ethical responsibility, concerning the future of tourism (Dogru et al., 2025).

Existing research has focused more on AI technology as a phenomenon than on translating those theories into studies of how the ethics involved affect perceptions and trust (Glikson and Woolley, 2020). In addition, there is still the black box phenomenon, namely the inability of users to understand what happens inside AI systems. This emphasizes the need for more integrative studies linking morally sound AI development, user trust, and design in tourism (Tuo et al., 2024).

Moreover, scant research has examined the factors that inhibit tourists from embracing Generative AI technologies, resulting in a limited understanding of travellers’ reluctance to adopt Generative AI for travel planning (Fakfare et al., 2025). Despite a growing body of literature examining the antecedents and outcomes of Generative AI (GAI) adoption, a large body of research has been based on established frameworks such as the Information Systems Success (ISS) model (Nguyen and Malik, 2022), the Technology Acceptance Model (TAM) (Chatterjee et al., 2021), and the Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh, 2022).

However, the extensive reliance on traditional acceptance models risks ignoring critical socio-technical aspects, which are paramount in the context of GAI (Yu et al., 2022). While most studies explore the overarching effects of user acceptance and use of GenAI through TAM, UTAUT, and the DeLone and McLean IS success model, ethical factors and responsible AI systems have received little consideration. Addressing these gaps could significantly broaden our theoretical understanding of how individuals evaluate and adopt generative AI technologies from users’ ethical-behaviour and socio-technical perspectives.

Therefore, this research aims to fill this gap by investigating factors that facilitate or inhibit trust in generative AI systems, considering responsible AI and Technology Threat Avoidance Theory, and advancing the following research questions:

RQ1

How does the customer experience of using generative AI in tourism reflect the impact of enablers (such as responsible AI systems) and inhibitors (such as ambiguity and anxiety) on trust in generative AI?

RQ2

Does perceived controllability moderate the enablers and inhibitors of trust in generative AI in tourism?

This research draws on responsible AI principles and the Technology Threat Avoidance Theory to explicate the relationship between generative AI and trust in tourism. Seen through the conceptual lens of ethical behaviours, responsible AI principles are crucial for enhancing trust in Gen AI within tourism (Law et al., 2024). When users perceive Gen AI recommendations as fair, transparent, and bias-free, they are more likely to perceive the systems as trustworthy, which in turn mitigates user skepticism and promotes trust (Ali et al., 2023). Also, when Gen AI promotes sustainable and environmentally friendly practices, it demonstrates ethical responsibility and enhances trust in alignment with shared social values (Díaz-Rodríguez et al., 2023). By operationalizing responsible AI principles like transparency, fairness, and sustainability, Gen AI is transformed from a black-box tool into a more trustworthy and responsible system for travel decisions (Kirilenko and Stepchenkova, 2025). From the socio-technical perspective, the Technology Threat Avoidance Theory (TTAT) supports the logic of how perceived ambiguity and perceived anxiety act as inhibitors of trust. In tourism, users’ experience holds paramount importance (Torkamaan et al., 2024). When users encounter Gen AI content that is difficult to comprehend, recommendations that are unstable or ambiguous, or privacy concerns over exposed user data, these apprehensions turn into a perceived threat to using Gen AI (Bang-Ning et al., 2025). According to TTAT, when users perceive a greater threat, they are more inclined to engage in avoidance behaviours, which also erodes trust in the system. Hence, TTAT explains why users might hesitate or avoid using Gen AI tools, even if they offer functional benefits such as personalized recommendations and reduced cognitive load (Shang et al., 2023).

The study adopted an experimental research design that helps to explore the independent phenomenon (the use of Gen AI for content generation) and to establish a cause-and-effect relationship between the factors of responsible AI systems and TTAT (Leung et al., 2023). The experimental setting allows the differences between human- and non-human-generated content to be understood empirically from the perspective of users’ travel decision-making towards destinations. The study enriches the literature on both the ethical and environmental aspects (perceived fairness and environmental well-being) and the perceived risks (perceived ambiguity and perceived anxiety) in the tourism context. The positioning of perceived controllability as a moderator is tested, offering guidance to managers on how to develop responsible AI systems that lower user fear and build trust. The study also helps practitioners understand how the personalized recommendations and reduced cognitive load facilitated by Gen AI content generation affect tourists’ trust in Gen AI.


Section snippets

Responsible AI systems

Responsible AI adequately incorporates ethical aspects of AI system design and implementation and ensures that the systems are transparent, fair, and responsible (Díaz-Rodríguez et al., 2023). Responsible AI includes ethical, transparent, and accountable use of artificial intelligence systems, ensuring they are fair, secure, and aligned with societal values. It is also an approach to design, develop, and deploy AI systems so that they are ethical, safe, and trustworthy. It is a system that …

Cognitive load, personalized recommendations, and perceived fairness

Cognitive load is the mental effort to process and choose information (Islam et al., 2020). Cognitive load can also be high when people interact with complex systems such as AI. Thus, high cognitive load may affect users’ ability to judge whether AI-based decisions can be considered fair, since they may not grasp enough of the workings of the system and its specific decisions (Westphal et al., 2023). On the other hand, perceived fairness refers to the users’ feelings about …

Research methods and analysis

The experiments adopted in this study are scenario-based, since participants’ emotions cannot easily be manipulated in an ethical manner (Anand and Gaur, 2018). The scenario-based approach helps test the causal relationships between the constructs used for experimentation in a given scenario, and it minimizes interference from extraneous variables. In this method, respondents answered questions based on hypothetical scenarios developed for each condition. Therefore, scenarios …
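As a rough illustration of this kind of design (a minimal sketch with hypothetical variable names and simulated data, not the authors’ actual analysis script), a 2 × 2 between-subjects experiment with a moderator can be analysed in Python as follows:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=1)
n = 400  # roughly the sample size of each study

# Randomly assign each participant to one cell of the 2 x 2 design.
df = pd.DataFrame({
    "personalization": rng.integers(0, 2, n),  # 0 = generic, 1 = personalized
    "controllability": rng.integers(0, 2, n),  # 0 = low, 1 = high (moderator)
})

# Simulated trust ratings with a main effect and an interaction effect.
df["trust"] = (
    3.5
    + 0.6 * df["personalization"]
    + 0.4 * df["personalization"] * df["controllability"]
    + rng.normal(0, 1, n)
)

# A significant interaction term is the statistical signature of moderation:
# here, controllability would change the effect of personalization on trust.
model = smf.ols("trust ~ personalization * controllability", data=df).fit()
print(model.summary())

In such a sketch, a significant coefficient on the interaction term would correspond to the moderating role that the study attributes to perceived controllability.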

Discussion

Study 1 shows that cognitive load is detrimental to an individual’s notions of justice and environmental wellbeing, indicating that such factors may be difficult for a user to rate properly while expending greater cognitive effort. However, cognitive load can also limit open-mindedness and the critical evaluation of AI-assisted communication (T. Li et al., 2024), which could leave people resorting to mental shortcuts when judging fairness and environmental issues. Under such …

Theoretical implications

Trust is an important element in the design of organizations and systems, and the current study’s theoretical implications extend the understanding of trust in generative AI systems by integrating constructs of responsible AI and Technology Threat Avoidance Theory. This research underscores the significance of moral factors in creating and using AI systems by exploring relationships between perceived justice, environmental concern, and trust. In this context, the study notes that the degree of …

Practical implications

To develop and retain users’ confidence, professionals in the field should observe responsible AI principles, particularly perceived equity and ecological sustainability. Consumers are more likely to engage with and trust AI recommendations that are perceived as fair. This involves developing algorithms that align with users’ interests while promoting green aspects of AI. It also becomes important for management to note that, during AI interface design, cognitive load should be considered so …

Limitations and future research

This study has certain limitations. First, the reliance on self-reported measures introduces potential biases from participants’ prior engagements with generative AI, social desirability, or limited technological competence. Secondly, focusing on a particular context (i.e., tourism) can be seen as a limitation when it comes to …

Conclusion

A thorough examination of advancing artificial intelligence in the tourism industry draws attention to the fact that the issue of encouraging responsible AI use cannot be avoided. Examining user satisfaction with AI-based content suggests that user perceptions are shaped not only by the quality of the recommendations but also by the ethical implications of the system and users’ affective states. The varied effect of personalized suggestions on some of the parameters that influenced …


Filed under Marketing

Why are people using generative AI like ChatGPT?

The following text is an excerpt from one of my latest articles. I am sharing the managerial implications of my contribution published through Technological Forecasting and Social Change.

This empirical study provides a snapshot of the online users’ perceptions about Chat Generative Pre-Trained Transformer (ChatGPT)’s responses to verbal queries, and sheds light on their dispositions to avail themselves of ChatGPT’s natural language processing.

It explores their performance expectancies about the usefulness of these information technologies and their effort expectancies related to their ease of use, and investigates whether they are influenced by colleagues or by other social influences to use such dialogue systems. Moreover, it examines their insights about the content quality and source trustworthiness, as well as the interactivity features of these text-generative AI models.

Generally, the results suggest that the research participants felt that these algorithms are easy to use. The findings indicate that they consider them to be useful too, specifically when the information they generate is trustworthy and dependable.

The respondents suggest that they are concerned about the quality and accuracy of the content that is featured in the AI chatbots’ answers. This contingent issue can have a negative effect on the use of the information that is created by online dialogue systems.

OpenAI’s ChatGPT is a case in point. Its app is freely available in many countries, via desktop and mobile technologies including iOS and Android. The company admits that its GPT-3.5 outputs may be inaccurate, untruthful, and misleading at times. It clarifies that its algorithm is not connected to the internet, and that it can occasionally produce incorrect answers (OpenAI, 2023a). It posits that GPT-3.5 has limited knowledge of the world and events after 2021 and may also occasionally produce harmful instructions or biased content.

OpenAI recommends checking whether its chatbot’s responses are accurate, and letting them know when and if it answers in an incorrect manner, by using their “Thumbs Down” button. Its Help Center even declares that ChatGPT can occasionally make up facts or “hallucinate” outputs (OpenAI, 2023a, OpenAI, 2023b).

OpenAI reports that its premium ChatGPT Plus subscribers can access safer and more useful responses. In this case, users can avail themselves of a number of beta plugins and resources that offer a wide range of capabilities, including text-to-speech applications as well as web browsing features through Bing.

Yet again, OpenAI (2023b) indicates that its GPT-4 still has many known limitations that the company is working to address, such as “social biases and adversarial prompts” (at the time of writing this article). Evidently, work is still in progress at OpenAI.

The company needs to resolve these serious issues, considering that its Content Policy and Terms clearly stipulate that OpenAI’s consumers are the owners of the output that is created by ChatGPT. Hence, ChatGPT’s users have the right to reprint, sell, and merchandise the content that is generated for them through OpenAI’s platforms, regardless of whether the output (its response) was provided via a free or a paid plan.

Various commentators are increasingly raising awareness about the corporate digital responsibilities of those involved in the research, development and maintenance of such dialogue systems. A number of stakeholders, particularly the regulatory ones, are concerned about possible risks and perils arising from AI algorithms, including interactive chatbots.

In many cases, they warn that disruptive chatbots could disseminate misinformation, foster prejudice, bias and discrimination, raise privacy concerns, and lead to the loss of jobs. Arguably, one has to bear in mind that governments are often outpaced by the proliferation of technological innovations (as their development happens before the enactment of legislation).

As a result, they tend to be reactive in the implementation of substantive regulatory interventions. This research reported that the development of ChatGPT has resulted in mixed reactions among different stakeholders in society, especially during the first months after its official launch.

At the moment, there are just a few jurisdictions that have formalized policies and governance frameworks that are meant to protect and safeguard individuals and entities from the possible risks and dangers of AI technologies (Camilleri, 2023). Of course, voluntary principles and guidelines are a step in the right direction. However, policy makers are expected by various stakeholders to step up their commitment by introducing quasi-regulations and legislation.

Currently, a number of technology conglomerates, including Microsoft-backed OpenAI, Apple and IBM, among others, have anticipated the governments’ regulations by joining forces in a non-profit organization named the “Partnership on AI”, which aims to advance safe, responsible AI that is rooted in open innovation.

In addition, IBM has also teamed up with Meta and other companies, startups, universities, research and government organizations, as well as non-profit foundations, to form an “AI Alliance” that is intended to foster innovation across all aspects of AI technology, applications and governance.

The full list of references is available here: https://www.sciencedirect.com/science/article/pii/S004016252400043X?via%3Dihub

Suggested citation: Camilleri, M. A. (2024). Factors affecting performance expectancy and intentions to use ChatGPT: Using SmartPLS to advance an information technology acceptance framework. Technological Forecasting and Social Change, 201. https://doi.org/10.1016/j.techfore.2024.123247


Filed under academia, chatbots, ChatGPT, Generative AI

Ethical considerations of service organizations in the information age

This is an excerpt from one of our latest contributions published through The Service Industries Journal. It features snippets from the ‘Introduction’, ‘Theoretical Implications’, ‘Practical Implications’ as well as from the ‘Limitations and Future Research Avenues’ sections.

Suggested Citation: Camilleri, M.A., Zhong, L., Rosenbaum, M.S. & Wirtz, J. (2024). Ethical considerations of service organizations in the information age, The Service Industries Journal, Forthcoming. https://www.tandfonline.com/doi/full/10.1080/02642069.2024.2353613

Introduction

Ethics is a broad field of study that refers to intellectual and moral philosophical inquiry concerned with value theory. It is clearly evidenced when individuals rely on their personal values, principles and norms to resolve questions about appropriate courses of action, as they attempt to distinguish between right and wrong, good and evil, virtue and vice, justice and crime, et cetera (Budolfson, 2019; Coeckelbergh, 2021; Ramboarisata & Gendron, 2019). Several researchers contend that ethics involves a set of concepts and principles that are meant to guide community members in specific social and environmental behaviors (De Bakker et al., 2019; Hermann, 2022). Very often, commentators argue that a person’s ethical dispositions are influenced by their upbringing, social conventions, cultural backgrounds, religious beliefs, as well as by regulations (Vallaster et al., 2019).

Individuals, groups, institutions, non-government entities as well as businesses are bound to comply with the rule of law in their society (Groß & Vriens, 2019). As a matter of fact, the businesses’ organizational cultures and modus operandi are influenced by commercial legislation, regulations and taxation systems (Bridges, 2018). For-profit entities are required to adhere to the companies’ acts of the respective jurisdictions where they are running their commercial activities. They are also expected to follow informal codes of conduct and to observe certain ethical practices that are prevalent in the societies where they are based. This line of reasoning is synonymous with the mainstream “business ethics” literature, which refers to a contemporary set of values and standards that are intended to govern individuals’ actions and behaviors in how they manage and lead organizations (DeTienne et al., 2021).

Employers ought to ensure that they are managing their organization in a fair, transparent and responsible manner, by treating their employees with dignity and respect (Saks, 2022). They have to provide decent working environments and appropriate conditions of employment by offering their workers equitable extrinsic rewards that are commensurate with their knowledge, skills and competences (Gaur & Gupta, 2021). Moreover, it is in the employers’ interests to nurture their staff members’ intrinsic motivations if they want them to align with their organizational values and corporate objectives (Camilleri et al., 2023). Notwithstanding, all businesses, including those operating in service industries, have ethical as well as environmental, social and governance (ESG) responsibilities to bear towards other stakeholders in society (Aksoy et al., 2022).

This article raises awareness of a wide array of ethical considerations affecting service organizations in today’s information age. Specifically, its research objectives are threefold: (i) It presents the findings from a rigorous and trustworthy systematic review exercise, focused on “ethics” in “service(s)” and/or “ethical services”. This research involves a thorough scrutiny of the most-cited articles published in the last five (5) years; (ii) It utilizes a thematic analysis to determine which paradigms are being associated with service ethics. The rationale is to identify some of the most contemporary topics related to ethical leadership in service organizations; (iii) At the same time, it puts forward theoretical and practical implications that clarify how, why, where, when and to what extent service providers are operating in a legitimate and ethical manner.

A thorough review of the literature reveals that, for the time being, only a few colleagues have devoted their attention to relevant theoretical underpinnings linked to the service ethics literature (Liu et al., 2023; Wirtz et al., 2023). To date, there is still limited research outlining popular research themes from the most cited articles published in the past five (5) years. This contribution clearly differentiates itself from previous studies, as its rigorous and transparent systematic review approach clearly recognizes, appraises and describes the methodology that was used to capture and analyze data focused on the provision (or lack thereof) of ethical services. In addition, unlike other descriptive literature reviews, this paper synthesizes the findings from the latest contributions on this topic and provides a discursive argumentation on their implications. Hence, this article addresses a number of knowledge gaps in the academic literature. In conclusion, it identifies the limitations of this review exercise and outlines future research avenues for academia.

Theoretical implications

This contribution raises awareness of the underexplored notion of service ethics. A number of commentators make reference to various theories and concepts to clarify how they can guide service organizations in their ethical leadership. In many cases, a number of theories indicate that decision makers ought to be just and fair with individuals or entities in their actions. Appendix A features a list of ethical theories and provides a short definition for each. For instance, the justice theory suggests that all individuals, including service employees, should have the same fundamental rights based on the values of equality, non-discrimination, inclusion, human dignity, freedom and democracy. Human rights as well as employee rights and values ought to be protected and reinforced by the respective jurisdictions’ rule of law, for the benefit of all subjects (Grégoire et al., 2019).

Business ethics literature indicates that just societies are characterized by fair, trustworthy, accountable and transparent institutions (and organizations). For instance, the fairness theory raises awareness on certain ethical norms and standards that can help policy makers as well as other organizations including businesses, to ensure that they are continuously providing equal opportunities to everyone. It posits that all individuals ought to be treated with dignity in a respectful and equitable manner (Wei et al., 2019).

This is in stark contrast with the favoritism theory, which suggests that certain individuals, including employees, can receive preferential treatment to the detriment of others (Bramoullé & Goyal, 2016). This argumentation is synonymous with the nepotism theory. Like favoritism, nepotism is a phenomenon that is manifested when institutional and organizational leaders help and support specific persons because they are connected with them in one way or another (e.g. through familial ties, friendships, financial, or social factors). Arguably, such favoritism clearly evidences conflicts of interest, and compromises or clouds judgements, decisions and actions in workplace environments and/or in other social contexts. Many business ethics researchers contend that decision makers ought to be guided by the principle of beneficence (Brear & Gordon, 2021), as they should possess the competences and abilities to distinguish between what is morally right and ethically wrong.

This research confirms that frequently, organizational leaders have to deal with difficult and challenging situations, where they are expected to make hard decisions (Islam et al., 2021a; Islam et al., 2021b; Latan et al., 2019; Naseer et al., 2020; Schwepker & Dimitriou, 2021). In such cases, the most reasonable ethical approach would be to follow courses of action that will result in the least possible harm to everyone (Heine et al., 2023). The service organizations’ members of staff are all expected to be collaborative, productive and efficient in their workplace environment. This line of reasoning is related to the attributional theory (Bourdeau et al., 2019) and/or to the consequentialism theory (Budolfson, 2019). Very often, the proponents of these two theories contend that while honest, righteous and virtuous behaviors may yield positive outcomes for colleagues, subordinates and other stakeholders, wrong behaviors can result in negative repercussions to them (Deci & Ryan, 1987; Francis & Keegan, 2020; Lee et al., 2020; Paramita et al., 2021).

Other researchers who contributed to the ethics literature related to the utilitarianism theory suggest that people tend to make better decisions when they focus on the consequences of their actions. Hence, they will be in a better position to identify laudable behaviors and codes of conduct that add value to their organization (Coeckelbergh, 2021; Michaelson & Tosti-Kharas, 2019; Ramboarisata & Gendron, 2019). Very often, they argue that there are still unresolved issues in the social sciences, including the unpredictability of events and incidents (Du & Xie, 2021), and/or the difficulty of measuring the consequences when/if they occur. For example, this review indicated that various authors discussed the challenges, risks and possible dangers of adopting various technologies including AI, big data, et cetera (Breidbach & Maglio, 2020; Chang et al., 2020; Flavián & Casaló, 2021; Rymarczyk, 2020). In many cases, they hinted that the best ethical choice is to identify which decisions and actions could lead to the greatest good, in terms of positive, righteous and virtuous outcomes (Budolfson, 2019; Gong et al., 2020; Paramita et al., 2021).

Various academic authors who contributed to the formulation of the virtues theory held that there are persons, including organizational leaders, whose characters, traits and values drive them to continuously improve and to excel in their duties and responsibilities (Coeckelbergh, 2021; Fatma et al., 2020; Lee et al., 2020). They frequently noted that persons’ affective feelings as well as their intellectual dispositions enable them to develop a positive mindset, to make the best decisions and to engage in the right behaviors (Gong et al., 2020; Huang & Liu, 2021; Yan et al., 2023). This is congruent with the theory of positivity too, as it explains how individuals’ optimistic feelings may result in their happiness and wellbeing. Some commentators imply that such positive emotions can influence individuals’ states of mind and can foster their resilience to engage in productive behaviors (Paramita et al., 2021).

This argumentation is in stark contrast with the emotional labor theory, which is manifested when disciplined employees suppress their emotions by engaging in posturing behaviors in order to conform to the organizational culture (Mastracci, 2022). This phenomenon was evidenced in Naseer et al.’s (2020) contribution. In this case, the authors indicated how the employees’ overidentification with unethical organizations can have a negative impact on their engagement, thereby resulting in counterproductive work practices. In addition, Islam et al. (2021b) also suggested that abusive supervision led employees to undesirable outcomes like knowledge hiding behaviors and to low morale in workplace environments.

Several commentators who are focused on psychological issues argue that the individuals’ intrinsic motivations are closely related to their self-determination (Deci & Ryan, 1987). Very often, they contend that individuals should have the autonomy and freedom to make life choices, in order to improve their well-being in the future. The findings from this research reported that organizational leaders who delegated responsibilities to their members of staff, have instilled trust and commitment in their employees, and also improved their intrinsic motivations (Francis & Keegan, 2020; Lee et al., 2020; Schwepker & Dimitriou, 2021).

Hence, organizational leaders of service businesses ought to be aware that there is scope for them to empower their human resources, to help them make responsible choices and decisions relating to their work activities at their own discretion (Bourdeau et al., 2019; Islam et al., 2021a; Tanova & Bayighomog, 2022). The employees’ higher levels of autonomy and independence can influence their morale (Paramita et al., 2021; Ramboarisata & Gendron, 2019) and reduce stress levels (Schwepker & Dimitriou, 2021). Various researchers confirmed that employees would be more productive if they were empowered with duties and responsibilities (e.g. Nauman et al., 2023).

This argumentation is congruent with the conservation of resources theory, as business leaders are expected to look after their human resources’ cognitive and emotional wellbeing, if they want to foster their organizational commitment to achieve their corporate objectives. Indeed, their ethical leadership can lead to win-win outcomes, particularly if their employees replicate responsible and altruistic behaviors with one another, and if they strive in their endeavors to develop a caring environment in their organization (Parsons et al., 2021; Saks, 2022). This reasoning is closely related to the social cognition theory that presumes that individuals acquire emotional knowledge and skill sets such as intuition or empathy, among others, through social interactions, including when they are at work (Čaić et al., 2019; Campbell et al., 2020; Rauhaus et al., 2020).

Practical implications

The findings from this research confirm that various service organizations are becoming acquainted with ethical leadership and with social issues in management. Evidently, several listed businesses and large undertakings in service industries are increasingly proving their legitimacy and license to operate, by engaging in ethical behaviors that promote responsible human resources management. Very often, they foster an organizational climate that encourages ongoing dialogue, communication and collaboration among members of staff; they empower employees with duties and responsibilities to make important decisions; they provide them with equitable compensation that is commensurate with qualifications and experience; and they implement work-life balance policies. Generally, these laudable measures are resulting in motivated, committed and productive employees.

On the other hand, unethical behaviors including abusive organizational practices and coercive leadership styles are generating bitterness and feelings of resentment among employees. The lack of ethical leadership can lead to demotivation, low morale, job stress and even to counterproductive behaviors including wrongdoings like knowledge hiding and abusive supervision in workplace environments. This research also reported on irresponsible practices of service businesses operating in the sharing economy, as a number of hospitality companies are subcontracting their food delivery services to independent contractors who are not safeguarding the rights of their workers. Very often, the workers of the gig economy are offered precarious jobs and unfavorable conditions of employment. Generally, they are not paid in a commensurate manner for their jobs, are not eligible for health or retirement benefits, and cannot affiliate themselves with trade unions.

This discursive review shed light on the service businesses’ dealings with employees and with other stakeholders. It also described their relationships with customers, as well as their ethical and digital responsibilities towards them. For example, it indicated that many businesses are gathering and storing data of customers. Frequently, they are using their personal and transactional information to analyze and interpret shopping behaviors. They may do so to build consumer profiles and/or to retarget them with promotional content. The findings of this research imply that it is the responsibility of service businesses to inform new customers that they are capturing and retaining data from them, when and if they do so (even though in many cases, they are aware that many online users can quickly unsubscribe from marketing messages and/or are becoming adept at blocking advertisements from popping up on their screens). The authors contend that service providers ought to explicitly ask for their customers’ consent (through opt-in or opt-out choices) before availing themselves of their consumers’ data.

Currently, certain jurisdictions are not in a position to protect consumers from entities that could use their personal information for different purposes, because they have not enacted substantive data protection legislation. The European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) are two examples of data regulations that are intended to safeguard consumers’ interests in this regard. Online users ought to be educated and guided through regulations, policies and data literacy programs, to protect them from potentially unethical technological applications and practices of big data algorithms and advanced analytics. At the moment, various stakeholders including policy makers and academia, among others, are calling for responsible AI governance and for the formulation of (quasi) regulatory frameworks, in order to maximize the benefits of AI and to minimize its negative impacts on humanity.

This research raises awareness about the importance of disclosing corporate governance procedures, and of regularly reporting CSR/ESG credentials to regulatory stakeholders and to other interested parties. In many cases, service businesses are genuinely following ethical norms and principles that go beyond their commercial and legal obligations. They should bear in mind that their sustainability accounting, transparent ESG disclosures, as well as their audit and assurance mechanisms, can ultimately reduce information asymmetry among stakeholders, whilst enhancing their reputation and image with interested parties. Their ongoing corporate communications can ameliorate stakeholder relationships and could increase their organizational legitimacy in the long run.

Limitations and future research avenues

The notion of service ethics is gaining traction in academic circles. Indeed, it is considered as a contemporary and timely topic for service researchers specializing in business administration and/or business ethics. In fact, the findings from the bibliographic analysis demonstrate that there were more than eleven thousand (11,000) documents focused on service(s), ethics and ethical service(s), published in the last 5 years. This research adds value to the extant literature as it sheds light on the most cited articles focused on these topics. Yet, it differentiates itself from previous papers, as it identifies the themes of fifty (50) of the most cited papers in this promising area of research, describes the methodology that was employed to capture and analyze the data on this topic, and scrutinizes their content, before synthesizing the findings of this contribution.

This article presents the findings of a rigorous review and evaluation of the latest literature revolving around the ethical leadership of service organizations. The authors are well aware that, in the past, other academic colleagues may have referred to synonymous keywords to service ethics or ethical services, including ethical business, business ethos, business ethics, business code of conduct, and even corporate social responsibilities of service businesses, among other paradigms. Therefore, future researchers may also consider using these keywords when they investigate ethical behaviors in services-based sectors. It is hoped that they will delve into the research themes, fields of studies and theoretical bases that were identified in this contribution including on the service organizations’ ethical leadership, as proposed in the following table. This research confirms that it is in the interest of service entities to foster a fair and just working environment, particularly for the benefit of their employees, as well as for other stakeholders including for regulatory institutions, creditors, shareholders and customers, among others.

A future agenda for service ethics research

(Developed by the authors)

Indeed, there is scope to investigate further the service organizations’ roles in today’s societies, as they are being urged by policy makers and other interested parties to communicate about their responsible organizational behaviors, in various contexts. Entities operating in service industries including small and medium-sized businesses as well as micro enterprises are increasingly acquainting themselves with sustainability accounting, non-financial reporting and ongoing assurance exercises, as comprehensive CSR/ESG disclosures can enable them to prove their legitimacy and license to operate with stakeholders. Moreover, prospective researchers are invited to continue raising more awareness about ethical leadership among service organizations, particularly when they are adopting disruptive innovations.

The full list of references is available from the open-access article (published through The Service Industries Journal) and via ResearchGate.


Filed under Business, Corporate Social Responsibility, ESG Reporting, ethics

An artificial intelligence governance framework

This is an excerpt from my latest contribution on responsible artificial intelligence (AI).

Suggested citation: Camilleri, M. A. (2023). Artificial intelligence governance: Ethical considerations and implications for social responsibility. Expert Systems, e13406. https://doi.org/10.1111/exsy.13406

The term “artificial intelligence governance” or “AI governance” integrates the notions of “AI” and “corporate governance”. AI governance is based on formal rules (including legislative acts and binding regulations) as well as on voluntary principles that are intended to guide practitioners in their research, development and maintenance of AI systems (Butcher & Beridze, 2019; Gonzalez et al., 2020). Essentially, it represents a regulatory framework that can support AI practitioners in their strategy formulation and in day-to-day operations (Erdélyi & Goldsmith, 2022; Mullins et al., 2021; Schneider et al., 2022). The rationale behind responsible AI governance is to ensure that automated systems including ML/DL technologies, are supporting individuals and organizations in achieving their long-term objectives, whilst safeguarding the interests of all stakeholders (Corea et al., 2023; Hickok et al., 2022).

AI governance requires that the organizational leaders comply with relevant legislation, hard laws and regulations (Mäntymäki et al., 2022). Moreover, they are expected to follow ethical norms, values and standards (Koniakou, 2023). Practitioners ought to be trustworthy, diligent and accountable in how they handle their intellectual capital and other resources including their information technologies, finances as well as members of staff, in order to overcome challenges and minimize uncertainties, risks and any negative repercussions (e.g. decreased human oversight in decision-making, among others) (Agbese et al., 2023; Smuha, 2019).

Procedural governance mechanisms ought to be in place to ensure that AI technologies and ML/DL models are operating in a responsible manner. Figure 1 features some of the key elements that are required for the responsible governance of artificial intelligence. The following principles are intended to provide guidelines for the modus operandi of AI practitioners (including ML/DL developers).

Figure 1. A Responsible Artificial Intelligence Governance Framework

Accountability and transparency

“Accountability” refers to the stakeholders’ expectations about the proper functioning of AI systems, in all stages, including in the design, creation, testing or deployment, in accordance with relevant regulatory frameworks. It is imperative that AI developers are held accountable for the smooth operation of AI systems throughout their lifecycle (Raji et al., 2020). Stakeholders expect them to be accountable by keeping a track record of their AI development processes (Mäntymäki et al., 2022).

The transparency notion refers to the extent to which end-users could be in a position to understand how AI systems work (Andrada et al., 2020; Hollanek, 2020). AI transparency is associated with the degree of comprehension about algorithmic models in terms of “simulatability” (an understanding of AI functioning), “decomposability” (related to how individual components work), and algorithmic transparency (associated with the algorithms’ visibility).

In reality, it is difficult to understand how AI systems, including deep learning models and their neural networks, are learning (as they acquire, process and store data) during training phases. They are often considered black box models. It may prove hard to translate algorithmically derived concepts into human-understandable terms, even though developers may use certain jargon to explain their models’ attributes and features. Many legislators are striving in their endeavors to pressurize AI actors to describe the algorithms they use in automated decision-making, yet the publication of algorithms is useless if outsiders cannot access the data of the AI model.

Explainability and interpretability

Explainability is the concept that sheds light on how AI models work, in a way that is comprehensible to a human being. Arguably, the explainability of AI systems could improve their transparency, trustworthiness and accountability. At the same time, it can reduce bias and unfairness. The explainability of artificial intelligence systems could clarify how they reached their decisions (Arya et al., 2019; Keller & Drake, 2021). For instance, AI could explain how and why autonomous cars decide to stop or to slow down when there are pedestrians or other vehicles in front of them.

Explainable AI systems might improve consumer trust and may enable engineers to develop other AI models, as they are in a position to track the provenance of every process, to ensure reproducibility, and to enable checks and balances (Schneider et al., 2022). Similarly, interpretability refers to the level of accuracy of machine learning programs in terms of linking the causes to the effects (John-Mathews, 2022).
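By way of illustration (a minimal sketch using scikit-learn on synthetic data; the variable names are hypothetical and this is not a prescribed method), permutation importance is one simple, model-agnostic way to make a black-box model’s behaviour more interpretable:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic classification data standing in for a real decision task.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy:
# large drops indicate the features the model relies on most.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")

Reporting such feature rankings alongside a model’s outputs is one modest step from a black-box tool towards the explainable systems described above.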

Fairness and inclusiveness

The responsible AI’s fairness dimension refers to the practitioners’ attempts to correct algorithmic biases that may possibly (voluntarily or involuntarily) be included in their automation processes (Bellamy et al., 2019; Mäntymäki et al., 2022). AI systems can be affected by their developers’ biases, which could include preferences or antipathies toward specific demographic variables like genders, age groups and ethnicities, among others (Madaio et al., 2020). Currently, there is no universal definition of AI fairness.

However, many multinational corporations have recently developed instruments that are intended to detect bias and to reduce it as much as possible (John-Mathews et al., 2022). In many cases, AI systems are learning from the data that is fed to them. If the data are skewed and/or embed implicit bias, they may result in inappropriate outputs.

Fair AI systems rely on unbiased data (Wu et al., 2020). For this reason, many companies including Facebook, Google, IBM and Microsoft, among others, are striving in their endeavors to involve members of staff hailing from diverse backgrounds. These technology conglomerates are trying to become as inclusive and as culturally aware as possible in order to minimize bias from affecting their AI processes. Previous research reported that AI’s bias may result in inequality, discrimination and in the loss of jobs (Butcher & Beridze, 2019).
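As an illustrative sketch (with fabricated toy data and hypothetical column names), two widely used group-fairness checks, the statistical parity difference and the disparate impact ratio, can be computed from an audit log of an AI system’s binary decisions:

import pandas as pd

# Hypothetical audit log with a protected attribute ("group")
# and a binary outcome ("approved").
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   1],
})

rates = df.groupby("group")["approved"].mean()
p_privileged, p_unprivileged = rates["A"], rates["B"]

# Statistical parity difference: 0 means equal approval rates.
spd = p_unprivileged - p_privileged
# Disparate impact ratio: the "80% rule" flags values below 0.8.
di = p_unprivileged / p_privileged

print(f"statistical parity difference: {spd:.2f}")
print(f"disparate impact ratio: {di:.2f}")

Toolkits such as IBM’s AI Fairness 360 (Bellamy et al., 2019) implement these and many other bias metrics at scale, together with mitigation algorithms.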

Privacy and safety for consumers

Consumers are increasingly concerned about the privacy of their data. They have a right to control who has access to their personal information. Data that are collected or used by third parties without the authorization or voluntary consent of individuals result in violations of their privacy (Zhu et al., 2020; Wu et al., 2022).

AI-enabled products, including dialogue systems like chatbots and virtual assistants, as well as digital assistants (e.g. Siri, Alexa or Cortana), and/or wearable technologies such as smart watches and sensorial smart socks, among others, are increasingly capturing and storing large quantities of consumer information. The benefits delivered by these interactive technologies may be offset by a number of challenges. The technology businesses that developed these products are responsible for protecting their consumers’ personal data (Rodríguez-Barroso et al., 2020). Their devices are capable of holding a wide variety of information on their users. They are continuously gathering textual, visual, audio, verbal, and other sensory data from consumers. In many cases, customers are not aware that they are sharing personal information with them.

For example, facial recognition technologies are increasingly being used in different contexts. They may be used by individuals to access websites and social media in a secure manner, and even to authorize their payments through banking and financial services applications. Employers may rely on such systems to track and monitor their employees’ attendance. Marketers can utilize such technologies to target digital advertisements to specific customers. Police and security departments may use them for their surveillance systems and to investigate criminal cases. The adoption of these technologies has often raised concerns about privacy and security issues. According to several data privacy laws that have been enacted in different jurisdictions, organizations are bound to inform users that they are gathering and storing their biometric data. The businesses that employ such technologies are not authorized to use their consumers’ data without their consent.

Companies are expected to communicate their data privacy policies to their target audiences (Wong, 2020). They have to reassure consumers that the data collected with their consent are protected, and they are bound to inform them when their information may be used to improve customized services. The technology giants can reward their consumers for sharing sensitive information. They could offer them improved personalized services, among other incentives, in return for their data. In addition, consumers may be allowed to access their own information and could be provided with more control (or other reasonable options) on how to manage their personal details.

The security and robustness of AI systems

AI algorithms are vulnerable to cyberattacks by malicious actors. Therefore, it is in the interest of AI developers to secure their automated systems and to ensure that they are robust enough against any risks and attempts to hack them (Gehr et al., 2018; Li et al., 2020).

Access to AI models ought to be continuously monitored during their development and deployment (Bertino et al., 2021). There may be instances when AI models encounter incidental adversities, leading to the corruption of data. Alternatively, they might encounter intentional adversities when they experience sabotage from hackers. In both cases, the AI model will be compromised, which can result in system malfunctions (Papagiannidis et al., 2023).

Such contingent issues have to be prevented from happening. Developers are responsible for improving the robustness of their automated systems and making them as secure as possible, to reduce the chances of threats, including inadvertent irregularities, information leakages, and privacy violations like data breaches, and contamination and poisoning by malicious actors (Agbese et al., 2023; Hamon et al., 2020).

AI developers should have preventive policies and measures in place for the monitoring and control of their data. They ought to invest in security technologies, including authentication and/or access-control systems, encryption software and firewalls, to protect against cyberattacks. Routine testing can increase data protection, improve security levels and minimize the risk of incidents.
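To make these recommendations more tangible, the following minimal Python sketch illustrates one way such preventive measures might be combined in practice: authenticated access to a model endpoint, audit logging of every request, and a basic input-sanity check that flags anomalous (and potentially adversarial) inputs. All names, thresholds and helper functions here are illustrative assumptions, not requirements prescribed by the cited frameworks.

```python
import hmac
import hashlib
import logging

# Illustrative shared secret; a real deployment would fetch this from a secrets manager.
API_SECRET = b"replace-with-a-managed-secret"

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("model_audit")


def is_authenticated(payload: bytes, signature: str) -> bool:
    """Verify an HMAC signature before the request reaches the model."""
    expected = hmac.new(API_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)


def looks_anomalous(features: list[float]) -> bool:
    """Crude input-sanity check: flag values far outside the expected range.
    Real systems would use proper out-of-distribution or poisoning detection."""
    return any(abs(x) > 1e6 for x in features)


def model_predict(features: list[float]) -> float:
    # Stand-in for the actual AI model; returns a dummy score.
    return sum(features) / max(len(features), 1)


def guarded_predict(payload: bytes, signature: str, features: list[float]):
    """Gatekeeper around the model call: authenticate, sanity-check, then log."""
    if not is_authenticated(payload, signature):
        audit_log.warning("Rejected request: failed authentication")
        return None
    if looks_anomalous(features):
        audit_log.warning("Rejected request: anomalous input, possible tampering")
        return None
    audit_log.info("Request accepted; invoking model")
    return model_predict(features)
```

Routine testing, as suggested above, could then exercise this gatekeeper with both valid and deliberately malformed requests, to verify that the rejection paths behave as intended.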

Conclusions

This review indicates that academics as well as practitioners are increasingly devoting their attention to AI, as they elaborate on its potential uses, opportunities and threats. It reported that AI's proponents are raising awareness of the benefits of AI systems for individuals as well as for organizations. At the same time, it suggests that a number of scholars and other stakeholders, including policy makers, are raising concerns about its possible perils (e.g. Berente et al., 2021; Gonzalez et al., 2020; Zhang & Lu, 2021).

Many researchers have identified some of the risks of AI (Li et al., 2021; Magas & Kiritsis, 2022). In many cases, they warned that AI could disseminate misinformation, foster prejudice, bias and discrimination, raise privacy concerns, and lead to the loss of jobs (Butcher & Beridze, 2019). A few commentators argue about the “singularity”, or the moment when machine learning technologies could surpass human intelligence (Huang & Rust, 2022). They predict that a critical shift could occur if humans are no longer in a position to control AI.

In this light, this article sought to explore the governance of AI. It sheds light on substantive regulations, as well as on reflexive principles and guidelines, that are intended for practitioners who are researching, testing, developing and implementing AI models. It explains how institutions, non-governmental organizations and technology conglomerates are introducing protocols (including self-regulations) to prevent contingencies arising from inappropriate AI governance.

Arguably, the voluntary or involuntary mishandling of automated systems can expose practitioners to operational disruptions and to significant risks, including to their corporate image and reputation (Watts & Adriano, 2021). The nature of AI requires practitioners to develop guardrails to ensure that their algorithms work as they should (Bauer, 2022). It is imperative that businesses comply with relevant legislation and follow ethical practices (Buhmann & Fieseler, 2023). Ultimately, it is in their interest to operate in a responsible manner and to implement AI governance procedures. This way, they can minimize unnecessary risks and safeguard the well-being of all stakeholders.

This contribution has addressed its underlying research objectives. Firstly, it raised awareness of AI governance frameworks that were developed by policy makers and other organizations, including the businesses themselves. Secondly, it scrutinized the extant academic literature focused on AI governance and on the intersection of AI and CSR. Thirdly, it discussed essential elements for the promotion of socially responsible behaviors and ethical dispositions among AI developers. Finally, it put forward an AI governance conceptual model for practitioners.

This research made reference to regulatory instruments that are intended to govern AI expert systems. It reported that, at the moment, only a few jurisdictions have formalized their AI policies and governance frameworks. Hence, this article urges laggard governments to plan, organize, design and implement regulatory instruments that ensure that individuals and entities are safe when they utilize AI systems for personal, educational and/or commercial purposes.

Arguably, one has to bear in mind that, in many cases, policy makers face a “pacing problem”, as innovation proliferates much more quickly than legislation. As a result, governments tend to be reactive in implementing regulatory interventions relating to innovations. They may be unwilling to hold back the development of disruptive technologies in their societies. Notwithstanding this, they may face criticism from a wide array of stakeholders, who may have conflicting objectives and expectations.

Governments typically regulate business and industry to establish technical, safety and quality standards, and to monitor compliance with them. Yet, they may consider introducing forms of regulation other than the traditional “command and control” mechanisms. They may opt for performance-based and/or market-based incentive approaches, or for co-regulation and self-regulation schemes, among others (Hepburn, 2009), in order to foster technological innovations.

This research has shown that a number of technology giants, including IBM and Microsoft, are anticipating the regulatory interventions of the different governments where they operate. It reported that they are communicating their responsible AI governance initiatives, as they share information on the policies and practices that are meant to certify, explain and audit their AI developments. Evidently, these companies, among others, are voluntarily self-regulating as they promote accountability, fairness, privacy and robust AI systems. These two organizations, in particular, are raising awareness about their AI governance frameworks to strengthen their CSR credentials with stakeholders.

Likewise, AI developers who work for other businesses are expected to forge relationships with external stakeholders, including policy makers as well as individuals and organizations who share similar interests in AI. Innovative clusters and network developments may result in better AI systems and can also reduce possible risks. Indeed, practitioners can be in a better position if they cooperate with stakeholders on the development of trustworthy AI, and if they increase their human capacity to improve the quality of their intellectual properties (Camilleri et al., 2023). This way, they can enhance their competitiveness and growth prospects (Troise & Camilleri, 2021). Arguably, it is in their interest to continuously engage with internal stakeholders (and employees), and to educate them about AI governance dimensions that are intended to promote accountable, transparent, explainable, interpretable, reproducible, fair, inclusive and secure AI solutions. Hence, they could maximize the benefits of AI and minimize its risks as well as the associated costs.

Future research directions

Academic colleagues are invited to raise more awareness of AI governance mechanisms, as well as of verification and monitoring instruments. They can investigate what, how, when and where protocols could be used to safeguard individuals and entities from the possible risks and dangers of AI.

The “what” question involves the identification of AI research and development processes that require regulatory or quasi-regulatory instruments (in the absence of relevant legislation), and/or necessitate revisions to existing statutory frameworks.

The “how” question relates to the substance and form of AI regulations, in terms of their completeness, relevance and accuracy. This argumentation is analogous to the true and fair view concept applied in the accounting standards for financial statements.

The “when” question is concerned with the timeliness of the regulatory intervention. Policy makers ought to ensure that stringent rules do not hinder or delay the advancement of technological innovations.

The “where” question is meant to identify the contexts in which mandatory regulations or soft laws (including non-legally binding principles and guidelines) are or are not required.

Future researchers are expected to investigate these four questions in greater depth and breadth. This research indicated that most contributions on AI governance were discursive in nature and/or involved literature reviews. Hence, there is scope for academic colleagues to conduct primary research and to utilize different research designs, methodologies and sampling frames, to better understand the implications of planning, organizing, implementing and monitoring AI governance frameworks in diverse contexts.

The full article is also available here: https://www.researchgate.net/publication/372412209_Artificial_intelligence_governance_Ethical_considerations_and_implications_for_social_responsibility


Filed under artificial intelligence, chatbots, Corporate Social Responsibility, internet technologies, internet technologies and society

Live support by chatbots with artificial intelligence: A future research agenda

This is an excerpt from one of my latest contributions on the use of responsive chatbots by service businesses. The content was adapted for this blogpost.

Suggested citation: Camilleri, M.A. & Troise, C. (2022). Live support by chatbots with artificial intelligence: A future research agenda. Service Business, https://doi.org/10.1007/s11628-022-00513-9

(Credit: Chatbots Magazine)

The benefits of using chatbots for online customer services

Frequently, consumers are engaging with chatbot systems without even knowing it, as machines (rather than human agents) are responding to their online queries (Li et al. 2021; Pantano and Pizzi 2020; Seering et al. 2018; Stoeckli et al. 2020). Some 87% of online consumer queries are handled by chatbots, whilst the remaining 13% require human intervention, as they may involve complex queries and complaints (Ngai et al., 2021).

Several studies reported that there are many advantages of using conversational chatbots for customer services. Their functional benefits include increased convenience to customers, enhanced operational efficiencies, reduced labor costs, and time-saving opportunities.

Consumers are increasingly availing themselves of these interactive technologies to retrieve detailed information from product recommendation systems and/or to request assistance in resolving technical issues. Alternatively, they use them to scrutinize their personal data. Hence, in many cases, customers are willing to share sensitive information in exchange for a better service.

Although these interactive technologies are less engaging than human agents, they can possibly elicit more disclosures from consumers. They are in a position to process consumers' personal data and to compare it with prior knowledge, without any human instruction. Chatbots can learn proactively from new sources of information to enrich their databases.

Whilst human customer service agents may usually handle complex queries including complaints, service chatbots can improve the handling of routine consumer queries. They are capable of interacting with online users in two-way communications (to a certain extent). Their interactions may result in significant effects on consumer trust, satisfaction, and repurchase intentions, as well as on positive word-of-mouth publicity.

Many researchers reported that consumers are intrigued to communicate with anthropomorphized technologies, as they invoke social responses and norms of reciprocity. Such conversational agents are programmed with certain cues, features and attributes that are normally associated with humans.

The findings from this review clearly indicate that individuals feel comfortable using chatbots that simulate human interactions, particularly with those that have enhanced anthropomorphic designs. Many authors noted that the more chatbots respond to users in a natural, humanlike way, the easier it is for the business to convert visitors into customers, particularly if they improve their online experiences. This research indicates that there is scope for businesses to use conversational technologies to personalize interactions with online users, to build better relationships with them, to enhance consumer satisfaction levels, to generate leads as well as sales conversions.

The costs of using chatbots for online customer services

Despite the latest advances in the delivery of electronic services, there are still individuals who hold negative perceptions and attitudes towards the use of interactive technologies. Although AI technologies have been specifically created to foster co-creation between the service provider and the customer, a number of challenges (like authenticity issues, cognition challenges, affective issues, functionality issues and integration conflicts) may result in failed service interactions and in dissatisfied customers.

There are consumers, particularly older ones, who do not feel comfortable interacting with artificially intelligent technologies like chatbots, or who may not want to comply with their requests, for different reasons. For example, they could be wary about cyber-security issues and/or may simply refuse to engage in conversations with a robot.

A few commentators contended that consumers should be informed when they are interacting with a machine. In many cases, online users may not be aware that they are engaging with elaborate AI systems that use cues such as names, avatars, and typing indicators that are intended to mimic human traits. Many researchers pointed out that consumers may or may not want to be serviced by chatbots.

A number of researchers argued that some chatbots are still not capable of the communicative behaviors that are intended to enhance relational outcomes. For the time being, there are chatbot technologies that are not programmed to answer all of their customers' queries (if they do not recognize the keywords used by the customers), or that may not be quick enough to deal with multiple questions at the same time. Therefore, the quality of their conversations may be limited. Such automated technologies may not always be in a position to engage in non-linear conversations, especially when they have to go back and forth on a topic with online users.
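To illustrate why keyword-driven chatbots struggle with unrecognized queries, the following minimal Python sketch shows a rule-based reply loop with a human-handover fallback. The keyword table and canned answers are hypothetical; the point is simply that anything outside the programmed keywords must be escalated.

```python
# Hypothetical keyword table; a production bot would rely on a richer intent model.
RESPONSES = {
    "opening hours": "We are open Monday to Friday, 9am to 5pm.",
    "refund": "Refunds are processed within 5 working days.",
    "delivery": "Standard delivery takes 2 to 4 working days.",
}


def chatbot_reply(message: str) -> str:
    """Return a canned answer if a known keyword appears, otherwise escalate."""
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    # No keyword matched: the bot cannot resolve the query on its own.
    return "Let me transfer you to a human agent who can help with that."


if __name__ == "__main__":
    print(chatbot_reply("What are your opening hours?"))  # matched keyword
    print(chatbot_reply("My parcel arrived damaged."))    # escalated to a human
```

A non-linear, back-and-forth conversation would require the bot to carry state between turns, which a simple keyword lookup like this plainly cannot do; hence the limitations noted above.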

Theoretical and practical implications

This contribution confirms that there has recently been growing interest among academics as well as practitioners in research focused on the use of chatbots to improve businesses' customer-centric services. It clarifies that various academic researchers have relied on different theories, including the expectancy theory, the expectancy violation theory, the human-computer interaction theory/human-machine communication theory, the social presence theory, and/or the social response theory, among others.

Currently, there are few publications that integrate well-established conceptual bases (like those featured in the literature review) or that present discursive contributions on this topic. Moreover, there are just a few review articles that capture, scrutinize and interpret the findings from previous theoretical underpinnings on the use of responsive chatbots in service business settings. Therefore, this systematic review paper addresses this knowledge gap in the academic literature.

It clearly differentiates itself from mainstream research as it scrutinizes and synthesizes the findings from recent, high-impact articles on this topic. It identifies the most popular articles from Scopus and Web of Science, and advances definitions of anthropomorphic chatbots, artificial intelligence chatbots (or AI chatbots), conversational chatbot agents (or conversational entities, conversational interfaces, conversational recommender systems or dialogue systems), customer experience with chatbots, chatbot customer service, customer satisfaction with chatbots, customer value (or the customers' perceived value) of chatbots, and service robots (robot advisors). It discusses the different attributes of conversational chatbots and sheds light on the benefits and costs of using interactive technologies to respond to online users' queries.

In sum, the findings from this research reveal that there is a business case for online service providers to utilize AI chatbots. These conversational technologies can offer technical support to consumers and prospects, on various aspects, in real time and round the clock. Hence, service businesses could be in a position to reduce their labor costs, as they would require fewer human agents to respond to their customers. Moreover, the use of interactive chatbot technologies could improve the efficiency and responsiveness of service delivery. Businesses could utilize AI dialogue systems to enhance their customer-centric services and to improve online experiences. These service technologies can reduce the workload of human agents, who can then dedicate their energies to resolving more serious matters, including the handling of complaints and time-consuming cases.

On the other hand, this paper also discusses potential pitfalls. Currently, there are consumers who, for some reason or another, are not comfortable interacting with automated chatbots. They may be reluctant to engage with advanced anthropomorphic systems that use avatars, even though these can, at times, mimic human communications relatively well. Such individuals may still appreciate a human presence to resolve their service issues. They may perceive that interactive service technologies are emotionless and lack a sense of empathy.

Presently, chatbots can only respond to the questions, keywords and phrases that they were programmed to answer. Although they are useful for solving basic queries, their interactions with consumers are still limited, and their dialogue systems require periodic maintenance. Unlike human agents, they cannot engage in in-depth conversations or deal with multiple queries, particularly if they are expected to go back and forth on a topic.

Most probably, these technical issues will be resolved over time, as more advanced chatbots enter the market in the foreseeable future. It is likely that these AI technologies will possess improved capabilities and will be programmed with up-to-date information, to better serve future customers and to exceed their expectations.

Limitations and future research avenues

This research suggests that this area of study has been gaining traction in academic circles, particularly in the last few years. In fact, it clarifies that there were four hundred and twenty-one (421) publications on chatbots in business-related journals up to December 2021. Four hundred and fifteen (415) of them were published in the last five years.

The systematic analysis that was presented in this research focused on “chatbot(s)” or “chatterbot(s)”. Other academics may refer to them by using different synonyms, like “artificial conversational entity (entities)”, “bot(s)”, “conversational avatar(s)”, “conversational interface agent”, “interactive agent(s)”, “talkbot(s)”, “virtual agent(s)”, and/or “virtual assistant(s)”, among others. Therefore, future researchers may also consider using these keywords when they are exploring the academic and non-academic literature on conversational chatbots that are being used for customer-centric services.
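By way of illustration, these synonyms could be combined into a single Boolean search string. The short Python sketch below assembles a Scopus-style TITLE-ABS-KEY query; the field code, wildcards and exact term list are assumptions for demonstration, not the query used in this study.

```python
# Hypothetical synonym list drawn from the terms mentioned above.
synonyms = [
    "chatbot*", "chatterbot*", "artificial conversational entit*",
    "conversational avatar*", "conversational interface agent*",
    "interactive agent*", "talkbot*", "virtual agent*", "virtual assistant*",
]

# Build a Scopus-style Boolean query over titles, abstracts and keywords.
query = "TITLE-ABS-KEY(" + " OR ".join(f'"{term}"' for term in synonyms) + ")"
print(query)
```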

Nevertheless, this bibliographic study has identified some of the most popular research areas relating to the use of responsive chatbots in online customer service settings. The findings confirmed that many authors are focusing on the chatbots' anthropomorphic designs, their AI capabilities and their dialogue systems. This research suggests that there are still knowledge gaps in the academic literature. The full article includes a table specifying the untapped opportunities for further empirical research in this promising field of study.

The full article is forthcoming. A prepublication version will be available through ResearchGate.


Filed under artificial intelligence, Business, chatbots, customer service, Marketing