Tag Archives: artificial intelligence

The use of Generative AI for travel and tourism planning

📣📣📣 Published via Technological Forecasting and Social Change.

👉 Very pleased to share this timely article that examines the antecedents of users’ trust in Generative AI’s recommendations related to travel and tourism planning.

🙏 I would like to thank my colleagues (and co-authors), namely, Hari Babu Singu, Debarun Chakraborty, Ciro Troise and Stefano Bresciani, for involving me in this meaningful research collaboration. It’s been a real pleasure working with you on this topic!

https://doi.org/10.1016/j.techfore.2025.124407

Highlights

  • The study focused on the enablers and the inhibitors of generative AI usage
  • It adopted 2 experimental studies with a 2 × 2 between-subjects factorial design
  • The impact of the cognitive load produced mixed results
  • Personalized recommendations explained each responsible AI system construct
  • Perceived controllability was a significant moderator

Abstract

Generative AI models are increasingly adopted in tourism marketing to generate text, image, video, and code content tailored to the needs of users. The potential uses of generative AI are promising; nonetheless, it also raises ethical concerns that affect various stakeholders. Therefore, this research, which comprises two experimental studies, aims to investigate the enablers and the inhibitors of generative AI usage. Studies 1 (n = 403 participants) and 2 (n = 379 participants) applied a 2 × 2 between-subjects factorial design in which cognitive load, personalized recommendations, and perceived controllability were independently manipulated. The first study manipulated the cognitive load (reduction vs. increase) associated with the manual search for tourism information. The second study manipulated the receipt of personalized recommendations through generative AI features on tourism websites. Perceived controllability was treated as a moderator in each study. The impact of the cognitive load produced mixed results (i.e., in predicting perceived fairness and environmental well-being), with no responsible AI system constructs explaining trust within Study 1. In Study 2, personalized recommendations explained each responsible AI system construct, though only perceived fairness and environmental well-being significantly explained trust in generative AI. Perceived controllability was a significant moderator in all relationships within Study 2. Hence, to design and execute generative AI systems in the tourism domain, professionals should incorporate ethical considerations and user-empowerment strategies to build trust, thereby supporting the responsible and ethical use of AI that aligns with users’ and societal values. From a practical standpoint, the research provides recommendations on increasing user trust through the incorporation of controllability and transparency features in AI-powered tourism platforms. From a theoretical perspective, it enriches the Technology Threat Avoidance Theory by incorporating ethical design considerations as fundamental factors influencing threat appraisal and trust.

Introduction

Information and communication technologies have been playing a key role in enhancing the tourism experience (Asif and Fazel, 2024; Salamzadeh et al., 2022). The tourism industry has evolved into a content-centric industry (Chuang, 2023). This means that the growth of the tourism sector is attributed to the creation, distribution, and strategic use of information. The shift from the traditional demand-driven model to the content-centric model represents a transformation in user behaviour (Yamagishi et al., 2023; Hosseini et al., 2024). Modern travellers are increasingly dependent on user-generated content to inform their choices and travel planning (Yamagishi et al., 2023; Rahaman et al., 2024). The content-focused marketing approach in tourism emphasizes the role of digital tools and storytelling in creating a holistic experience (Xiao et al., 2022; Jiang and Phoong, 2023). From planning a trip to sharing cherished memories, content adds value for travellers and tourism businesses (Su et al., 2023). For example, MakeMyTrip (MMT) integrated a generative AI trip-planning assistant that facilitates conversational bookings, assisting users with destination exploration, in-trip needs, personalized travel recommendations, summaries of hotel reviews based on user content, and voice navigation support, thereby making MMT’s platform more inclusive for its users. The content marketing landscape is changing due to the introduction of generative AI models that help generate text, images, videos, and code for users (Wach et al., 2023; Salamzadeh et al., 2025). These models can express language, creativity, and aesthetics much as humans do, and they enhance user experience in various industries, including travel and tourism (Binh Nguyen et al., 2023; Chan and Choi, 2025; Tussyadiah, 2014).

Gen AI enhances the natural flow of interactions by offering personalized experiences that align with consumer profiles and preferences (Blanco-Moreno et al., 2024). Gen AI is gaining significant momentum for its transformative impact within the tourism sector, revolutionizing marketing, operations, design, and destination management (Duong et al., 2024; Rayat et al., 2025). Accordingly, empirical studies suggest that generative AI has the potential to transform tourists’ decision-making process at every stage of their journey, demonstrating a significant disruption to conventional tourism models (Florido-Benítez, 2024). Nonetheless, concerns have been raised about the potential implications of generative AI models, as their generated content might contain inaccurate or deceptive information that could adversely impact consumer decision-making (Kim et al., 2025a, Kim et al., 2025b). In its report titled “Navigating the future: How Generative Artificial Intelligence (AI) is Transforming the Travel Industry”, Amadeus highlighted key concerns and challenges in implementing Gen AI, such as data security concerns (35 %), lack of expertise and training in Gen AI (34 %), data quality and inadequate infrastructure (33 %), ROI concerns and lack of clear use cases (30 %), and difficulty in connecting with partners or vendors (29 %). Therefore, the present study argues that, with intuitive design, travel agents could tackle the lack of expertise and of clear use cases for Gen AI. The study suggests that, for travel and tourism companies to build trust in Gen AI, they must tackle the root causes of user apprehension. This means addressing what makes users fear the unknown, ensuring they understand the system’s purpose, and fixing problems with biased or poor data. Previous studies have also highlighted how the integration of Gen AI and tourism raises certain issues, such as misinformation and hallucinations, data privacy and security, human disconnection, and inherent algorithmic biases (Christensen et al., 2025; Luu et al., 2025). Moreover, if Gen AI provides biased recommendations, the implications are adverse. If users perceive that the recommendations are biased, they avoid using them, leading to high churn and platform abandonment (Singh et al., 2023). Users’ satisfaction will decline, replaced by frustration and anger, as biased output undermines the promise of personalized services. This damages brand reputation and erodes significant competitive advantage in the market (Wu and Yang, 2023). Such scenarios will likely lead to stricter regulations, mandatory algorithmic audits, and new consumer protection laws, forcing the industry to prioritize fairness as well as explainability to avoid serious consequences. Interestingly, research draws attention to an interesting paradox: consumers rely heavily on AI-generated travel itineraries even when they are aware of Gen AI’s occasional inaccuracies (Osadchaya et al., 2024). This reliance might stem from a belief in AI’s perceived objectivity and capacity for personalized recommendations, indicating a significant transformation of trust between human and non-human agents in the travel decision-making process (Kim et al., 2023a, Kim et al., 2023b). Empirical findings indicate that AI implementation in travel planning contributes to the objectivity of the results, effectively mitigates cognitive load, and supports higher levels of personalization aligned with user preferences (Kim et al., 2023a, Kim et al., 2023b).
Despite the growing body of literature explaining the role of trust in Gen AI acceptance and its influence on travellers’ decision making and behavioural intentions, the potential biases in AI-generated content continue to pose challenges to users’ confidence (Kim et al., 2021a, Kim et al., 2021b). Therefore, this research aims to examine the influence of generative AI in tourism on consumers’ trust in AI technologies, particularly the balance between technological progress and ethical responsibility, concerning the future of tourism (Dogru et al., 2025).

Existing research has focused more on the technology of AI as a phenomenon than on translating those theories into studies of how the ethics involved affect perceptions and trust (Glikson and Woolley, 2020). In addition, there is still the black box phenomenon, that is, the inability of users to understand what happens inside AI systems. This underscores the need for more integrative studies linking morally sound AI development, user trust, and design in tourism (Tuo et al., 2024).

Moreover, scant research has examined the factors that inhibit tourists from embracing generative AI technologies, resulting in a limited understanding of travellers’ reluctance to adopt generative AI for travel planning (Fakfare et al., 2025). Despite a growing body of literature examining the antecedents and outcomes of generative AI (GAI) adoption, a large body of research has been based on established frameworks such as the Information Systems Success (ISS) model (Nguyen and Malik, 2022), the Technology Acceptance Model (TAM) (Chatterjee et al., 2021), and the Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh, 2022).

However, the extensive reliance on traditional acceptance models risks ignoring critical socio-technical aspects, which are paramount in the context of GAI (Yu et al., 2022). While most studies explore the overarching effects of user acceptance and use of GenAI through TAM, UTAUT, and the DeLone and McLean IS success models, there has been a lack of consideration of ethical factors as well as responsible AI systems. Addressing these gaps could significantly broaden our theoretical understanding of how individuals evaluate and adopt generative AI technologies from an ethical and socio-technical perspective.

Therefore, this research aims to fill this gap by investigating factors that facilitate or inhibit trust in generative AI systems, considering responsible AI and Technology Threat Avoidance Theory, and advancing the following research questions:

RQ1

How does the customer experience of using generative AI in tourism reflect the impact of enablers (such as responsible AI systems) and inhibitors (such as ambiguity and anxiety) on trust in generative AI?

RQ2

Does perceived controllability moderate the enablers and inhibitors of trust in generative AI in tourism?

This research draws on responsible AI principles and the Technology Threat Avoidance Theory to explicate the relationship between generative AI and trust in tourism. Seen through the conceptual lens of ethical behaviours, responsible AI principles are crucial for enhancing trust in Gen AI within tourism (Law et al., 2024). When users perceive Gen AI recommendations as fair, transparent, and bias-free, they are more likely to perceive the systems as trustworthy, which in turn mitigates user skepticism and promotes trust (Ali et al., 2023). Also, when Gen AI promotes sustainable and environmentally friendly practices, it demonstrates ethical responsibility and enhances trust in alignment with shared social values (Díaz-Rodríguez et al., 2023). By operationalizing responsible AI principles like transparency, fairness, and sustainability, Gen AI transforms from a black-box tool into a more trustworthy and responsible system for travel decisions (Kirilenko and Stepchenkova, 2025). From the socio-technical perspective, the Technology Threat Avoidance Theory (TTAT) supports the logic of how perceived ambiguity and perceived anxiety act as inhibitors of trust. In tourism, users’ experience holds paramount importance (Torkamaan et al., 2024). When users encounter Gen AI content that is difficult to comprehend, recommendations that are unstable or ambiguous, or exposure of their data to privacy risks, these apprehensions turn into perceived threats to using Gen AI (Bang-Ning et al., 2025). According to TTAT, when users perceive a greater threat, they are more inclined to engage in avoidance behaviours, which also erodes trust in the system. Hence, TTAT explains why users might hesitate or avoid using Gen AI tools, even if these offer functional benefits such as personalized recommendations and reduced cognitive load (Shang et al., 2023).

The study adopted an experimental research design that helps explore the independent phenomenon (the use of Gen AI for content generation) and observe and explain its role, so as to establish a cause-and-effect relationship between the factors of responsible AI systems and TTAT (Leung et al., 2023). The experimental setting allows the differences between human- and non-human-generated content to be understood empirically from the perspective of users’ travel decision-making towards destinations. The study enriches the literature on both the ethical and environmental aspects (perceived fairness and environmental well-being) and the perceived risks (perceived ambiguity and perceived anxiety) in the tourism context. Perceived controllability is tested as a moderator, offering guidance to managers on how to develop responsible AI systems that lower user fear and build trust. The study also helps practitioners understand how the personalized recommendations and reduced cognitive load that Gen AI provides in content generation impact tourists’ trust in Gen AI.
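
To make the moderation logic behind RQ2 concrete, the following is a minimal, hypothetical sketch of how perceived controllability could be tested as a moderator of an enabler–trust relationship. It uses simulated Likert-style data and an interaction term in a Python regression; the variable names, sample size and effect sizes are illustrative assumptions, not the authors’ actual analysis.

```python
# Hypothetical moderation test: does perceived controllability moderate
# the effect of an enabler (here, perceived fairness) on trust in Gen AI?
# All data are simulated; names and coefficients are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 400  # close to the per-study sample sizes reported (403 and 379)

fairness = rng.integers(1, 8, n).astype(float)         # 7-point scale
controllability = rng.integers(1, 8, n).astype(float)  # 7-point scale
trust = (0.4 * fairness
         + 0.2 * controllability
         + 0.05 * fairness * controllability  # built-in interaction effect
         + rng.normal(0, 1, n))

df = pd.DataFrame({"fairness": fairness,
                   "controllability": controllability,
                   "trust": trust})

# Moderation is evidenced by a significant fairness:controllability term
model = smf.ols("trust ~ fairness * controllability", data=df).fit()
print(model.summary())
```

In such a model, a significant coefficient on the interaction term is what would indicate that controllability strengthens (or weakens) the enabler’s effect on trust.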


Section snippets

Responsible AI systems

Responsible AI adequately incorporates ethical aspects of AI system design and implementation and ensures that the systems are transparent, fair, and responsible (Díaz-Rodríguez et al., 2023). Responsible AI includes the ethical, transparent, and accountable use of artificial intelligence systems, ensuring they are fair, secure, and aligned with societal values. It is also an approach to design, develop, and deploy AI systems so that they are ethical, safe, and trustworthy. It is a system that …

Cognitive load, personalized recommendations, and perceived fairness

Cognitive load is the mental effort required to process and choose information (Islam et al., 2020). Cognitive load can be high when people interact with complex systems such as AI. Thus, high cognitive load may affect the ability of users to judge whether AI-based decisions can be considered fair, since they may not grasp enough of the workings of the system and its specific decisions (Westphal et al., 2023). Perceived fairness, on the other hand, refers to the users’ feelings about …

Research methods and analysis

The experiments adopted in this study are scenario-based, since participants’ emotions cannot easily be manipulated in an ethical manner (Anand and Gaur, 2018). The scenario-based approach also helps test the causal relationships between the constructs used for experimentation in a given scenario, while keeping interference from extraneous variables to a minimum. In this method, respondents answered questions based on the hypothetical scenarios developed for each study. Therefore, scenarios …
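
For readers less familiar with this design, the sketch below shows, with fabricated data, how a 2 × 2 between-subjects factorial experiment of this kind is commonly analysed: a two-way ANOVA testing each manipulation’s main effect and their interaction. The condition labels, cell means and sample sizes are placeholders, not the study’s actual materials or results.

```python
# Illustrative two-way ANOVA for a 2 x 2 between-subjects design, e.g.
# cognitive load (reduced/increased) x recommendations (generic/personalized).
# Data are fabricated; labels and cell means are placeholders only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(7)
rows = []
for load in ("reduced", "increased"):
    for rec in ("generic", "personalized"):
        # assumed cell means: small benefits of reduced load and personalization
        base = (4.0
                + (0.5 if load == "reduced" else 0.0)
                + (0.7 if rec == "personalized" else 0.0))
        for score in rng.normal(base, 1.0, 100):  # ~100 participants per cell
            rows.append({"load": load, "recommendation": rec, "trust": score})

df = pd.DataFrame(rows)

# Main effects of each manipulated factor plus their interaction
model = smf.ols("trust ~ C(load) * C(recommendation)", data=df).fit()
print(anova_lm(model, typ=2))
```

Each participant appears in exactly one of the four cells, which is what “between-subjects” means in this context.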

Discussion

Study 1 shows that cognitive load is detrimental to an individual’s notions of justice or environmental wellbeing, indicating that such factors may be difficult for a user to rate properly when expending greater cognitive effort. Cognitive load can also limit the extent of open-mindedness and critical evaluation of AI-assisted communication (T. Li et al., 2024), which could leave people resorting to mental shortcuts or simplified judgements of fairness and environmental issues. Under such …

Theoretical implications

Trust is an important element in the design of organizations and systems, and the current study’s theoretical implications extend the understanding of trust in generative AI systems by integrating constructs of responsible AI and Technology Threat Avoidance Theory. This research underscores the significance of moral factors in creating and using AI systems by exploring relationships between perceived justice, environmental concern, and trust. In this context, the study notes that the degree of …

Practical implications

To develop and retain users’ confidence, professionals in the field should observe responsible AI principles, particularly perceived equity and ecological sustainability. Consumers are more likely to trust AI recommendations that they perceive as fair. This involves developing algorithms that align with users’ interests while promoting green aspects in AI. It also becomes important for management to note that, during AI interface design, cognitive load should be considered so …

Limitations and future research

This study has certain limitations. First, the reliance on self-reported measures could introduce certain biases, as the participants’ prior experiences with generative AI, social desirability, or limited technological competence could affect their judgement. Secondly, focusing on a particular context (i.e., tourism) can be seen as a limitation when it comes to …

Conclusion

A thorough examination of advancing artificial intelligence in the tourism industry draws attention to the fact that the issue of encouraging responsible AI use cannot be avoided. The findings on user satisfaction with AI-based content suggest that user perceptions are shaped not only by the quality of the recommendations but also by the ethical implications of the system and users’ affective states. The varied effect of personalized suggestions on some parameters that influenced …


Why are people using generative AI like ChatGPT?

The following text is an excerpt from one of my latest articles. I am sharing the managerial implications of my contribution published through Technological Forecasting and Social Change.

This empirical study provides a snapshot of online users’ perceptions about the Chat Generative Pre-Trained Transformer (ChatGPT)’s responses to verbal queries, and sheds light on their dispositions to avail themselves of ChatGPT’s natural language processing.

It explores their performance expectations about the usefulness of these information technologies and their effort expectations related to their ease of use, and investigates whether they are affected by colleagues or by other social influences to use such dialogue systems. Moreover, it examines their insights about the content quality, the source trustworthiness, as well as the interactivity features of these text-generative AI models.
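
As a rough illustration of the structure of such an acceptance model (the published study estimated it with SmartPLS using partial least squares structural equation modelling; the ordinary-least-squares sketch below, with simulated data and assumed variable names, merely conveys the idea of relating use intentions to these constructs):

```python
# Toy sketch of an acceptance-model regression: intention to use a
# dialogue system regressed on performance expectancy, effort expectancy
# and social influence. The actual study used PLS-SEM (SmartPLS); this
# simplified OLS version with simulated data is illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300  # arbitrary sample size for the simulation

df = pd.DataFrame({
    "performance_expectancy": rng.normal(5, 1, n),
    "effort_expectancy": rng.normal(5, 1, n),
    "social_influence": rng.normal(4, 1, n),
})
df["intention_to_use"] = (0.5 * df["performance_expectancy"]
                          + 0.3 * df["effort_expectancy"]
                          + 0.2 * df["social_influence"]
                          + rng.normal(0, 1, n))

model = smf.ols("intention_to_use ~ performance_expectancy"
                " + effort_expectancy + social_influence", data=df).fit()
print(model.params)  # rough analogue of the model's path coefficients
```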

Generally, the results suggest that the research participants felt that these algorithms are easy to use. The findings indicate that they consider them to be useful too, specifically when the information they generate is trustworthy and dependable.

The respondents indicated that they are concerned about the quality and accuracy of the content that is featured in the AI chatbots’ answers. This contingent issue can have a negative effect on the use of the information that is created by online dialogue systems.

OpenAI’s ChatGPT is a case in point. Its app is freely available in many countries, via desktop and mobile technologies including iOS and Android. The company admits that its GPT-3.5 outputs may be inaccurate, untruthful, and misleading at times. It clarifies that its algorithm is not connected to the internet, and that it can occasionally produce incorrect answers (OpenAI, 2023a). It posits that GPT-3.5 has limited knowledge of the world and events after 2021 and may also occasionally produce harmful instructions or biased content.

OpenAI recommends checking whether its chatbot’s responses are accurate, and letting the company know when it answers in an incorrect manner, by using the “Thumbs Down” button. Its Help Center even declares that ChatGPT can occasionally make up facts or “hallucinate” outputs (OpenAI, 2023a; OpenAI, 2023b).

OpenAI reports that its ChatGPT Plus subscribers can access safer and more useful responses. In this case, users can avail themselves of a number of beta plugins and resources that offer a wide range of capabilities, including text-to-speech applications as well as web browsing features through Bing.

Yet again, OpenAI (2023b) indicates that its GPT-4 still has many known limitations that the company is working to address, such as “social biases and adversarial prompts” (at the time of writing this article). Evidently, work is still in progress at OpenAI.

The company needs to resolve these serious issues, considering that its Content Policy and Terms clearly stipulate that OpenAI’s consumers are the owners of the output that is created by ChatGPT. Hence, ChatGPT’s users have the right to reprint, sell, and merchandise the content that is generated for them through OpenAI’s platforms, regardless of whether the output (its response) was provided via a free or a paid plan.

Various commentators are increasingly raising awareness about the corporate digital responsibilities of those involved in the research, development and maintenance of such dialogue systems. A number of stakeholders, particularly the regulatory ones, are concerned about possible risks and perils arising from AI algorithms, including interactive chatbots.

In many cases, they are warning that disruptive chatbots could disseminate misinformation; foster prejudice, bias and discrimination; raise privacy concerns; and lead to the loss of jobs. Arguably, one has to bear in mind that many governments are outpaced by the proliferation of technological innovations (as their development happens before the enactment of legislation).

As a result, they tend to be reactive in the implementation of substantive regulatory interventions. This research reported that the development of ChatGPT has resulted in mixed reactions among different stakeholders in society, especially during the first months after its official launch.

At the moment, there are just a few jurisdictions that have formalized policies and governance frameworks that are meant to protect and safeguard individuals and entities from the possible risks and dangers of AI technologies (Camilleri, 2023). Of course, voluntary principles and guidelines are a step in the right direction. However, policy makers are expected by various stakeholders to step up their commitment by introducing quasi-regulations and legislation.

Currently, a number of technology conglomerates, including Microsoft-backed OpenAI, Apple and IBM, among others, have anticipated the governments’ regulations by joining forces in a non-profit organization entitled “Partnership on AI”, which aims to advance safe, responsible AI that is rooted in open innovation.

In addition, IBM has also teamed up with Meta and other companies, startups, universities, research and government organizations, as well as non-profit foundations, to form an “AI Alliance” that is intended to foster innovation across all aspects of AI technology, applications and governance.

The full list of references is available here: https://www.sciencedirect.com/science/article/pii/S004016252400043X?via%3Dihub

Suggested citation: Camilleri, M. A. (2024). Factors affecting performance expectancy and intentions to use ChatGPT: Using SmartPLS to advance an information technology acceptance framework. Technological Forecasting and Social Change, 201. https://doi.org/10.1016/j.techfore.2024.123247


Ethical considerations of service organizations in the information age

This is an excerpt from one of our latest contributions published through The Service Industries Journal. It features snippets from the ‘Introduction’, ‘Theoretical Implications’, ‘Practical Implications’ as well as from the ‘Limitations and Future Research Avenues’ sections.

Suggested Citation: Camilleri, M.A., Zhong, L., Rosenbaum, M.S. & Wirtz, J. (2024). Ethical considerations of service organizations in the information age, The Service Industries Journal, Forthcoming. https://www.tandfonline.com/doi/full/10.1080/02642069.2024.2353613

Introduction

Ethics is a broad field of study that refers to intellectual and moral philosophical inquiry concerned with value theory. It is clearly evidenced when individuals rely on their personal values, principles and norms to resolve questions about appropriate courses of action, as they attempt to distinguish between right and wrong, good and evil, virtue and vice, justice and crime, et cetera (Budolfson, 2019; Coeckelbergh, 2021; Ramboarisata & Gendron, 2019). Several researchers contend that ethics involves a set of concepts and principles that are meant to guide community members in specific social and environmental behaviors (De Bakker et al., 2019; Hermann, 2022). Very often, commentators argue that a person’s ethical dispositions are influenced by their upbringing, social conventions, cultural backgrounds, religious beliefs, as well as by regulations (Vallaster et al., 2019).

Individuals, groups, institutions, non-government entities as well as businesses are bound to comply with the rule of law in their society (Groß & Vriens, 2019). As a matter of fact, the businesses’ organizational cultures and modus operandi are influenced by commercial legislation, regulations and taxation systems (Bridges, 2018). For-profit entities are required to adhere to the companies’ acts of the respective jurisdictions where they are running their commercial activities. They are also expected to follow informal codes of conduct and to observe certain ethical practices that are prevalent in the societies where they are based. This line of reasoning is synonymous with the mainstream “business ethics” literature, which refers to a contemporary set of values and standards that are intended to govern individuals’ actions and behaviors in how they manage and lead organizations (DeTienne et al., 2021).

Employers ought to ensure that they are managing their organization in a fair, transparent and responsible manner, by treating their employees with dignity and respect (Saks, 2022). They have to provide decent working environments and appropriate conditions of employment by offering their workers equitable extrinsic rewards that are commensurate with their knowledge, skills and competences (Gaur & Gupta, 2021). Moreover, it is in the employers’ interests to nurture their staff members’ intrinsic motivations if they want them to align with their organizational values and corporate objectives (Camilleri et al., 2023). Notwithstanding, all businesses, including those operating in service industries, have ethical as well as environmental, social and governance (ESG) responsibilities to bear towards other stakeholders in society (Aksoy et al., 2022).

This article raises awareness of a wide array of ethical considerations affecting service organizations in today’s information age. Specifically, its research objectives are threefold: (i) It presents the findings from a rigorous and trustworthy systematic review exercise, focused on “ethics” in “service(s)” and/or “ethical services”. This research involves a thorough scrutiny of the most-cited articles published in the last five (5) years; (ii) It utilizes a thematic analysis to determine which paradigms are being associated with service ethics. The rationale is to identify some of the most contemporary topics related to ethical leadership in service organizations; (iii) At the same time, it puts forward theoretical and practical implications that clarify how, why, where, when and to what extent service providers are operating in a legitimate and ethical manner.

A thorough review of the literature reveals that, for the time being, just a few colleagues have devoted their attention to relevant theoretical underpinnings linked to the service ethics literature (Liu et al., 2023; Wirtz et al., 2023). To date, there is still limited research that has outlined popular research themes from the most cited articles published in the past five (5) years. This contribution clearly differentiates itself from previous studies, as its rigorous and transparent systematic review approach recognizes, appraises and describes the methodology that was used to capture and analyze data focused on the provision (or lack thereof) of ethical services. In addition, unlike other descriptive literature reviews, this paper synthesizes the findings from the latest contributions on this topic and provides a discursive argumentation on their implications. Hence, this article addresses a number of knowledge gaps in academic literature. In conclusion, it identifies the limitations of this review exercise, and outlines future research avenues to academia.

Theoretical implications

This contribution raises awareness of the underexplored notion of service ethics. A number of commentators make reference to various theories and concepts to clarify how they can guide service organizations in their ethical leadership. In many cases, a number of theories indicate that decision makers ought to be just and fair with individuals or entities in their actions. Appendix A features a list of ethical theories and provides a short definition for each of them. For instance, the justice theory suggests that all individuals, including service employees, should have the same fundamental rights based on the values of equality, non-discrimination, inclusion, human dignity, freedom and democracy. Human rights as well as employee rights and values ought to be protected and reinforced by the respective jurisdictions’ rule of law, for the benefit of all subjects (Grégoire et al., 2019).

Business ethics literature indicates that just societies are characterized by fair, trustworthy, accountable and transparent institutions (and organizations). For instance, the fairness theory raises awareness on certain ethical norms and standards that can help policy makers as well as other organizations including businesses, to ensure that they are continuously providing equal opportunities to everyone. It posits that all individuals ought to be treated with dignity in a respectful and equitable manner (Wei et al., 2019).

This is in stark contrast with the favoritism theory, which suggests that certain individuals, including employees, can receive preferential treatment to the detriment of others (Bramoullé & Goyal, 2016). This argumentation is synonymous with the nepotism theory. Like favoritism, nepotism is a phenomenon that is manifested when institutional and organizational leaders help and support specific persons because they are connected with them in one way or another (e.g. through familial ties, friendships, financial, or social factors). Arguably, such favoritisms clearly evidence conflict(s) of interest, and compromise or cloud judgements, decisions and actions in workplace environments and/or in other social contexts. Many business ethics researchers contend that decision makers ought to be guided by the principle of beneficence (Brear & Gordon, 2021), as they should possess the competences and abilities to distinguish between what is morally right and ethically wrong.

This research confirms that, frequently, organizational leaders have to deal with difficult and challenging situations, where they are expected to make hard decisions (Islam et al., 2021a; Islam et al., 2021b; Latan et al., 2019; Naseer et al., 2020; Schwepker & Dimitriou, 2021). In such cases, the most reasonable ethical approach would be to follow courses of action that will result in the least possible harm to everyone (Heine et al., 2023). The service organizations’ members of staff are all expected to be collaborative, productive and efficient in their workplace environment. This line of reasoning is related to the attributional theory (Bourdeau et al., 2019) and/or to the consequentialism theory (Budolfson, 2019). Very often, the proponents of these two theories contend that while honest, righteous and virtuous behaviors may yield positive outcomes for colleagues, subordinates and other stakeholders, wrong behaviors can result in negative repercussions for them (Deci & Ryan, 1987; Francis & Keegan, 2020; Lee et al., 2020; Paramita et al., 2021).

Other researchers who contributed to the ethics literature related to utilitarianism theory suggest that people tend to make better decisions when they focus on the consequences of their actions. Hence, they will be in a better position to identify laudable behaviors and codes of conduct that add value to their organization (Coeckelbergh, 2021; Michaelson & Tosti-Kharas, 2019; Ramboarisata & Gendron, 2019). Very often, they argue that there are still unresolved issues in the social sciences, including the unpredictability of events and incidents (Du & Xie, 2021), and/or the difficulty in measuring the consequences when/if they occur. For example, this review indicated that various authors discussed the challenges, risks and possible dangers of adopting various technologies, including AI, big data, et cetera (Breidbach & Maglio, 2020; Chang et al., 2020; Flavián & Casaló, 2021; Rymarczyk, 2020). In many cases, they hinted that the best ethical choice is to identify which decisions and actions could lead to the greatest good, in terms of positive, righteous and virtuous outcomes (Budolfson, 2019; Gong et al., 2020; Paramita et al., 2021).

Various academic authors who contributed to the formulation of the virtues theory held that there are persons, including organizational leaders, whose characters, traits and values drive them to continuously improve and to excel in their duties and responsibilities (Coeckelbergh, 2021; Fatma et al., 2020; Lee et al., 2020). They frequently noted that persons’ affective feelings as well as their intellectual dispositions enable them to develop a positive mindset, to make the best decisions and to engage in the right behaviors (Gong et al., 2020; Huang & Liu, 2021; Yan et al., 2023). This is congruent with the theory of positivity too, as it explains how individuals’ optimistic feelings may result in their happiness and wellbeing. Some commentators imply that such positive emotions can influence individuals’ states of mind and can foster their resilience to engage in productive behaviors (Paramita et al., 2021).

This argumentation is in stark contrast with the emotional labor theory that is manifested when disciplined employees suppress their emotions by engaging in posturing behaviors in order to conform to the organizational culture (Mastracci, 2022). This phenomenon was evidenced in Naseer et al.’s (2020) contribution. In this case, the authors indicated how the employees’ overidentification with unethical organizations can have a negative impact on their engagement, thereby resulting in counterproductive work practices. In addition, Islam et al. (2021b) also suggested that abusive supervision led employees to undesirable outcomes like knowledge hiding behaviors and to low morale in workplace environments.

Several commentators who are focused on psychological issues argue that individuals’ intrinsic motivations are closely related to their self-determination (Deci & Ryan, 1987). Very often, they contend that individuals should have the autonomy and freedom to make life choices, in order to improve their well-being in the future. The findings from this research reported that organizational leaders who delegated responsibilities to their members of staff have instilled trust and commitment in their employees, and have also improved their intrinsic motivations (Francis & Keegan, 2020; Lee et al., 2020; Schwepker & Dimitriou, 2021).

Hence, organizational leaders of service businesses ought to be aware that there is scope for them to empower their human resources, to help them make responsible choices and decisions relating to their work activities, in a discreet manner (Bourdeau et al., 2019; Islam et al., 2021a; Tanova & Bayighomog, 2022). The employees’ higher levels of autonomy and independence can influence their morale (Paramita et al., 2021; Ramboarisata & Gendron, 2019) and reduce stress levels (Schwepker & Dimitriou, 2021). Various researchers confirmed that employees would be more productive if they were empowered with duties and responsibilities (e.g. Nauman et al., 2023).

This argumentation is congruent with the conservation of resources theory, as business leaders are expected to look after their human resources’ cognitive and emotional wellbeing, if they want to foster their organizational commitment to achieve their corporate objectives. Indeed, their ethical leadership can lead to win-win outcomes, particularly if their employees replicate responsible and altruistic behaviors with one another, and if they strive in their endeavors to develop a caring environment in their organization (Parsons et al., 2021; Saks, 2022). This reasoning is closely related to the social cognition theory that presumes that individuals acquire emotional knowledge and skill sets such as intuition or empathy, among others, through social interactions, including when they are at work (Čaić et al., 2019; Campbell et al., 2020; Rauhaus et al., 2020).

Practical implications

The findings from this research confirm that various service organizations are becoming acquainted with ethical leadership and with social issues in management. Evidently, several listed businesses and large undertakings in service industries are increasingly proving their legitimacy and license to operate by engaging in ethical behaviors that promote responsible human resources management. Very often, they foster an organizational climate that encourages ongoing dialogue, communication and collaboration among members of staff; they empower employees with duties and responsibilities to make important decisions; they provide them with equitable compensation that is commensurate with their qualifications and experience; and they implement work-life balance policies. Generally, these laudable measures result in motivated, committed and productive employees.

On the other hand, unethical behaviors, including abusive organizational practices and coercive leadership styles, generate bitterness and feelings of resentment among employees. The lack of ethical leadership can lead to demotivation, low morale, job stress and even counterproductive behaviors, including wrongdoings like knowledge hiding and abusive supervision in workplace environments. This research reported on the irresponsible practices of service businesses operating in the sharing economy, as a number of hospitality companies subcontract their food delivery services to independent contractors who are not safeguarding the rights of their workers. Very often, the workers of the gig economy are offered precarious jobs and unfavorable conditions of employment. Generally, they are not paid in a commensurate manner for their jobs, are not eligible for health or retirement benefits, and cannot affiliate themselves with trade unions.

This discursive review shed light on the service businesses’ dealings with employees and with other stakeholders. It also reported on their relationships with customers as well as on their ethical and digital responsibilities towards them. For example, it indicated that many businesses are gathering and storing customers’ data. Frequently, they use this personal and transactional information to analyze and interpret shopping behaviors. They may do so to build consumer profiles and/or to retarget customers with promotional content. The findings of this research imply that it is the responsibility of service businesses to inform new customers that they are capturing and retaining data from them, when and if they do so (even though, in many cases, they are aware that many online users can quickly unsubscribe from marketing messages and/or are becoming adept at blocking advertisements from popping up on their screens). The authors contend that service providers ought to explicitly ask for their customers’ consent (through opt-in or opt-out choices) to ensure that they can legitimately avail themselves of their consumers’ data.

Currently, certain jurisdictions are not in a position to protect consumers from entities that could use their personal information for different purposes, as they have not enacted substantive data protection legislation. The European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) are two examples of data regulations that are intended to safeguard consumers’ interests in this regard. Online users ought to be educated and guided through regulations, policies and data literacy programs, to protect them from potentially unethical technological applications and practices involving big data algorithms and advanced analytics. At the moment, various stakeholders, including policy makers and academia, among others, are calling for responsible AI governance and for the formulation of (quasi-)regulatory frameworks, in order to maximize the benefits of AI and to minimize its negative impacts on humanity.

This research raises awareness about the importance of disclosing corporate governance procedures, and of regularly reporting CSR/ESG credentials to regulatory stakeholders and other interested parties. In many cases, service businesses are genuinely following ethical norms and principles that go beyond their commercial and legal obligations. They should bear in mind that their sustainability accounting, transparent ESG disclosures, as well as their audit and assurance mechanisms, can ultimately reduce information asymmetry among stakeholders, whilst enhancing their reputation and image with interested parties. Their ongoing corporate communications can ameliorate stakeholder relationships and could increase their organizational legitimacy in the long run.

Limitations and future research avenues

The notion of service ethics is gaining traction in academic circles. Indeed, it is considered a contemporary and timely topic for service researchers specializing in business administration and/or business ethics. In fact, the findings from the bibliographic analysis demonstrate that there were more than eleven thousand (11,000) documents focused on service(s), ethics and ethical service(s) published in the last five years. This research adds value to the extant literature as it sheds light on the most cited articles focused on these topics. Yet, it differentiates itself from previous papers, as it identifies the themes of fifty (50) of the most cited papers in this promising area of research, describes the methodology that was employed to capture and analyze the data on this topic, and scrutinizes their content, before synthesizing the findings of this contribution.

This article presents the findings of a rigorous review and evaluation of the latest literature revolving around the ethical leadership of service organizations. The authors are well aware that, in the past, other academic colleagues may have referred to keywords synonymous with service ethics or ethical services, including ethical business, business ethos, business ethics, business codes of conduct, and even the corporate social responsibilities of service businesses, among other paradigms. Therefore, future researchers may also consider using these keywords when they investigate ethical behaviors in services-based sectors. It is hoped that they will delve into the research themes, fields of study and theoretical bases that were identified in this contribution, including the service organizations’ ethical leadership, as proposed in the following table. This research confirms that it is in the interest of service entities to foster a fair and just working environment, particularly for the benefit of their employees, as well as for other stakeholders, including regulatory institutions, creditors, shareholders and customers, among others.

Table: A future agenda for service ethics research (developed by the authors)

Indeed, there is scope to investigate further the service organizations’ roles in today’s societies, as they are being urged by policy makers and other interested parties to communicate about their responsible organizational behaviors, in various contexts. Entities operating in service industries, including small and medium-sized businesses as well as micro enterprises, are increasingly acquainting themselves with sustainability accounting, non-financial reporting and ongoing assurance exercises, as comprehensive CSR/ESG disclosures can enable them to prove their legitimacy and license to operate with stakeholders. Moreover, prospective researchers are invited to continue raising awareness about ethical leadership among service organizations, particularly when they are adopting disruptive innovations.

The full list of references is available from the open-access article (published through The Service Industries Journal) and via ResearchGate.


Metaverse education: Opportunities and challenges for immersive learning

The following content was adapted from one of my latest contributions on the Metaverse’s immersive technology.


Suggested citation: Camilleri, M.A. (2023), “Metaverse applications in education: a systematic review and a cost-benefit analysis”, Interactive Technology and Smart Education, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/ITSE-01-2023-0017

Online users are connecting to simulated virtual environments through various digital games like Fortnite, Minecraft, Roblox, and World of Warcraft, among others. Very often, gamers are utilizing virtual reality (VR) and augmented reality (AR) technologies to improve their gaming experiences. In many cases, they are engaging with other individuals in the cyberspace and participating in an extensive virtual economy. New users are expected to create electronic personas, called avatars (that represent their identity in these games). They are allowed to move their avatars around virtual spaces and to use them to engage with other users, when they are online. Therefore, interactive games are enhancing their users’ immersive experiences, particularly those that work with VR headsets.

Academic researchers as well as technology giants like Facebook (Meta), Google and Microsoft, among others, anticipate that the Metaverse will shortly change the way we experience the Internet. Whilst on the internet online users interact with other individuals through websites, including games and social media networks (SNSs), in the Metaverse they engage with the digital representations of people (through their avatars), places, and things in a simulated universe. Hence, the Metaverse places its users in the middle of the action. In plain words, it can be described as a combination of multiple elements of interactive technologies, including VR and AR, where users can experience a digital universe. Various industry practitioners, including Meta (Facebook), argue that this immersive technology will reconfigure the online users’ sensory inputs, definitions of space, and points of access to information.

AR and VR devices can be used to improve the students’ experiences when they engage with serious games. Many commentators noted that these technologies encourage active learning approaches, as well as social interactions among students and/or between students and their teachers. Serious games can provide “gameful experiences” if they share the immersive features of entertainment games that captivate players. If they do so, it is very likely that students will enjoy their game play (and game-based learning). Similarly, the Metaverse can be used to increase the students’ motivations and learning outcomes.

For the time being, there is no universal definition that encapsulates the word “Metaverse”. The term was used in Neal Stephenson’s 1992 science fiction novel Snow Crash. Basically, it is a blend of two words, “meta” and “universe”, which were combined to create the “Metaverse” notion. While “meta” means beyond, the term is typically used to describe an iteration of the internet that consists of persistent, immersive 3D virtual spaces that are intended to emulate physical interactions in perceived virtual worlds (like a universe).

Although various academic contributions have explored the utilization of online educational technologies, including AR and VR, in different contexts, currently just a few researchers have evaluated the latest literature on this contemporary topic to reveal the benefits and costs of using this disruptive innovation in the context of education. Therefore, this contribution closes this gap in academic literature. The underlying objective of this research is to shed light on the opportunities and challenges of using this immersive technology with students.

Opportunities

    Immersive multi-sensory experiences in 3D environments

    The Metaverse could provide a smooth interaction between the real world and the virtual spaces. Its users can engage in activities that are very similar to what they do in reality. However, it could also provide opportunities for them to experience things that would be impossible in the real world. Sensory technologies enable users to use their five senses of sight, touch, hearing, taste and smell to immerse themselves in a virtual 3D environment. VR tools are interactive, entertaining and provide captivating and enjoyable experiences to their users. In the past years, a number of educators and students have been using 3D learning applications (e.g. Second Life) to visit virtual spaces that resemble video games. Many students are experienced gamers and are lured by their 3D graphics. They learn when they are actively involved. Therefore, the learning applications should be as meaningful, engaging, socially interactive and entertaining as possible.

    There is scope for educators and content developers to create digital domains like virtual schools, colleges and campuses, where students and teachers can socialize and engage in two-way communications. Students could visit the premises of their educational institutions through online tours, from virtually anywhere. A number of universities are replicating their physical campuses with virtual ones. The design of the virtual campuses may result in improved student services and shared interactive content that could improve students’ learning outcomes, and could even reach wider audiences. Previous research confirms that it is more interesting and appealing for students to learn academic topics through the virtual world.

    Equitable and accessible space for all users

    Like other virtual technologies, the Metaverse could be accessed from remote locations. Educational institutions can use its infrastructure to deliver courses (free of charge or against tuition fees). Metaverse education may enable students from different locations to use its open-source software to pursue courses from anywhere, anytime. Hence, its democratized architecture could reduce geographic disparities among students and increase their chances of continuing education through higher educational institutions in different parts of the world.

    In the future, students including individuals with different abilities, may use the Metaverse’s multisensory environment to immerse themselves in engaging lectures.

    Interactions with virtual representations of people and physical objects

    Currently, individual users can utilize the AR and VR applications to communicate with others and to exert their influence on the objects within the virtual world. They can organize virtual meetings with geographically distant users, attend conferences, et cetera. Various commentators argued that the Metaverse can be used in education, to learn academic subjects in real-time sessions in a VR setting and to interact with peers and course instructors. The students and their lecturers will probably use an avatar that will represent their identity in the virtual world. Many researchers noted that avatars facilitate interactive communications and are a good way to personalize the students’ learning experiences.

    Interoperability

    Unlike other VR applications, the Metaverse will enable its users to retain their identities as well as the ownership of their digital assets through different virtual worlds and platforms, including those related to the provision of education. This means that Metaverse users can communicate and interact with other individuals in a seamless manner through different devices or servers, across different platforms. They can use the Metaverse to share data and content in different virtual worlds that will be accessed through Web 3.0.

Challenges

      Infrastructure, resources and capabilities

      The use of the Metaverse technology will necessitate a thorough investment in hardware to operate the university virtual spaces. The Metaverse requires intricate devices, including appropriate high-performance infrastructures, to achieve the accurate retina display and pixel density needed for realistic virtual immersions. These systems rely on fast internet connections with good bandwidths as well as computers with adequate processing capabilities that are equipped with good graphics cards. For the time being, VR, MR and AR hardware may be considered bulky, heavy, expensive and cost-prohibitive in some contexts.

      The degree of freedom in a virtual world

      The Metaverse offers higher degrees of freedom than what is available through the worldwide web and Web 2.0 technologies. Its administrators may not be in a position to anticipate the behaviors of all persons using their technologies. Therefore, Metaverse users can possibly be exposed to positive as well as negative influences, as other individuals can disguise themselves in the vast virtual environments through anonymous avatars.

      Privacy and security of users’ personal data

      The users’ interactions with the Metaverse, as well as their personal or sensitive information, can be tracked by the platform operators hosting this service, as they continuously record, process and store their virtual activities in real-time. Like the preceding worldwide web and Web 2.0 technologies, the Metaverse can possibly raise the users’ concerns about the security of their data and of their intellectual property. They may be wary about data breaches, scams, et cetera. Public blockchains and other platforms can already trace the users’ sensitive data, so they are not anonymous to them. Individuals may decide to use one or more avatars to explore the Metaverse’s worlds. They may risk exposing their personal information, particularly when they are porting from one Metaverse to another and/or when they share transactional details via NFTs. Some Metaverse systems do not require their users to share personal information when they create their avatar. However, they could capture relevant information from sensors that detect their users’ brain activity, monitor their facial features, eye motion and vocal qualities, along with other ambient data pertaining to the users’ homes or offices.

      They may have legitimate reasons to capture such information, in order to protect them against objectionable content and/or unlawful conduct of other users. In many cases, the users’ personal data may be collected for advertising and/or for communication purposes. Currently, different jurisdictions have not regulated their citizens’ behaviors within the Metaverse contexts. Works are still in progress, in this regard.

      Identity theft and hijacking of user accounts

There may be malicious persons or groups who may try to use certain technologies to obtain personal information and digital assets from Metaverse users. Recently, deepfake artificial intelligence software has produced short audio content that mimicked and impersonated a human voice.

Other bots may easily copy human beings’ verbal, vocal and visual data, including their personality traits. They could duplicate the avatars’ identities to commit fraudulent activities, including unauthorized transactions and purchases, or other crimes, with their disguised identities. Roblox users have reported experiencing avatar scams in the past. In many cases, criminals could try to avail themselves of the digital identities of vulnerable users, including children and senior citizens, among others, to access their funds or cryptocurrencies (as these may be linked to Metaverse profiles). As a result, Metaverse users may become victims of identity theft. Evolving security protocols and digital ledger technologies like the blockchain will increase the transparency and cybersecurity of digital assets. However, users still have to remain vigilant about their digital footprint, to continue protecting their personal information.

As the use of the virtual environment is expected to increase in the foreseeable future, particularly with the emergence of the Metaverse, it is imperative that new ways are developed to protect all users, including students. Individuals ought to be informed about the risks to their privacy. Various validation procedures, including authentication methods such as face scans, retina scans and speech recognition, may be integrated into such systems to prevent identity theft and the hijacking of Metaverse accounts.
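
A biometric pipeline is beyond the scope of a short example, but the general pattern of layering an additional verification step onto account access can be illustrated with a time-based one-time password (TOTP). The following minimal Python sketch uses the pyotp library; it illustrates the multi-factor principle only, and is not any Metaverse platform’s actual login flow.

```python
# Hedged illustration: layering a second authentication factor onto
# account access to deter hijacking. A time-based one-time password
# (TOTP) stands in here for richer checks such as face or retina scans.
import pyotp

secret = pyotp.random_base32()      # provisioned once per account
totp = pyotp.TOTP(secret)

def login(password_ok: bool, submitted_code: str) -> bool:
    # The account unlocks only if both factors check out.
    return password_ok and totp.verify(submitted_code)

code = totp.now()                   # what the user's device would display
print(login(True, code))            # True: both factors are valid
print(login(True, "000000"))        # False: a stolen password alone fails
```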

      Borderless environment raises ethical and regulatory concerns

For the time being, a number of policy makers as well as academics are raising questions about the content that can be presented in the Metaverse’s virtual worlds, as well as about the conduct and behaviors of Metaverse users. Arguably, it may prove difficult for the regulators of different jurisdictions to enforce their legislation in the Metaverse’s borderless environment. For example, European citizens are well acquainted with the European Union’s (EU) General Data Protection Regulation. Other countries have their own legal frameworks and/or principles that are intended to safeguard the rights of data subjects as well as those of content creators. For example, the United States government has been slower than the EU to introduce its privacy-by-design policies. Recently, the South Korean Government announced a set of laudable, non-binding ethical guidelines for the provision and consumption of metaverse services. However, there is no set of formal rules that applies to all Metaverse users.

      Users’ addictions and mental health issues

Although many AR and VR technologies have already been tried and tested in the past few years, the Metaverse is still in its infancy. For the time being, it is difficult to determine the effects of the Metaverse on the users’ health and well-being. Many commentators anticipate that excessive exposure to the Metaverse’s immersive technologies may result in negative side-effects for the psychological and physical health of human beings. They suggest that individuals may easily become addicted to a virtual environment where the limits of reality are their own imagination. They are lured to it “for all the things they can do” and will be willing to stay “for all the things they can be” (i.e. excerpts from the Ready Player One movie).

Past research confirms that spending excessive time on the internet, on social media or playing video games can increase the chances of mental health problems like attention deficit disorders, eating disorders, as well as anxiety, stress or depression, among others. Individuals play video games to achieve their goals and to advance to the next level. Their gameplay releases dopamine. Similarly, their dopamine levels can increase when they are followed through social media, or when they receive likes, comments or other forms of online engagement.

Individuals can easily develop an addiction to this immersive technology, as they seek stimulating and temporarily pleasurable experiences in its virtual spaces. As a result, they may become dependent on it. Their interpersonal communications via social media networks are not as authentic or satisfying as real-life relationships, as they are not interacting in person with other human beings. In the case of the Metaverse, their engagement experiences may appear to be real. Yet again, in the Metaverse, users are located in a virtual environment; they are not physically present near other individuals. Human beings need to build honest and trustworthy relationships with one another. The users of the Metaverse can create avatars that could easily conceal their identity.

      Read further! The full paper can be accessed and downloaded from:

      The University of Malta: https://www.um.edu.mt/library/oar/handle/123456789/110459

      Researchgate: https://www.researchgate.net/publication/371275481_Metaverse_applications_in_education_A_systematic_review_and_a_cost-benefit_analysis

      Academia.edu: https://www.academia.edu/102800696/Metaverse_applications_in_education_A_systematic_review_and_a_cost_benefit_analysis

      SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4490787


      Filed under digital games, Digital Learning Resources, digital media, Education, education technology, Metaverse

      Users’ perceptions and expectations of ChatGPT

      Featuring an excerpt and a few snippets from one of my latest articles related to Generative Artificial Intelligence (AI).

Suggested Citation: Camilleri, M.A. (2024). Factors affecting performance expectancy and intentions to use ChatGPT: Using SmartPLS to advance an information technology acceptance framework. Technological Forecasting and Social Change. https://doi.org/10.1016/j.techfore.2024.123247


      The introduction

Artificial intelligence (AI) chatbots utilize algorithms that are trained to process and analyze vast amounts of data by using techniques ranging from rule-based approaches to statistical models and deep learning, in order to generate natural text and respond to online users, based on the input they receive (OECD, 2023). For instance, OpenAI’s Chat Generative Pre-Trained Transformer (ChatGPT) is one of the most popular AI-powered chatbots. The company claims that ChatGPT “is designed to assist with a wide range of tasks, from answering questions to generating text in various styles and formats” (OpenAI, 2023a). OpenAI clarifies that its GPT-3.5 is a free-to-use language model that was optimized for dialogue by using Reinforcement Learning with Human Feedback (RLHF) – a method that relies on human demonstrations and preference comparisons to guide the model toward desired behaviors. Its models are trained on vast amounts of data, including conversations that were created by humans (such content is accessed through the Internet). The responses it provides appear to be as human-like as possible (Jiang et al., 2023).
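
As an illustrative aside, the preference-comparison step of RLHF is typically implemented by training a scalar reward model so that the responses human raters preferred score higher than the ones they rejected. The minimal PyTorch sketch below conveys the idea; all names and dimensions are hypothetical, and it is in no way OpenAI’s implementation.

```python
# Minimal sketch of the reward-modelling step behind RLHF: a scalar
# reward model is trained so that human-preferred responses score
# higher than rejected ones. Hypothetical names; illustrative only.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, embedding_dim: int = 768):
        super().__init__()
        # Stand-in for a transformer encoder: maps a response
        # embedding to a single scalar reward.
        self.head = nn.Linear(embedding_dim, 1)

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.head(response_embedding).squeeze(-1)

def preference_loss(reward_chosen, reward_rejected):
    # Bradley-Terry style pairwise loss: pushes the preferred
    # response's reward above the rejected one's.
    return -nn.functional.logsigmoid(reward_chosen - reward_rejected).mean()

model = RewardModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
chosen, rejected = torch.randn(8, 768), torch.randn(8, 768)  # toy embeddings
loss = preference_loss(model(chosen), model(rejected))
loss.backward()
optimizer.step()
```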

GPT-3.5’s knowledge base was last updated in September 2021. The GPT-4 version, which comes with a paid plan, is more creative than GPT-3.5; it can accept images as inputs and can generate captions, classifications and analyses (Qureshi et al., 2023). Its developers assert that GPT-4 can create better content, including extended conversations, as well as document search and analysis (Takefuji, 2023). Recently, its proponents noted that ChatGPT can be utilized for academic purposes, including research. It can extract and paraphrase information, translate text, grade tests, and/or be used for conversational purposes (MIT, 2023). Various stakeholders in education noted that this LLM tool may be able to provide quick and easy answers to questions.

However, earlier this year, several higher education institutions issued statements that warned students against using ChatGPT for academic purposes. In a similar vein, a number of schools banned ChatGPT from their networks and devices (Rudolph et al., 2023). Evidently, policy makers were concerned that this text-generating AI system could disseminate misinformation and even promote plagiarism. Some commentators argue that it can affect the students’ critical-thinking and problem-solving abilities. Such skill sets are essential for their academic and lifelong success (Liebrenz et al., 2023; Thorp, 2023). Nevertheless, a number of jurisdictions are reversing their decisions that impede students from using this technology (Reuters, 2023). In many cases, educational leaders are realizing that their students could benefit from this innovation, if they are properly taught how to adopt it as a tool for their learning journey.

Academic colleagues are increasingly raising awareness of different uses of AI dialogue systems like service chatbots and/or virtual assistants (Baabdullah et al., 2022; Balakrishnan et al., 2022; Brachten et al., 2021; Hari et al., 2022; Li et al., 2021; Lou et al., 2022; Malodia et al., 2021; Sharma et al., 2022). Some of them are evaluating their strengths and weaknesses, including those of OpenAI’s ChatGPT (Farrokhnia et al., 2023; Kasneci et al., 2023). Very often, they argue that there may be instances where the chatbots’ responses are not completely accurate and/or may not fully address the questions that are asked of them (Gill et al., 2024). This may be due to different reasons. For example, GPT-3.5’s responses are based on data that were uploaded before a knowledge cut-off date (i.e. September 2021). This can have a negative effect on the quality of its replies, as the algorithm is not up to date with the latest developments. Although there is a knowledge gap and a few grey areas on the use of AI chatbots that rely on natural language processing to create humanlike conversational dialogue, there are still only a few contributions that have critically evaluated their pros and cons, and even fewer studies have investigated the factors affecting individuals’ engagement levels with ChatGPT.

This empirical research builds on theoretical underpinnings related to information technology adoption in order to examine the online users’ perceptions and intentions to use AI chatbots. Specifically, it integrates a perceived interactivity construct (Baabdullah et al., 2022; McMillan and Hwang, 2002) with information quality and source trustworthiness measures (Leong et al., 2021; Sussman and Siegal, 2003) from the Information Adoption Model (IAM), and with the performance expectancy, effort expectancy and social influences constructs (Venkatesh et al., 2003; Venkatesh et al., 2012) from the Unified Theory of Acceptance and Use of Technology (UTAUT1/UTAUT2), to determine which factors are influencing the individuals’ intentions to use AI text generation systems like ChatGPT. This study’s focused research questions are:

      RQ1

      How and to what extent are information quality and source trustworthiness influencing the online users’ performance expectancy from ChatGPT?

      RQ2

How and to what extent are their perceptions about ChatGPT’s interactivity, performance expectancy, effort expectancy, as well as their social influences, affecting their intentions to continue using these large language models?

      RQ3

How and to what degree does the performance expectancy construct mediate the relationship between effort expectancy and the intentions to use these interactive AI technologies?

This study hypothesizes that information quality and source trustworthiness are significant antecedents of performance expectancy. It presumes that this latter construct, together with effort expectancy, social influences and perceived interactivity, affects the online users’ acceptance and usage of generative pre-trained AI chatbots like GPT-3.5 or GPT-4.

Many academic researchers have sought to explore the individuals’ behavioral intentions to use a wide array of technologies (Alalwan, 2020; Alam et al., 2020; Al-Saedi et al., 2020; Raza et al., 2021; Tam et al., 2020). Very often, they utilized measures from the Theory of Reasoned Action (TRA) (Fishbein and Ajzen, 1975), the Theory of Planned Behavior (TPB) (Ajzen, 1991), the Technology Acceptance Model (TAM) (Davis, 1989; Davis et al., 1989), TAM2 (Venkatesh and Davis, 2000), TAM3 (Venkatesh and Bala, 2008), UTAUT (Venkatesh et al., 2003) or UTAUT2 (Venkatesh et al., 2012). Few scholars have integrated constructs like UTAUT/UTAUT2’s performance expectancy, effort expectancy, social influences and intentions to use technologies with information quality and source trust measures from the Elaboration Likelihood Model (ELM) and IAM. Currently, there is still limited research that incorporates a perceived interactivity factor within information technology frameworks. Therefore, this contribution addresses this deficit in academic knowledge.

Notwithstanding, for the time being, there is still scant research focused on AI-powered LLMs, like ChatGPT, that are capable of generating human-like text based on previous contexts and drawn from past conversations. This timely study raises awareness of the individuals’ perceptions about the utilitarian value of such interactive technologies in an academic (higher education) context. It clearly identifies the factors that are influencing the individuals’ intentions to continue using them in the future.


      From the literature review

      Table 1 features a summary of the most popular theoretical frameworks that sought to identify the antecedents and the extent to which they may affect the individuals’ intentions to use information technologies.

      Table 1. A non-exhaustive list of theoretical frameworks focused on (information) technology adoption behaviors

Figure 1 features the conceptual framework that investigates information technology adoption factors. It represents a visual illustration of the hypotheses of this study. In sum, this empirical research presumes that information quality and source trustworthiness (from the Information Adoption Model) precede performance expectancy. The latter construct, together with effort expectancy and social influences (from the Unified Theory of Acceptance and Use of Technology), as well as the perceived interactivity construct, are significant antecedents of the individuals’ intentions to use ChatGPT.
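
By way of illustration only, the hypothesized paths could be expressed as a path analysis over construct scores in lavaan-style syntax, for instance via the semopy Python package. The study itself was estimated in SmartPLS, and the data file and column names below are placeholders.

```python
# Illustrative path-analysis sketch of the hypothesized model using
# the semopy package. The paper used SmartPLS; the data file and
# column names here are placeholders.
import pandas as pd
from semopy import Model

# First line: IAM antecedents (plus effort expectancy) of performance
# expectancy. Second line: UTAUT constructs and perceived interactivity
# as predictors of behavioral intentions.
structural_model = """
PerformanceExpectancy ~ InformationQuality + SourceTrustworthiness + EffortExpectancy
Intentions ~ PerformanceExpectancy + EffortExpectancy + SocialInfluences + PerceivedInteractivity
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical construct scores
model = Model(structural_model)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values
```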


      The survey instrument

The respondents were instructed to answer all the survey questions that were presented to them about information quality, source trustworthiness, performance expectancy, effort expectancy, social influences, perceived interactivity, and their behavioral intentions to continue using this technology (otherwise, they could not submit the questionnaire). Table 2 features the list of measures as well as their corresponding items that were utilized in this study. It also provides a definition of the constructs used in the proposed information technology acceptance framework.

      Table 2. The list of measures and the corresponding items used in this research.


      Theoretical implications

This research sought to explore the factors that are affecting the individuals’ intentions to use ChatGPT. It examined the online users’ effort and performance expectancy, social influences, as well as their perceptions about the information quality, source trustworthiness and interactivity of generative text AI chatbots. The empirical investigation hypothesized that performance expectancy, effort expectancy and social influences from Venkatesh et al.’s (2003) UTAUT, together with a perceived interactivity construct (McMillan and Hwang, 2002), were significant antecedents of their intentions to revisit ChatGPT’s website and/or to use its app. Moreover, it presumed that information quality and source trustworthiness measures from Sussman and Siegal’s (2003) IAM were precursors of performance expectancy.

The results from this study indicate that the source trustworthiness–performance expectancy link is the most significant path in this research model. They confirm that online users believed that there is a connection between the source’s trustworthiness, in terms of its dependability, and the degree to which they believe that using such a generative AI system will help them improve their job performance. Similar effects were also evidenced in previous IAM theoretical frameworks (Kang and Namkung, 2019; Onofrei et al., 2022), as well as in a number of studies related to TAM (Assaker, 2020; Chen and Aklikokou, 2020; Shahzad et al., 2018) and/or to UTAUT/UTAUT2 (Lallmahomed et al., 2017).

In addition, this research reports that the users’ perceptions about information quality significantly affect their performance expectancy from ChatGPT. Yet, in this case, the link was weaker than the former, thus implying that the respondents’ perceptions about the usefulness of this text-generative technology were clearly influenced by the peripheral cues of communication (Cacioppo and Petty, 1981; Shi et al., 2018; Sussman and Siegal, 2003; Tien et al., 2019).

      Very often, academic colleagues noted that individuals would probably rely on the information that is presented to them, if they perceive that the sources and/or their content are trustworthy (Bingham et al., 2019; John and De’Villiers, 2020; Winter, 2020). Frequently, they indicated that source trustworthiness would likely affect their beliefs about the usefulness of information technologies, as they enable them to enhance their performance. Conversely, some commentators argued that there may be users that could be skeptical and wary about using new technologies, especially if they are unfamiliar with them (Shankar et al., 2021). They noted that such individuals may be concerned about the reliability and trustworthiness of the latest technologies.

The findings suggest that the individuals’ perceptions about the interactivity of ChatGPT are a precursor of their intentions to use it. This link is also highly significant. Therefore, the online users appreciated this information technology’s responsiveness to their prompts (in terms of its computer–human communications). Evidently, ChatGPT’s interactivity attributes are having an impact on the individuals’ readiness to engage with it and to seek answers to their questions. Similar results were reported in other studies that analyzed how the interactivity and anthropomorphic features of dialogue systems like live support chatbots or virtual assistants can influence the online users’ willingness to continue utilizing them in the future (Baabdullah et al., 2022; Balakrishnan et al., 2022; Brachten et al., 2021; Liew et al., 2017).

There are a number of academic contributions that sought to explore how, why, where and when individuals are lured by interactive communication technologies (e.g. Hari et al., 2022; Li et al., 2021; Lou et al., 2022). Generally, these researchers posited that users become habituated to information systems that are programmed to engage with them in a dynamic and responsive manner. Very often, they indicated that many individuals are favorably disposed to use dialogue systems that are capable of providing them with instant feedback and personalized content. Several colleagues suggest that positive user experiences, as well as high satisfaction levels and enjoyment, could enhance the users’ connection with information technologies and will probably motivate them to continue using them in the future (Ashfaq et al., 2020; Camilleri and Falzon, 2021; Huang and Chueh, 2021; Wolfinbarger and Gilly, 2003).

Another important finding from this research is that the individuals’ social influences (from family, friends or colleagues) are affecting their interactions with ChatGPT. Again, this causal path is very significant. Similar results were also reported in UTAUT/UTAUT2 studies that focused on the link between social influences and intentional behaviors to use technologies (Gursoy et al., 2019; Patil et al., 2020). In addition, TPB/TRA researchers found that subjective norms also predict behavioral intentions (Driediger and Bhatiasevi, 2019; Sohn and Kwon, 2020). This is in stark contrast with other studies that reported that there was no significant relationship between social influences/subjective norms and behavioral intentions (Ho et al., 2020; Kamble et al., 2019).

Interestingly, the results report a highly significant effect of effort expectancy (i.e. the ease of use of the generative AI technology) on performance expectancy (i.e. its perceived usefulness). Many scholars posit that perceived ease of use is a significant driver of the perceived usefulness of technology (Bressolles et al., 2014; Davis, 1989; Davis et al., 1989; Kamble et al., 2019; Yoo and Donthu, 2001). Furthermore, there are significant causal paths between performance expectancy and intentions to use ChatGPT, and even between effort expectancy and intentions to use ChatGPT, albeit to a lesser extent. Moreover, this research indicates that performance expectancy partially mediates the effort expectancy–intentions to use ChatGPT relationship. This link is highly significant.
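
For readers curious how a partial mediation claim of this kind is commonly probed, the sketch below bootstraps the indirect effect (effort expectancy → performance expectancy → intentions) on simulated data. It is a generic illustration, not the study’s actual analysis.

```python
# Hedged sketch: bootstrapping the indirect effect in a simple
# mediation model (EE -> PE -> Intentions). Simulated data only;
# not the study's actual SmartPLS analysis.
import numpy as np

rng = np.random.default_rng(0)
n = 400
ee = rng.normal(size=n)                             # effort expectancy
pe = 0.5 * ee + rng.normal(size=n)                  # mediator
intent = 0.4 * pe + 0.2 * ee + rng.normal(size=n)   # intentions to use

def indirect_effect(ee, pe, intent):
    a = np.polyfit(ee, pe, 1)[0]                    # path a: EE -> PE
    X = np.column_stack([np.ones(len(ee)), pe, ee])
    b = np.linalg.lstsq(X, intent, rcond=None)[0][1]  # path b: PE -> Intentions, controlling for EE
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                     # resample with replacement
    boot.append(indirect_effect(ee[idx], pe[idx], intent[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% CI for the indirect effect: [{lo:.3f}, {hi:.3f}]")
# A confidence interval excluding zero supports mediation.
```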

In sum, this contribution validates key information technology measures, specifically performance expectancy, effort expectancy, social influences and behavioral intentions from UTAUT/UTAUT2, as well as information quality and source trustworthiness from ELM/IAM, and integrates them with a perceived interactivity factor. It builds on previous theoretical underpinnings, yet it differentiates itself from previous studies. To date, there are no other empirical investigations that have combined the same constructs that are presented in this article. This research thus puts forward a robust Information Technology Acceptance Framework. The results confirm the reliability and validity of the measures. They clearly outline the relative strength and significance of the causal paths that predict the individuals’ intentions to use ChatGPT.


      Managerial implications

This empirical study provides a snapshot of the online users’ perceptions about ChatGPT’s responses to verbal queries, and sheds light on their dispositions to avail themselves of its natural language processing. It explores their performance expectations about the usefulness of these information technologies and their effort expectations related to their ease of use, and investigates whether they are affected by colleagues or by other social influences to use such dialogue systems. Moreover, it examines their insights about the content quality and source trustworthiness, as well as the interactivity features of these text-generative AI models.

      Generally, the results suggest that the research participants felt that these algorithms are easy to use. The findings indicate that they consider them to be useful too, specifically when the information they generate is trustworthy and dependable. The respondents suggest that they are concerned about the quality and accuracy of the content that is featured in the AI chatbots’ answers. This contingent issue can have a negative effect on the use of the information that is created by online dialogue systems.

OpenAI’s ChatGPT is a case in point. Its app is freely available in many countries, via desktop and mobile technologies including iOS and Android. The company admits that its GPT-3.5 outputs may be inaccurate, untruthful and misleading at times. It clarifies that its algorithm is not connected to the internet, and that it can occasionally produce incorrect answers (OpenAI, 2023a). It posits that GPT-3.5 has limited knowledge of the world and events after 2021, and may also occasionally produce harmful instructions or biased content. OpenAI recommends checking whether its chatbot’s responses are accurate or not, and letting them know when and if it answers in an incorrect manner, by using the “Thumbs Down” button. Its Help Center even declares that ChatGPT can occasionally make up facts or “hallucinate” outputs (OpenAI, 2023a,b).

OpenAI reports that its ChatGPT Plus subscribers can access safer and more useful responses. In this case, users can avail themselves of a number of beta plugins and resources that offer a wide range of capabilities, including text-to-speech applications as well as web browsing features through Bing. Yet again, OpenAI (2023b) indicates that its GPT-4 still has many known limitations that the company is working to address, such as “social biases and adversarial prompts” (at the time of writing this article). Evidently, work is still in progress at OpenAI. The company needs to resolve these serious issues, considering that its Content Policy and Terms clearly stipulate that OpenAI’s consumers are the owners of the output that is created by ChatGPT. Hence, ChatGPT’s users have the right to reprint, sell and merchandise the content that is generated for them through OpenAI’s platforms, regardless of whether the output (its response) was provided via a free or a paid plan.

Various commentators are increasingly raising awareness about the corporate digital responsibilities of those involved in the research, development and maintenance of such dialogue systems. A number of stakeholders, particularly the regulatory ones, are concerned about possible risks and perils arising from AI algorithms, including interactive chatbots. In many cases, they are warning that disruptive chatbots could disseminate misinformation, foster prejudice, bias and discrimination, raise privacy concerns, and could lead to the loss of jobs. Arguably, one has to bear in mind that, in many cases, governments are outpaced by the proliferation of technological innovations (as their development happens before the enactment of legislation). As a result, they tend to be reactive in the implementation of substantive regulatory interventions. This research reported that the development of ChatGPT resulted in mixed reactions among different stakeholders in society, especially during the first months after its official launch. At the moment, there are just a few jurisdictions that have formalized policies and governance frameworks that are meant to protect and safeguard individuals and entities from the possible risks and dangers of AI technologies (Camilleri, 2023). Of course, voluntary principles and guidelines are a step in the right direction. However, policy makers are expected by various stakeholders to step up their commitment by introducing quasi-regulations and legislation.

Currently, a number of technology conglomerates, including Microsoft-backed OpenAI, Apple and IBM, among others, have anticipated the governments’ regulations by joining forces in a non-profit organization entitled the “Partnership on AI”, which aims to advance safe, responsible AI that is rooted in open innovation. In addition, IBM has also teamed up with Meta and other companies, startups, universities, research and government organizations, as well as non-profit foundations, to form an “AI Alliance” that is intended to foster innovations across all aspects of AI technology, applications and governance.



      Filed under artificial intelligence, chatbots, ChatGPT, digital media, Generative AI, Marketing

      Responsible artificial intelligence governance and corporate digital responsibility

This post discusses the salient aspects of my latest article, entitled “Artificial intelligence governance: Ethical considerations and implications for social responsibility”, published through Wiley’s Expert Systems.



      Filed under Marketing

      An artificial intelligence governance framework

      This is an excerpt from my latest contribution on responsible artificial intelligence (AI).

Suggested citation: Camilleri, M. A. (2023). Artificial intelligence governance: Ethical considerations and implications for social responsibility. Expert Systems, e13406. https://doi.org/10.1111/exsy.13406

The term “artificial intelligence governance” or “AI governance” integrates the notions of “AI” and “corporate governance”. AI governance is based on formal rules (including legislative acts and binding regulations) as well as on voluntary principles that are intended to guide practitioners in their research, development and maintenance of AI systems (Butcher & Beridze, 2019; Gonzalez et al., 2020). Essentially, it represents a regulatory framework that can support AI practitioners in their strategy formulation and in day-to-day operations (Erdélyi & Goldsmith, 2022; Mullins et al., 2021; Schneider et al., 2022). The rationale behind responsible AI governance is to ensure that automated systems, including ML/DL technologies, are supporting individuals and organizations in achieving their long-term objectives, whilst safeguarding the interests of all stakeholders (Corea et al., 2023; Hickok et al., 2022).

AI governance requires that organizational leaders comply with relevant legislation, hard laws and regulations (Mäntymäki et al., 2022). Moreover, they are expected to follow ethical norms, values and standards (Koniakou, 2023). Practitioners ought to be trustworthy, diligent and accountable in how they handle their intellectual capital and other resources, including their information technologies, finances as well as members of staff, in order to overcome challenges and to minimize uncertainties, risks and any negative repercussions (e.g. decreased human oversight in decision-making, among others) (Agbese et al., 2023; Smuha, 2019).

Procedural governance mechanisms ought to be in place to ensure that AI technologies and ML/DL models are operating in a responsible manner. Figure 1 features some of the key elements that are required for the responsible governance of artificial intelligence. The following principles aim to provide guidelines for the modus operandi of AI practitioners (including ML/DL developers).

      Figure 1. A Responsible Artificial Intelligence Governance Framework

      Accountability and transparency

      “Accountability” refers to the stakeholders’ expectations about the proper functioning of AI systems, in all stages, including in the design, creation, testing or deployment, in accordance with relevant regulatory frameworks. It is imperative that AI developers are held accountable for the smooth operation of AI systems throughout their lifecycle (Raji et al., 2020). Stakeholders expect them to be accountable by keeping a track record of their AI development processes (Mäntymäki et al., 2022).

The transparency notion refers to the extent to which end-users are in a position to understand how AI systems work (Andrada et al., 2020; Hollanek, 2020). AI transparency is associated with the degree of comprehension of algorithmic models in terms of “simulatability” (an understanding of the AI’s functioning), “decomposability” (related to how individual components work), and algorithmic transparency (which is associated with the algorithms’ visibility).

In reality, it is difficult to understand how AI systems, including deep learning models and their neural networks, are learning (as they acquire, process and store data) during training phases. They are often considered black-box models. It may prove hard to algorithmically translate derived concepts into human-understandable terms, even though developers may use certain jargon to explain their models’ attributes and features. Many legislators are striving to pressure AI actors to describe the algorithms they use in automated decision-making; yet the publication of algorithms is of little use if outsiders cannot access the data of the AI model.

      Explainability and interpretability

Explainability is the concept that sheds light on how AI models work, in a way that is comprehensible to a human being. Arguably, the explainability of AI systems could improve their transparency, trustworthiness and accountability. At the same time, it can reduce bias and unfairness. The explainability of artificial intelligence systems could clarify how they reached their decisions (Arya et al., 2019; Keller & Drake, 2021). For instance, AI could explain how and why autonomous cars decide to stop or to slow down when there are pedestrians or other vehicles in front of them.

Explainable AI systems might improve consumer trust and may enable engineers to develop other AI models, as they are in a position to track the provenance of every process, to ensure reproducibility, and to enable checks and balances (Schneider et al., 2022). Similarly, interpretability refers to the level of accuracy of machine learning programs in terms of linking causes to effects (John-Mathews, 2022).
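
For instance, post-hoc attribution libraries such as SHAP can approximate why a model produced a given prediction by assigning a contribution to each input feature. A minimal sketch on toy data, not tied to any particular system discussed here, might look as follows.

```python
# Minimal post-hoc explainability sketch using SHAP on a toy tree
# model: per-feature attributions approximate "why" the model made
# each prediction. Illustrative data only.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))     # e.g. speed, distance, visibility
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])   # per-feature contributions
print(shap_values)  # each row, plus the base value, sums to the prediction
```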

      Fairness and inclusiveness

Responsible AI’s fairness dimension refers to the practitioners’ attempts to correct algorithmic biases that may possibly (voluntarily or involuntarily) be included in their automation processes (Bellamy et al., 2019; Mäntymäki et al., 2022). AI systems can be affected by their developers’ biases, which could include preferences or antipathies toward specific demographic variables like genders, age groups and ethnicities, among others (Madaio et al., 2020). Currently, there is no universal definition of AI fairness.

However, many multinational corporations have recently developed instruments that are intended to detect bias and to reduce it as much as possible (John-Mathews et al., 2022). In many cases, AI systems learn from the data that are fed to them. If the data are skewed and/or have implicit biases embedded in them, they may result in inappropriate outputs.

Fair AI systems rely on unbiased data (Wu et al., 2020). For this reason, many companies, including Facebook, Google, IBM and Microsoft, among others, are striving to involve members of staff hailing from diverse backgrounds. These technology conglomerates are trying to become as inclusive and as culturally aware as possible in order to prevent bias from affecting their AI processes. Previous research reported that AI bias may result in inequality, discrimination and the loss of jobs (Butcher & Beridze, 2019).
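
Toolkits such as IBM’s AI Fairness 360 (Bellamy et al., 2019) operationalize many bias metrics; the arithmetic behind one of the simplest, the demographic parity difference, is shown in this hedged sketch on simulated data.

```python
# Sketch of one common bias metric: the demographic parity difference,
# i.e. the gap in positive-outcome rates between two demographic groups.
# Simulated data; toolkits like AIF360 implement this and richer metrics.
import numpy as np

rng = np.random.default_rng(2)
group = rng.integers(0, 2, size=1000)            # 0/1 protected attribute
pred = rng.random(1000) < (0.4 + 0.2 * group)    # deliberately biased outcomes

rate_g0 = pred[group == 0].mean()
rate_g1 = pred[group == 1].mean()
print(f"selection rates: {rate_g0:.2f} vs {rate_g1:.2f}")
print(f"demographic parity difference: {abs(rate_g1 - rate_g0):.2f}")
# Values far from zero flag a disparity worth investigating.
```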

      Privacy and safety for consumers

Consumers are increasingly concerned about the privacy of their data. They have a right to control who has access to their personal information. Data that are collected or used by third parties without the authorization or voluntary consent of individuals result in violations of their privacy (Zhu et al., 2020; Wu et al., 2022).

AI-enabled products, including dialogue systems like chatbots and virtual assistants, digital assistants (e.g. Siri, Alexa or Cortana), and/or wearable technologies such as smart watches and sensorial smart socks, among others, are increasingly capturing and storing large quantities of consumer information. The benefits delivered by these interactive technologies may be offset by a number of challenges. The technology businesses that developed these products are responsible for protecting their consumers’ personal data (Rodríguez-Barroso et al., 2020). Their devices are capable of holding a wide variety of information on their users. They are continuously gathering textual, visual, audio, verbal and other sensory data from consumers. In many cases, customers are not even aware that they are sharing personal information with them.

      For example, facial recognition technologies are increasingly being used in different contexts. They may be used by individuals to access websites and social media, in a secure manner and to even authorize their payments through banking and financial services applications. Employers may rely on such systems to track and monitor their employees’ attendance. Marketers can utilize such technologies to target digital advertisements to specific customers. Police and security departments may use them for their surveillance systems and to investigate criminal cases. The adoption of these technologies has often raised concerns about privacy and security issues. According to several data privacy laws that have been enacted in different jurisdictions, organizations are bound to inform users that they are gathering and storing their biometric data. The businesses that employ such technologies are not authorized to use their consumers’ data without their consent.

Companies are expected to communicate their data privacy policies to their target audiences (Wong, 2020). They have to reassure consumers that the data they collect with consent are protected, and they are bound to inform them that they may use their information to improve the customized services offered to them. Technology giants can reward their consumers for sharing sensitive information. They could offer them improved personalized services, among other incentives, in return for their data. In addition, consumers may be allowed to access their own information and could be provided with more control (or other reasonable options) over how to manage their personal details.

      The security and robustness of AI systems

      AI algorithms are vulnerable to cyberattacks by malicious actors. Therefore, it is in the interest of AI developers to secure their automated systems and to ensure that they are robust enough against any risks and attempts to hack them (Gehr et al., 2018; Li et al., 2020).

Access to AI models ought to be continuously monitored during their development and deployment (Bertino et al., 2021). There may be instances when AI models encounter incidental adversities, leading to the corruption of data. Alternatively, they might encounter intentional adversities when they experience sabotage from hackers. In both cases, the AI model will be compromised, which can result in system malfunctions (Papagiannidis et al., 2023).

AI developers have to prevent such contingent issues from happening. Their responsibility is to improve the robustness of their automated systems and to make them as secure as possible, to reduce the chances of threats, including inadvertent irregularities and information leakages, as well as privacy violations like data breaches, contamination and poisoning by malicious actors (Agbese et al., 2023; Hamon et al., 2020).

      AI developers should have preventive policies and measures related to the monitoring and control of their data. They ought to invest in security technologies including authentication and/or access systems with encryption software as well as firewalls for their protection against cyberattacks. Routine testing can increase data protection, improve security levels and minimize the risks of incidents.
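
On the adversarial side specifically, routine testing often includes perturbation checks. The sketch below applies a fast-gradient-sign-method (FGSM) style perturbation to a toy classifier to see whether its decision flips; it is a generic illustration, not a production security test.

```python
# Minimal robustness check in the spirit of the fast gradient sign
# method (FGSM): nudge an input along the loss gradient and see
# whether the toy model's decision flips. Illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 4, requires_grad=True)   # toy input
label = torch.tensor([1])
loss = loss_fn(model(x), label)
loss.backward()

epsilon = 0.5                               # perturbation budget
x_adv = x + epsilon * x.grad.sign()         # FGSM-style perturbation
before = model(x).argmax(dim=1).item()
after = model(x_adv).argmax(dim=1).item()
print(f"prediction before: {before}, after perturbation: {after}")
```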

      Conclusions

This review indicates that academics as well as practitioners are increasingly devoting their attention to AI, as they elaborate on its potential uses, as well as on its opportunities and threats. It reported that AI’s proponents are raising awareness of the benefits of AI systems for individuals as well as for organizations. At the same time, it suggests that a number of scholars and other stakeholders, including policy makers, are raising their concerns about its possible perils (e.g. Berente et al., 2021; Gonzalez et al., 2020; Zhang & Lu, 2021).

Many researchers identified some of the risks of AI (Li et al., 2021; Magas & Kiritsis, 2022). In many cases, they warned that AI could disseminate misinformation; foster prejudice, bias and discrimination; raise privacy concerns; and could lead to the loss of jobs (Butcher & Beridze, 2019). A few commentators argue about the “singularity”, or the moment when machine learning technologies could even surpass human intelligence (Huang & Rust, 2022). They predict that a critical shift could occur if humans are no longer in a position to control AI.

In this light, this article sought to explore the governance of AI. It sheds light on substantive regulations, as well as on reflexive principles and guidelines, that are intended for practitioners who are researching, testing, developing and implementing AI models. It clearly explains how institutions, non-governmental organizations and technology conglomerates are introducing protocols (including self-regulation) to prevent contingencies arising from inappropriate AI governance.

Debatably, the voluntary or involuntary mishandling of automated systems can expose practitioners to operational disruptions and to significant risks, including to their corporate image and reputation (Watts & Adriano, 2021). The nature of AI requires practitioners to develop guardrails to ensure that their algorithms work as they should (Bauer, 2022). It is imperative that businesses comply with relevant legislation and follow ethical practices (Buhmann & Fieseler, 2023). Ultimately, it is in their interest to operate their companies in a responsible manner and to implement AI governance procedures. This way, they can minimize unnecessary risks and safeguard the well-being of all stakeholders.

This contribution has addressed its underlying research objectives. Firstly, it raised awareness of the AI governance frameworks that were developed by policy makers and other organizations, including by the businesses themselves. Secondly, it scrutinized the extant academic literature focused on AI governance and on the intersection of AI and CSR. Thirdly, it discussed essential elements for the promotion of socially responsible behaviors and ethical dispositions among AI developers. In conclusion, it put forward an AI governance conceptual model for practitioners.

This research made reference to regulatory instruments that are intended to govern AI expert systems. It reported that, at the moment, only a few jurisdictions have formalized their AI policies and governance frameworks. Hence, this article urges laggard governments to plan, organize, design and implement regulatory instruments that ensure that individuals and entities are safe when they utilize AI systems for personal, educational and/or commercial purposes.

      Arguably, one has to bear in mind that, in many cases, policy makers have to face a “pacing problem” as the proliferation of innovation is much quicker than legislation. As a result, governments tend to be reactive in the implementation of regulatory interventions relating to innovations. They may be unwilling to hold back the development of disruptive technologies from their societies. Notwithstanding, they may face criticism by a wide array of stakeholders in this regard, as they may have conflicting objectives and expectations.

Governments typically regulate business and industry to establish technical, safety and quality standards, as well as to monitor compliance. Yet, they may consider introducing different forms of regulation other than the traditional “command and control” mechanisms. They may opt for performance-based and/or market-based incentive approaches, co-regulation and self-regulation schemes, among others (Hepburn, 2009), in order to foster technological innovations.

      This research has shown that a number of technology giants, including IBM and Microsoft, among others, are anticipating the regulatory interventions of different governments where they operate their businesses. It reported that they are communicating about their responsible AI governance initiatives as they share information on their policies and practices that are meant to certify, explain and audit their AI developments. Evidently, these companies, among others, are voluntarily self-regulating themselves as they promote accountability, fairness, privacy and robust AI systems. These two organizations, in particular, are raising awareness about their AI governance frameworks to increase their CSR credentials with stakeholders.

Likewise, AI developers who work for other businesses are expected to forge relationships with external stakeholders, including policy makers as well as individuals and organizations who share similar interests in AI. Innovative clusters and network developments may result in better AI systems and can also decrease the chances of possible risks. Indeed, practitioners can be in a better position if they cooperate with stakeholders for the development of trustworthy AI and if they increase their human capacity to improve the quality of their intellectual properties (Camilleri et al., 2023). This way, they can enhance their competitiveness and growth prospects (Troise & Camilleri, 2021). Arguably, it is in their interest to continuously engage with internal stakeholders (and employees), and to educate them about the AI governance dimensions that are intended to promote accountable, transparent, explainable, interpretable, reproducible, fair, inclusive and secure AI solutions. Hence, they can maximize AI benefits and minimize the associated risks and costs.

      Future research directions

Academic colleagues are invited to raise more awareness of AI governance mechanisms, as well as of verification and monitoring instruments. They can investigate what, how, when and where protocols could be used to protect and safeguard individuals and entities from the possible risks and dangers of AI.

      The “what” question involves the identification of AI research and development processes that require regulatory or quasi regulatory instruments (in the absence of relevant legislation) and/or necessitate revisions in existing statutory frameworks.

The “how” question relates to the substance and form of AI regulations, in terms of their completeness, relevance and accuracy. This argumentation is analogous to the true and fair view concept applied in the accounting standards for financial statements.

      The “when” question is concerned with the timeliness of the regulatory intervention. Policy makers ought to ensure that stringent rules do not hinder or delay the advancement of technological innovations.

      The “where” question is meant to identify the context where mandatory regulations or the introduction of soft laws, including non-legally binding principles and guidelines are/are not required.

Future researchers are expected to investigate these four questions in more depth and breadth. This research indicated that most contributions on AI governance were discursive in nature and/or involved literature reviews. Hence, there is scope for academic colleagues to conduct primary research and to utilize different research designs, methodologies and sampling frames to better understand the implications of planning, organizing, implementing and monitoring AI governance frameworks, in diverse contexts.

      The full article is also available here: https://www.researchgate.net/publication/372412209_Artificial_intelligence_governance_Ethical_considerations_and_implications_for_social_responsibility


      Filed under artificial intelligence, chatbots, Corporate Social Responsibility, internet technologies, internet technologies and society

      Live support by chatbots with artificial intelligence: A future research agenda

      This is an excerpt from one of my latest contributions on the use of responsive chatbots by service businesses. The content was adapted for this blogpost.

      Suggested citation: Camilleri, M.A. & Troise, C. (2022). Live support by chatbots with artificial intelligence: A future research agenda. Service Business, https://doi.org/10.1007/s11628-022-00513-9

      (Credit: Chatbots Magazine)

      The benefits of using chatbots for online customer services

Frequently, consumers engage with chatbot systems without even knowing it, as machines (rather than human agents) are responding to their online queries (Li et al. 2021; Pantano and Pizzi 2020; Seering et al. 2018; Stoeckli et al. 2020). Whilst around 13% of online consumer queries require human intervention (as they may involve complex queries and complaints), the remaining 87% of online consumer queries are handled by chatbots (Ngai et al., 2021).

      Several studies reported that there are many advantages of using conversational chatbots for customer services. Their functional benefits include increased convenience to customers, enhanced operational efficiencies, reduced labor costs, and time-saving opportunities.

      Consumers are increasingly availing themselves of these interactive technologies to retrieve detailed information from their product recommendation systems and/or to request their assistance to help them resolve technical issues. Alternatively, they use them to scrutinize their personal data. Hence, in many cases, customers are willing to share their sensitive information in exchange for a better service.

Although these interactive technologies are less engaging than human agents, they can possibly elicit more disclosures from consumers. They are in a position to process the consumers’ personal data and to compare it with prior knowledge, without any human instruction. Chatbots can learn in a proactive manner from new sources of information to enrich their databases.

      Whilst human customer service agents may usually handle complex queries including complaints, service chatbots can improve the handling of routine consumer queries. They are capable of interacting with online users in two-way communications (to a certain extent). Their interactions may result in significant effects on consumer trust, satisfaction, and repurchase intentions, as well as on positive word-of-mouth publicity.

Many researchers reported that consumers are intrigued to communicate with anthropomorphized technologies, as they invoke social responses and norms of reciprocity. Such conversational agents are programmed with certain cues, features and attributes that are normally associated with humans.

      The findings from this review clearly indicate that individuals feel comfortable using chatbots that simulate human interactions, particularly with those that have enhanced anthropomorphic designs. Many authors noted that the more chatbots respond to users in a natural, humanlike way, the easier it is for the business to convert visitors into customers, particularly if they improve their online experiences. This research indicates that there is scope for businesses to use conversational technologies to personalize interactions with online users, to build better relationships with them, to enhance consumer satisfaction levels, to generate leads as well as sales conversions.

      The costs of using chatbots for online customer services

Despite the latest advances in the delivery of electronic services, there are still individuals who hold negative perceptions and attitudes towards the use of interactive technologies, even though AI technologies have been specifically created to foster co-creation between the service provider and the customer.

There are a number of challenges (like authenticity issues, cognition challenges, affective issues, functionality issues and integration conflicts) that may result in failed service interactions and in dissatisfied customers. There are consumers, particularly older ones, who do not feel comfortable interacting with artificially intelligent technologies like chatbots, or who may not want to comply with their requests, for different reasons. For example, they could be wary about cyber-security issues and/or may simply refuse to engage in conversations with a robot.

      A few commentators contended that consumers should be informed when they are interacting with a machine. In many cases, online users may not be aware that they are engaging with elaborate AI systems that use cues such as names, avatars, and typing indicators that are intended to mimic human traits. Many researchers pointed out that consumers may or may not want to be serviced by chatbots.

A number of researchers argued that some chatbots are still not capable of the communicative behaviors that are intended to enhance relational outcomes. For the time being, there are chatbot technologies that are not programmed to answer all of their customers’ queries (if they do not recognize the keywords that are used by the customers), or that may not be quick enough to deal with multiple questions at the same time. Therefore, the quality of their conversations may be limited. Such automated technologies may not always be in a position to engage in non-linear conversations, especially when they have to go back and forth on a topic with online users.
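
To make this limitation concrete, many early service chatbots amounted to little more than keyword lookups. The deliberately naive Python sketch below (hypothetical keywords and answers) shows why an unrecognized phrasing immediately derails the conversation.

```python
# Deliberately naive sketch of a keyword-matching chatbot, illustrating
# why such systems fail on unrecognized phrasings and cannot sustain
# non-linear, back-and-forth conversations. Hypothetical content.
RESPONSES = {
    "refund": "You can request a refund within 30 days of purchase.",
    "delivery": "Standard delivery takes 3 to 5 working days.",
    "opening hours": "We are open Monday to Friday, 9am to 5pm.",
}

def reply(user_message: str) -> str:
    text = user_message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    # No keyword matched: the bot has no graceful fallback.
    return "Sorry, I did not understand. Could you rephrase?"

print(reply("How long does delivery take?"))  # matches "delivery"
print(reply("I want my money back"))          # fails: no "refund" keyword
```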

      Theoretical and practical implications

This contribution confirms that there has recently been growing interest among academics as well as practitioners in research that is focused on the use of chatbots to improve businesses’ customer-centric services. It clarifies that various academic researchers have often relied on different theories, including the expectancy theory, the expectancy violation theory, the human–computer interaction theory/human–machine communication theory, the social presence theory, and/or the social response theory, among others.

      Currently, there are limited publications that integrated well-established conceptual bases (like those featured in the literature review), or that presented discursive contributions on this topic. Moreover, there are just a few review articles that capture, scrutinize and interpret the findings from previous theoretical underpinnings, about the use of responsive chatbots in service business settings. Therefore, this systematic review paper addresses this knowledge gap in the academic literature.

It clearly differentiates itself from mainstream research as it scrutinizes and synthesizes the findings from recent, high-impact articles on this topic. It clearly identifies the most popular articles from Scopus and Web of Science, and advances definitions of anthropomorphic chatbots, artificial intelligence chatbots (or AI chatbots), conversational chatbot agents (or conversational entities, conversational interfaces, conversational recommender systems or dialogue systems), customer experience with chatbots, chatbot customer service, customer satisfaction with chatbots, customer value (or the customers’ perceived value) of chatbots, and service robots (robot advisors). It discusses the different attributes of conversational chatbots and sheds light on the benefits and costs of using interactive technologies to respond to online users’ queries.

In sum, the findings from this research reveal that there is a business case for online service providers to utilize AI chatbots. These conversational technologies could offer technical support to consumers and prospects, on various aspects, in real time, round the clock. Hence, service businesses could be in a position to reduce their labor costs, as they would require fewer human agents to respond to their customers. Moreover, the use of interactive chatbot technologies could improve the efficiency and responsiveness of service delivery. Businesses could utilize AI dialogue systems to enhance their customer-centric services and to improve online experiences. These service technologies can reduce the workload of human agents, who can then dedicate their energies to resolving serious matters, including the handling of complaints and time-consuming cases.

On the other hand, this paper also discusses potential pitfalls. Currently, there are consumers who, for one reason or another, are not comfortable interacting with automated chatbots. They may be reluctant to engage with advanced anthropomorphic systems that use avatars, even though, at times, these can mimic human communications relatively well. Such individuals may still appreciate a human presence to resolve their service issues. They may perceive that interactive service technologies are emotionless and lack a sense of empathy.

Presently, chatbots can only respond to the questions, keywords and phrases that they were programmed to answer. Although they are useful in solving basic queries, their interactions with consumers are still limited, and their dialogue systems require periodic maintenance. Unlike human agents, they cannot engage in in-depth conversations or deal with multiple queries, particularly if they are expected to go back and forth on a topic.
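To illustrate this limitation, consider a minimal sketch (in Python) of the kind of rule-based keyword matching that many basic chatbots rely on. The keywords and canned answers below are hypothetical examples, not drawn from any specific system reviewed here; the point is simply that any query falling outside the programmed keyword list triggers a fallback:

    # A minimal, hypothetical sketch of rule-based keyword matching.
    # Basic chatbots of this kind can only answer queries that contain
    # one of their programmed keywords; everything else falls through.

    RESPONSES = {
        "refund": "You can request a refund within 30 days of purchase.",
        "delivery": "Standard delivery takes 3 to 5 working days.",
        "opening hours": "We are open Monday to Friday, 9am to 5pm.",
    }

    def reply(query: str) -> str:
        """Return the first canned answer whose keyword appears in the query."""
        text = query.lower()
        for keyword, answer in RESPONSES.items():
            if keyword in text:
                return answer
        # No keyword matched: the bot cannot help and must escalate.
        return "Sorry, I did not understand that. Let me connect you to a human agent."

    print(reply("When will my delivery arrive?"))  # matches "delivery"
    print(reply("My parcel never showed up!"))     # no keyword, so the bot escalates

A human agent would immediately recognize that a missing parcel is a delivery issue; the keyword matcher cannot, which is precisely why such dialogue systems require periodic maintenance of their keyword lists.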

Most probably, these technical issues will be dealt with over time, as more advanced chatbots enter the market in the foreseeable future. It is likely that these AI technologies will possess improved capabilities and will be programmed with up-to-date information, to better serve future customers and to exceed their expectations.

      Limitations and future research avenues

This research suggests that this area of study is gaining traction in academic circles, particularly in the last few years. In fact, it clarifies that there were four hundred and twenty-one (421) publications on chatbots in business-related journals up to December 2021. Four hundred and fifteen (415) of them were published in the last five years.

The systematic analysis that was presented in this research focused on “chatbot(s)” or “chatterbot(s)”. Other academics may refer to them by using different synonyms, like “artificial conversational entity (entities)”, “bot(s)”, “conversational avatar(s)”, “conversational interface agent”, “interactive agent(s)”, “talkbot(s)”, “virtual agent(s)”, and/or “virtual assistant(s)”, among others. Therefore, future researchers may also consider using these keywords when they are exploring the academic and non-academic literature on conversational chatbots that are being used for customer-centric services.

Nevertheless, this bibliographic study has identified some of the most popular research areas relating to the use of responsive chatbots in online customer service settings. The findings confirmed that many authors are focusing on the chatbots’ anthropomorphic designs, AI capabilities and dialogue systems. This research suggests that there are still knowledge gaps in the academic literature. A table in the full article specifies that there are untapped opportunities for further empirical research in this promising field of study.

The full article is forthcoming. A prepublication version will be available through ResearchGate.


      Filed under artificial intelligence, Business, chatbots, customer service, Marketing

      Announcing a Call for Chapters (for Springer)

      Call for Chapters

      Strategic Corporate Communication and Stakeholder Engagement in the Digital Age

       

      Abstract submission deadline: 30th June 2019 (EXTENDED to the 30th September 2019)
      Full chapters due: 31st December 2019

       

      Background

The latest advances in technologies and networks have been central to the expansion of electronic content across different contexts. Contemporary communication approaches are crossing boundaries as new media are offering both challenges and opportunities. The democratisation of the production and dissemination of information via online technologies has inevitably led individuals and organisations to share content (including images, photos, news items, videos and podcasts) via digital and social media. Interactive technologies are allowing individuals and organisations to co-create and manipulate electronic content. At the same time, they enable them to engage in free-flowing conversations with other online users, groups or virtual communities (Camilleri, 2017). Innovative technologies have empowered the organisations’ stakeholders, including employees, investors, customers, local communities, government agencies, non-governmental organisations (NGOs), as well as the news media, among others. Both internal and external stakeholders are in a better position to scrutinise the organisations’ decisions and actions. For this reason, there is scope for practitioners to align their corporate communication goals and activities with societal expectations (Camilleri, 2015; Gardberg & Fombrun, 2006). Therefore, organisations are encouraged to listen to their stakeholders. Several public interest organisations, including listed businesses, banks and insurance companies, are already sharing information about their financial and non-financial performance in an accountable and transparent manner. The rationale behind their corporate disclosures is to develop and maintain strong and favourable reputations among stakeholders (Camilleri, 2018; Cornelissen, 2008). Corporate reputation is “a perceptual representation of a company’s past actions and future prospects that describe the firm’s overall appeal to all of its key constituents when compared to other leading rivals” (Fombrun, 1996).

Business and media practitioners ought to be cognisant of the strategic role of corporate communication in leveraging the organisations’ image and reputation among stakeholders (Van Riel & Fombrun, 2007). They are expected to possess corporate communication skills, as they need to forge relationships with different stakeholder groups (including employees, customers, suppliers, investors, media, regulatory authorities and the community at large). They have to be proficient in specialist areas, including issues management, crisis communication, as well as corporate social responsibility reporting, among other topics. At the same time, they should be aware of the possible uses of different technologies, including artificial intelligence, augmented and virtual reality, big data analytics, blockchain and the internet of things, among others, as these innovative tools are disrupting today’s corporate communication processes.

       

      Objective

This title shall explain how strategic communication and media management can affect various political, economic, societal and technological realities. Theoretical and empirical contributions can shed more light on the existing structures, institutions and cultures that are firmly founded on communication technologies, infrastructures and practices. The rapid proliferation of digital media has led both academics and practitioners to increase their interactive engagement with a multitude of stakeholders. Very often, they are influencing regulators, industries, civil society organisations and activist groups, among other interested parties. Therefore, this book’s valued contributions may include, but are not restricted to, the following topics:

       

      Artificial Intelligence and Corporate Communication

      Augmented and Virtual Reality in Corporate Communication

      Blockchain and Corporate Communication

      Big Data and Analytics in Corporate Communication

      Branding and Corporate Reputation

      Corporate Communication via Social Media

      Corporate Communication Policy

      Corporate Culture

      Corporate Identity

      Corporate Social Responsibility Communications

      Crisis, Risk and Change Management

      Digital Media and Corporate Communication

      Employee Communications

      Fake News and Corporate Communication

      Government Relationships

      Integrated Communication

      Integrated Reporting of Financial and Non-Financial Performance

      Internet Technologies and Corporate Communication

      Internet of Things and Corporate Communication

      Investor Relationships

      Issues Management and Public Relations

      Leadership and Change Communication

      Marketing Communications

      Measuring the Effectiveness of Corporate Communications

      Metrics for Corporate Communication Practice

      Press and Media Relationships

      Stakeholder Management and Communication

      Strategic Planning and Communication Management

       

This publication shall present the academics’ conceptual discussions that cover the contemporary topic of corporate communication in a concise yet accessible way. Covering both theory and practice, this publication shall introduce its readers to the key issues of strategic corporate communication as well as stakeholder management in the digital age. This will allow prospective practitioners to critically analyse future, real-life situations. All chapters will provide a background to specific topics, and the academic contributors will feature their critical perspectives on issues, controversies and problems relating to corporate communication.

This authoritative book will provide relevant knowledge and skills in corporate communication that are unsurpassed in readability, depth and breadth. At the start of each chapter, the authors will prepare a short abstract that summarises the content of their contribution. They are encouraged to include descriptive case studies that illustrate real situations, as well as conceptual, theoretical or empirical contributions that are meant to help aspiring managers and executives in their future employment. In conclusion, each chapter shall also contain a succinct summary that outlines the key implications of the findings for academia and/or practitioners, in a condensed form. This will enable the readers to retain key information.

       

      Target Audience

This textbook introduces aspiring practitioners as well as undergraduate and postgraduate students to the subject of corporate communication, in a structured manner. More importantly, it will also be relevant to those course instructors who are teaching media, marketing communications and business-related subjects in higher education institutions, including universities and colleges. It is hoped that course conveners will use this edited textbook as a basis for class discussions.

       

      Submission Procedure

Senior and junior academic researchers are invited to submit a 300-word abstract on or before the 30th June 2019 (extended to the 30th September 2019). Submissions should be sent to Mark.A.Camilleri@um.edu.mt. Authors will be notified about the editorial decision during July 2019 (or by the 31st October 2019 under the extended deadline). The length of the chapters should be between 6,000 and 8,000 words (including references, figures and tables). These contributions will be accepted on or before the 31st December 2019. The references should be presented in APA style (6th edition). All submitted chapters will be critically reviewed on a double-blind basis; the authors’ and the reviewers’ identities will remain anonymous. All authors will be requested to serve as reviewers for this book. They will receive a notification of acceptance, rejection or suggested modifications on or before the 15th February 2020.

      Note: There are no submission or acceptance fees for the publication of this book. All abstracts / proposals should be submitted via the editor’s email.

       

      Editor

      Mark Anthony Camilleri (Ph.D. Edinburgh)
      Department of Corporate Communication,
      Faculty of Media and Knowledge Sciences,
      University of Malta, MALTA.
      Email: mark.a.camilleri@um.edu.mt

       

      Publisher

      Following the double-blind peer review process, the full chapters will be submitted to Springer Nature for final review. For additional information regarding the publisher, please visit https://www.springer.com/gp. This prospective publication will be released in 2020.

       

      Important Dates

Abstract Submission Deadline:          30th September 2019 (extended from 30th June 2019)
Notification of Acceptance:            31st October 2019 (extended from 31st July 2019)
Full Chapters Due:                     31st December 2019
Notification of Review Results:        15th February 2020
Final Chapter Submission:              31st March 2020
Final Acceptance Notification:         30th April 2020

      References

Camilleri, M.A. (2015). Valuing Stakeholder Engagement and Sustainability Reporting. Corporate Reputation Review, 18(3), 210-222. https://link.springer.com/article/10.1057/crr.2015.9

      Camilleri, M.A. (2017). Corporate Sustainability, Social Responsibility and Environmental Management, Cham, Switzerland: Springer Nature. https://www.springer.com/gp/book/9783319468488

Camilleri, M.A. (2018). Theoretical Insights on Integrated Reporting: The Inclusion of Non-Financial Capitals in Corporate Disclosures. Corporate Communications: An International Journal, 23(4), 567-581. https://www.emeraldinsight.com/doi/full/10.1108/CCIJ-01-2018-0016

      Cornelissen, J.P. (2008). Corporate Communication. The International Encyclopedia of Communication. https://onlinelibrary.wiley.com/doi/abs/10.1002/9781405186407.wbiecc143.pub2

Fombrun, C.J. (1996). Reputation: Realizing Value from the Corporate Image. Cambridge, MA, USA: Harvard Business School Press.

Gardberg, N.A., & Fombrun, C.J. (2006). Corporate Citizenship: Creating Intangible Assets across Institutional Environments. Academy of Management Review, 31(2), 329-346. https://journals.aom.org/doi/abs/10.5465/AMR.2006.20208684

      Van Riel, C.B., & Fombrun, C.J. (2007). Essentials of Corporate Communication: Implementing Practices for Effective Reputation Management. Oxford, UK: Routledge. http://repository.umpwr.ac.id:8080/bitstream/handle/123456789/511/Essentials%20of%20Corporate%20Communication.pdf?sequence=1


      Filed under Analytics, Big Data, blockchain, branding, Business, Corporate Governance, Corporate Social Responsibility, Corporate Sustainability and Responsibility, CSR, digital media, ESG Reporting, Higher Education, Human Resources, Impact Investing, Integrated Reporting, internet technologies, internet technologies and society, Marketing, online, Shared Value, Stakeholder Engagement, Sustainability, Web