Metaverse education: Opportunities and challenges for immersive learning

The following content was adapted from one of my latest contributions on the Metaverse and its immersive technologies.


Suggested citation: Camilleri, M.A. (2023), “Metaverse applications in education: a systematic review and a cost-benefit analysis”, Interactive Technology and Smart Education, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/ITSE-01-2023-0017

Online users are connecting to simulated virtual environments through digital games like Fortnite, Minecraft, Roblox, and World of Warcraft, among others. Very often, gamers utilize virtual reality (VR) and augmented reality (AR) technologies to improve their gaming experiences. In many cases, they engage with other individuals in cyberspace and participate in an extensive virtual economy. New users are expected to create electronic personas, called avatars, that represent their identity in these games. They can move their avatars around virtual spaces and use them to engage with other users when they are online. Therefore, interactive games are enhancing their users’ immersive experiences, particularly those that work with VR headsets.

Academic researchers as well as technology giants like Facebook (Meta), Google and Microsoft, among others, anticipate that the Metaverse will shortly change the way we experience the Internet. While online users interact with other individuals on the internet through websites, including games and social media networks (SNSs), in the Metaverse they engage with digital representations of people (through their avatars), places, and things in a simulated universe. Hence, the Metaverse places its users in the middle of the action. In plain words, it can be described as a combination of multiple interactive technologies, including VR and AR, through which users can experience a digital universe. Various industry practitioners, including Meta (Facebook), argue that this immersive technology will reconfigure online users’ sensory inputs, definitions of space, and points of access to information.

AR and VR devices can be used to improve students’ experiences when they engage with serious games. Many commentators noted that these technologies encourage active learning approaches, as well as social interactions among students and/or between students and their teachers. Serious games can provide “gameful experiences” if they share the immersive features that captivate players, like those of entertainment games. If they do so, it is very likely that students will enjoy their game play (and game-based learning). Similarly, the Metaverse can be used to increase students’ motivations and learning outcomes.

For the time being, there is no universal definition that encapsulates the word “Metaverse”. The term was first used in Neal Stephenson’s 1992 science fiction novel Snow Crash. Basically, it is a blend of two words, “meta” and “universe”, which were combined to create the “Metaverse” notion. While “meta” means beyond, the “Metaverse” is typically used to describe an iteration of the internet that consists of persistent, immersive 3D virtual spaces intended to emulate physical interactions in perceived virtual worlds (like a universe).

Although various academic contributions have explored the utilization of online educational technologies, including AR and VR, in different contexts, only a few researchers have evaluated the latest literature on this contemporary topic to reveal the benefits and costs of using this disruptive innovation in the context of education. Therefore, this contribution closes this gap in the academic literature. The underlying objective of this research is to shed light on the opportunities and challenges of using this immersive technology with students.

Opportunities

    Immersive multi-sensory experiences in 3D environments

The Metaverse could provide a smooth interaction between the real world and virtual spaces. Its users can engage in activities that are very similar to what they do in reality. However, it could also provide opportunities for them to experience things that would be impossible in the real world. Sensory technologies enable users to engage their five senses of sight, touch, hearing, taste and smell, to immerse themselves in a virtual 3D environment. VR tools are interactive, entertaining and provide captivating and enjoyable experiences to their users. In the past years, a number of educators and students have been using 3D learning applications (e.g. Second Life) to visit virtual spaces that resemble video games. Many students are experienced gamers and are lured by their 3D graphics. They learn when they are actively involved. Therefore, learning applications should be as meaningful, engaging, socially interactive and entertaining as possible.

There is scope for educators and content developers to create digital domains like virtual schools, colleges and campuses, where students and teachers can socialize and engage in two-way communications. Students could visit the premises of their educational institutions through online tours, from virtually anywhere. A number of universities are replicating their physical campuses with virtual ones. Well-designed virtual campuses may result in improved student services and shared interactive content that could improve learning outcomes, and could even reach wider audiences. Previous research confirms that it is more interesting and appealing for students to learn academic topics through the virtual world.

    Equitable and accessible space for all users

Like other virtual technologies, the Metaverse can be accessed from remote locations. Educational institutions can use its infrastructure to deliver courses (free of charge or against tuition fees). Metaverse education may enable students from different locations to use its open-source software to pursue courses anywhere, anytime. Hence, its democratized architecture could reduce geographic disparities among students and increase their chances of continuing their education through higher educational institutions in different parts of the world.

In the future, students, including individuals with different abilities, may use the Metaverse’s multisensory environment to immerse themselves in engaging lectures.

    Interactions with virtual representations of people and physical objects

Currently, individual users can utilize AR and VR applications to communicate with others and to exert their influence on objects within the virtual world. They can organize virtual meetings with geographically distant users, attend conferences, et cetera. Various commentators have argued that the Metaverse can be used in education to learn academic subjects in real-time sessions in a VR setting and to interact with peers and course instructors. Students and their lecturers will probably use an avatar that represents their identity in the virtual world. Many researchers noted that avatars facilitate interactive communications and are a good way to personalize the students’ learning experiences.

    Interoperability

Unlike other VR applications, the Metaverse will enable its users to retain their identities as well as the ownership of their digital assets across different virtual worlds and platforms, including those related to the provision of education. This means that Metaverse users can communicate and interact with other individuals in a seamless manner through different devices or servers, across different platforms. They can use the Metaverse to share data and content in different virtual worlds that will be accessed through Web 3.0.

    Challenges

      Infrastructure, resources and capabilities

The use of Metaverse technology will necessitate a substantial investment in hardware to operate university virtual spaces. The Metaverse requires intricate devices, including appropriate high-performance infrastructures, to achieve accurate retina display and pixel density for realistic virtual immersions. These systems rely on fast internet connections with adequate bandwidth as well as computers with sufficient processing capabilities that are equipped with good graphics cards. For the time being, VR, MR and AR hardware may be considered bulky, heavy and cost-prohibitive in some contexts.

      The degree of freedom in a virtual world

The Metaverse offers higher degrees of freedom than what is available through the World Wide Web and Web 2.0 technologies. Its administrators cannot anticipate the behaviors of all persons using their technologies. Therefore, Metaverse users can be exposed to positive as well as negative influences, as other individuals can disguise themselves in the vast virtual environments through anonymous avatars.

      Privacy and security of users’ personal data

The users’ interactions with the Metaverse, as well as their personal or sensitive information, can be tracked by the platform operators hosting this service, as they continuously record, process and store their virtual activities in real time. Like the preceding World Wide Web and Web 2.0 technologies, the Metaverse can raise users’ concerns about the security of their data and of their intellectual property. They may be wary about data breaches, scams, et cetera. Public blockchains and other platforms can already trace users’ sensitive data, so they are not anonymous to them. Individuals may decide to use one or more avatars to explore the Metaverse’s worlds. They may risk exposing their personal information, particularly when they are porting from one Metaverse to another and/or when they share transactional details via NFTs. Some Metaverse systems do not require their users to share personal information when they create their avatar. However, they could capture relevant information from sensors that detect their users’ brain activity, or monitor their facial features, eye motion and vocal qualities, along with other ambient data pertaining to the users’ homes or offices.

They may have legitimate reasons to capture such information, in order to protect users against objectionable content and/or the unlawful conduct of other users. In many cases, the users’ personal data may be collected for advertising and/or communication purposes. Currently, different jurisdictions have not regulated their citizens’ behaviors within Metaverse contexts. Work is still in progress in this regard.

      Identity theft and hijacking of user accounts

There may be malicious persons or groups who may try to use certain technologies to obtain personal information and digital assets from Metaverse users. Recently, deepfake artificial intelligence software has generated short audio content that mimicked and impersonated a human voice.

Other bots may easily copy human beings’ verbal, vocal and visual data, including their personality traits. They could duplicate avatars’ identities to commit fraudulent activities, including unauthorized transactions and purchases, or other crimes with their disguised identities. Roblox users reported that they experienced avatar scams in the past. In many cases, criminals could try to avail themselves of the digital identities of vulnerable users, including children and senior citizens, among others, to access their funds or cryptocurrencies (as these may be linked to their Metaverse profiles). As a result, Metaverse users may become victims of identity theft. Evolving security protocols and digital ledger technologies like the blockchain are expected to increase the transparency and cybersecurity of digital assets. However, users still have to remain vigilant about their digital footprint to continue protecting their personal information.

As the use of the virtual environment is expected to increase in the foreseeable future, particularly with the emergence of the Metaverse, it is imperative that new ways are developed to protect all users, including students. Individuals ought to be informed about the risks to their privacy. Various validation procedures, including authentication methods such as face scans, retina scans, and speech recognition, may be integrated in such systems to prevent identity theft and the hijacking of Metaverse accounts.

      Borderless environment raises ethical and regulatory concerns

For the time being, a number of policy makers as well as academics are raising questions about the content that can be presented in the Metaverse’s virtual worlds, as well as about the conduct and behaviors of Metaverse users. Arguably, it may prove difficult for the regulators of different jurisdictions to enforce their legislation in the Metaverse’s borderless environment. For example, European citizens are well acquainted with the European Union’s (EU) General Data Protection Regulation. Other countries have their own legal frameworks and/or principles that are intended to safeguard the rights of data subjects as well as those of content creators. For example, the United States government has been slower than the EU to introduce its privacy-by-design policies. Recently, the South Korean Government announced a set of laudable, non-binding ethical guidelines for the provision and consumption of metaverse services. However, there is no set of formal rules that applies to all Metaverse users.

      Users’ addictions and mental health issues

Although many AR and VR technologies have already been tried and tested in the past few years, the Metaverse is still getting started. For the time being, it is difficult to determine the effects of the Metaverse on users’ health and well-being. Many commentators anticipate that excessive exposure to the Metaverse’s immersive technologies may result in negative side effects for the psychological and physical health of human beings. They suggest that individuals may easily become addicted to a virtual environment where the limits of reality are their own imagination. They are lured to it “for all the things they can do” and will be willing to stay “for all the things they can be” (excerpts from the Ready Player One movie).

Past research confirms that spending excessive time on the internet, on social media or playing video games can increase the chances of mental health problems like attention deficit disorders, eating disorders, as well as anxiety, stress or depression, among others. Individuals play video games to achieve their goals and to advance to the next level. Their gameplay releases dopamine. Similarly, their dopamine levels can increase when they are followed through social media, or when they receive likes, comments or other forms of online engagement.

Individuals can easily develop an addiction to this immersive technology, as they seek stimulating and temporarily pleasurable experiences in its virtual spaces. As a result, they may become dependent on it. Their interpersonal communications via social media networks are not as authentic or satisfying as real-life relationships, as they are not interacting in person with other human beings. In the case of the Metaverse, their engagement experiences may appear to be real. Yet again, in the Metaverse, users are located in a virtual environment; they are not physically present near other individuals. Human beings need to build honest and trustworthy relationships with one another. The users of the Metaverse can create avatars that could easily conceal their identity.

      Read further! The full paper can be accessed and downloaded from:

      The University of Malta: https://www.um.edu.mt/library/oar/handle/123456789/110459

      Researchgate: https://www.researchgate.net/publication/371275481_Metaverse_applications_in_education_A_systematic_review_and_a_cost-benefit_analysis

      Academia.edu: https://www.academia.edu/102800696/Metaverse_applications_in_education_A_systematic_review_and_a_cost_benefit_analysis

      SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4490787


      Users’ perceptions and expectations of ChatGPT

      Featuring an excerpt and a few snippets from one of my latest articles related to Generative Artificial Intelligence (AI).

Suggested Citation: Camilleri, M.A. (2024). Factors affecting performance expectancy and intentions to use ChatGPT: Using SmartPLS to advance an information technology acceptance framework, Technological Forecasting and Social Change. https://doi.org/10.1016/j.techfore.2024.123247


      The introduction

Artificial intelligence (AI) chatbots utilize algorithms that are trained to process and analyze vast amounts of data, by using techniques ranging from rule-based approaches to statistical models and deep learning, in order to generate natural text and respond to online users based on the input they receive (OECD, 2023). For instance, OpenAI‘s Chat Generative Pre-Trained Transformer (ChatGPT) is one of the most popular AI-powered chatbots. The company claims that ChatGPT “is designed to assist with a wide range of tasks, from answering questions to generating text in various styles and formats” (OpenAI, 2023a). OpenAI clarifies that its GPT-3.5 is a free-to-use language model that was optimized for dialogue by using Reinforcement Learning from Human Feedback (RLHF) – a method that relies on human demonstrations and preference comparisons to guide the model toward desired behaviors. Its models are trained on vast amounts of data, including conversations that were created by humans (such content is accessed through the Internet). The responses it provides appear to be as human-like as possible (Jiang et al., 2023).
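To make this request-response interaction concrete, the short sketch below shows how an application could send a prompt to a GPT-3.5-class model and read back the generated reply through OpenAI's Python client. The model name, the example prompt and the environment variable are illustrative assumptions rather than details taken from the article.

```python
# Minimal sketch of a single ChatGPT-style exchange via the OpenAI Python client.
# Assumes the `openai` package is installed and an API key is set in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative chat-capable model name
    messages=[
        {"role": "system", "content": "You are a helpful study assistant."},
        {"role": "user", "content": "Explain the Unified Theory of Acceptance and Use of Technology in two sentences."},
    ],
)

# The reply is generated from the conversation history supplied above.
print(response.choices[0].message.content)
```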

GPT-3.5’s training data were last updated in September 2021. The GPT-4 version, however, comes with a paid plan; it is more creative than GPT-3.5, can accept images as inputs, and can generate captions, classifications and analyses (Qureshi et al., 2023). Its developers assert that GPT-4 can create better content, including extended conversations, as well as document search and analysis (Takefuji, 2023). Recently, its proponents noted that ChatGPT can be utilized for academic purposes, including research. It can extract and paraphrase information, translate text, grade tests, and/or it may be used for conversation purposes (MIT, 2023). Various stakeholders in education noted that this large language model (LLM) tool may be able to provide quick and easy answers to questions.

However, earlier this year, several higher educational institutions issued statements that warned students against using ChatGPT for academic purposes. In a similar vein, a number of schools banned ChatGPT from their networks and devices (Rudolph et al., 2023). Evidently, policy makers were concerned that this text-generating AI system could disseminate misinformation and even promote plagiarism. Some commentators argue that it can affect the students’ critical-thinking and problem-solving abilities. Such skill sets are essential for their academic and lifelong successes (Liebrenz et al., 2023; Thorp, 2023). Nevertheless, a number of jurisdictions are reversing their decisions that impede students from using this technology (Reuters, 2023). In many cases, educational leaders are realizing that their students could benefit from this innovation, if they are properly taught how to adopt it as a tool for their learning journey.

Academic colleagues are increasingly raising awareness of the different uses of AI dialogue systems like service chatbots and/or virtual assistants (Baabdullah et al., 2022; Balakrishnan et al., 2022; Brachten et al., 2021; Hari et al., 2022; Li et al., 2021; Lou et al., 2022; Malodia et al., 2021; Sharma et al., 2022). Some of them are evaluating their strengths and weaknesses, including those of OpenAI’s ChatGPT (Farrokhnia et al., 2023; Kasneci et al., 2023). Very often, they argue that there may be instances where the chatbots’ responses are not completely accurate and/or may not fully address the questions that are asked of them (Gill et al., 2024). This may be due to different reasons. For example, GPT-3.5’s responses are based on the data that were uploaded before a knowledge cut-off date (i.e. September 2021). This can have a negative effect on the quality of its replies, as the algorithm is not up to date with the latest developments. Although there are grey areas surrounding the use of AI chatbots that rely on natural language processing to create humanlike conversational dialogue, currently, there are still few contributions that have critically evaluated their pros and cons, and even fewer studies have investigated the factors affecting the individuals’ engagement levels with ChatGPT.

This empirical research builds on theoretical underpinnings related to information technology adoption in order to examine online users’ perceptions and intentions to use AI chatbots. Specifically, it integrates a perceived interactivity construct (Baabdullah et al., 2022; McMillan and Hwang, 2002) with information quality and source trustworthiness measures (Leong et al., 2021; Sussman and Siegal, 2003) from the Information Adoption Model (IAM), and with performance expectancy, effort expectancy and social influences constructs (Venkatesh et al., 2003; Venkatesh et al., 2012) from the Unified Theory of Acceptance and Use of Technology (UTAUT1/UTAUT2), to determine which factors are influencing individuals’ intentions to use AI text generation systems like ChatGPT. This study’s focused research questions are:

      RQ1

      How and to what extent are information quality and source trustworthiness influencing the online users’ performance expectancy from ChatGPT?

      RQ2

How and to what extent are their perceptions about ChatGPT’s interactivity, performance expectancy, effort expectancy, as well as their social influences, affecting their intentions to continue using such large language models?

      RQ3

How and to what degree is the performance expectancy construct mediating the relationship between effort expectancy and intentions to use these interactive AI technologies?

      This study hypothesizes that information quality and source trustworthiness are significant antecedents of performance expectancy. It presumes that this latter construct, together with effort expectancy, social influences as well as perceived interactivity affect the online users’ acceptance and usage of generative pre-trained AI chatbots like GPT-3.5 or GPT-4.

Many academic researchers sought to explore the individuals’ behavioral intentions to use a wide array of technologies (Alalwan, 2020; Alam et al., 2020; Al-Saedi et al., 2020; Raza et al., 2021; Tam et al., 2020). Very often, they utilized measures from the Theory of Reasoned Action (TRA) (Fishbein and Ajzen, 1975), the Theory of Planned Behavior (TPB) (Ajzen, 1991), the Technology Acceptance Model (TAM) (Davis, 1989; Davis et al., 1989), TAM2 (Venkatesh and Davis, 2000), TAM3 (Venkatesh and Bala, 2008), UTAUT (Venkatesh et al., 2003) or UTAUT2 (Venkatesh et al., 2012). Few scholars have integrated constructs like UTAUT/UTAUT2’s performance expectancy, effort expectancy, social influences and intentions to use technologies with information quality and source trust measures from the Elaboration Likelihood Model (ELM) and IAM. Currently, there is still limited research that incorporates a perceived interactivity factor within information technology frameworks. Therefore, this contribution addresses this deficit in academic knowledge.

Notwithstanding, for the time being, there is still scant research that is focused on AI-powered LLMs, like ChatGPT, that are capable of generating human-like text based on previous contexts and drawn from past conversations. This timely study raises awareness of individuals’ perceptions about the utilitarian value of such interactive technologies in an academic (higher educational) context. It clearly identifies the factors that are influencing individuals’ intentions to continue using them in the future.


      From the literature review

      Table 1 features a summary of the most popular theoretical frameworks that sought to identify the antecedents and the extent to which they may affect the individuals’ intentions to use information technologies.

      Table 1. A non-exhaustive list of theoretical frameworks focused on (information) technology adoption behaviors

Figure 1 features the conceptual framework that investigates information technology adoption factors. It represents a visual illustration of the hypotheses of this study. In sum, this empirical research presumes that information quality and source trustworthiness (from the Information Adoption Model) precede performance expectancy. This construct, together with effort expectancy and social influences (from the Unified Theory of Acceptance and Use of Technology), as well as the perceived interactivity construct, are significant antecedents of the individuals’ intentions to use ChatGPT.
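For readers who want to see how such a structural model can be written down in code, the sketch below expresses the hypothesized paths (information quality and source trustworthiness preceding performance expectancy, which together with effort expectancy, social influences and perceived interactivity predicts intentions) in lavaan-style syntax with the open-source semopy package. The article itself uses SmartPLS (a PLS-SEM tool); this covariance-based sketch, with hypothetical indicator and file names, is only meant to illustrate the structure of the model, not to reproduce the paper's estimation.

```python
# Illustrative specification of the hypothesized model in lavaan-style syntax with semopy.
# This is a covariance-based SEM sketch, not the article's SmartPLS (PLS-SEM) estimation,
# and the indicator names (iq1 ... bi3) are hypothetical.
import pandas as pd
from semopy import Model

# IQ = information quality, ST = source trustworthiness, PE = performance expectancy,
# EE = effort expectancy, SI = social influences, PI = perceived interactivity,
# BI = behavioral intentions to use ChatGPT.
MODEL_DESC = """
IQ =~ iq1 + iq2 + iq3
ST =~ st1 + st2 + st3
PE =~ pe1 + pe2 + pe3
EE =~ ee1 + ee2 + ee3
SI =~ si1 + si2 + si3
PI =~ pi1 + pi2 + pi3
BI =~ bi1 + bi2 + bi3
PE ~ IQ + ST + EE
BI ~ PE + EE + SI + PI
"""

df = pd.read_csv("chatgpt_survey.csv")  # hypothetical file holding the survey items

model = Model(MODEL_DESC)
model.fit(df)
print(model.inspect())  # path estimates, standard errors and p-values
```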


      The survey instrument

      The respondents were instructed to answer all survey questions that were presented to them about information quality, source trustworthiness, performance expectancy, effort expectancy, social influences, perceived interactivity and on their behavioral intentions to continue using this technology (otherwise, they could not submit the questionnaire). Table 2 features the list of measures as well as their corresponding items that were utilized in this study. It also provides a definition of the constructs used in the proposed information technology acceptance framework.

      Table 2. The list of measures and the corresponding items used in this research.


      Theoretical implications

This research sought to explore the factors that are affecting individuals’ intentions to use ChatGPT. It examined the online users’ effort and performance expectancy, social influences, as well as their perceptions about the information quality, source trustworthiness and interactivity of generative text AI chatbots. The empirical investigation hypothesized that performance expectancy, effort expectancy and social influences from Venkatesh et al.’s (2003) UTAUT, together with a perceived interactivity construct (McMillan and Hwang, 2002), were significant antecedents of their intentions to revisit ChatGPT’s website and/or to use its app. Moreover, it presumed that information quality and source trustworthiness measures from Sussman and Siegal’s (2003) IAM were precursors of performance expectancy.

The results from this study report that source trustworthiness-performance expectancy is the most significant path in this research model. They confirm that online users believe there is a connection between the source’s trustworthiness, in terms of its dependability, and the degree to which using such an AI generative system will help them improve their job performance. Similar effects were also evidenced in previous IAM theoretical frameworks (Kang and Namkung, 2019; Onofrei et al., 2022), as well as in a number of studies related to TAM (Assaker, 2020; Chen and Aklikokou, 2020; Shahzad et al., 2018) and/or to UTAUT/UTAUT2 (Lallmahomed et al., 2017).

In addition, this research also reports that the users’ perceptions about information quality significantly affect their performance expectancy from ChatGPT. Yet, in this case, this link was weaker than the former, thus implying that the respondents’ perceptions about the usefulness of this text-generative technology were influenced by the peripheral cues of communication (Cacioppo and Petty, 1981; Shi et al., 2018; Sussman and Siegal, 2003; Tien et al., 2019).

      Very often, academic colleagues noted that individuals would probably rely on the information that is presented to them, if they perceive that the sources and/or their content are trustworthy (Bingham et al., 2019; John and De’Villiers, 2020; Winter, 2020). Frequently, they indicated that source trustworthiness would likely affect their beliefs about the usefulness of information technologies, as they enable them to enhance their performance. Conversely, some commentators argued that there may be users that could be skeptical and wary about using new technologies, especially if they are unfamiliar with them (Shankar et al., 2021). They noted that such individuals may be concerned about the reliability and trustworthiness of the latest technologies.

The findings suggest that the individuals’ perceptions about the interactivity of ChatGPT are a precursor of their intentions to use it. This link is also highly significant. Therefore, the online users evidently appreciated this information technology’s responsiveness to their prompts (in terms of its computer-human communications). ChatGPT’s interactivity attributes are having an impact on the individuals’ readiness to engage with it and to seek answers to their questions. Similar results were reported in other studies that analyzed how the interactivity and anthropomorphic features of dialogue systems, like live support chatbots or virtual assistants, can influence the online users’ willingness to continue utilizing them in the future (Baabdullah et al., 2022; Balakrishnan et al., 2022; Brachten et al., 2021; Liew et al., 2017).

There are a number of academic contributions that sought to explore how, why, where and when individuals are lured by interactive communication technologies (e.g. Hari et al., 2022; Li et al., 2021; Lou et al., 2022). Generally, these researchers posited that users become habituated to information systems that are programmed to engage with them in a dynamic and responsive manner. Very often, they indicated that many individuals are favorably disposed to use dialogue systems that are capable of providing them with instant feedback and personalized content. Several colleagues suggest that positive user experiences, as well as high satisfaction levels and enjoyment, could enhance their connection with information technologies, and will probably motivate them to continue using them in the future (Ashfaq et al., 2020; Camilleri and Falzon, 2021; Huang and Chueh, 2021; Wolfinbarger and Gilly, 2003).

Another important finding from this research is that the individuals’ social influences (from family, friends or colleagues) are affecting their interactions with ChatGPT. Again, this causal path is also very significant. Similar results were reported in UTAUT/UTAUT2 studies that focused on the link between social influences and the intentional behaviors to use technologies (Gursoy et al., 2019; Patil et al., 2020). In addition, TPB/TRA researchers found that subjective norms also predict behavioral intentions (Driediger and Bhatiasevi, 2019; Sohn and Kwon, 2020). This is in stark contrast with other studies that reported no significant relationship between social influences/subjective norms and behavioral intentions (Ho et al., 2020; Kamble et al., 2019).

Interestingly, the results reveal highly significant effects between effort expectancy (i.e. the ease of use of the generative AI technology) and performance expectancy (i.e. its perceived usefulness). Many scholars posit that perceived ease of use is a significant driver of the perceived usefulness of technology (Bressolles et al., 2014; Davis, 1989; Davis et al., 1989; Kamble et al., 2019; Yoo and Donthu, 2001). Furthermore, there are significant causal paths between performance expectancy and intentions to use ChatGPT, and even between effort expectancy and intentions to use ChatGPT, albeit to a lesser extent. Yet, this research indicates that performance expectancy partially mediates the relationship between effort expectancy and intentions to use ChatGPT. In this case, this link is highly significant.
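The reported partial mediation can be illustrated with a simple bootstrap of the indirect effect. The sketch below is only a stand-in: it uses ordinary least squares on hypothetical composite scores (EE, PE and BI columns in an assumed data file) rather than the article's PLS-SEM estimation, and estimates the a*b path (effort expectancy to performance expectancy to intentions) together with a bootstrap confidence interval.

```python
# Illustrative bootstrap of the indirect effect EE -> PE -> BI (the a*b path),
# using plain OLS on hypothetical composite scores rather than the paper's PLS-SEM results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("chatgpt_composites.csv")  # hypothetical file with EE, PE and BI columns

def indirect_effect(data: pd.DataFrame) -> float:
    a = smf.ols("PE ~ EE", data=data).fit().params["EE"]       # path a: EE -> PE
    b = smf.ols("BI ~ PE + EE", data=data).fit().params["PE"]  # path b: PE -> BI, controlling for EE
    return a * b

# Percentile bootstrap of the indirect effect (2000 resamples with replacement).
boot = [
    indirect_effect(df.sample(n=len(df), replace=True, random_state=i))
    for i in range(2000)
]

point = indirect_effect(df)
low, high = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% bootstrap CI = [{low:.3f}, {high:.3f}]")
```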

      In sum, this contribution validates key information technology measures, specifically, performance expectancy, effort expectancy, social influences and behavioral intentions from UTAUT/UTAUT2, as well as information quality and source trustworthiness from ELM/IAM and integrates them with a perceived interactivity factor. It builds on previous theoretical underpinnings. Yet, it differentiates itself from previous studies. To date, there are no other empirical investigations that have combined the same constructs that are presented in this article. Notwithstanding, this research puts forward a robust Information Technology Acceptance Framework. The results confirm the reliability and validity of the measures. They clearly outline the relative strength and significance of the causal paths that are predicting the individuals’ intentions to use ChatGPT.


      Managerial implications

This empirical study provides a snapshot of the online users’ perceptions about ChatGPT’s responses to verbal queries, and sheds light on their dispositions to avail themselves of its natural language processing. It explores their performance expectations about the usefulness of these information technologies and their effort expectations related to their ease of use, and investigates whether they are affected by colleagues or by other social influences to use such dialogue systems. Moreover, it examines their insights about the content quality, source trustworthiness as well as the interactivity features of these text-generative AI models.

      Generally, the results suggest that the research participants felt that these algorithms are easy to use. The findings indicate that they consider them to be useful too, specifically when the information they generate is trustworthy and dependable. The respondents suggest that they are concerned about the quality and accuracy of the content that is featured in the AI chatbots’ answers. This contingent issue can have a negative effect on the use of the information that is created by online dialogue systems.

OpenAI’s ChatGPT is a case in point. Its app is freely available in many countries, via desktop and mobile technologies including iOS and Android. The company admits that its GPT-3.5 outputs may be inaccurate, untruthful, and misleading at times. It clarifies that its algorithm is not connected to the internet, and that it can occasionally produce incorrect answers (OpenAI, 2023a). It posits that GPT-3.5 has limited knowledge of the world and events after 2021 and may also occasionally produce harmful instructions or biased content. OpenAI recommends checking whether its chatbot’s responses are accurate or not, and letting them know when and if it answers in an incorrect manner, by using the “Thumbs Down” button. ChatGPT’s Help Center even declares that the chatbot can occasionally make up facts or “hallucinate” outputs (OpenAI, 2023a,b).

OpenAI reports that its ChatGPT Plus subscribers can access safer and more useful responses. In this case, users can avail themselves of a number of beta plugins and resources that offer a wide range of capabilities, including text-to-speech applications as well as web browsing features through Bing. Yet again, OpenAI (2023b) indicates that its GPT-4 still has many known limitations that the company is working to address, such as “social biases and adversarial prompts” (at the time of writing this article). Evidently, work is still in progress at OpenAI. The company needs to resolve these serious issues, considering that its Content Policy and Terms clearly stipulate that OpenAI’s consumers are the owners of the output that is created by ChatGPT. Hence, ChatGPT’s users have the right to reprint, sell, and merchandise the content that is generated for them through OpenAI’s platforms, regardless of whether the output (its response) was provided via a free or a paid plan.

Various commentators are increasingly raising awareness about the corporate digital responsibilities of those involved in the research, development and maintenance of such dialogue systems. A number of stakeholders, particularly regulatory ones, are concerned about possible risks and perils arising from AI algorithms, including interactive chatbots. In many cases, they are warning that disruptive chatbots could disseminate misinformation, foster prejudice, bias and discrimination, raise privacy concerns, and could lead to the loss of jobs. Arguably, one has to bear in mind that, in many cases, governments are outpaced by the proliferation of technological innovations (as their development happens before the enactment of legislation). As a result, they tend to be reactive in the implementation of substantive regulatory interventions. This research reported that the development of ChatGPT has resulted in mixed reactions among different stakeholders in society, especially during the first months after its official launch. At the moment, there are just a few jurisdictions that have formalized policies and governance frameworks that are meant to protect and safeguard individuals and entities from possible risks and dangers of AI technologies (Camilleri, 2023). Of course, voluntary principles and guidelines are a step in the right direction. However, policy makers are expected by various stakeholders to step up their commitment by introducing quasi-regulations and legislation.

Currently, a number of technology conglomerates, including Microsoft-backed OpenAI, Apple and IBM, among others, have anticipated the governments’ regulations by joining forces in a non-profit organization entitled “Partnership on AI”, which aims to advance safe, responsible AI that is rooted in open innovation. In addition, IBM has also teamed up with Meta and other companies, startups, universities, research and government organizations, as well as non-profit foundations, to form an “AI Alliance” that is intended to foster innovations across all aspects of AI technology, applications and governance.


      Stakeholder engagement disclosures in sustainability reports

This is an excerpt from one of my latest articles, published through Business Ethics, the Environment & Responsibility.

      Suggested citation: Galeotti, R. M., Camilleri, M. A., Roberto, F., & Sepe, F. (2023). Stakeholder engagement disclosures in sustainability reports: Evidence from Italian food companies. Business Ethics, the Environment & Responsibility, Ahead-of-print, 1–20, https://doi.org/10.1111/beer.12642 

      Abstract

More businesses are embedding stakeholder engagement (SE) practices in their corporate disclosures. This article explores the extent to which SE practices are featured in the sustainability reports (SRs) of 48 Italian food and beverage businesses, following the latest Global Reporting Initiative (GRI) standards. The researchers analyze the content of their SRs dated 2020 and 2021. They utilize a panel regression technique to examine the relationship between stakeholder engagement disclosures (SED) and corporate financial performance (CFP), and to investigate the moderating role of SR assurance. The results show a positive and significant relationship between SED and CFP. They also confirm that there is a moderating effect from SR assurance on this causal path. However, the findings reveal that SED in SRs of Italian food companies is still moderate. This contribution builds on the logic behind the stakeholder theory. It implies that there is scope for food companies to forge relationships with stakeholders. It indicates that it is in their interest to disclose material information about their SE practices in their SRs and to organize third-party assurance assessments in order to improve their legitimacy with stakeholders.

      1 INTRODUCTION

      The sustainability agenda has gained significant attention within the global food sector (Rueda et al., 2017), and it is becoming a growing concern among stakeholders (Al Hawaj & Buallay, 2022). The food industry is heavily reliant on natural and technological resources such as water, energy, chemicals, and fossil fuels, and therefore, has a substantial impact on the environment and the society (Buallay, 2020; Camilleri, 2021; Ramos et al., 2020). The actions of food manufacturers and retailers can significantly affect the health of individuals. Their ability to choose, process, package, transport, and promote sustainable food could have an impact on what people consume and on their overall well-being. As they interact directly with consumers, they are subject to intense scrutiny and requests for transparency. Stakeholders, including governmental institutions, consumers, and the global community, have called upon food companies to adopt more sustainable practices and to pay more attention to food sustainability (Friedrich et al., 2012; Troise et al., 2021). Very often, they are raising awareness about value creation opportunities to persuade them to engage in responsible production and consumption behaviors (Attanasio et al., 2021), and to forge relationships with marketplace stakeholders (Camilleri, 2020).

      The interactions between firms and their external environment constitute a vital characteristic of a sustainable business model, owing to the unique value stream that stakeholder engagement (SE) can offer. In this context, sustainability disclosures can act as a catalyst to foster trust, enhance procedures and systems, promote the firm’s vision and strategy, decrease compliance expenses, and generate competitive advantages (Cardoni et al., 2022). Companies operating in the food sector are principally challenged in their efforts to deliver Sustainability Reports (SRs) that provide useful information to both internal and external stakeholders (D’Adamo, 2022). Research examining the role of sustainability reporting in enhancing firm performance in this sector is limited. Some studies suggest a positive relationship between strong sustainability reporting and return on assets (ROA) (Al Hawaj & Buallay, 2022), increased sales (Sen & Bhattacharya, 2001) or reduced cost of capital (Garzón-Jiménez & Zorio-Grima, 2022).

Given the complexity of the food sector, which is a typical multistakeholder context (Al Hawaj & Buallay, 2022), it is particularly relevant for food companies to ensure that their SRs provide accurate and thorough disclosures of their SE practices. SE is a complex and distinct activity that has emerged in the preparation of SRs (Greenwood, 2007) and it is crucial to reflect on the way it is conducted (Petruzzelli & Badia, 2023). The reporting entities cannot omit their stakeholder relationships from their corporate disclosures. If they conceal any material information on this matter from their SR, they risk damaging their reputation and image (Ardiana, 2019; De Micco et al., 2021; Manetti, 2011; Miles & Ringham, 2020).

      Academic research on SE is an evolving area of investigation due to the increasing scientific and professional interest in sustainability reporting issues (Camilleri, 2015; Stocker et al., 2020). Prior studies have indicated that many companies fail to provide complete disclosures of SE processes (Moratis & Brandt, 2017), and show an inadequate level of SE procedures (Petruzzelli & Badia, 2023; Venturelli et al., 2018). However, despite the significance of this subject, the number of empirical academic contributions on SE remains limited, making it important to further explore this topic. In such a context, several scholars are calling for further studies that seek to investigate how, why, where, and when firms are engaging with stakeholders. In addition, they are encouraging them to explore whether they are disclosing the details about their stakeholder relationships in their SRs (Gagné et al., 2022; Gao & Zhang, 2006; Hörisch et al., 2015).

The purpose of this article is twofold. The first one is to investigate the extent to which SE is featured in the SRs of 48 Italian unlisted food companies (that were relying on GRI’s new standards in the period 2020–2021), with the objective of verifying their focus on the SE disclosure (SED) process. The authors examine the SRs’ content, in terms of the report preparers’ motivations and methods. They also verify whether they indicated specific stakeholders in their disclosures. This paper raises awareness on the role of SE in the sustainability reporting of food companies. It clarifies how and to what extent food companies are communicating directly with stakeholders, gathering feedback from them, and how explicitly they are involving them in the SR process. To this aim, the researchers developed an SE index composed of 7 categories and 21 items derived from prior literature on the topic and adapted from the latest Global Reporting Initiative (GRI) standards. The proposed index provides a systematic approach to examining the SE practices and activities disclosed by sample firms. Content analysis (a binary coding system) of GRI SRs was carried out to calculate the overall SED score. The second goal of this contribution is to investigate the relationship between SED and corporate financial performance (CFP). In addition, this research analyzes the moderating effects of SR assurance on the SED-CFP causal link. Hence, this contribution addresses the following research questions:

      • RQ1: What is the state and extent of SED in the SRs of food companies?
• RQ2: Is there a relationship between SED in SRs and CFP in the food industry? If there is, how and to what extent is this relationship moderated by SR assurance?

This research explores the above-mentioned questions and provides insights on the SE processes of Italian food companies. It builds on the Stakeholder Theory (ST; Freeman, 1984), as it seeks to explain whether SE processes are integrated in their SRs. The authors anticipate that the exploratory content analysis of the sample firms’ SRs indicates that the average level of SE is not significantly high among food companies in Italy; however, there is an increasing pattern of SED during the study period. While SE seems common practice, many firms are failing to provide the details on their stakeholder relationships in their SRs. The findings suggest that most of the engagement modes disclosed are unidirectional (level 1—Inform) with minimal emphasis on deep involvement strategies (level 3—Involve). Furthermore, only 32% of the sample seek assurance on the information disclosed.

Results from the panel data analysis provide evidence that there is a significant positive association between SED and CFP. Findings also show that SR assurance by accounting firms accentuates this effect. An extensive literature review suggests that this study, to the best of the authors’ knowledge, is the first to use food companies’ SRs to investigate the impact of SED on CFP while introducing the interactive variable of SR third-party assurance, which adds new knowledge to the SE and sustainability reporting literature from a specific industry in an advanced economy. Considering the maturity of Italian sustainability reporting and assurance practices (KPMG, 2022; Larrinaga et al., 2020), the Italian context is particularly relevant in explaining the interest of food companies in properly communicating SE activities in SRs. In these terms, this study contributes to a deeper understanding of the underexplored area of SE in a specific industry, highlighting the strategies used by Italian food companies to manage the SE communication process. Specifically, it provides insights to improve the framing of SED and gives evidence of the value relevance of SED and SR assurance for companies operating in the food sector. Therefore, this research sheds light on the advancement and enhancement of food company–stakeholder relations, particularly from the perspective of value co-creation. The findings will help managers identify key focus areas where they can improve the SED process, aiming at creating shared value and fostering mutually beneficial relationships with stakeholders.
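To make the measurement and estimation steps more tangible, the sketch below first aggregates a hypothetical binary coding matrix (the 21 GRI-derived items) into an overall SED score per firm-year, and then fits a simple moderated regression of a CFP proxy on SED, assurance and their interaction, with firm and year dummies standing in for the panel structure. The file name, variable names and functional form are assumptions; the authors' exact panel specification is not reproduced here.

```python
# Illustrative SED scoring and moderated regression (not the authors' exact panel specification).
# Assumes a CSV with one row per firm-year: firm, year, item_1 ... item_21 (binary disclosure codes),
# roa (a CFP proxy) and assurance (1 if the sustainability report was third-party assured).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("italian_food_sed_panel.csv")  # hypothetical dataset

# Overall SED score: share of the 21 GRI-derived items disclosed in the firm's report.
item_cols = [f"item_{i}" for i in range(1, 22)]
df["sed_score"] = df[item_cols].sum(axis=1) / len(item_cols)

# Moderated regression: does assurance strengthen the SED-CFP link?
# Firm and year dummies stand in for the panel (fixed-effects) structure.
model = smf.ols("roa ~ sed_score * assurance + C(firm) + C(year)", data=df).fit(cov_type="HC1")

print(model.summary().tables[1])  # coefficients, including the sed_score:assurance interaction
```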

      The remainder of this study is structured as follows. The next section deals with the paper’s conceptual framework and hypotheses development. This is followed by the research design and methodology. Finally, the results, discussion, including recommendations, limitations, and hints for future research are presented.

      Read further (this publication is available in its entirety, as it is an open-access article).


      Metaverse keywords for dummies

Individuals can use the Metaverse for leisure, entertainment, socializing, as a marketplace to buy items, and for education, among other purposes. Currently, technology giants including Meta, Microsoft, Nvidia, Roblox, Snap and Unity, among others, are building the infrastructure of the Metaverse. At the time of writing, many commentators are envisaging that the Metaverse’s virtual environments will be replicating the real world. For instance, the Metaverse’s virtual reality (VR) environment can be used to deliver lectures to students located in remote locations. Course instructors can utilize its immersive 3D capabilities in synchronous and asynchronous learning environments. They can interact with their students’ avatars in real time to provide immediate feedback. In addition, they may avail themselves of the Metaverse’s virtual settings to catapult their students into learning scenarios that are constrained only by the limits of their own imagination, rather than by reality, enabling them to learn in a practical, yet safe environment. Table 1 features a clear (and comprehensible) definition of some of the most popular terms related to the ‘Metaverse’.

      Table 1. Key terms related to the adoption of the Metaverse

Avatar: An avatar represents a human figure with a fictitious, animated character in electronic games as well as on the internet’s websites, including in social media and in the Metaverse. Avatars may appear similar, in their physical features and expressions, to their real-world counterparts. However, online users may want to customize their avatars to disguise themselves by creating very imaginative characters.
Digital twin: The digital twin refers to a virtual representation of a real-world product, system, or process that spans its lifecycle. It can be considered as a digital counterpart. A digital twin can be utilized for practical purposes, including monitoring, testing of simulations, maintenance, et cetera. Its underlying objective is to generate useful insights on how to improve real-life objects and their systems. It is intended to mimic the lifecycle of the physical entity it represents (from its inception up to its disposal). However, the digital twin could exist before the physical entity does. The initial stages of a digital twin (in the creation phase) enable the intended entity’s entire lifecycle to be simulated and tested. Hence, the development of digital twins involves continuous improvements in product designs, operational processes and engineering activities, as they acquire new capabilities through trial-and-error phases, simulations and machine learning. The rationale of digital twins is to increase the efficiency of products and systems, to enhance their performance outcomes.
Extended reality (XR): XR is an umbrella term that incorporates augmented reality (AR), virtual reality (VR) and mixed reality (MR) that mirror the physical world or a digital twin. It refers to the combination of real and virtual environments that can comprise different objects and systems. Each of them will have their own roles, features and attributes. A multisensory XR system conveys signals to the human beings’ nervous systems through visual, auditory, olfactory and haptic cues that are very similar to real-life feelings and experiences (Yu et al., 2023). Such technologies could be designed to support their users’ well-being. They may involve digital therapeutics that can affect the individuals’ perceptions, state of mind and behaviors.
Mixed reality (MR): MR is an inter-reality system comprising a physical reality as well as 3D digital worlds, where real and virtual objects could co-exist and interact in real time. MR integrates AR and VR technologies to provide holographic representations of objects in a virtuality continuum (Yoo et al., 2022). It is being used for different applications, including for educational purposes, to deliver experiential learning. Students can benefit from natural and intuitive 3D representations, as the latest advancements in input systems, sensors, processing power, display technologies, graphical processing, and cloud computing are creating elaborate mixed-reality experiences.
Non-fungible tokens (NFTs): Non-fungible tokens (NFTs) are a form of cryptocurrency where data is digitally stored in a blockchain. NFTs are considered a unique modality of digital non-interchangeable (i.e. non-fungible) assets that are authenticated and certified to a specific owner. NFTs may represent electronic content, including the video games’ audiovisual material, collectibles, avatars, et cetera, that can be acquired, sold or traded. The blockchain technology ensures that the digital assets cannot be replicated in any way. However, owners of NFTs can trade and sell their NFTs. The blockchain allows prospective buyers to confirm the provenance of the virtual content and to clearly track and establish the ownership of the tokens. Hence, they can monetize them with other customers through the Metaverse.
Virtual reality (VR): While AR uses the existing real-world environment and incorporates virtual information in it, virtual reality (VR) completely immerses its users in a simulated environment comprising sensory modalities, including auditory and video feedback as well as haptic sensations. VR relies on pose tracking and on 3D near-eye displays to give them an immersive feel of a virtual world. It enables users to experience sights and sounds that are similar to or totally different from the real world. Individuals can use VR helmets and headsets like Meta Quest, PlayStation VR, HTC Vive, or HP Reverb, among others, that provide a small screen in front of the eyes that places them in a virtual environment. A person using virtual reality equipment may experience a synthetic world by moving around and by interacting with virtual objects that may be present in specially designed 3D rooms or even in outdoor environments. For example, medical students can use VR to practice how to perform heart surgeries.
Web 3.0: Web 3.0 represents the evolution of the web into a decentralized network. Many commentators are anticipating that online users will be in a position to access their own data, including documents, applications and multimedia, in a secure, open-source environment that will be facilitated by the blockchain’s distributed ledger technology. They envisage that online users will probably rely on the services of Decentralized Autonomous Organizations (DAOs), which will be entrusted to provide a secure digital ledger that tracks their customers’ digital interactions across the internet, via a network of openly available smart contracts stored in a decentralized blockchain. Therefore, smart contracts could provide increased security, scalability and privacy (e.g. as online users can protect their intellectual properties through non-fungible tokens).
(Developed by Camilleri & Camilleri, 2023).

      Read the full paper in its entirety here:

      Suggested citation: Camilleri, M.A. & Camilleri, A.C. (2023).  Metaverse education: Opportunities and challenges for immersive learning in virtual environments,  2023 The 4th Asia Conference on Computers and Communications (ACCC 2023),  IOP Publishing, Bristol, United Kingdom (Scopus).


      Responsible artificial intelligence governance and corporate digital responsibility

This post discusses the salient aspects of my latest article, entitled “Artificial intelligence governance: Ethical considerations and implications for social responsibility”, published through Wiley’s Expert Systems.



      Filed under Marketing

      Big data: What are they?

      Big data refers to datasets that are too large or complex to be processed with conventional data processing software. Big data systems handle a large volume and a wide variety of information at high velocity.


      Filed under Big Data

      The sharing economy: A definition

      The sharing economy can be described as a set of socio-economic systems that enable consumers to participate in the production, distribution and consumption of goods and/or services. Individuals and organizations, including for-profit enterprises, social enterprises, cooperatives, local communities and non-governmental entities, that participate in the sharing economy usually rely on Internet technologies, particularly on digital platforms like social media networks, to facilitate the distribution, sharing and reuse of (excess-capacity) products or services.

      Technology firms like Airbnb and Uber, among others, facilitate trading between service providers and customers by providing the electronic and mobile platforms through which their transactions take place.


      Filed under sharing economy

      Customer satisfaction and loyalty with online consumer reviews

      This text is drawn from excerpts of an article published through Elsevier’s International Journal of Hospitality Management.

      Suggested citation: Camilleri, M.A. & Filieri, R. (2023). Customer satisfaction and loyalty with online consumer reviews: Factors affecting revisit intentions, International Journal of Hospitality Management, https://doi.org/10.1016/j.ijhm.2023.103575

      Abstract

      While previous research investigated the effects of online consumer reviews on purchase behaviors, there is still a lack of knowledge on the impact of the reviews’ credibility, content quality and information usefulness on the customers’ satisfaction levels with them. Data were gathered from a sample of 512 participants. A partial least squares approach was utilized to evaluate the reliability and validity of the constructs and to identify the causal effects in this contribution’s structured model. The findings reveal that information usefulness is a very strong predictor of satisfaction. They also confirm highly significant indirect effects between information quality and customer satisfaction, when information usefulness mediates this link. This study suggests that prospective customers appreciate quality reviews from consumers who have already experienced the hospitality services. It raises awareness about the usefulness of review sites, as online users refer to their content before committing themselves to purchasing products and services.

      Keywords: customer satisfaction; customer loyalty; information usefulness; information quality; source credibility; information adoption model.

      Introduction

      Advances in the Internet are presenting online users and prospective customers of hospitality businesses with great opportunities for interactive engagement through blogs, microblogs, discussion fora, social networking sites and online communities. Many consumers share insights about their service experiences through review platforms like Airbnb, Booking.com, TripAdvisor, and the like. Very often, they praise or complain about different aspects of their service encounters (Akdim et al., 2022; Filieri and McLeay, 2014; Rita et al., 2022). Such testimonials are intended to help potential consumers reduce their uncertainty before committing to purchase decisions.

      The electronic content featured in review sites as well as in social media can be read by online users hailing from different regions across the globe. Interactive platforms enable their users to feature positive and negative publicity (Moro et al., 2020; Sun and Liu, 2021; Shin et al., 2023) via qualitative service evaluations and/or via quantitative scores, also known as ratings.  Online users can subscribe to review networks to voice their testimonials on their satisfaction and/or on their dissatisfaction levels with the services they experienced (Kim et al., 2023; Zheng et al., 2023). In the latter case, they will intentionally engage in negative word-of-mouth (WOM) publicity to tarnish the reputation and image of the business (Qiao et al., 2022).

      This topic has been attracting the interest of a number of scholars in marketing, information systems, as well as in the travel, tourism and service industries (Donthu et al., 2021). Various researchers sought to investigate the consumers’ acceptance of online reviews. Frequently, they explored the internalization processes whereby individuals take heed of, or take into consideration, user-generated content like electronic WOM (eWOM) publicity, which is usually co-created by consumers who have already experienced products and services, in order to enhance their extant knowledge about the service quality provided by hospitality businesses (Song et al., 2022; Zhang et al., 2021).

      This argumentation is consistent with the information adoption model (IAM). Sussman and Siegal (2003) suggest that individuals tend to rely on quality information if they believe that it is useful to them. The authors argued that persons are influenced by knowledge transfer if they understand and comprehend the flows of information they receive. Hence, individuals would be in a position to determine the best courses of action that better serve their needs, particularly if they perceive that other individuals are providing reliable and trustworthy advice to them (Erkan and Evans 2016).

      Information adoption factors, including details relating to the quality of the content and the credibility of the informational sources, may significantly affect the individuals’ perceptions about the usefulness of online reviews (Cheung et al., 2008; Filieri, 2015). Hence, the argument quality of consumer testimonials, as well as the credibility of the sources, are two major determinants that can influence online users’ satisfaction levels (Filieri et al., 2015; Zhao et al., 2019), with the sites hosting online reviews, and may even determine their revisit intentions to them (Kaya et al., 2019; Ladhari and Michaud, 2015; Rodríguez et al., 2020).

      This empirical research investigates perceptions toward consumer review sites. It focuses on online users’ beliefs about the quality of their information, as well as on the credibility and usefulness of their content. It examines these constructs’ exogenous effects on the users’ satisfaction levels and on their loyalty with consumer review platforms, as shown in Figure 1.

      (Source: Camilleri and Filieri, 2023)

      Hence, this study validates key factors, namely, information quality (Cheung et al., 2008; Kumar and Ayodeji, 2021; McClure and Seock, 2020; Talwar et al., 2021), source credibility (Argyris et al., 2021; Filieri, 2015), and information usefulness (Camilleri et al., 2023; Filieri, 2015). These measures are drawn from valid information and/or technology adoption models (Sussman and Siegal, 2003), and are combined with consumer satisfaction (Maxham and Netemeyer, 2002) and consumer loyalty (Tran and Strutton, 2020; Zeithaml, et al., 1996). The latter two constructs are associated with the service-dominant logic (Zeithaml et al., 2002; Parasuraman et al., 2005).

      Arguably, regular users of review platforms are likely to take heed of the consumers’ recommendations as they perceive the usefulness of their advice (on their service encounters) (D’Acunto et al., 2020; Xu, 2020; Ye et al., 2009). The researchers presume that the individuals who utilize these websites will usually trust past customers’ experiences. Hence, this study hypothesizes that the respondents who habitually rely on consumer reviews are satisfied with the quality of their content and perceive that their sources are credible and useful. As a result, the research participants may be intrigued to revisit them in the future. Hence, the research questions of this contribution are:

      RQ1: How and to what extent are information quality and source credibility affecting the usefulness of consumer reviews?

      RQ2: How and to what extent are informative and helpful reviews influencing online users’ satisfaction levels and loyalty behaviors, in terms of their revisit intentions to these platforms?

      RQ3: How and to what degree is information usefulness mediating the information quality – customer satisfaction/customer loyalty and/or source credibility – customer satisfaction/customer loyalty causal paths?

      Previous research examined the perceptions about eWOM and focused on online review websites by using IAM (Cheung et al., 2008; Filieri, 2015). However, for the time being, no other studies sought to explore the effects of IAM’s key constructs on electronic service quality’s (eSERVQUAL’s) endogenous factors of satisfaction and loyalty. Therefore, this study raises awareness on the usefulness of review sites as prospective customers are referring to their content before committing themselves to purchasing products or prior to experiencing the businesses’ services. In this case, the researchers theorized that they would probably revisit the review platforms, if they were satisfied with their quality information and source credibility.

      A survey questionnaire was employed to collect data from subscribers of popular social media networks. A partial least squares structural equation modelling (PLS-SEM) methodology was utilized to examine the proposed research model and to confirm the reliability and validity of the constructs used in this study. This composite-based SEM approach enabled the researchers to shed light on the significant effects predicting the respondents’ likelihood to rely on user-generated content and to determine whether these effects influenced their satisfaction levels and revisit intentions.
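The study itself relied on SmartPLS; purely as a hedged illustration of the mediation logic being tested (e.g. information quality → information usefulness → satisfaction), the sketch below estimates an indirect effect with ordinary least squares and a percentile bootstrap. The variable names and simulated data are placeholders, not the authors’ dataset or their PLS-SEM estimator.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder survey-style scores (not the study's actual dataset)
n = 512
info_quality = rng.normal(size=n)
usefulness = 0.6 * info_quality + rng.normal(scale=0.8, size=n)                      # mediator
satisfaction = 0.7 * usefulness + 0.1 * info_quality + rng.normal(scale=0.8, size=n)  # outcome

def slope(x, y):
    """OLS slope of y on x (with an intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def indirect_effect(x, m, y):
    """Estimate a*b: the x -> m path (a) times the m -> y path controlling for x (b)."""
    a = slope(x, m)
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

# Percentile bootstrap for the indirect effect
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(info_quality[idx], usefulness[idx], satisfaction[idx]))
low, high = np.percentile(boot, [2.5, 97.5])
point = indirect_effect(info_quality, usefulness, satisfaction)
print(f"indirect effect ~ {point:.3f}, 95% bootstrap CI [{low:.3f}, {high:.3f}]")
```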

      The following section features an original conceptual framework and formulates the hypotheses of this empirical investigation. Afterwards, the methodology provides details on the data collection process for this quantitative study. Subsequently, the results illustrate the findings from SmartPLS’s analytical approach, which reveals the causal effects in this study’s research model. In conclusion, this article identifies theoretical and managerial implications. The researchers discuss the limitations of this study and outline future research avenues.

      Table 1. A definition of the key factors used in this study

      Information Quality: This factor measures perceptions of the quality of information (in terms of the consumer reviews’ reliability and appropriateness).
      Source Credibility: This factor measures perceptions of the credibility of the sources (in terms of the consumer reviewers’ trustworthiness and proficiency in sharing their service experiences with others).
      Information Usefulness: This factor measures perceptions of the utilitarian value of the information (featured in consumer reviews).
      Customer Satisfaction: This factor refers to positive or negative feelings about products or services (in this case, it is focused on the electronic services provided by review websites).
      Customer Loyalty: This factor refers to the willingness to repeatedly engage with specific businesses (in this case, it is focused on review websites).
      (Source: Camilleri and Filieri, 2023)

      Theoretical implications

      This contribution puts forward a research model that integrated IAM’s key factors, including information quality (Cheung et al., 2008; Filieri, 2015; McClure and Seock, 2020; Talwar et al., 2021), source credibility (Filieri et al., 2021; Ismagilova et al., 2020) and information usefulness (of consumer reviews) (Camilleri and Kozak, 2023; Moro et al., 2020), with eSERVQUAL’s satisfaction (Kaya et al., 2019; Kumar and Ayodeji, 2021) and loyalty outcomes (Kumar and Ayodeji, 2021; Tran and Strutton, 2020).

      The results from SmartPLS 3 confirm the reliability and validity of all measures that were used in this study. The findings indicate highly significant direct as well as indirect effects that are predicting the online users’ satisfaction levels and loyalty with review sites. This research suggests that the quality of the user generated content as well as the sources’ credibility (in terms of the trustworthiness and expertise of the online reviewers) are positive and significant antecedents of the individuals’ perceptions about the usefulness of information. These findings reveal that both information quality and source credibility are significant precursors of information usefulness, thereby validating mainstream IAM theoretical underpinnings (Cheung et al., 2008; Chong et al., 2018; Erkan and Evans, 2016; Filieri, 2015; Sussman and Siegal, 2003).

      This study differentiated itself from IAM as it examined the effects of information quality, source credibility and information usefulness on the consumers’ satisfaction levels and loyalty with review websites. It reported that information usefulness – customer satisfaction was the strongest link in this empirical investigation and that customer satisfaction partially mediated the relationship between information usefulness and customer loyalty. Moreover, the results showed that there were highly significant indirect effects between information quality and customer satisfaction, between information quality and customer loyalty, between source credibility and customer satisfaction, and between source credibility and customer loyalty.

      In this case, this research indicated that the respondents (i.e. online users) were satisfied with the review platforms that featured the consumers’ testimonials about their “moments of truth” with hospitality businesses. It suggested that they were likely to revisit them in the future. To the best of the authors’ knowledge, there are no studies in the academic literature that have integrated theoretical underpinnings related to the service-dominant logic (Vargo and Lusch, 2008), or to SERVQUAL- and/or eSERVQUAL-related factors (Kaya et al., 2019; Maxham and Netemeyer, 2002; Parasuraman et al., 2005; Rodríguez et al., 2020; Zeithaml et al., 1996; Zeithaml et al., 2002), with IAM constructs (Camilleri & Kozak, 2023; Chatterjee et al., 2023; Cheung et al., 2008; D’Acunto et al., 2020; Erkan and Evans 2016; Filieri, 2015; Huiyue et al., 2022; Kang and Namkung, 2019; Li et al., 2020; Sussman and Siegal, 2003; Ye et al., 2009) to explore the satisfaction levels and revisit intentions toward review websites focused on consumer experiences of hospitality services. This original research addresses this knowledge gap. In conclusion, it implies that IAM’s exogenous factors can be used to investigate the online users’ perceptions about the usefulness of, and satisfaction with, past consumers’ service evaluations, and to shed light on their intentions to habitually check the qualitative content of review platforms/apps prior to visiting service businesses (including hotels, Airbnbs and restaurants, among others) and/or before committing themselves to a purchase decision.

      This contribution’s novel conceptual model raises awareness on the importance of evaluating the consumers’ satisfaction levels as well as their revisit intentions of review sites rather than merely determining how information usefulness and other IAM antecedents affect their information adoption.

      Managerial implications

      This research postulates that online users perceive the usefulness of consumer reviews. It clearly indicates that the respondents feel that consumer reviews feature quality content, and that they consider them informative, credible and trustworthy. The results suggest that they are satisfied with the user-generated content (which sheds light on the reviewers’ opinions about their personal service encounters). In fact, their responses imply that they are likely to revisit review websites and/or to engage with their apps again.

      Review platforms help prospective consumers in their purchase decisions. They enable them to quickly access consumer experiences with a wide array of service providers and to compare their different shades of opinion. This study shows that they evaluate consumer reviews to determine whether or not the hospitality firms are delivering on their promises.

      The consumers’ reviews can make or break a business. The restaurant patrons’ and/or the hotel guests’ words of praise, as well as their genuine expressions of respect and gratitude, can elevate the business and enhance its corporate reputation. Alternatively, the customers’ critical evaluations may tarnish the image of the hospitality business (in this case). Whilst the consumers’ positive experiences with a company increase the likelihood of loyal behaviors and of word-of-mouth publicity (which attracts new customers), poor reviews and ratings could signal that customers are dissatisfied with certain aspects of the service delivery and may even result in their conversion to the hospitality firm’s competitors.

      Hence, it is in the businesses’ self-interest: (i) to consistently deliver service quality, (ii) to meet and exceed their customers’ expectations, (iii) to continuously monitor their consumers’ reviews, (iv) to address contentious issues in a timely manner, and (v) to minimize consumer complaints (and turn them into opportunities for consumer satisfaction and loyalty).

      Limitations and future research avenues

      This research comprised reliable measures that are tried and tested in academia. Information quality, source credibility and information usefulness factors were utilized to explore the customers’ satisfaction and loyalty with review sites. These five constructs had never been integrated within the same empirical investigation. Future researchers are invited to validate this study in other contexts. For example, this theoretical model could explore the online users’ satisfaction and intentions to use social media networks (SNSs) and/or e-commerce websites and online marketplaces.

      Alternatively, researchers can include other constructs related to IAM to assess perceptions about information understandability, information reliability, information relevance, information accuracy, and information timeliness, among others. Most of these constructs represent information quality. In addition, they may examine the individuals’ insights about source trustworthiness and/or source expertise rather than integrating them into a source credibility construct. They may also consider various constructs from eSERVQUAL like website appeal, attractiveness, design, functionality, security and consumer fulfilment aspects.

      Perhaps there is scope for future studies to consider other measures that are drawn from psychology research, like the Social Cognitive Theory (Bandura, 1986), the Theory of Reasoned Action (Fishbein and Ajzen, 1975) or the Theory of Planned Behavior (Ajzen, 1991), among others, or from technology adoption models, including the Technology Acceptance Model (TAM) (Davis, 1989; Davis et al., 1989), TAM2 (Wang et al., 2021), TAM3 (Al-Gahtani, 2016), the Innovation Diffusion Theory (IDT) (Moore and Benbasat, 1991; Rogers, 1995), the Motivational Model (MM) (Davis et al., 1992), the Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh et al., 2003) and UTAUT2 (Venkatesh et al., 2012).

      These theories may be used to better understand the acceptance and utilization of information technologies (like review platforms). Notwithstanding, other studies are required to shed more light on the moderating effects of demographic variables on the usability and satisfaction levels with disruptive innovations like voice assistants, chatbots, ChatGPT, the Metaverse, and the like.

      Other researchers may utilize different research designs and sampling approaches to gather and analyze primary data. They could capture interpretative data through inductive research, to delve deeper into the informants’ opinions about eWOM publicity on consumer review sites. Qualitative research methodologies and interpretative designs could shed more light on how, where, when and why the customers’ user-generated content (on their service experiences) could influence the intentional behaviors of prospective consumers in today’s digital age.

      All the references are featured in the article. An open access version is available here: https://www.researchgate.net/publication/372891266_Customer_satisfaction_and_loyalty_with_online_consumer_reviews_Factors_affecting_revisit_intentions


      Filed under Marketing

      An artificial intelligence governance framework

      This is an excerpt from my latest contribution on responsible artificial intelligence (AI).

      Suggested citation: Camilleri, M. A. (2023). Artificial intelligence governance: Ethical considerations and implications for social responsibility. Expert Systems, e13406. https://doi.org/10.1111/exsy.13406

      The term “artificial intelligence governance” or “AI governance” integrates the notions of “AI” and “corporate governance”. AI governance is based on formal rules (including legislative acts and binding regulations) as well as on voluntary principles that are intended to guide practitioners in their research, development and maintenance of AI systems (Butcher & Beridze, 2019; Gonzalez et al., 2020). Essentially, it represents a regulatory framework that can support AI practitioners in their strategy formulation and in day-to-day operations (Erdélyi & Goldsmith, 2022; Mullins et al., 2021; Schneider et al., 2022). The rationale behind responsible AI governance is to ensure that automated systems, including ML/DL technologies, are supporting individuals and organizations in achieving their long-term objectives, whilst safeguarding the interests of all stakeholders (Corea et al., 2023; Hickok et al., 2022).

      AI governance requires that organizational leaders comply with relevant legislation, hard laws and regulations (Mäntymäki et al., 2022). Moreover, they are expected to follow ethical norms, values and standards (Koniakou, 2023). Practitioners ought to be trustworthy, diligent and accountable in how they handle their intellectual capital and other resources, including their information technologies, finances and members of staff, in order to overcome challenges and to minimize uncertainties, risks and negative repercussions (e.g. decreased human oversight in decision-making, among others) (Agbese et al., 2023; Smuha, 2019).

      Procedural governance mechanisms ought to be in place to ensure that AI technologies and ML/DL models operate in a responsible manner. Figure 1 features some of the key elements that are required for the responsible governance of artificial intelligence. The following principles are intended to provide guidelines for the modus operandi of AI practitioners (including ML/DL developers).

      Figure 1. A Responsible Artificial Intelligence Governance Framework

      Accountability and transparency

      “Accountability” refers to the stakeholders’ expectations about the proper functioning of AI systems, in all stages, including in the design, creation, testing or deployment, in accordance with relevant regulatory frameworks. It is imperative that AI developers are held accountable for the smooth operation of AI systems throughout their lifecycle (Raji et al., 2020). Stakeholders expect them to be accountable by keeping a track record of their AI development processes (Mäntymäki et al., 2022).
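One lightweight way to keep the kind of track record mentioned above is a machine-readable development log. The sketch below is only an assumption about what such a log might contain (the field names are illustrative, not drawn from any formal standard); it records one training run per line so that the development history remains auditable.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class TrainingRunRecord:
    """Minimal audit entry for one model training run (illustrative fields only)."""
    model_name: str
    dataset_version: str
    training_params: dict
    evaluation_metrics: dict
    responsible_team: str
    timestamp: str = ""

    def __post_init__(self):
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

def append_to_audit_log(record: TrainingRunRecord, path: str = "ai_audit_log.jsonl") -> None:
    """Append the record as one JSON line so the development history stays reviewable."""
    with open(path, "a", encoding="utf-8") as log:
        log.write(json.dumps(asdict(record)) + "\n")

append_to_audit_log(TrainingRunRecord(
    model_name="loan-approval-classifier",
    dataset_version="applications-2023-06",
    training_params={"algorithm": "gradient_boosting", "max_depth": 3},
    evaluation_metrics={"accuracy": 0.91, "false_positive_rate": 0.07},
    responsible_team="credit-risk-ml",
))
```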

      The transparency notion refers to the extent to which end-users are in a position to understand how AI systems work (Andrada et al., 2020; Hollanek, 2020). AI transparency is associated with the degree of comprehension of algorithmic models in terms of “simulatability” (an understanding of the AI’s functioning), “decomposability” (related to how individual components work) and algorithmic transparency (which is associated with the algorithms’ visibility).

      In reality, it is difficult to understand how AI systems, including deep learning models and their neural networks, are learning (as they acquire, process and store data) during training phases. They are often considered black-box models. It may prove hard to algorithmically translate derived concepts into human-understandable terms, even though developers may use certain jargon to explain their models’ attributes and features. Many legislators are endeavoring to pressure AI actors to describe the algorithms they use in automated decision-making, yet the publication of algorithms is useless if outsiders cannot access the data of the AI model.

      Explainability and interpretability

      Explainability is the concept that sheds light on how AI models work, in a way that is comprehensible to a human being. Arguably, the explainability of AI systems could improve their transparency, trustworthiness and accountability. At the same time, it can reduce bias and unfairness. The explainability of artificial intelligence systems could clarify how they reached their decisions (Arya et al., 2019; Keller & Drake, 2021). For instance, AI could explain how and why autonomous cars decide to stop or to slow down when there are pedestrians or other vehicles in front of them.

      Explainable AI systems might improve consumer trust and may enable engineers to develop other AI models, as they are in a position to track the provenance of every process, to ensure reproducibility, and to enable checks and balances (Schneider et al., 2022). Similarly, interpretability refers to the level of accuracy of machine learning programs in terms of linking causes to effects (John-Mathews, 2022).
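As a generic illustration of explainability (not the specific techniques cited above), the following sketch uses scikit-learn’s permutation importance to indicate which input features drive a trained model’s predictions; the synthetic dataset and model choice are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; real applications would use domain features (e.g. sensor readings)
X, y = make_classification(n_samples=500, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature degrade performance?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: importance ~ {importance:.3f}")
```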

      Fairness and inclusiveness

      The responsible AI’s fairness dimension refers to the practitioners’ attempts to correct algorithmic biases that may possibly (voluntarily or involuntarily) be included in their automation processes (Bellamy et al., 2019; Mäntymäki, et al., 2022). AI systems can be affected by their developers’ biases, which could include preferences or antipathies toward specific demographic groups defined by gender, age or ethnicity, among others (Madaio et al., 2020). Currently, there is no universal definition of AI fairness.

      However, many multinational corporations have recently developed instruments that are intended to detect bias and to reduce it as much as possible (John-Mathews et al., 2022). In many cases, AI systems learn from the data that are fed to them. If the data are skewed and/or embed implicit bias, they may result in inappropriate outputs.

      Fair AI systems rely on unbiased data (Wu et al., 2020). For this reason, many companies, including Facebook, Google, IBM and Microsoft, among others, are striving to involve members of staff hailing from diverse backgrounds. These technology conglomerates are trying to become as inclusive and as culturally aware as possible in order to prevent bias from affecting their AI processes. Previous research reported that AI bias may result in inequality, discrimination and the loss of jobs (Butcher & Beridze, 2019).
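One simple way to operationalize a fairness check of the kind described above is to compare a model’s positive-prediction rates across demographic groups. The sketch below computes a demographic-parity gap and a disparate-impact ratio on placeholder data; it is one metric among many, not a complete fairness audit.

```python
import numpy as np

def fairness_report(predictions: np.ndarray, group: np.ndarray) -> dict:
    """Compare positive-outcome rates between two groups (labelled 0 and 1)."""
    rate_group_0 = predictions[group == 0].mean()
    rate_group_1 = predictions[group == 1].mean()
    return {
        "rate_group_0": rate_group_0,
        "rate_group_1": rate_group_1,
        "demographic_parity_gap": abs(rate_group_0 - rate_group_1),
        # The "80% rule": ratios well below 0.8 are often treated as a red flag
        "disparate_impact_ratio": min(rate_group_0, rate_group_1) / max(rate_group_0, rate_group_1),
    }

# Placeholder predictions (1 = favourable outcome) and a binary sensitive attribute
rng = np.random.default_rng(1)
predictions = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)
print(fairness_report(predictions, group))
```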

      Privacy and safety for consumers

      Consumers are increasingly concerned about the privacy of their data. They have a right to control who has access to their personal information. The data that is collected or used by third parties, without the authorization or voluntary consent of individuals, would result in the violations of their privacy (Zhu et al., 2020; Wu et al., 2022).

      AI-enabled products, including dialogue systems like chatbots and virtual assistants, digital assistants (e.g. Siri, Alexa or Cortana), and/or wearable technologies such as smart watches and sensorial smart socks, among others, are increasingly capturing and storing large quantities of consumer information. The benefits that these interactive technologies deliver may be offset by a number of challenges. The technology businesses that developed these products are responsible for protecting their consumers’ personal data (Rodríguez-Barroso et al., 2020). Their devices are capable of holding a wide variety of information on their users. They continuously gather textual, visual, audio, verbal and other sensory data from consumers. In many cases, customers are not aware that they are sharing personal information with them.

      For example, facial recognition technologies are increasingly being used in different contexts. They may be used by individuals to access websites and social media, in a secure manner and to even authorize their payments through banking and financial services applications. Employers may rely on such systems to track and monitor their employees’ attendance. Marketers can utilize such technologies to target digital advertisements to specific customers. Police and security departments may use them for their surveillance systems and to investigate criminal cases. The adoption of these technologies has often raised concerns about privacy and security issues. According to several data privacy laws that have been enacted in different jurisdictions, organizations are bound to inform users that they are gathering and storing their biometric data. The businesses that employ such technologies are not authorized to use their consumers’ data without their consent.

      Companies are expected to communicate their data privacy policies to their target audiences (Wong, 2020). They have to reassure consumers that the data collected with their consent is protected, and they are bound to inform them that this information may be used to improve customized services. The technology giants can reward consumers for sharing sensitive information. They could offer them improved personalized services, among other incentives, in return for their data. In addition, consumers may be allowed to access their own information and could be provided with more control (or other reasonable options) over how to manage their personal details.

      The security and robustness of AI systems

      AI algorithms are vulnerable to cyberattacks by malicious actors. Therefore, it is in the interest of AI developers to secure their automated systems and to ensure that they are robust enough against any risks and attempts to hack them (Gehr et al., 2018; Li et al., 2020).

      Access to AI models ought to be continuously monitored during their development and deployment (Bertino et al., 2021). There may be instances when AI models encounter incidental adversities, leading to the corruption of data. Alternatively, they might encounter intentional adversities when they experience sabotage from hackers. In both cases, the AI model will be compromised, which can result in system malfunctions (Papagiannidis et al., 2023).

      Developers have to prevent such contingent issues from happening. Their responsibilities are to improve the robustness of their automated systems and to make them as secure as possible, in order to reduce the threats posed by inadvertent irregularities, information leakages, and privacy violations like data breaches, contamination and poisoning by malicious actors (Agbese et al., 2023; Hamon et al., 2020).

      AI developers should have preventive policies and measures related to the monitoring and control of their data. They ought to invest in security technologies including authentication and/or access systems with encryption software as well as firewalls for their protection against cyberattacks. Routine testing can increase data protection, improve security levels and minimize the risks of incidents.
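As part of such routine testing, one crude robustness check is to verify that small input perturbations do not flip a model’s predictions. The sketch below illustrates this with Gaussian noise and a scikit-learn classifier; it is an assumption-laden toy check, not a full adversarial-robustness evaluation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Placeholder data and model standing in for a deployed system
X, y = make_classification(n_samples=400, n_features=8, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def prediction_stability(model, X, noise_scale=0.05, trials=20, seed=0):
    """Share of predictions that stay unchanged under small random input perturbations."""
    rng = np.random.default_rng(seed)
    baseline = model.predict(X)
    stable = np.ones(len(X), dtype=bool)
    for _ in range(trials):
        perturbed = X + rng.normal(scale=noise_scale, size=X.shape)
        stable &= (model.predict(perturbed) == baseline)
    return stable.mean()

print(f"stable under perturbation: {prediction_stability(model, X):.2%}")
```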

      Conclusions

      This review indicates that academics as well as practitioners are increasingly devoting their attention to AI, as they elaborate on its potential uses, as well as on its opportunities and threats. It reported that its proponents are raising awareness of the benefits of AI systems for individuals as well as for organizations. At the same time, it suggests that a number of scholars and other stakeholders, including policy makers, are raising concerns about its possible perils (e.g. Berente et al., 2021; Gonzalez et al., 2020; Zhang & Lu, 2021).

      Many researchers identified some of the risks of AI (Li et al., 2021; Magas & Kiritsis, 2022). In many cases, they warned that AI could disseminate misinformation, foster prejudice, bias and discrimination, raise privacy concerns, and lead to the loss of jobs (Butcher & Beridze, 2019). A few commentators speculate about the “singularity”, or the moment when machine learning technologies could even surpass human intelligence (Huang & Rust, 2022). They predict that a critical shift could occur if humans are no longer in a position to control AI.

      In this light, this article sought to explore the governance of AI. It sheds light on substantive regulations, as well as on reflexive principles and guidelines, that are intended for practitioners who are researching, testing, developing and implementing AI models. It clearly explains how institutions, non-governmental organizations and technology conglomerates are introducing protocols (including self-regulation) to prevent contingencies arising from inappropriate AI governance.

      Debatably, the voluntary or involuntary mishandling of automated systems can expose practitioners to operational disruptions and to significant risks, including to their corporate image and reputation (Watts & Adriano, 2021). The nature of AI requires practitioners to develop guardrails to ensure that their algorithms work as they should (Bauer, 2022). It is imperative that businesses comply with relevant legislation and follow ethical practices (Buhmann & Fieseler, 2023). Ultimately, it is in their interest to operate their companies in a responsible manner and to implement AI governance procedures. This way they can minimize unnecessary risks and safeguard the well-being of all stakeholders.

      This contribution has addressed its underlying research objectives. Firstly, it raised awareness of AI governance frameworks that were developed by policy makers and other organizations, including by the businesses themselves. Secondly, it scrutinized the extant academic literature focused on AI governance and on the intersection of AI and CSR. Thirdly, it discussed essential elements for the promotion of socially responsible behaviors and ethical dispositions among AI developers. In conclusion, it put forward an AI governance conceptual model for practitioners.

      This research made reference to regulatory instruments that are intended to govern AI expert systems. It reported that, at the moment, only a few jurisdictions have formalized their AI policies and governance frameworks. Hence, this article urges laggard governments to plan, organize, design and implement regulatory instruments that ensure that individuals and entities are safe when they utilize AI systems for personal, educational and/or commercial purposes.

      Arguably, one has to bear in mind that, in many cases, policy makers have to face a “pacing problem” as the proliferation of innovation is much quicker than legislation. As a result, governments tend to be reactive in the implementation of regulatory interventions relating to innovations. They may be unwilling to hold back the development of disruptive technologies from their societies. Notwithstanding, they may face criticism by a wide array of stakeholders in this regard, as they may have conflicting objectives and expectations.

      Governments regulate business and industry in order to establish technical, safety and quality standards, as well as to monitor compliance. Yet, they may consider introducing different forms of regulation other than the traditional “command and control” mechanisms. They may opt for performance-based and/or market-based incentive approaches, co-regulation and self-regulation schemes, among others (Hepburn, 2009), in order to foster technological innovations.

      This research has shown that a number of technology giants, including IBM and Microsoft, among others, are anticipating the regulatory interventions of the different governments where they operate their businesses. It reported that they are communicating about their responsible AI governance initiatives as they share information on the policies and practices that are meant to certify, explain and audit their AI developments. Evidently, these companies, among others, are voluntarily self-regulating as they promote accountability, fairness, privacy and robust AI systems. These two organizations, in particular, are raising awareness about their AI governance frameworks to increase their CSR credentials with stakeholders.

      Likewise, AI developers who work for other businesses are expected to forge relationships with external stakeholders, including policy makers as well as individuals and organizations who share similar interests in AI. Innovative clusters and network developments may result in better AI systems and can also decrease the chances of possible risks. Indeed, practitioners can be in a better position if they cooperate with stakeholders for the development of trustworthy AI and if they increase their human capacity to improve the quality of their intellectual properties (Camilleri et al., 2023). This way, they can enhance their competitiveness and growth prospects (Troise & Camilleri, 2021). Arguably, it is in their interest to continuously engage with internal stakeholders (and employees), and to educate them about the AI governance dimensions that are intended to promote accountable, transparent, explainable, interpretable, reproducible, fair, inclusive and secure AI solutions. Hence, they could maximize AI benefits while minimizing the risks as well as the associated costs.

      Future research directions

      Academic colleagues are invited to raise more awareness on AI governance mechanisms as well as on verification and monitoring instruments. They can investigate what, how, when and where protocols could be used to protect and safeguard individuals and entities from possible risks and dangers of AI.

      The “what” question involves the identification of AI research and development processes that require regulatory or quasi regulatory instruments (in the absence of relevant legislation) and/or necessitate revisions in existing statutory frameworks.

      The “how” question is related to the substance and form of AI regulations, in terms of their completeness, relevance, and accuracy. This argumentation is synonymous with the true and fair view concept applied in the accounting standards of financial statements.

      The “when” question is concerned with the timeliness of the regulatory intervention. Policy makers ought to ensure that stringent rules do not hinder or delay the advancement of technological innovations.

      The “where” question is meant to identify the context where mandatory regulations or the introduction of soft laws, including non-legally binding principles and guidelines are/are not required.

      Future researchers are expected to investigate these four questions in more depth and breadth. This research indicated that most contributions on AI governance were discursive in nature and/or involved literature reviews. Hence, there is scope for academic colleagues to conduct primary research and to utilize different research designs, methodologies and sampling frames to better understand the implications of planning, organizing, implementing and monitoring AI governance frameworks in diverse contexts.

      The full article is also available here: https://www.researchgate.net/publication/372412209_Artificial_intelligence_governance_Ethical_considerations_and_implications_for_social_responsibility


      Filed under artificial intelligence, chatbots, Corporate Social Responsibility, internet technologies, internet technologies and society

      Metaverse education: A cost-benefit analysis

      This is an excerpt from one of my latest articles.

      Suggested Citation: Camilleri, M.A. (2023). Metaverse applications in education: A systematic review and a cost-benefit analysis, Interactive Technology and Smart Education, Forthcoming, https://doi.org/10.1108/ITSE-01-2023-0017

      A critical review of the literature suggests that there are both pros and cons of using Metaverse applications in education. The table below provides a summary of the possible costs and benefits of delivering education through the Metaverse’s virtual environments. The following sections feature a more detailed discussion of these elements.

      Table 1. A cost-benefit analysis on Metaverse education

      Costs:
      - Infrastructure, resources and capabilities
      - The degree of freedom in a virtual world
      - Privacy and security of users’ personal data
      - Identity theft and hijacking of user accounts
      - Borderless environment raises ethical and regulatory concerns
      - Users’ addictions and mental health issues

      Benefits:
      - Immersive multi-sensory experiences in 3D environments
      - Equitable and accessible space for all users
      - Interactions with virtual representations of people and physical objects
      - Interoperability
      (Camilleri, 2023)

      Costs

      Infrastructure, resources and capabilities

                  The use of the Metaverse technology will probably necessitate a substantial investment in hardware to operate in the universities’ virtual spaces. It requires intricate devices, including appropriate high-performance infrastructures, to achieve the accurate retina display and pixel density needed for realistic virtual immersion. These systems rely on fast internet connections with good bandwidth, as well as computers with adequate processing capabilities that are equipped with good graphics cards (Bansal et al., 2022; Chang et al., 2022; Girard and Robertson, 2020; Jiawen et al., 2022; Makransky and Mayer 2022). For the time being, VR, MR and AR hardware may be considered bulky, heavy, expensive and cost-prohibitive, in some contexts.

      The degree of freedom in a virtual world

                  The Metaverse may offer higher degrees of freedom than what is available through the worldwide web and Web 2.0 technologies (Hackl et al., 2022). Its administrators are not in a position to anticipate the behaviors of all persons using their technologies. Therefore, Metaverse users, including students as well as their educators, can possibly be exposed to positive as well as negative influences, as other individuals can disguise themselves, by using anonymous avatars, to roam in the vast virtual environments.

      Privacy and security of users’ personal data

                  The users’ interactions with the Metaverse, as well as their personal or sensitive information, can be tracked by the platform operators hosting this Internet service, as they continuously record, process and store their virtual activities in real time. Like the preceding worldwide web and Web 2.0 technologies, the Metaverse can possibly raise users’ concerns about the security of their data and of their intellectual properties (Chen, 2022; Ryu et al., 2022; Skalidis et al., 2022). They may be wary about data breaches, scams, et cetera (Njoku et al., 2023; Tan et al., 2022).

                  Public blockchains and other platforms can already trace the users’ sensitive data, so they are not anonymous to them. Individuals may decide to use one or more avatars to explore the Metaverse’s worlds. They may risk exposing their personal information, particularly when they are porting from one Metaverse to another and/or when they share transactional details via non-fungible tokens (NFTs) (Hwang, 2023). Some Metaverse systems do not require their users to share personal information when they create their avatars. However, they could capture relevant information from sensors that detect their users’ brain activity, monitor their facial features, eye motion and vocal qualities, along with other ambient data pertaining to the users’ homes or offices.

                  Platform operators may have legitimate reasons to capture such information, in order to protect users against objectionable content and/or the unlawful conduct of other users. In many cases, the users’ personal data may be collected for advertising and/or for communication purposes. Currently, different jurisdictions have not regulated their citizens’ behaviors within Metaverse contexts. Work is still in progress in this regard.

      Identity theft and hijacking of user accounts

                  There may be malicious persons or groups who may try to use certain technologies to obtain personal information and digital assets from Metaverse users. Recently, deepfake artificial intelligence software has generated short audio content that mimicked and impersonated a human voice. Other bots may easily copy human beings’ verbal, vocal and visual data, including their personality traits. They could duplicate the avatars’ identities to commit fraudulent activities, including unauthorized transactions and purchases, or other crimes with their disguised identities. For example, Roblox users reported that they experienced avatar scams in the past. In many cases, criminals could try to avail themselves of the digital identities of vulnerable users, including children and senior citizens, among others, to access their funds or cryptocurrencies (as these may be linked to their Metaverse profiles). As a result, Metaverse users may become victims of identity theft. In the near future, evolving security protocols and digital ledger technologies like the blockchain are expected to increase the transparency and cybersecurity of digital assets (Ryu et al., 2022). However, users still have to remain vigilant about their digital footprint, to continue protecting their personal information.

                  As the use of the virtual environment is expected to increase in the coming years, particularly with the emergence of the Metaverse, it is imperative that new ways are developed to protect all users, including students. Individuals ought to be informed about the risks to their privacy. Various validation procedures, including authentication through face scans, retina scans and speech recognition, may be integrated into such systems to prevent identity theft and the hijacking of Metaverse accounts.

      Borderless environment raises ethical and regulatory concerns

                  For the time being, a number of policy makers as well as academics are raising questions about the content that can be presented in the Metaverse’s virtual worlds, as well as about how to control the conduct and behaviors of Metaverse users. Arguably, it may prove difficult for the regulators of different jurisdictions to enforce their legislation in the Metaverse’s borderless environment (Njoku et al., 2023). For example, European citizens are well acquainted with the European Union’s (EU) General Data Protection Regulation (GDPR, 2016). Other countries have their own legal frameworks and/or principles that are intended to safeguard the rights of data subjects as well as those of content creators. For example, the United States government has been slower than the EU to introduce its privacy-by-design policies. Recently, the South Korean Government announced a set of laudable, non-binding ethical guidelines for the provision and consumption of Metaverse services. However, there is currently no set of formal rules that applies to all Metaverse users.

      Users’ addictions and mental health issues

                  Although many AR and VR technologies have already been tried and tested in the past few years, the Metaverse is still getting started. At the moment, it is difficult to determine the effects of the Metaverse on users’ health and well-being (Chen, 2022). Many commentators anticipate that excessive exposure to the Metaverse’s immersive technologies may result in negative side-effects for the psychological and physical health of human beings (Han et al., 2022). They suggest that individuals may easily become addicted to a virtual environment where the limits of reality are their own imagination. They are lured to it “for all the things they can do” and will be willing to stay “for all the things they can be” (excerpts from the blockbuster movie Ready Player One).

                  Past research confirms that spending excessive time on the internet, on social media or playing video games can increase the chances of mental health problems like attention deficit disorders (Dullur et al., 2021), as well as anxiety, stress or depression (Lee et al., 2021), among others. Individuals play video games to achieve their goals and to advance to the next level. Their gameplay releases dopamine (Pallavicini and Pepe, 2020). Similarly, their dopamine levels can increase when they are followed on social media, or when they receive likes, comments or other forms of online engagement (Capriotti et al., 2021; Camilleri and Kozak, 2022; Troise and Camilleri, 2021). Individuals can easily develop an addiction to this immersive technology, as they seek stimulating and temporarily pleasurable experiences in its virtual spaces. As a result, they may become dependent on it (Burhan and Moradzadeh, 2020).

                  However, the individuals’ interpersonal communications via social media networks are not as authentic or satisfying as real-life relationships, as they are not interacting in person with other human beings. In the case of the Metaverse, their engagement experiences may appear to be real. Yet, in the Metaverse, users are located in a virtual environment; they are not physically present near other individuals. Human beings need to build honest and trustworthy relationships with one another. The users of the Metaverse can create avatars that could easily conceal their identity within the virtual world.

      Benefits

      Immersive multi-sensory experiences in 3D environments

                  The Metaverse could provide a smooth interaction between the real world and the virtual spaces. Its users can engage in activities that are very similar to what they do in reality. However, it could also provide opportunities for them to experience things that could be impossible for them to do in the real world. Sensory technologies enable users to use their five senses of sight, touch, hearing, taste and smell, to immerse themselves in a virtual 3D environment.

                  Many students are experienced gamers and are lured by their 3D graphics. They learn when they are actively involved (Siyaev and Jo, 2021a). Therefore, the learning applications should be as meaningful, socially interactive and as engaging as possible (Camilleri and Camilleri, 2019). The Metaverse’s VR tools can be entertaining and could provide captivating and enjoyable experiences to their users (Bühler et al., 2022; Hwang, 2023; Suh and Ahn, 2022). In the past years, a number of educators and students have been using 3D learning applications (e.g. like Second Life) to visit virtual spaces that resemble video games (Hadjistassou, 2016).

                  Arguably, there is scope for educators and content developers to create digital domains like virtual schools, colleges and campuses, where students and teachers can socialize and engage in two-way communications. Students could visit the premises of their educational institutions in online tours, from virtually anywhere. A number of universities are replicating their physical campus with virtual ones (Díaz et al., 2020). The design of the virtual campuses may result in improved student services, shared interactive content that could improve their learning outcomes, and could even reach wider audiences. Previous research confirms that it is more interesting and appealing for students to learn academic topics through the virtual world (Lu et al., 2022).

      Equitable and accessible space for all users

                  Like other virtual technologies, the Metaverse could be accessed from remote locations. Educational institutions can use its infrastructure to deliver courses (free of charge or against tuition fees). Metaverse education may enable students from different locations to use its open-source software to pursue courses from anywhere, at any time. Hence, its democratized architecture could reduce geographic disparities among students and increase their chances of continuing their education through higher education institutions in different parts of the world.

                  In the future, students including individuals with different abilities, may use the Metaverse’s multisensory environment to immerse themselves in engaging lectures (Hutson, 2022; Lee et al., 2022a).

      Interactions with virtual representations of people and physical objects

                  Currently, individual users can utilize the AR and VR applications to communicate with others and to exert their influence on the objects within the virtual world. They can organize virtual meetings with geographically distant users, attend conferences, et cetera (Camilleri and Camilleri, 2022b; Yu, 2022). Various commentators indicate that the Metaverse can be used to learn academic subjects in real-time sessions in a VR setting (Saritas and Topraklikoglu, 2022; Singh et al., 2022). It could be utilized to interact with peers and course instructors. The students and their lecturers will probably use an avatar that will represent their identity in the virtual world. Many researchers noted that avatars facilitate interactive communications and are a good way to personalize the students’ learning experiences (Barry et al., 2015; Díaz, 2020; Garrido-Iñigo and Rodríguez-Moreno, 2015; Melendez Araya and Hidalgo Avila, 2018; Park, and Kim, 2022).

      Interoperability

                  Many commentators speculate that unlike other VR applications, the Metaverse could probably enable its users to retain their identities as well as the ownership of their digital assets through different virtual worlds and platforms (Hwang, 2023; Xu et al., 2022). This implies that Metaverse users can communicate and interact with other individuals in a seamless manner through different devices or servers, across different platforms. They may be in a position to use the Metaverse to share data and content in different virtual worlds via Web 3.0 (Seddon et al., 2023).

      Conclusion

                  This research theorizes about the pros and cons of using the Metaverse’s immersive applications for educational purposes. It clearly indicates that many academics are already experimenting with VR’s immersive technology. While some of them anticipate that the Metaverse is poised to transform education, as they envisage that it could be integrated into school curricula and educational programs, others are more skeptical about the hype around this captivating technology. Time will tell whether the Metaverse project comes to fruition.

                  For the time being, education stakeholders are invited to tap into the potential of AR and VR technologies to continue improving the students’ learning journeys. Of course, further research is required to better understand how policy makers as well as practitioners, including the developers of the Metaverse, can address the challenges and issues identified in this contribution.

      The full article and the list of references are available through Researchgate, Academia and SSRN.


      Filed under Digital Learning Resources, digital media, Education, education technology, Higher Education