Tag Archives: ChatGPT

The use of Generative AI for travel and tourism planning

📣📣📣 Published via Technological Forecasting and Social Change.

👉 Very pleased to share this timely article that examines the antecedents of users’ trust in Generative AI recommendations for travel and tourism planning.

🙏 I would like to thank my colleagues (and co-authors), namely, Hari Babu Singu, Debarun Chakraborty, Ciro Troise and Stefano Bresciani, for involving me in this meaningful research collaboration. It’s been a real pleasure working with you on this topic!

https://doi.org/10.1016/j.techfore.2025.124407

Highlights

  • The study focused on the enablers and the inhibitors of generative AI usage
  • It adopted 2 experimental studies with a 2 × 2 between-subjects factorial design
  • The impact of the cognitive load produced mixed results
  • Personalized recommendations explained each responsible AI system construct
  • Perceived controllability was a significant moderator

Abstract

Generative AI models are increasingly adopted in tourism marketing to produce text, image, video, and code content tailored to users’ needs. The potential uses of generative AI are promising; nonetheless, it also raises ethical concerns that affect various stakeholders. Therefore, this research, which comprises two experimental studies, investigates the enablers and the inhibitors of generative AI usage. Studies 1 (n = 403 participants) and 2 (n = 379 participants) applied a 2 × 2 between-subjects factorial design in which cognitive load, personalized recommendations, and perceived controllability were independently manipulated. Study 1 examined the reduction (versus increase) of the cognitive load associated with the manual search for tourism information. Study 2 considered the probability of receiving personalized recommendations through generative AI features on tourism websites. Perceived controllability was treated as a moderator in each study. The impact of cognitive load produced mixed results (i.e., in predicting perceived fairness and environmental well-being), and no responsible AI system construct explained trust in Study 1. In Study 2, personalized recommendations explained each responsible AI system construct, though only perceived fairness and environmental well-being significantly explained trust in generative AI. Perceived controllability was a significant moderator in all relationships within Study 2. Hence, when designing and deploying generative AI systems in the tourism domain, professionals should incorporate ethical safeguards and user-empowerment strategies to build trust, thereby supporting the responsible and ethical use of AI that aligns with users and society. From a practical standpoint, the research provides recommendations on increasing user trust through the incorporation of controllability and transparency features in AI-powered platforms within tourism. From a theoretical perspective, it enriches the Technology Threat Avoidance Theory by incorporating ethical design considerations as fundamental factors influencing threat appraisal and trust.
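To make the design concrete, here is a minimal analysis sketch, assuming hypothetical variable and file names (the authors’ actual data and tooling are not shown in this excerpt): a moderated factorial model in Python, where the interaction term tests whether perceived controllability moderates the effect of the manipulated factor on trust.

```python
# Minimal sketch (not the authors' code): analyzing one of the 2 x 2
# between-subjects studies. Column and file names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant; both factors are binary manipulated conditions.
df = pd.read_csv("study1_responses.csv")  # hypothetical Study 1 data (n = 403)

# Main effects plus the interaction; a significant interaction coefficient
# indicates that perceived controllability moderates the cognitive-load effect.
model = smf.ols("trust ~ C(cognitive_load) * C(controllability)", data=df).fit()
print(model.summary())
```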

Introduction

Information and communication technologies have been playing a key role in enhancing the tourism experience (Asif and Fazel, 2024; Salamzadeh et al., 2022). The tourism industry has evolved into a content-centric industry (Chuang, 2023), meaning that the growth of the tourism sector is attributed to the creation, distribution, and strategic use of information. The shift from the traditional demand-driven model to the content-centric model represents a transformation in user behaviour (Yamagishi et al., 2023; Hosseini et al., 2024). Modern travellers are increasingly dependent on user-generated content to inform their choices and travel planning (Yamagishi et al., 2023; Rahaman et al., 2024). The content-focused marketing approach in tourism emphasizes the role of digital tools and storytelling in creating a holistic experience (Xiao et al., 2022; Jiang and Phoong, 2023). From planning a trip to sharing cherished memories, content adds value for travellers and tourism businesses alike (Su et al., 2023). For example, MakeMyTrip (MMT) integrated a generative AI trip-planning assistant that facilitates conversational bookings, assisting users with destination exploration, in-trip needs, personalized travel recommendations, summaries of hotel reviews based on user content, and voice navigation support, thereby making MMT’s platform more inclusive for its users. The content marketing landscape is changing due to the introduction of generative AI models that help generate text, images, videos, and even code for users (Wach et al., 2023; Salamzadeh et al., 2025). These models express language, creativity, and aesthetics much as humans do, and they enhance user experience in various industries, including travel and tourism (Binh Nguyen et al., 2023; Chan and Choi, 2025; Tussyadiah, 2014).

Gen AI enhances the natural flow of interactions by offering personalized experiences that align with consumer profiles and preferences (Blanco-Moreno et al., 2024). It is gaining significant momentum for its transformative impact within the tourism sector, revolutionizing marketing, operations, design, and destination management (Duong et al., 2024; Rayat et al., 2025). Accordingly, empirical studies suggest that Generative AI has the potential to transform tourists’ decision-making at every stage of their journey, representing a significant disruption to conventional tourism models (Florido-Benítez, 2024). Nonetheless, concerns have been raised about the potential implications of generative AI models, as their generated content might contain inaccurate or deceptive information that could adversely impact consumer decision-making (Kim et al., 2025a, Kim et al., 2025b). In its report titled “Navigating the Future: How Generative Artificial Intelligence (AI) is Transforming the Travel Industry”, Amadeus highlighted key concerns and challenges in implementing Gen AI, such as data security concerns (35 %), lack of expertise and training in Gen AI (34 %), data quality and inadequate infrastructure (33 %), ROI concerns and a lack of clear use cases (30 %), and difficulty in connecting with partners or vendors (29 %). Therefore, the present study argues that, with intuitive design, travel agents could tackle the lack of expertise and of clear use cases for Gen AI. The study suggests that for travel and tourism companies to build trust in Gen AI, they must tackle the root causes of user apprehension. This means addressing what makes users fear the unknown, ensuring they understand the system’s purpose, and fixing problems with biased or poor data. Previous studies also highlighted how the integration of Gen AI and tourism raises issues such as misinformation and hallucinations, data privacy and security, human disconnection, and inherent algorithmic biases (Christensen et al., 2025; Luu et al., 2025). Moreover, if Gen AI provides biased recommendations, the implications are adverse: if users perceive the recommendations as biased, they avoid using them, leading to high churn and the abandonment of platforms (Singh et al., 2023). Users’ satisfaction will decline, replaced by frustration and anger, as biased output undermines the promise of personalized services. This damages brand reputation and erodes significant competitive advantage (Wu and Yang, 2023). Such scenarios will likely lead to stricter regulations, mandatory algorithmic audits, and new consumer protection laws, forcing the industry to prioritize fairness as well as explainability to avoid serious consequences. Interestingly, research draws attention to an interesting paradox: consumers rely heavily on AI-generated travel itineraries even when they are aware of Gen AI’s occasional inaccuracies (Osadchaya et al., 2024). This reliance might stem from a belief in AI’s perceived objectivity and capacity for personalized recommendations, indicating a significant transformation of trust between human and non-human agents in the travel decision-making process (Kim et al., 2023a, Kim et al., 2023b). Empirical findings indicate that AI implementation in travel planning contributes to the objectivity of the results, effectively mitigates cognitive load, and supports higher levels of personalization aligned with user preferences (Kim et al., 2023a, Kim et al., 2023b).
Despite the growing body of literature explaining the role of trust in Gen AI acceptance and its influence on travellers’ decision-making and behavioural intentions, potential biases in AI-generated content continue to pose challenges to users’ confidence (Kim et al., 2021a, Kim et al., 2021b). Therefore, this research aims to examine the influence of generative AI in tourism on consumers’ trust in AI technologies, particularly the balance between technological progress and ethical responsibility, concerning the future of tourism (Dogru et al., 2025).

Existing research has focused more on AI technology as a phenomenon than on translating those theories into studies of how the ethics involved affect perceptions and trust (Glikson and Woolley, 2020). In addition, there is still the black-box phenomenon: the user’s inability to understand what happens inside an AI system. This emphasizes the need for more integrative studies spanning morally sound AI development, user trust, and design in tourism (Tuo et al., 2024).

Moreover, scant research has examined the factors that inhibit tourists from embracing Generative AI technologies, resulting in a limited understanding of travellers’ reluctance to adopt Generative AI for travel planning (Fakfare et al., 2025). Despite a growing body of literature examining the antecedents and outcomes of Generative AI (GAI) adoption, a large body of research has been based on established frameworks such as the Information Systems Success (ISS) model (Nguyen and Malik, 2022), the Technology Acceptance Model (TAM) (Chatterjee et al., 2021), and the Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh, 2022).

However, extensive reliance on traditional acceptance models risks ignoring critical socio-technical aspects, which are paramount in the context of GAI (Yu et al., 2022). While most studies explore the overarching effects of user acceptance and use of GenAI through TAM, UTAUT, and the DeLone and McLean IS success model, they give little consideration to ethical factors and responsible AI systems. Addressing these gaps could significantly broaden our theoretical understanding of how individuals evaluate and adopt generative AI technologies from an ethical-behaviour and socio-technical perspective.

Therefore, this research aims to fill this gap by investigating factors that facilitate or inhibit trust in generative AI systems, considering responsible AI and Technology Threat Avoidance Theory, and advancing the following research questions:

RQ1

How does the customer experience of using generative AI in tourism reflect the impact of enablers (such as responsible AI systems) and inhibitors (such as ambiguity and anxiety) on trust in generative AI?

RQ2

Does perceived controllability moderate the enablers and inhibitors of trust in generative AI in tourism?

This research draws on responsible AI principles and the Technology Threat Avoidance Theory to explicate the relationship between generative AI and trust in tourism. Seen through the conceptual lens of ethical behaviours, responsible AI principles are crucial for enhancing trust in Gen AI within tourism (Law et al., 2024). When users perceive Gen AI recommendations as fair, transparent, and bias-free, they are more likely to perceive the systems as trustworthy, which in turn mitigates user skepticism and promotes trust (Ali et al., 2023). Also, when Gen AI promotes sustainable and environmentally friendly practices, it demonstrates ethical responsibility and enhances trust in alignment with shared social values (Díaz-Rodríguez et al., 2023). By operationalizing responsible AI principles like transparency, fairness, and sustainability, Gen AI transforms from a black-box tool into a more trustworthy and responsible system for travel decisions (Kirilenko and Stepchenkova, 2025). From the socio-technical perspective, the Technology Threat Avoidance Theory (TTAT) supports the logic of how perceived ambiguity and perceived anxiety act as inhibitors of trust. In tourism, users’ experience holds paramount importance (Torkamaan et al., 2024). When users encounter Gen AI content that is difficult to comprehend, recommendations that are unstable or ambiguous, or exposure of their data to privacy risks, these apprehensions turn into a perceived threat of using Gen AI (Bang-Ning et al., 2025). According to TTAT, when users perceive a greater threat, they are more inclined to engage in avoidance behaviours, which also erodes trust in the system. Hence, TTAT explains why users might hesitate or avoid using Gen AI tools, even if they offer functional benefits such as personalized recommendations and reduced cognitive load (Shang et al., 2023).

The study adopted an experimental research design to isolate the independent phenomenon (the use of Gen AI for content generation) and to establish cause-and-effect relationships between factors of responsible AI systems and TTAT (Leung et al., 2023). The experimental setting helps us understand, empirically, the differences between human- and non-human-generated content from the perspective of users’ travel decision-making towards destinations. The study enriches the literature on both the ethical and environmental aspects (perceived fairness and environmental well-being) and the perceived-risk aspects (perceived ambiguity and perceived anxiety) in the tourism context. Perceived controllability is tested as a moderator, offering managers guidance on how to develop responsible AI systems that lower user fear and build trust. The study also helps practitioners understand how the personalized recommendations and reduced cognitive load facilitated by Gen AI in content generation affect tourists’ trust in Gen AI.


Section snippets

Responsible AI systems

Responsible AI adequately incorporates ethical aspects of AI system design and implementation and ensures that the systems are transparent, fair, and responsible (Díaz-Rodríguez et al., 2023). Responsible AI includes ethical, transparent, and accountable use of artificial intelligence systems, ensuring they are fair, secure, and aligned with societal values. It is also an approach to design, develop, and deploy AI systems so that they are ethical, safe, and trustworthy. It is a system that

Cognitive load, personalized recommendations, and perceived fairness

Cognitive load is the mental effort required to process and choose information (Islam et al., 2020). Cognitive load can be high when people interact with complex systems such as AI. Thus, high cognitive load may affect users’ ability to judge whether AI-based decisions can be considered fair, since they may not grasp enough of the workings of the system and its specific decisions (Westphal et al., 2023). On the other hand, perceived fairness refers to the users’ feelings about

Research methods and analysis

The experiments adopted in this study are scenario-based, since participants’ emotions cannot easily be manipulated in an ethical manner (Anand and Gaur, 2018). The scenario-based approach helps test the causal relationships between the constructs used for experimentation in a given scenario, and it keeps interference from extraneous variables to a minimum. In this method, respondents answered questions based on the hypothetical scenarios developed for each condition. Therefore, scenarios
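As an illustration of the scenario-based setup described above, the sketch below randomly assigns participants to the four cells of a 2 × 2 between-subjects design. The scenario labels and the assign() helper are hypothetical, not taken from the article.

```python
# A minimal sketch, assuming simple randomization across four scenario cells.
import random

SCENARIOS = [
    ("reduced cognitive load", "low controllability"),
    ("reduced cognitive load", "high controllability"),
    ("increased cognitive load", "low controllability"),
    ("increased cognitive load", "high controllability"),
]

def assign(participant_ids, seed=42):
    """Shuffle participant IDs and allocate them round-robin to the four cells."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: SCENARIOS[i % len(SCENARIOS)] for i, pid in enumerate(ids)}

groups = assign(range(403))  # e.g., the 403 participants of Study 1
```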

Discussion

Study 1 shows that cognitive load is detrimental to an individual’s notion of justice or environmental wellbeing, indicating that such factors may be difficult for a user to rate properly when expending greater cognitive effort. However, cognitive load can also limit open-mindedness and the critical evaluation of AI-assisted communication (T. Li et al., 2024), which could leave people resorting to mental shortcuts on fairness and environmental issues. Under such

Theoretical implications

Trust is an important element in the design of organizations and systems, and the current study’s theoretical implications extend the understanding of trust in generative AI systems by integrating constructs of responsible AI and Technology Threat Avoidance Theory. This research underscores the significance of moral factors in creating and using AI systems by exploring relationships between perceived justice, environmental concern, and trust. In this context, the study notes that the degree of

Practical implications

To develop and retain users’ confidence, professionals in the field should observe responsible AI principles, particularly perceived equity and ecological sustainability. Consumers are more likely to enjoy and trust AI recommendations that they perceive as fair. This involves developing algorithms that align with users’ interests while promoting green aspects of AI. It also becomes important for management to note that, during AI interface design, cognitive load should be considered so

Limitations and future research

This study has certain limitations. First, the reliance on self-reported measures introduces potential biases, as participants’ prior experiences with generative AI, social desirability, or limited technological competence could affect their judgment. Secondly, focusing on a particular context (i.e., tourism) can be seen as a limitation when it comes to

Conclusion

A thorough examination of advancing artificial intelligence in the tourism industry draws attention to the fact that the issue of encouraging responsible AI use cannot be avoided. Examining user satisfaction with AI-based recommendations suggests that user perceptions are shaped not only by the quality of the recommendations but also by the ethical implications of the system and users’ affective states. A range in the effect of personalized suggestions on some parameters that influenced


Filed under Marketing

Why are people using generative AI like ChatGPT?

The following text is an excerpt from one of my latest articles. I am sharing the managerial implications of my contribution published through Technological Forecasting and Social Change.

This empirical study provides a snapshot of online users’ perceptions about Chat Generative Pre-Trained Transformer (ChatGPT)’s responses to verbal queries, and sheds light on their dispositions to avail themselves of ChatGPT’s natural language processing.

It explores their performance expectations about the usefulness of these information technologies and their effort expectations related to ease of use, and investigates whether they are affected by colleagues or by other social influences to use such dialogue systems. Moreover, it examines their insights about the content quality and source trustworthiness, as well as the interactivity features, of these text-generative AI models.

Generally, the results suggest that the research participants felt that these algorithms are easy to use. The findings indicate that they consider them to be useful too, specifically when the information they generate is trustworthy and dependable.

The respondents suggest that they are concerned about the quality and accuracy of the content that is featured in the AI chatbots’ answers. This contingent issue can have a negative effect on the use of the information that is created by online dialogue systems.

OpenAI’s ChatGPT is a case in point. Its app is freely available in many countries, via desktop and mobile technologies including iOS and Android. The company admits that its GPT-3.5 outputs may be inaccurate, untruthful, and misleading at times. It clarifies that its algorithm is not connected to the internet, and that it can occasionally produce incorrect answers (OpenAI, 2023a). It posits that GPT-3.5 has limited knowledge of the world and events after 2021 and may also occasionally produce harmful instructions or biased content.

OpenAI recommends checking whether its chatbot’s responses are accurate, and letting the company know when it answers incorrectly by using the “Thumbs Down” button. Its Help Center even declares that ChatGPT can occasionally make up facts or “hallucinate” outputs (OpenAI, 2023a, 2023b).

OpenAI reports that its ChatGPT Plus subscribers can access safer and more useful responses. Such users can avail themselves of a number of beta plugins and resources that offer a wide range of capabilities, including text-to-speech applications as well as web browsing through Bing.

Yet again, OpenAI (2023b) indicates that its GPT-4 still has many known limitations that the company is working to address, such as “social biases and adversarial prompts” (at the time of writing this article). Evidently, work is still in progress at OpenAI.

The company needs to resolve these serious issues, considering that its Content Policy and Terms clearly stipulate that OpenAI’s consumers are the owners of the output that is created by ChatGPT. Hence, ChatGPT’s users have the right to reprint, sell, and merchandise the content that is generated for them through OpenAI’s platforms, regardless of whether the output (its response) was provided via a free or a paid plan.

Various commentators are increasingly raising awareness about the corporate digital responsibilities of those involved in the research, development, and maintenance of such dialogue systems. A number of stakeholders, particularly regulatory ones, are concerned about possible risks and perils arising from AI algorithms, including interactive chatbots.

In many cases, they are warning that disruptive chatbots could disseminate misinformation, foster prejudice, bias and discrimination, raise privacy concerns, and could lead to the loss of jobs. Arguably, one has to bear in mind that, in many cases, governments are outpaced by the proliferation of technological innovations (as their development happens before the enactment of legislation).

As a result, they tend to be reactive in the implementation of substantive regulatory interventions. This research reported that the development of ChatGPT has resulted in mixed reactions among different stakeholders in society, especially during the first months after its official launch.

At the moment, there are just a few jurisdictions that have formalized policies and governance frameworks that are meant to protect and safeguard individuals and entities from possible risks and dangers of AI technologies (Camilleri, 2023). Of course, voluntary principles and guidelines are a step in the right direction. However, policy makers are expected by various stakeholders to step up their commitment by introducing quasi-regulations and legislation.

Currently, a number of technology conglomerates, including Microsoft-backed OpenAI, Apple and IBM, among others, anticipated the governments’ regulations by joining forces in a non-profit organization, the “Partnership on AI”, which aims to advance safe, responsible AI rooted in open innovation.

In addition, IBM has also teamed up with Meta and other companies, startups, universities, research and government organizations, as well as non-profit foundations, to form an “AI Alliance” that is intended to foster innovation across all aspects of AI technology, applications, and governance.

The full list of references is available here: https://www.sciencedirect.com/science/article/pii/S004016252400043X?via%3Dihub

Suggested citation: Camilleri, M. A. (2024). Factors affecting performance expectancy and intentions to use ChatGPT: Using SmartPLS to advance an information technology acceptance framework. Technological Forecasting and Social Change, 201. https://doi.org/10.1016/j.techfore.2024.123247


Filed under academia, chatbots, ChatGPT, Generative AI

Users’ perceptions and expectations of ChatGPT

Featuring an excerpt and a few snippets from one of my latest articles related to Generative Artificial Intelligence (AI).

Suggested citation: Camilleri, M. A. (2024). Factors affecting performance expectancy and intentions to use ChatGPT: Using SmartPLS to advance an information technology acceptance framework. Technological Forecasting and Social Change, 201. https://doi.org/10.1016/j.techfore.2024.123247


The introduction

Artificial intelligence (AI) chatbots utilize algorithms that are trained to process and analyze vast amounts of data, using techniques ranging from rule-based approaches to statistical models and deep learning, in order to generate natural text and respond to online users based on the input they receive (OECD, 2023). For instance, OpenAI‘s Chat Generative Pre-Trained Transformer (ChatGPT) is one of the most popular AI-powered chatbots. The company claims that ChatGPT “is designed to assist with a wide range of tasks, from answering questions to generating text in various styles and formats” (OpenAI, 2023a). OpenAI clarifies that its GPT-3.5 is a free-to-use language model that was optimized for dialogue using Reinforcement Learning with Human Feedback (RLHF), a method that relies on human demonstrations and preference comparisons to guide the model toward desired behaviors. Its models are trained on vast amounts of data, including conversations created by humans (such content is accessed through the Internet). The responses it provides appear as human-like as possible (Jiang et al., 2023).

GPT-3.5’s database was last updated in September 2021. The GPT-4 version, however, comes with a paid plan; it is more creative than GPT-3.5, can accept images as inputs, and can generate captions, classifications, and analyses (Qureshi et al., 2023). Its developers assert that GPT-4 can create better content, including extended conversations, as well as document search and analysis (Takefuji, 2023). Recently, its proponents noted that ChatGPT can be utilized for academic purposes, including research: it can extract and paraphrase information, translate text, grade tests, and/or be used for conversation purposes (MIT, 2023). Various stakeholders in education noted that this LLM tool may be able to provide quick and easy answers to questions.

However, earlier this year, several higher educational institutions issued statements that warned students against using ChatGPT for academic purposes. In a similar vein, a number of schools banned ChatGPT from their networks and devices (Rudolph et al., 2023). Evidently, policy makers were concerned that this text-generating AI system could disseminate misinformation and even promote plagiarism. Some commentators argue that it can affect students’ critical-thinking and problem-solving abilities, skill sets that are essential for their academic and lifelong success (Liebrenz et al., 2023; Thorp, 2023). Nevertheless, a number of jurisdictions are reversing decisions that impeded students from using this technology (Reuters, 2023). In many cases, educational leaders are realizing that their students could benefit from this innovation, if they are properly taught how to adopt it as a tool for their learning journey.

Academic colleagues are increasingly raising awareness of different uses of AI dialogue systems like service chatbots and/or virtual assistants (Baabdullah et al., 2022; Balakrishnan et al., 2022; Brachten et al., 2021; Hari et al., 2022; Li et al., 2021; Lou et al., 2022; Malodia et al., 2021; Sharma et al., 2022). Some of them are evaluating their strengths and weaknesses, including those of OpenAI’s ChatGPT (Farrokhnia et al., 2023; Kasneci et al., 2023). Very often, they argue that there may be instances where the chatbots’ responses are not completely accurate and/or may not fully address the questions that are asked of them (Gill et al., 2024). This may be due to different reasons. For example, GPT-3.5’s responses are based on data that were uploaded before a knowledge cut-off date (i.e., September 2021). This can have a negative effect on the quality of its replies, as the algorithm is not up to date with the latest developments. Although there is currently a knowledge gap and a few grey areas on the use of AI chatbots that employ natural language processing to create humanlike conversational dialogue, only a few contributions have critically evaluated their pros and cons, and even fewer studies have investigated the factors affecting individuals’ engagement with ChatGPT.

This empirical research builds on theoretical underpinnings related to information technology adoption in order to examine the online users’ perceptions and intentions to use AI chatbots. Specifically, it integrates a perceived interactivity construct (Baabdullah et al., 2022; McMillan and Hwang, 2002) with information quality and source trustworthiness measures (Leong et al., 2021; Sussman and Siegal, 2003) from the Information Adoption Model (IAM), and with performance expectancy, effort expectancy and social influences constructs (Venkatesh et al., 2003; Venkatesh et al., 2012) from the Unified Theory of Acceptance and Use of Technology (UTAUT1/UTAUT2), to determine which factors are influencing the individuals’ intentions to use AI text generation systems like ChatGPT. This study’s focused research questions are:

RQ1

How and to what extent are information quality and source trustworthiness influencing the online users’ performance expectancy from ChatGPT?

RQ2

How and to what extent are their perceptions about ChatGPT’s interactivity, performance expectancy, and effort expectancy, as well as their social influences, affecting their intentions to continue using such large language models?

RQ3

How and to what degree does the performance expectancy construct mediate the relationship between effort expectancy and intentions to use these interactive AI technologies?

This study hypothesizes that information quality and source trustworthiness are significant antecedents of performance expectancy. It presumes that this latter construct, together with effort expectancy, social influences as well as perceived interactivity affect the online users’ acceptance and usage of generative pre-trained AI chatbots like GPT-3.5 or GPT-4.

Many academic researchers sought to explore the individuals’ behavioral intentions to use a wide array of technologies (Alalwan, 2020; Alam et al., 2020; Al-Saedi et al., 2020; Raza et al., 2021; Tam et al., 2020). Very often, they utilized measures from the Theory of Reasoned Action (TRA) (Fishbein and Ajzen, 1975), the Theory of Planned Behavior (TPB) (Ajzen, 1991), the Technology Acceptance Model (TAM) (Davis, 1989; Davis et al., 1989), TAM2 (Venkatesh and Davis, 2000), TAM3 (Venkatesh and Bala, 2008), UTAUT (Venkatesh et al., 2003) or UTAUT2 (Venkatesh et al., 2012). Few scholars have integrated constructs like UTAUT/UTAUT2’s performance expectancy, effort expectancy, social influences and intentions to use technologies with information quality and source trust measures from the Elaboration Likelihood Model (ELM) and IAM. Currently, there is still limited research that incorporates a perceived interactivity factor within information technology frameworks. Therefore, this contribution addresses this deficit in academic knowledge.

Notwithstanding, for the time being, there is still scant research focused on AI-powered LLMs, like ChatGPT, that are capable of generating human-like text based on previous contexts and drawn from past conversations. This timely study raises awareness of the individuals’ perceptions about the utilitarian value of such interactive technologies in an academic (higher educational) context. It clearly identifies the factors that are influencing the individuals’ intentions to continue using them in the future.


From the literature review

Table 1 features a summary of the most popular theoretical frameworks that sought to identify the antecedents and the extent to which they may affect the individuals’ intentions to use information technologies.

Table 1. A non-exhaustive list of theoretical frameworks focused on (information) technology adoption behaviors

Figure 1 features the conceptual framework that investigates information technology adoption factors. It represents a visual illustration of the hypotheses of this study. In sum, this empirical research presumes that information quality and source trustworthiness (from the Information Adoption Model) precede performance expectancy. The latter construct, together with effort expectancy, social influences (from the Unified Theory of Acceptance and Use of Technology), and the perceived interactivity construct, are significant antecedents of the individuals’ intentions to use ChatGPT.
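The article estimated this framework with SmartPLS. As a rough open-source stand-in, the sketch below expresses the same hypothesized paths as a covariance-based SEM in Python’s semopy (not PLS-SEM, so estimates would differ); every item name is a hypothetical placeholder for the survey items.

```python
# Hedged sketch of the hypothesized measurement and structural model in semopy.
# Item names (iq1..bi3) and the data file are hypothetical placeholders.
import pandas as pd
import semopy

MODEL_DESC = """
# Measurement model
IQ =~ iq1 + iq2 + iq3   # information quality
ST =~ st1 + st2 + st3   # source trustworthiness
PE =~ pe1 + pe2 + pe3   # performance expectancy
EE =~ ee1 + ee2 + ee3   # effort expectancy
SI =~ si1 + si2 + si3   # social influences
PI =~ pi1 + pi2 + pi3   # perceived interactivity
BI =~ bi1 + bi2 + bi3   # behavioral intentions

# Structural model: IAM antecedents (and ease of use) precede performance
# expectancy, which joins EE, SI and PI in predicting intentions.
PE ~ IQ + ST + EE
BI ~ PE + EE + SI + PI
"""

data = pd.read_csv("chatgpt_survey.csv")  # hypothetical survey responses
model = semopy.Model(MODEL_DESC)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values
```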


The survey instrument

The respondents were instructed to answer all survey questions that were presented to them about information quality, source trustworthiness, performance expectancy, effort expectancy, social influences, perceived interactivity and on their behavioral intentions to continue using this technology (otherwise, they could not submit the questionnaire). Table 2 features the list of measures as well as their corresponding items that were utilized in this study. It also provides a definition of the constructs used in the proposed information technology acceptance framework.

Table 2. The list of measures and the corresponding items used in this research.
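Before estimating such a model, one would typically check the internal consistency of each construct’s items. Below is a brief sketch, with hypothetical column names, using pingouin’s Cronbach’s alpha:

```python
# Reliability check sketch: Cronbach's alpha for one construct's items.
# Column names are hypothetical placeholders for the survey items.
import pandas as pd
import pingouin as pg

survey = pd.read_csv("chatgpt_survey.csv")   # hypothetical responses
pe_items = survey[["pe1", "pe2", "pe3"]]     # performance expectancy items

alpha, ci = pg.cronbach_alpha(data=pe_items)
print(f"Cronbach's alpha = {alpha:.2f}, 95% CI = {ci}")
```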


Theoretical implications

This research sought to explore the factors that are affecting the individuals’ intentions to use ChatGPT. It examined the online users’ effort and performance expectancy and social influences, as well as their perceptions about the information quality, source trustworthiness and interactivity of generative text AI chatbots. The empirical investigation hypothesized that performance expectancy, effort expectancy and social influences from Venkatesh et al.’s (2003) UTAUT, together with a perceived interactivity construct (McMillan and Hwang, 2002), were significant antecedents of their intentions to revisit ChatGPT’s website and/or to use its app. Moreover, it presumed that information quality and source trustworthiness measures from Sussman and Siegal’s (2003) IAM were precursors of performance expectancy.

The results from this study report that source trustworthiness–performance expectancy is the most significant path in this research model. They confirm that online users perceived a connection between the source’s trustworthiness, in terms of its dependability, and the degree to which they believe that using such a generative AI system will help them improve their job performance. Similar effects were also evidenced in previous IAM theoretical frameworks (Kang and Namkung, 2019; Onofrei et al., 2022), as well as in a number of studies related to TAM (Assaker, 2020; Chen and Aklikokou, 2020; Shahzad et al., 2018) and/or to UTAUT/UTAUT2 (Lallmahomed et al., 2017).

In addition, this research also reports that the users’ perceptions about information quality significantly affect their performance expectancy from ChatGPT. Yet this link was weaker than the former, implying that the respondents’ perceptions about the usefulness of this text-generative technology were influenced more strongly by the peripheral cues of communication (Cacioppo and Petty, 1981; Shi et al., 2018; Sussman and Siegal, 2003; Tien et al., 2019).

Very often, academic colleagues noted that individuals would probably rely on the information that is presented to them if they perceive that the sources and/or their content are trustworthy (Bingham et al., 2019; John and De’Villiers, 2020; Winter, 2020). Frequently, they indicated that source trustworthiness would likely affect users’ beliefs about the usefulness of information technologies, as they enable them to enhance their performance. Conversely, some commentators argued that there may be users who could be skeptical and wary about using new technologies, especially if they are unfamiliar with them (Shankar et al., 2021). They noted that such individuals may be concerned about the reliability and trustworthiness of the latest technologies.

The findings suggest that the individuals’ perceptions about the interactivity of ChatGPT are a precursor of their intentions to use it. This link is also highly significant. Therefore, the online users evidently appreciated this information technology’s responsiveness to their prompts (in terms of its computer-human communications). ChatGPT’s interactivity attributes have an impact on the individuals’ readiness to engage with it and to seek answers to their questions. Similar results were reported in other studies that analyzed how the interactivity and anthropomorphic features of dialogue systems, like live support chatbots or virtual assistants, can influence the online users’ willingness to continue utilizing them in the future (Baabdullah et al., 2022; Balakrishnan et al., 2022; Brachten et al., 2021; Liew et al., 2017).

There are a number of academic contributions that sought to explore how, why, where and when individuals are lured by interactive communication technologies (e.g. Hari et al., 2022; Li et al., 2021; Lou et al., 2022). Generally, these researchers posited that users are habituated to information systems that are programmed to engage with them in a dynamic and responsive manner. Very often, they indicated that many individuals are favorably disposed to use dialogue systems that are capable of providing them with instant feedback and personalized content. Several colleagues suggest that positive user experiences, as well as high satisfaction levels and enjoyment, could enhance users’ connection with information technologies and will probably motivate them to continue using them in the future (Ashfaq et al., 2020; Camilleri and Falzon, 2021; Huang and Chueh, 2021; Wolfinbarger and Gilly, 2003).

Another important finding from this research is that the individuals’ social influences (from family, friends or colleagues) are affecting their interactions with ChatGPT. Again, this causal path is also very significant. Similar results were also reported in UTAUT/UTAUT2 studies that focused on the link between social influences and behavioral intentions to use technologies (Gursoy et al., 2019; Patil et al., 2020). In addition, TPB/TRA researchers found that subjective norms also predict behavioral intentions (Driediger and Bhatiasevi, 2019; Sohn and Kwon, 2020). This is in stark contrast with other studies that reported no significant relationship between social influences/subjective norms and behavioral intentions (Ho et al., 2020; Kamble et al., 2019).

Interestingly, the results report a highly significant effect of effort expectancy (i.e. the ease of use of the generative AI technology) on performance expectancy (i.e. its perceived usefulness). Many scholars posit that perceived ease of use is a significant driver of the perceived usefulness of technology (Bressolles et al., 2014; Davis, 1989; Davis et al., 1989; Kamble et al., 2019; Yoo and Donthu, 2001). Furthermore, there are significant causal paths between performance expectancy and intentions to use ChatGPT, and even between effort expectancy and intentions to use ChatGPT, albeit to a lesser extent. Yet, this research indicates that performance expectancy partially mediates the effort expectancy–intentions link. In this case, this link is highly significant.
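The partial mediation reported here can be probed with a bootstrap mediation test. The sketch below, with assumed column names for averaged construct scores, uses pingouin’s mediation_analysis rather than the article’s SmartPLS procedure:

```python
# Illustrative bootstrap mediation sketch (assumed column names): does
# performance expectancy mediate effort expectancy -> intentions to use?
import pandas as pd
import pingouin as pg

scores = pd.read_csv("construct_scores.csv")  # hypothetical averaged scores

result = pg.mediation_analysis(
    data=scores,
    x="effort_expectancy",       # predictor (ease of use)
    m="performance_expectancy",  # proposed mediator (usefulness)
    y="intentions_to_use",       # outcome
    n_boot=5000,
    seed=42,
)
print(result)  # direct, indirect and total effects with bootstrap CIs
```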

In sum, this contribution validates key information technology measures, specifically, performance expectancy, effort expectancy, social influences and behavioral intentions from UTAUT/UTAUT2, as well as information quality and source trustworthiness from ELM/IAM and integrates them with a perceived interactivity factor. It builds on previous theoretical underpinnings. Yet, it differentiates itself from previous studies. To date, there are no other empirical investigations that have combined the same constructs that are presented in this article. Notwithstanding, this research puts forward a robust Information Technology Acceptance Framework. The results confirm the reliability and validity of the measures. They clearly outline the relative strength and significance of the causal paths that are predicting the individuals’ intentions to use ChatGPT.


Managerial implications

This empirical study provides a snapshot of the online users’ perceptions about ChatGPT’s responses to verbal queries, and sheds light on their dispositions to avail themselves of its natural language processing. It explores their performance expectations about the usefulness of these information technologies and their effort expectations related to ease of use, and investigates whether they are affected by colleagues or by other social influences to use such dialogue systems. Moreover, it examines their insights about the content quality and source trustworthiness, as well as the interactivity features, of these text-generative AI models.

Generally, the results suggest that the research participants felt that these algorithms are easy to use. The findings indicate that they consider them to be useful too, specifically when the information they generate is trustworthy and dependable. The respondents suggest that they are concerned about the quality and accuracy of the content that is featured in the AI chatbots’ answers. This contingent issue can have a negative effect on the use of the information that is created by online dialogue systems.

OpenAI’s ChatGPT is a case in point. Its app is freely available in many countries, via desktop and mobile technologies including iOS and Android. The company admits that its GPT-3.5 outputs may be inaccurate, untruthful, and misleading at times. It clarifies that its algorithm is not connected to the internet, and that it can occasionally produce incorrect answers (OpenAI, 2023a). It posits that GPT-3.5 has limited knowledge of the world and events after 2021, and may also occasionally produce harmful instructions or biased content. OpenAI recommends checking whether its chatbot’s responses are accurate, and letting the company know when it answers incorrectly by using the “Thumbs Down” button. Its Help Center even declares that ChatGPT can occasionally make up facts or “hallucinate” outputs (OpenAI, 2023a, 2023b).

OpenAI reports that its ChatGPT Plus subscribers can access safer and more useful responses. Such users can avail themselves of a number of beta plugins and resources that offer a wide range of capabilities, including text-to-speech applications as well as web browsing through Bing. Yet again, OpenAI (2023b) indicates that its GPT-4 still has many known limitations that the company is working to address, such as “social biases and adversarial prompts” (at the time of writing this article). Evidently, work is still in progress at OpenAI. The company needs to resolve these serious issues, considering that its Content Policy and Terms clearly stipulate that OpenAI’s consumers are the owners of the output that is created by ChatGPT. Hence, ChatGPT’s users have the right to reprint, sell, and merchandise the content that is generated for them through OpenAI’s platforms, regardless of whether the output (its response) was provided via a free or a paid plan.

Various commentators are increasingly raising awareness about the corporate digital responsibilities of those involved in the research, development, and maintenance of such dialogue systems. A number of stakeholders, particularly regulatory ones, are concerned about possible risks and perils arising from AI algorithms, including interactive chatbots. In many cases, they are warning that disruptive chatbots could disseminate misinformation, foster prejudice, bias and discrimination, raise privacy concerns, and could lead to the loss of jobs. Arguably, one has to bear in mind that, in many cases, governments are outpaced by the proliferation of technological innovations (as their development happens before the enactment of legislation). As a result, they tend to be reactive in the implementation of substantive regulatory interventions. This research reported that the development of ChatGPT has resulted in mixed reactions among different stakeholders in society, especially during the first months after its official launch. At the moment, there are just a few jurisdictions that have formalized policies and governance frameworks that are meant to protect and safeguard individuals and entities from possible risks and dangers of AI technologies (Camilleri, 2023). Of course, voluntary principles and guidelines are a step in the right direction. However, policy makers are expected by various stakeholders to step up their commitment by introducing quasi-regulations and legislation.

Currently, a number of technology conglomerates, including Microsoft-backed OpenAI, Apple and IBM, among others, anticipated the governments’ regulations by joining forces in a non-profit organization, the “Partnership on AI”, which aims to advance safe, responsible AI rooted in open innovation. In addition, IBM has also teamed up with Meta and other companies, startups, universities, research and government organizations, as well as non-profit foundations, to form an “AI Alliance” that is intended to foster innovation across all aspects of AI technology, applications, and governance.



Filed under artificial intelligence, chatbots, ChatGPT, digital media, Generative AI, Marketing