
Opening the black box. Learn about explainable AI tools

This is an excerpt from one of my latest articles published through Technological Forecasting and Social Change. Its content has been adapted for this blog.

Suggested citation: Camilleri, M.A. (2026). Opening the black box: Operational principles, tools and frameworks that advance explainable artificial intelligence (XAI) models. Technological Forecasting and Social Change. https://doi.org/10.1016/j.techfore.2026.124710

Explainable artificial intelligence (XAI) has emerged as a critical area of AI research. This may be due to the growing number of stakeholders who are pressing practitioners to be as accountable as possible during the development and maintenance of their AI models. XAI concepts span from foundational notions in artificial intelligence and machine learning to specialized constructs such as interpretability, transparency and human-centered design. Additionally, XAI research is informed by insights from human-computer interaction and decision science, which ultimately emphasize user engagement and trust. Appendix A presents concise definitions of the core terminology underpinning XAI. It offers a conceptual grounding for scholars, practitioners and regulatory stakeholders seeking to deepen their knowledge and understanding of this evolving field. The findings from this review exercise indicated that stakeholders are genuinely concerned about the complexity and opacity of modern AI systems, as they are aware that AI technologies are being integrated into critical decision-making environments, ranging from healthcare and medical systems to finance, law and public administration.

This systematic review confirms that stakeholders expect practitioners to develop explainable AI systems that are not only accurate, but also interpretable, transparent and trustworthy. The findings suggest that XAI seeks to bridge the gap between technical performance and human understanding by providing meaningful explanations for outputs generated by machine learning models, especially those that function as “black boxes”. Several commentators indicated that XAI aims to foster user trust, support accountability and ensure ethical and regulatory compliance.

The findings from this study confirm that the growing use of ML in sensitive areas like healthcare, finance, education and employment has sparked stakeholders’ concerns over the opacity of black-box models and the possible liabilities of practitioners who research, develop and maintain AI-driven solutions. Generally, XAI practices can be divided into two broad categories: (i) inherently interpretable models, such as decision trees and linear regressions, and (ii) post-hoc interpretability methods for more complex black-box models like deep neural networks. The latter generate both local and global explanations through feature attribution, perturbation analysis and visualizations.

For the time being, several complex ML models operate as black boxes, as they hinder the ability of their users and regulators to understand, contest or improve their outputs. XAI addresses these contentious issues by providing tools, methodologies and frameworks that are intended to enhance the interpretability of AI systems through substantive compliance mechanisms, ethical standards and normative guidelines.

Indeed, this research indicates that inherently interpretable models, counterfactual reasoning, ongoing fairness audits, human-in-the-loop (HITL) approaches as well as post-hoc explanations may contribute to improving the transparency and trustworthiness of ML algorithms (Mosqueira-Rey et al., 2023; Panigutti et al., 2021). Counterfactual explanations enable practitioners to explore “what-if” scenarios and offer actionable insights that can improve a model’s comprehensibility for decision subjects, while regular fairness audits can analyze model outcomes across demographic groups and identify potential biases (Holzinger, 2021). In addition, human-in-the-loop (HITL) approaches and post-hoc explanations (as well as retrospective interpretability techniques) can enhance contextual accuracy and accountability.
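
The counterfactual (“what-if”) idea can be sketched in a few lines of Python. The linear scoring function, its weights and the approval threshold below are invented for illustration only and do not represent any real credit-scoring system:

```python
# Illustrative counterfactual ("what-if") search for a hypothetical
# linear credit-scoring model. Weights and threshold are invented.

def score(applicant):
    # Hypothetical model: higher income and tenure raise the score, debt lowers it.
    return 0.5 * applicant["income"] - 0.8 * applicant["debt"] + 0.2 * applicant["years_employed"]

THRESHOLD = 10.0  # loans approved when score >= THRESHOLD

def counterfactual(applicant, feature, step=1.0, max_steps=100):
    """Smallest single-feature change (in `step` increments) that flips a rejection."""
    candidate = dict(applicant)
    for _ in range(max_steps):
        if score(candidate) >= THRESHOLD:
            return candidate
        candidate[feature] += step
    return None

applicant = {"income": 20.0, "debt": 5.0, "years_employed": 3.0}
print(score(applicant))  # below THRESHOLD, so rejected
cf = counterfactual(applicant, "income")
if cf is not None:
    print(f"Approve if income rises to {cf['income']}")
```

The actionable message for the decision subject is exactly the kind of output counterfactual methods aim for: “your application would have been approved if your income were X”.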

Practitioners may avail themselves of a range of tools and libraries to implement XAI techniques, including open-source options like SHAP, LIME, ELI5 and Alibi, among others, that offer model-agnostic interpretability. Moreover, they may use IBM’s AIX360 and Microsoft’s InterpretML to support the explainability of their datasets and machine learning models throughout the AI application lifecycle. Both resources include a diverse set of algorithms, code, guides, tutorials and demos that can help users better understand and explain AI models. Furthermore, they may utilize Google’s What-If Tool (WIT), an interactive visual interface designed to help data scientists, machine learning practitioners and AI ethicists explore, analyze and explain ML models. Such tools enable non-experts, researchers and practitioners to assess model fairness, evaluate performance, deploy responsible systems and explore alternative predictions.
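
To make the feature-attribution idea behind a tool such as SHAP concrete, the hedged sketch below computes exact Shapley values for a toy model by enumerating all feature orderings. In practice one would simply call the shap library; the model, its interaction term and the baseline input here are assumptions of this sketch:

```python
# From-scratch Shapley values (the principle behind SHAP) for a toy model.
from itertools import permutations

FEATURES = ["income", "debt", "age"]
BASELINE = {"income": 0.0, "debt": 0.0, "age": 0.0}  # hypothetical reference input

def model(x):
    # Toy non-additive model with an income-debt interaction.
    return x["income"] + 2.0 * x["age"] + 0.5 * x["income"] * x["debt"]

def shapley_values(x):
    """Average each feature's marginal contribution over all feature orderings."""
    phi = {f: 0.0 for f in FEATURES}
    orderings = list(permutations(FEATURES))
    for order in orderings:
        current = dict(BASELINE)
        for f in order:
            before = model(current)
            current[f] = x[f]          # switch feature f from baseline to actual value
            phi[f] += model(current) - before
    return {f: v / len(orderings) for f, v in phi.items()}

x = {"income": 4.0, "debt": 2.0, "age": 3.0}
phi = shapley_values(x)
# Efficiency property: attributions sum to model(x) - model(BASELINE).
print(phi, sum(phi.values()), model(x) - model(BASELINE))
```

Note how the interaction term is split equally between income and debt; this additive, game-theoretic accounting is what gives SHAP its theoretical grounding.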

Currently, a number of XAI frameworks and evaluation standards can institutionalize transparency. For example, these include initiatives like the United States Government’s Defense Advanced Research Projects Agency (DARPA) XAI Program, which aims to develop AI systems whose decision-making processes can be understood and trusted by humans. DARPA funded a variety of interdisciplinary teams, including academia, industry and national labs, that explored human-AI interaction (interfaces and feedback loops to improve user trust and usability), interpretable ML models, post-hoc visual and symbolic explanation methods for black-box models like deep neural networks, and the integration of cognitive psychology to design explanations that align with how humans reason and make sense of information. Similarly, Microsoft’s Prediction-Decision-Recommendation (PDR) framework offers an operational and predictive model for building trustworthy AI recommendation systems that are aligned with human values. PDR was introduced as part of Microsoft’s efforts in responsible AI, particularly in enterprise and applied settings.

Both DARPA’s XAI Program and Microsoft’s PDR (Prediction-Decision-Recommendation) framework can incorporate quantitative and qualitative assessments in their XAI evaluation. For example, in DARPA-funded XAI projects, the quantitative assessment examines the models’ (i) fidelity of logic, (ii) completeness, and (iii) simplicity. They use performance metrics to evaluate task accuracy, latency or robustness under explanation constraints. The qualitative assessment emphasizes human-centered evaluation, as it investigates perceptions of task effectiveness as well as user trust, expectations, and satisfaction levels with AI models.

Furthermore, standards such as IEEE P7003 (Standard for Algorithmic Bias Considerations), part of the Institute of Electrical and Electronics Engineers (IEEE) P7000 series of standards for Ethically Aligned Design in autonomous and intelligent systems (AIS), aim to provide technical guidance for identifying, documenting and mitigating algorithmic bias in AI systems throughout their design, development and deployment. Other tools like Fairlearn and Testing with Concept Activation Vectors (TCAV), a post-hoc explainability method, help assess model behavior against abstract social concepts. They are intended to assist developers and data scientists in assessing and improving the fairness of ML technologies, including ensemble methods and deep neural networks.
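
As a rough illustration of the kind of disparity measure that a fairness toolkit such as Fairlearn reports, the sketch below hand-computes a demographic parity difference, i.e. the gap in favorable-decision rates between demographic groups. The predictions and group labels are fabricated for this example:

```python
# Hand-rolled demographic parity difference, in the spirit of the
# disparity metrics that fairness toolkits report. Data are invented.

def selection_rate(preds):
    return sum(preds) / len(preds)

def demographic_parity_difference(preds, groups):
    """Max gap in positive-outcome rates across the groups present in `groups`."""
    buckets = {}
    for p, g in zip(preds, groups):
        buckets.setdefault(g, []).append(p)
    by_group = {g: selection_rate(v) for g, v in buckets.items()}
    return max(by_group.values()) - min(by_group.values()), by_group

preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # 1 = favorable decision
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
gap, by_group = demographic_parity_difference(preds, groups)
print(by_group, gap)  # group "a": 3/5, group "b": 2/5 -> gap 0.2
```

A gap of zero would indicate parity; an ongoing fairness audit would track this number (and related metrics) across releases rather than computing it once.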

XAI challenges and methodological limitations

While deep learning architectures and other black-box AI models often exhibit remarkable predictive performance, they suffer from a lack of interpretability, as it is difficult to understand the internal logic or rationale behind their decision-making processes and predictions. The lack of transparency and trustworthiness of black-box models undermines stakeholders’ efforts to audit or assign accountability for model-driven actions. It may prove hard to hold AI developers and systems administrators answerable for model-driven actions when errors or harm occur, especially in sectors like healthcare diagnostics, financial services like credit scoring, and welfare allocation, among others. In these contexts, ML-driven decisions may have profound and potentially irreversible effects on individuals’ lives. Without insight into how decisions are made, affected parties have limited avenues for recourse or equitable remedy, thereby undermining procedural fairness and eroding public trust in algorithmic systems.

ML systems are typically trained on datasets that may embed historical or structural biases, thereby posing risks of perpetuating inequitable outcomes in automated decision-making. This may result in a situation where a decision-making process or algorithm disproportionately and negatively affects vulnerable or underrepresented groups in society, even if there is no explicit intent to discriminate against them, particularly if their data may be under-sampled or misrepresented in training sets.

AI models’ predictive accuracy and fairness may degrade over time due to the effects of data drift on the performance of machine learning models. Shifts in the underlying data distribution or changes in real-world contexts (e.g. political, economic, social, technological and/or ethical issues) can cause models to produce less reliable or biased outcomes, thereby necessitating continuous monitoring, periodic retraining and fairness audits to ensure sustained performance and regulatory compliance. Such changes may occur gradually or abruptly, and may involve shifts in the relationship between inputs and outcomes (concept drift) or in the distribution of the inputs themselves (covariate shift). Consequently, models trained on historical data may no longer generalize well. Such ML systems may yield suboptimal outcomes that can impact the livelihoods of individuals and specific groups in society. For instance, a financial institution that relies on a credit-scoring model that was trained before major economic fluctuations (e.g. inflation, recession and/or rises in taxes, duties and tariffs) could penalize individuals from economically disrupted regions without accounting for recent changes in income dynamics. Alternatively, low-income or minority borrowers, including single mothers, immigrants or disabled persons (among other vulnerable groups in society), could be denied fair access to bank credit, as AI systems may fail to reflect new socioeconomic changes in the labor market. As a result, AI systems risk perpetuating or exacerbating existing inequalities without adequate mechanisms to ensure that they remain fair and up to date with the latest developments.
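
One common way to operationalize this kind of drift monitoring is the Population Stability Index (PSI), sketched from scratch below. The income-band proportions and the widely used 0.2 alert threshold are illustrative assumptions rather than values from this study:

```python
# Minimal drift check: Population Stability Index (PSI) between a
# feature's training-time and live distributions over shared bins.
import math

def psi(expected, actual):
    """PSI over pre-binned proportions; both lists should each sum to ~1."""
    total = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # guard against log(0) for empty bins
        a = max(a, 1e-6)
        total += (a - e) * math.log(a / e)
    return total

# Hypothetical income-band proportions before and after an economic shock.
train_dist = [0.25, 0.25, 0.25, 0.25]
live_dist  = [0.10, 0.20, 0.30, 0.40]
value = psi(train_dist, live_dist)
print(round(value, 3), "drift!" if value > 0.2 else "stable")
```

A PSI near zero means the live population still looks like the training population; values above the (conventional, rule-of-thumb) 0.2 threshold would trigger the retraining and fairness re-audit discussed above.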

It is imperative that AI practitioners conduct fairness auditing on a regular basis. They need to evaluate algorithmic outputs across various demographic groups to identify and correct any disparate impacts. Such audits must go beyond one-time assessments and become an integral part of the AI lifecycle, in order to ensure that models evolve in ways that uphold ethical standards and regulatory requirements. When combined, explainability, monitoring and fairness auditing can establish trustworthy AI that is clearly aligned with societal expectations of justice, equity and accountability.

Indeed, XAI techniques can help address ethical and performance-related concerns by providing transparency into model behavior, as stakeholders including regulatory bodies, AI developers, auditors and affected individuals have a legitimate right to understand how specific outcomes are generated. Practitioners who maintain AI systems ought to regularly monitor the models to identify early warning signs of degradation. They are expected to recalibrate them before harmful consequences arise.

AI practitioners are encouraged to advance interpretable and efficient models that are responsive to the diverse needs of their users, including data scientists, domain experts and lay end-users. Human-centred evaluations of XAI methods usually focus on the development of comprehensible explanations. Hence, they refer to common metrics including: (i) sparsity (meaning that explanations highlight only the most important factors); (ii) explanation complexity (referring to how simple or complicated an explanation is); (iii) simulatability (the extent to which practitioners can anticipate the model’s decision after seeing the explanation); and (iv) coverage (which indicates how many cases an explanation applies to). In addition, user-centered outcomes such as trust in the system, improved task performance and the time required to understand the explanation are also considered by AI administrators.
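
Two of these metrics can be made concrete with a minimal sketch, under the assumption that an explanation is a feature-attribution vector and that a rule-based explanation is a simple predicate; the tolerance, attributions and cases below are invented:

```python
# Toy implementations of two explanation-quality metrics: sparsity of a
# feature-attribution vector, and coverage of a rule-based explanation.

def sparsity(attributions, tol=0.05):
    """Fraction of features whose attribution is effectively zero."""
    small = sum(1 for a in attributions.values() if abs(a) < tol)
    return small / len(attributions)

def coverage(rule, cases):
    """Fraction of cases to which a rule-based explanation applies."""
    return sum(1 for c in cases if rule(c)) / len(cases)

attributions = {"income": 0.62, "debt": -0.30, "age": 0.01, "zip": 0.00}
rule = lambda c: c["income"] > 30 and c["debt"] < 10   # hypothetical anchor rule
cases = [{"income": 40, "debt": 5}, {"income": 25, "debt": 2},
         {"income": 50, "debt": 12}, {"income": 35, "debt": 8}]

print(sparsity(attributions))  # 2 of 4 features are negligible -> 0.5
print(coverage(rule, cases))   # rule applies to 2 of 4 cases -> 0.5
```

High sparsity and high coverage pull in opposite directions in practice, which is why human-centered evaluations report several of these metrics together rather than optimizing one in isolation.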

More importantly, their systems ought to be legally and ethically justifiable as well as socially defensible. They are required to comply with relevant regulatory frameworks governing the deployment of their models and to meet the transparency and auditability standards set by specific jurisdictions, such as the European Union’s General Data Protection Regulation (GDPR) and its AI Act (2024), among others. Table 1 features a comparison matrix that provides a non-exhaustive list of XAI tools. It outlines their strengths, weaknesses / limitations and identifies potential domains in which these tools can be applied.

Table 1. A comparison matrix of XAI tools that specifies their key metrics, strengths, weaknesses/limitations and domain fit.

| XAI tool | Type | Core metrics | Supporting / human-centered metrics | Strengths | Weaknesses / limitations | Possible domains |
|---|---|---|---|---|---|---|
| Inherently interpretable models (decision trees, linear/logistic regression, rule-based models) | Model class | Sparsity; explanation length/complexity; rule length; simulatability; time-to-understanding | Coverage (model-wide); user trust score | Transparent, easy to explain; supports regulatory compliance; high interpretability without post-hoc tools | Limited predictive power for complex patterns; may oversimplify high-dimensional data | Finance (credit scoring); public administration; healthcare triage; education and HR screening |
| Post-hoc interpretability (general category) | Methodological class | Explanation length/complexity; sparsity; coverage; visualization clarity | User trust score; time-to-understanding | Allows explanation of black-box models; generates local and global explanations; broad domain applicability | Risk of misleading explanations; does not make the model itself interpretable; may be computationally intensive | Deep learning applications; high-stakes decisions needing model transparency |
| Counterfactual explanations | Method | Sparsity; explanation length; time-to-understanding; user trust score | Coverage; task performance improvement | Intuitive "what-if" reasoning; actionable for decision subjects; enhances user agency and contestability | May propose unrealistic or infeasible scenarios; sensitive to feature correlations | Finance (loan decisions); hiring and admissions; healthcare prognosis |
| Fairness audits (ongoing) | Governance mechanism | Coverage; visualization clarity | User trust score; task performance improvement | Detects structural biases; essential for compliance (e.g., EU AI Act, GDPR); supports trust and equity | Requires access to sensitive demographic data; needs continuous monitoring; may uncover issues that require costly remediation | Public sector decision systems; finance (credit scoring); policing algorithms; welfare allocation |
| Human-in-the-loop (HITL) | Operational approach | User trust score; task performance improvement; time-to-understanding | Visualization clarity; explanation length | Enhances accountability; reduces automation bias; supports hybrid decision-making | Slows automation; human reviewers require training; may introduce human bias | Healthcare diagnosis; legal assessments; safety-critical systems |
| SHapley Additive exPlanations (SHAP) | Post-hoc; model-agnostic | Sparsity; explanation length/complexity; coverage; visualization clarity | Simulatability; time-to-understanding | Theoretically grounded (game theory); local and global explanations; widely adopted, rich visualization tools | High computational cost for large models; can overwhelm non-experts with detail | Tabular/structured data; finance, insurance, healthcare |
| Local interpretable model-agnostic explanations (LIME) | Post-hoc; model-agnostic | Sparsity; explanation length; coverage | Simulatability; time-to-understanding | Simple, intuitive local explanations; lightweight and fast; works across model types | Instability of explanations; locality sampling may be misleading | Real-time decisions; early-stage diagnostics of ML models |
| ELI5 | Model-agnostic toolkit | Sparsity; explanation length; visualization clarity; simulatability | — | Easy-to-use API; supports debugging and visualization; transparent feature and weight analysis | Less comprehensive than SHAP/LIME; limited deep-learning support | Education, prototyping, model debugging |
| Alibi | Model-agnostic library | Sparsity; explanation length; coverage; rule length (anchors) | Time-to-understanding; user trust score | Covers counterfactuals, anchors, adversarial detection; strong support for fairness evaluation | Requires technical expertise; less widely documented | Enterprise ML pipelines; sensitive domains requiring fairness |
| IBM AIX360 | Comprehensive XAI framework | Explanation length/complexity; rule length; simulatability; coverage | User trust score | Extensive algorithms and documentation; open-source, enterprise-ready; supports dataset and model explainability | Large and complex ecosystem; potentially steep learning curve | Regulated industries (finance, healthcare); enterprises needing governance support |
| Microsoft InterpretML | Comprehensive XAI framework | Simulatability; explanation length; visualization clarity; coverage | Time-to-understanding | Supports interpretable models (EBMs); unified dashboard for explanations; strong community support | Less tailored for deep learning; integration mainly in the Python ecosystem | Healthcare, HR, education; systems needing interpretable boosting models |
| Google What-If Tool (WIT) | Visual interface | Visualization clarity; coverage | Task performance improvement; user trust score | No-code/low-code exploration; intuitive fairness and performance evaluation; highly accessible | Limited support for large-scale or custom DL architectures; requires TensorBoard integration | Ethical AI reviews; education and training; exploratory fairness analysis |
| DARPA XAI program | Research and evaluation framework | User trust score; task performance improvement; time-to-understanding | Explanation satisfaction; mental model accuracy | Integrates cognitive psychology and human reasoning; supports interpretable ML and post-hoc methods; strong evaluation criteria (fidelity, completeness, simplicity) | Research-oriented, less plug-and-play; high complexity, diverse methodologies | Defense, critical infrastructures; human-AI collaboration research |
| Microsoft Prediction-Decision-Recommendation (PDR) framework | AI governance and workflow framework | Task performance improvement; time-to-understanding; user trust score | Visualization clarity | Aligns predictions with human values; designed for enterprise-scale recommender systems; supports qualitative and quantitative metrics | Tailored to recommendation ecosystems; limited uptake outside Microsoft platforms | Recommender systems (retail, media); decision-support platforms |
| IEEE P7003 algorithmic bias standard | Ethical and technical standard | Coverage; documentation completeness | User trust score (organizational) | Provides actionable framework for bias mitigation; widely recognized ethical standard; supports documentation and governance | Not a technical tool, needs developer interpretation; compliance may require significant restructuring | Public sector AI; HR and recruitment systems; safety-critical decision systems |
| Fairlearn | Fairness assessment and mitigation library | Coverage; visualization clarity | User trust score | Provides disparity metrics; offers mitigation algorithms; integrates with common ML pipelines | Requires demographic data; does not explain models, focuses on fairness only | Credit scoring, insurance, hiring; any domain requiring fairness constraints |
| Testing with Concept Activation Vectors (TCAV), implemented in Captum | Concept-based explainability | Simulatability (concept-level); explanation length; sparsity (concept selection) | Time-to-understanding; user trust score | Explains models using human-understandable concepts; helps detect stereotype-driven patterns | Requires well-defined concepts; limited to deep models with embeddings | Computer vision; medical imaging; NLP conceptual bias detection |
| Model monitoring for drift (concept drift, covariate shift) | Governance and operational process | Coverage; visualization clarity | Task performance improvement (operational); time-to-understanding (alerts) | Essential for long-term reliability; supports proactive correction; aligns with regulatory expectations | Requires continuous data pipelines; resource-intensive in large-scale systems | Finance (risk models); healthcare (diagnostics); dynamic environments (e-commerce) |

A conceptual framework for XAI

Explainable artificial intelligence (XAI) has become a very important area of inquiry for the promotion of responsible AI governance. Regulators, organizations and end-users are increasingly demanding that ML systems are transparent, accountable and fair. Beyond technical performance, these technologies are now expected to protect users’ privacy, safety and security, while remaining inclusive and accessible for the benefit of diverse socio-demographic groups in society, regardless of their age, gender, ability or ethnicity. As a result, XAI is no longer a peripheral consideration; rather, it has become a normative requirement as it advances ethical, trustworthy, and socially legitimate AI systems.

Accordingly, the objectives of XAI, whether explainable ML designs are driven by regulatory compliance, operational transparency policies or trust-building purposes, ought to be embedded across the entire AI lifecycle. The explainability of AI plays a critical role during the research and development phase, from data collection and preprocessing to model training, deployment, monitoring and maintenance. Nevertheless, AI systems are better positioned to achieve accountability, reliability and ethical alignment when explainability is treated as an integral component of process innovation rather than a retrospective add-on.

However, there are instances during model development where practitioners may have to balance trade-offs between the predictive performance of AI systems and their interpretability. Hence, evaluation criteria need to extend beyond accuracy and efficiency. They should consider the extent to which models generate explanations that are meaningful, accessible and appropriate for different user groups. Data-related practices are particularly influential at this stage. Transparent data provenance, systematic bias auditing, as well as input features that are presented in a manner that humans can easily understand (i.e. human-readable feature engineering), can substantially enhance model interpretability and user trust. In this respect, inherently interpretable models, such as decision trees and generalized additive models (GAMs), offer direct insights into decision logic, in contrast to complex black-box models that rely on post-hoc explanation techniques.
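
The appeal of such inherently interpretable models can be illustrated with a deliberately tiny, hand-written rule-based classifier whose entire decision logic, and the decision path for any input, can be read directly. The triage rules and thresholds below are invented for this sketch:

```python
# A minimal inherently interpretable model: a rule-based triage
# classifier whose full decision logic fits in a few readable lines.

def triage(patient):
    """Returns (decision, human-readable decision path)."""
    if patient["temperature"] >= 39.0:
        return "urgent", "temperature >= 39.0"
    if patient["age"] >= 65 and patient["temperature"] >= 38.0:
        return "urgent", "age >= 65 and temperature >= 38.0"
    return "routine", "no urgency rule fired"

decision, path = triage({"age": 70, "temperature": 38.4})
print(decision, "because", path)
```

No post-hoc tooling is needed here: the model is its own explanation, which is precisely the trade-off against predictive power on complex, high-dimensional data discussed above.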

XAI systems require ongoing governance and maintenance once they have been deployed. This includes version control, retraining protocols that are guided by explainability objectives, as well as user feedback mechanisms that support continuous learning and improvement outcomes. The extant literature clearly distinguishes between ante-hoc and post-hoc approaches to explainability. Inherently interpretable models such as linear regression, rule-based systems, decision trees, GAMs and Bayesian models are transparent by design. Such models enable users to directly understand their modus operandi, operational logic and decision-making processes. By contrast, black-box models, including deep neural networks, necessitate post-hoc interpretability methods. Techniques such as SHAP and LIME provide feature-attribution and local explanations, while counterfactual reasoning, fairness audits and human-in-the-loop (HITL) approaches are increasingly employed to enhance transparency, accountability and equity in high-stakes contexts.

This review confirms that SHAP offers model-agnostic explanations by quantifying the contribution of individual features to model outputs, whereas LIME explains specific predictions by locally approximating complex models with interpretable surrogates. In addition, other open-source tools (e.g. ELI5, Alibi) and commercial platforms (e.g. IBM AIX360, Microsoft InterpretML, Google’s What-If Tool) have expanded the XAI ecosystem. Methodological approaches such as counterfactual explanations further support understanding by exploring “what-if” scenarios, while ongoing fairness audits evaluate model behaviors across demographic groups, to identify and mitigate bias. Human-in-the-loop (HITL) approaches complement these techniques by embedding human oversight throughout the AI lifecycle, thereby strengthening contextual accuracy and accountability.

Additionally, several institutional initiatives have led to the formalization of XAI assessment and evaluation standards. For instance, the DARPA XAI Program features quantitative metrics (such as fidelity, completeness, simplicity, robustness and performance), as well as qualitative ones (including human-centered evaluations that examine perceived usefulness, trust, satisfaction and task effectiveness). Yet, despite these advances, many existing XAI approaches remain technique-specific, as they exclusively focus on post-hoc explanations, fairness audits or concept-based methods, often resulting in fragmented evaluation practices.

Against this backdrop, this research puts forward an easy-to-understand, user-centric XAI framework for black-box models. This conceptual framework raises awareness on human-centered evaluation metrics and integrates them as a unifying analytical lens across the AI lifecycle (rather than assessing explainability in isolation). It explicitly links data practices, model design choices and explanation interfaces to measurable user outcomes, as illustrated in Fig. 1.


Fig. 1. A user-centric explainable artificial intelligence (XAI) framework for black box models.

Firstly, this user-centric XAI framework emphasizes transparent, inclusive and secure training data as a foundation for explainability and trust. While governance-oriented tools and standards (e.g. Fairlearn, IEEE P7003) primarily support compliance and bias detection, this model suggests that applying inclusiveness, transparency, safety and security metrics during the training phase ensures that models are developed in a manner that is fair, interpretable, robust and trustworthy.

The inclusiveness metrics help detect and mitigate biases in training data and model behavior, thereby promoting fairness. They ensure objective and consistent performance of AI systems across diverse user groups. Hence, they lead to explanations that are meaningful and relevant to all stakeholders. The transparency metrics are meant to evaluate how clearly the model’s internal decision-making processes can be understood by their users. During training, these metrics guide the development of models that produce interpretable and accessible explanations, in order to improve user comprehension and trust.

The safety metrics monitor the model’s behavior under various conditions, including unusual, rare or unexpected situations (a.k.a. edge cases) that challenge the system’s robustness, to prevent harmful or unintended outcomes. The integration of safety considerations in training enhances the systems’ reliability, as it ensures that explanations reflect typical contexts as well as exceptional (or even risky) scenarios. Similarly, the security metrics assess vulnerabilities to adversarial attacks or data manipulation. When security metrics are included in training, models become more robust, and their explanations enhance confidence levels and reduce potential risks, thereby fostering greater user assurance.

Secondly, the framework incorporates an accountable ante-hoc model layer grounded in inherently interpretable models. Clearly, it is consistent with decision trees and rule-based systems, as this layer prioritizes sparsity, simulatability and explanation conciseness. It facilitates quick understanding and mental simulation of decisions. In doing so, it advances transparency and accountability beyond what post-hoc methods alone can achieve. The accountability metrics reinforce predictability and can strengthen the trustworthiness and governance of AI systems by: (i) evaluating whether the model’s decision logic can be audited and traced, thereby ensuring each prediction can be explained and justified to stakeholders; (ii) ensuring compliance with ethical and legal standards; (iii) assessing stakeholder understanding and acceptance; and (iv) facilitating error and bias detection.

There is scope for practitioners to incorporate accountability metrics if they want their inherently interpretable models to become more auditable, responsible and trustworthy. At the same time, they can enhance the value of ante-hoc explainability by adopting privacy metrics that safeguard sensitive information throughout the interpretability process. Though inherently interpretable models are transparent by design, the privacy metrics would ensure that this transparency does not compromise sensitive data by: (i) measuring the risk of sensitive (personal) information exposure; (ii) enforcing data minimization principles to ensure that the model uses only indispensable data, thereby reducing privacy risks; (iii) balancing interpretability and data protection (e.g. through anonymization techniques) to maintain explainability while respecting privacy constraints; and (iv) supporting compliance with data protection regulations (e.g. GDPR or other relevant privacy laws).

Thirdly, this framework integrates fair and robust post-hoc explanations with interpretable user interfaces. While tools such as SHAP, LIME, Alibi, and TCAV are commonly evaluated using metrics such as sparsity, complexity, and visualization clarity, this framework extends their application by explicitly prioritizing trust calibration and task performance improvement, particularly when AI systems are employed for decision support in human-in-the-loop (HITL) settings. This emphasis aligns with human-centered evaluation principles advocated in initiatives such as DARPA XAI and Microsoft’s Prediction–Decision–Recommendation (PDR) framework.

Post-hoc explanation methods are applied after a black-box model has been trained and has generated predictions (e.g., SHAP, LIME, and counterfactual explanations). While fairness metrics (e.g., demographic parity, equalized odds, and disparate impact) quantify whether the model’s decisions are biased or discriminatory across different demographic groups, robustness metrics assess stability under perturbations, including both predictive robustness (i.e., stability of model outputs) and explanation robustness (i.e., consistency of explanations under slight input variations).
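
Explanation robustness can be checked with a minimal sketch: the same (gradient-style) attribution is recomputed after a small input perturbation, and the largest per-feature shift is compared against a stability tolerance. The toy model and the 5% tolerance below are assumptions of this sketch, not a standardized test:

```python
# Toy explanation-robustness check: do feature attributions stay stable
# under a slight perturbation of the input?

def model(x):
    # Toy linear model over a feature vector [x0, x1, x2].
    return 2.0 * x[0] - 1.0 * x[1] + 0.5 * x[2]

def attribution(x, eps=1e-4):
    """Finite-difference sensitivity of the model to each feature."""
    base = model(x)
    attrs = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += eps
        attrs.append((model(bumped) - base) / eps)
    return attrs

x = [1.0, 2.0, 3.0]
perturbed = [1.01, 1.99, 3.02]  # slight input variation

a1 = attribution(x)
a2 = attribution(perturbed)
max_shift = max(abs(u - v) for u, v in zip(a1, a2))
print(a1, max_shift)  # for a linear model the attributions barely move
stable = max_shift < 0.05  # assumed 5% stability tolerance
```

For a linear model the attributions are constant, so the check passes trivially; the same procedure applied to a black-box model with LIME- or SHAP-style attributions is where instability, and hence unreliable explanations, would show up.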

Fairness and robustness metrics build user trust and enhance XAI in post-hoc settings. They reveal biases; they validate explanation reliability, since explanations are not expected to change significantly when robustness criteria are met or exceeded; they guide explanation refinement, because by monitoring fairness and robustness metrics, developers can fine-tune post-hoc methods to produce explanations that are accurate and fairly representative of the model’s decision logic; they improve interface transparency; and they support regulatory compliance and ethical standards that foster greater transparency and accountability of AI systems.
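Explanation robustness, as described above, can be probed by perturbing inputs and measuring how much the attributions move. A hedged sketch for a linear model, where each attribution is simply w_i·x_i; the helper names, tolerances and toy numbers are invented for illustration:

```python
import random

def attributions(weights, x):
    """Feature attributions for a linear model: contribution_i = w_i * x_i."""
    return [w * xi for w, xi in zip(weights, x)]

def explanation_robustness(weights, x, eps=0.01, trials=100, seed=0):
    """Worst-case L1 change in attributions under small random input
    perturbations; smaller values mean more stable explanations."""
    rng = random.Random(seed)
    base = attributions(weights, x)
    worst = 0.0
    for _ in range(trials):
        x_pert = [xi + rng.uniform(-eps, eps) for xi in x]
        pert = attributions(weights, x_pert)
        worst = max(worst, sum(abs(a - b) for a, b in zip(base, pert)))
    return worst

w = [2.0, -1.0, 0.5]
x = [1.0, 3.0, -2.0]
# For a linear model the change is bounded by eps * sum(|w_i|) = 0.035.
print(explanation_robustness(w, x))
```

For black-box explainers the same loop applies, except the attributions come from the explainer (e.g. SHAP values) rather than from a closed-form product.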

Overall, this conceptual framework offers a coherent, user-oriented benchmark for assessing explainability across data, models, and interfaces, thereby extending existing XAI frameworks developed by technology firms and standards bodies. It implies that ante-hoc (inherently interpretable) models can inform and calibrate post-hoc explanation methods and their associated interfaces. Ante-hoc models may serve as interpretable baselines against which the fidelity and consistency of post-hoc explanations from black-box models are assessed. Therefore, the integration of ante-hoc and black-box models can support the development of more trustworthy systems, particularly by enabling interpretable interfaces to be trained or tested against transparent model logic before deployment in more complex settings. Accordingly, this framework positions ante-hoc models as an intermediary layer between training data and post-hoc explanations, enabling explanation methods and interfaces to be validated against interpretable model logic before being applied to complex black-box systems.

Conclusions

This research synthesizes key contributions in XAI to underline its essential role in promoting responsible governance in the research, development and maintenance of machine learning systems. It discusses XAI tools, describes their metrics, identifies their strengths and weaknesses, and reports their possible application domains. It addresses ethical concerns related to black-box models, and hence emphasizes the need for documentation practices that establish the normative and technical baselines for accountability, upon which performance tracking and continuous monitoring are built. Robust drift detection and fairness auditing depend on these baselines and operate iteratively throughout deployment to maintain reliable, transparent and equitable XAI systems.

This contribution’s user-centric XAI framework, with its interpretable interfaces that bridge technical innovation and stakeholder ethics, is intended to foster responsible AI and to ensure that ML models remain interpretable, trustworthy and compliant with ethical and legal standards, such as the GDPR and the EU AI Act, throughout their lifecycle.

Theoretical implications

This research adds value to the extant academic literature focused on XAI. It clarifies key notions and explains the meanings of different terms related to model interpretability, data drift, concept drift and fairness. Moreover, it clarifies how practitioners can build and maintain trustworthy AI systems. It clearly indicates that interpretability is a crucial mechanism for fostering user trust, not just through technical explanations, but also by adhering to clear governance structures and established communication channels. This reasoning aligns with emerging theories of human-computer interaction and with technology adoption frameworks drawn from the social sciences literature, which highlight the importance of transparency and accountability in building user confidence in complex systems.

This research builds on the foundations of established theoretical underpinnings by integrating explainable AI within broader models of technology acceptance, trust and socio-technical dynamics. For example, some elements of this contribution’s conceptual framework are related to the Technology Acceptance Model’s (TAM) key constructs, including perceived usefulness and perceived ease of use, as these factors clearly align with XAI’s goals of enhancing the transparency and interpretability of ML models to foster user adoption. The framework also draws on Trust in Automation theories, particularly where they highlight the rationale for developing explainable AI systems, to enhance user trust and to prevent misuse or disuse. In a similar vein, some commentators argue that the XAI literature is grounded in Socio-Technical Systems (STS) theory. They contend that this theory provides a holistic lens by emphasizing the interplay between technological artifacts and social contexts, thereby reinforcing the need for inclusive, ethical and transparent AI design. Other colleagues maintain that the XAI literature is rooted in Responsible Research and Innovation (RRI) frameworks, as these raise awareness about anticipatory governance, stakeholder engagement and ethical reflexivity, all of which are operationalized through user-centric and transparent approaches. Together, these models serve as a theoretical basis for this study’s conceptual framework, as they bridge technical, human, ethical and regulatory dimensions to support trustworthy AI ecosystems.

This timely contribution promotes transparent and fair forms of AI knowledge generation, as the reasoning behind ML decisions and predictions ought to be continuously scrutinized and validated. It puts forward a comprehensive framework that synthesizes key dimensions of XAI into a cohesive model. It reports how, why, where and when explainability is evolving within generative AI systems, linking design choices to measurable user outcomes across the AI lifecycle. Unlike prior models that are narrowly focused on interpretability techniques, this framework integrates lifecycle governance with human-centered evaluation metrics. It supports the practical implementation of responsible AI principles. By doing so, it advances theoretical understanding while offering actionable guidance for developers, policymakers and stakeholders committed to trustworthy AI.

In sum, it provides a comprehensive explanation of XAI systems for the benefit of their users, including AI developers, data scientists, domain experts, business stakeholders, regulators and auditors, end users, as well as academic researchers, among others. It enables them to better understand the modus operandi of deep neural networks and complex learning models. It promotes post-hoc explanation techniques and methods that provide explanations for the decisions made by machine learning models after they have been trained. This is particularly important for opaque black-box models, ensemble methods or support vector machines, which offer high predictive accuracy but do not make clear how they arrive at specific outputs. It identifies XAI tools that can help practitioners assess the validity and reliability of ML models.

This research emphasizes the dynamic challenges of AI deployment. It makes reference to model drift and to data distribution shifts, as they can have a negative impact on the reliability and fairness of explanations over time. This perspective moves beyond static evaluations of XAI. It highlights the need for continuous monitoring and adaptation of AI models. It considers the needs and challenges faced not only by AI developers but also by system administrators and non-expert users. It recognizes that effective XAI must cater to diverse levels of technical understanding and operational requirements.

This article also offers novel, integrated and up-to-date syntheses of both academic research as well as practitioner-oriented tools and frameworks. It bridges the gap between theoretical advancements and their real-world applications across the entire AI lifecycle. It refers to technical aspects including XAI specific tools and techniques, data monitoring, fairness assurance and stakeholder engagement, thereby providing a timely and holistic view of the current XAI landscape.

Practical implications

This research offers guidance for a wide range of stakeholders involved in the development, deployment and governance of AI systems. It provides actionable insights that help developers and system administrators implement XAI. It describes specific tools (e.g. SHAP, LIME, ELI5) and platforms that offer concrete entry points for integrating interpretability into their workflows. This article highlights a comparison matrix of leading XAI tools. It outlines their key metrics, strengths, limitations and domain suitability to support informed managerial decision-making.
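To illustrate the intuition behind a tool like LIME mentioned above, the following stripped-down sketch fits an ordinary least-squares line to samples drawn around one instance of a single-feature "black box". Real LIME additionally weights samples by proximity to the instance and handles many features, so this is only a conceptual approximation; all names and numbers are invented:

```python
import random

def black_box(x):
    """Stand-in for an opaque model (a known curve here, for demonstration)."""
    return x ** 2 + 1.0

def local_linear_explanation(f, x0, radius=0.5, n=200, seed=0):
    """LIME-style intuition, stripped down: sample inputs around x0 and fit
    an ordinary least-squares line; the slope 'explains' f near x0."""
    rng = random.Random(seed)
    xs = [x0 + rng.uniform(-radius, radius) for _ in range(n)]
    ys = [f(x) for x in xs]
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Near x0 = 3, the slope should be close to the true derivative 2 * x0 = 6.
print(local_linear_explanation(black_box, 3.0))
```

The point of the exercise is that a globally nonlinear model can still be summarized by a simple, interpretable surrogate in the neighborhood of one prediction.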

Additionally, it proposes a user-centric XAI framework tailored for black-box models. This framework offers practical guidance on aligning explainability techniques with organizational capabilities, stakeholder expectations, and contextual constraints. The novel framework provides a tangible structure that embeds responsible AI practices from the initial design phase through ongoing monitoring and updates. It is intended to support practitioners in the development of more robust, reliable and trustworthy AI applications. Its recommendations for the integration of interpretability, regular bias monitoring and fairness auditing (through standardized reporting frameworks, such as model cards and data sheets, combined with automated drift detection tools) can help policy makers as well as practitioners advance XAI systems. Hence, internal policies, quasi-substantive rules and workflows can be developed to advance responsible AI development and deployment. This may ultimately foster a culture of ethical AI innovation that enhances public trust and understanding of XAI systems, leading to increased user adoption across domains.

Limitations and future research directions

Despite its contributions, this study has inherent limitations. The systematic review involved the analysis of recent, high-impact academic publications focused on “explainable artificial intelligence”, “explainable AI” or “XAI”. This selection approach, while ensuring relevance and quality, introduces the risk of citation bias, where frequently cited or well-known studies receive disproportionate attention, potentially overshadowing emerging, less-cited, or interdisciplinary work. Consequently, some innovative advancements or niche applications in XAI may not have been fully captured. Additionally, the rapidly evolving nature of the field means new developments could have emerged after the review period. Furthermore, the evaluation of XAI tools and frameworks relied on publicly available information and academic studies, which often lack empirical depth or comprehensive real-world validation, thereby limiting the scope for fully assessing the practical performance and impact of interpretable models.

Future research can address these limitations and explore promising areas of study related to XAI. For example, there is scope for conducting longitudinal studies to examine the long-term impact of XAI adoption on system performance, user trust and the fairness of AI outputs in real-world scenarios. Moreover, other research is required to develop standardized metrics that can evaluate the “quality” of explanations and their effectiveness for different user groups hailing from diverse contexts. Prospective researchers can build on this article by promoting the integration of XAI techniques with other responsible AI governance frameworks, such as privacy-preserving AI methodologies, robust AI, as well as inclusive, bias-free AI systems. In addition, they may analyze human-computer interaction aspects of XAI, including how different types of explanations are perceived and understood by diverse stakeholders. It is imperative that developers design effective and interpretable user-centric XAI solutions. Further research in these fields of study will contribute to the continued advancement and responsible adoption of explainable AI, as shown in Table 2.

Table 2. Future research directions related to explainable AI (XAI).

Future research area | Rationale | Potential impact
Context-specific XAI | To investigate user backgrounds, domain knowledge of XAI and cultural contexts. | Increases usability and accessibility of XAI systems.
Human-computer interaction (HCI) in XAI | To explore how different stakeholders perceive, interpret and interact with different types of AI explanations. | Improves the design of user-centric and interpretable XAI solutions.
Focus on niche and emerging XAI applications | To examine XAI applications in specialized domains (e.g., healthcare, finance, autonomous systems). | Expands XAI applicability and domain-specific innovations.
Integration of XAI with responsible AI governance frameworks | To better understand how XAI can be associated with privacy-preserving, robust and bias-free AI methodologies, to advance holistic AI governance frameworks. | Promotes trustworthy, fair, and secure XAI deployment.
Empirical validation of XAI tools and frameworks | In-depth and broad empirical studies will shed light on the effectiveness of current XAI tools in real-world applications. | Bridges the gap between theoretical models and practical uses of XAI.
Longitudinal studies on XAI adoption | To analyze the long-term effects of XAI on system performance, user trust and fairness in real-world contexts. | Advances knowledge on sustained benefits and risks of XAI use.
Ethical and social implications of XAI | To demonstrate the societal impacts, ethical challenges and policy considerations arising from XAI adoption. | Guides responsible AI governance deployment that respects societal norms.
Development of standardized evaluation metrics | To create standardized, reliable metrics that can assess XAI quality and its effectiveness across diverse users. | Enables consistent benchmarking and comparison of XAI tools.

Appendix A. Key concepts in explainable artificial intelligence research.

XAI key term | Description
Accountability | Accountability ensures that individuals or organizations can be held responsible for the outcomes and impacts of AI systems, especially in critical applications where errors or biases could have significant consequences. Individuals and organizations ought to be supported by clear, interpretable explanations that enable oversight and compliance with ethical or regulatory standards.
Artificial Intelligence (AI) | AI is a broad field in computer science focused on creating machines that can perform tasks typically associated with human intelligence, such as learning, reasoning, problem-solving, perception and decision-making.
Black box / Black-box model | The black box (model) refers to the opaque decision-making processes of AI models such as deep neural networks, whose inner workings even their developers may not be in a position to understand. While such models can usually achieve high accuracy, they may not be transparent about how they process data and produce a specific output.
Counterfactual explanations | Counterfactual explanations are a type of model-agnostic explanation technique used in interpretable and explainable AI (XAI). They describe how an input instance would need to be altered minimally for a machine learning model to yield a different (usually a desired) outcome.
Decision making | Decision-making in the context of AI refers to the process where an AI system uses computational techniques to analyze data, identify patterns, and determine optimal courses of action or choices from a set of alternatives. Unlike human decision-making, which can rely on intuition, experience or emotion, AI decision-making is data-driven and based on algorithms.
Decision support systems (DSS) | DSS are applications that analyze data and provide valuable insights. They are designed to assist humans in making informed choices. In XAI contexts, explainability is integrated into DSS, transforming them from “black boxes” into transparent tools that users can understand and trust, especially in sensitive domains like healthcare.
Deep Learning (DL) | DL is a subset of machine learning that utilizes multilayered (deep) neural networks to learn patterns and representations directly from raw data, to discover intricate features and perform tasks such as classification, regression and representation learning.
Evaluation metrics | Evaluation metrics relate to how AI and XAI systems are assessed and measured. They enable practitioners to objectively evaluate their effectiveness as well as the quality of explanations generated by AI systems. While AI models are typically evaluated on their predictive performance (e.g., in terms of accuracy), XAI evaluation metrics go beyond this to measure how well explanations help users understand, trust and interact with AI systems. They may include human-centered metrics (e.g., users’ trust and satisfaction levels vis-à-vis XAI) as well as quantitative metrics (such as the model’s accuracy and the explanations’ comprehensiveness).
Explainable Artificial Intelligence / Explainable AI (XAI) | XAI explores methods that provide humans with the ability and intellectual oversight to understand AI outputs. The rationale behind XAI is to increase the interpretability and transparency of AI decisions, actions and predictions. In other words, XAI is intended to answer the “why” and “how” behind AI systems, as they often function as black boxes.
Feature attribution | Feature attribution refers to the process of quantifying the contribution or importance of each input feature in a machine learning model’s prediction. It helps explain how much each feature influences a particular decision made by the model. This is especially valuable in interpretable machine learning and XAI, as understanding why a model makes a certain prediction is as important as the prediction itself.
Human-AI interaction (HAII) / Human-computer interaction (HCI) | HAII and HCI concepts and their variations emphasize the user-centricity aspects of AI. In the context of XAI, both notions suggest that humans are more likely to engage, communicate and collaborate with intuitive and explainable AI interfaces.
Human-in-the-Loop (HITL) | HITL approaches refer to systems or processes in AI and ML where human judgement and intervention are actively integrated into the decision-making loop. This involvement can occur at various stages: data collection, labeling and annotation (often with human input), data preprocessing and curation, model training, model evaluation and validation (with human oversight, especially in high-stakes domains), model deployment, as well as monitoring and maintenance. The underlying goal of HITL is to combine the strengths of human intuition, contextual understanding and ethical reasoning with the efficiency and scale of automated systems.
Interpretability | Interpretability refers to the degree to which a human can understand a model’s internal mechanics, in terms of the cause-effect relationships within its decision-making processes. This construct suggests that users tend to interact with transparent and trustworthy XAI technologies because they facilitate the interpretation of outputs.
Local Interpretable Model-agnostic Explanations (LIME) | LIME is a technique that explains individual predictions by approximating a complex model, in a localized setting, with an interpretable one, such as a linear model. LIME highlights which features influence a specific decision by perturbing input data and observing how predictions change, thereby making black-box models more understandable to users without requiring access to their internal structures.
Machine learning (ML) | ML is a field in AI concerned with the development and study of algorithms that can identify patterns within data, learn from them, and make decisions as well as predictions. Such systems can perform tasks without explicit instructions and can improve their performance over time as they are exposed to more data.
Mental models / Shared mental models | Shared mental models refer to the mutual understanding and common representation of knowledge between humans and AI agents regarding their respective roles, capabilities and the task at hand. Essentially, they refer to the extent to which there is a shared understanding of how the AI system operates and how it aligns with the overall task.
Neural networks (models) | Neural networks are complex machine learning architectures with interconnected “layers” used to learn patterns from data and to perform specific tasks like predictions or classifications. The role of XAI is to provide explanations about how such opaque networks/models work, clarifying how inputs influence outputs and revealing what the AI model has learned.
Perturbation analysis | Perturbation analysis involves systematically altering (perturbing) one or more features of the input data and observing how the model’s output changes.
Post-hoc explanations | Post-hoc explanations are retrospective interpretability techniques that are used to explain the predictions of already trained machine learning models after they have made a decision. They are generated after model training and are not part of the original learning process. They aim to interpret how or why a model made a specific decision, without altering the model itself.
SHapley Additive exPlanations (SHAP) | SHAP is a method based on Shapley values from cooperative game theory. It is used to explain the output of machine learning models. SHAP offers consistent and theoretically grounded insights into how individual features contribute to a model’s decisions, by assigning each feature an “importance value” for a specific prediction. Features with positive SHAP values push the prediction upwards, while those with negative values push it downwards; the magnitude measures how strong the effect is.
Transparency | Transparency refers to the clarity and understandability of an AI system’s internal workings and decision-making processes. Hence, it allows humans to learn how AI systems process data and make decisions.
Trust | Trust refers to the confidence that users place in XAI systems’ decisions. Individuals’ willingness to avail themselves of XAI technologies relies on the clarity, consistency and usefulness of their explanations. XAI aims to foster appropriate levels of trust by helping users better understand how and why AI models make certain decisions or predictions, or generate outcomes.
User behavior / User study | User behavior focuses on how individuals interact with, perceive, and respond to the explanations provided by AI systems. The persons’ cognitive processes, trust, decision-making and reliance on AI systems can influence their engagement levels with XAI technologies.
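As a concrete companion to the "Counterfactual explanations" entry above, here is a toy sketch: given a rejected applicant, search for the smallest increase in one feature that flips a simple scoring model's decision. The model, weights, threshold and step size are entirely invented for illustration:

```python
def approve(income, debt):
    """Toy credit-scoring model: approve when the score crosses a threshold.
    Weights and threshold are invented for illustration only."""
    return 2.0 * income - 1.5 * debt >= 10.0

def counterfactual_income(income, debt, step=0.1, max_steps=1000):
    """Smallest income increase (in `step` increments) that flips a
    rejection into an approval; None if no flip is found."""
    if approve(income, debt):
        return 0.0
    for k in range(1, max_steps + 1):
        if approve(income + k * step, debt):
            return k * step
    return None

# An applicant rejected at income=4.0, debt=2.0: how much more income
# would have been needed?  2*(4 + 2.5) - 1.5*2 = 10.0, so the answer is 2.5.
print(counterfactual_income(4.0, 2.0))  # 2.5
```

Production-grade counterfactual methods search over several features at once and minimize a distance function, but the "minimal change that flips the outcome" idea is the same.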

About the author

Mark Anthony CAMILLERI, Ph.D. (Edinburgh) is an Associate Professor in the Department of Corporate Communication at the University of Malta. He was a Fulbrighter at Northwestern University in Evanston, U.S.A. (in 2022). Prof. Camilleri was featured among the world’s top 2% scientists in Elsevier’s “Updated science-wide author databases of standardized citation indicators” (in the past four years). In 2023, he achieved a global rank (ns) of 3854, and was listed 124th among business & management researchers. He serves as a scientific expert and reviewer for various European research councils. He was recognized for his outstanding reviews by Publons and by Emerald (he received a Literati award in 2022 and 2023). He is an Associate Editor of Business Strategy and the Environment, Sustainable Development, and the International Journal of Hospitality Management, among others.


The Service Industries Journal: Call for papers focused on ethical AI

Special Issue: Ethical implications of artificial intelligence (AI) and automation in service industries: Addressing algorithmic bias, opacity and unclear accountability mechanisms

Overview

Artificial intelligence (AI) and automation technologies are transforming service industries, including finance, healthcare, hospitality, retail, education, public services and digital platforms. While algorithmic decision-making systems, service robots, chatbots, predictive analytics and automated workflows offer enhanced efficiencies, personalization possibilities and scalability potential, these technologies are also raising profound ethical concerns related to their modus operandi and explainability of their outputs (Camilleri, 2024; Hu & Min, 2023).

As AI-driven service systems increasingly mediate interactions between organisations and their stakeholders, ethical failures and bias have the potential to reinforce existing social inequalities and to undermine trustworthiness, service quality, organisational legitimacy and broader societal well-being (Camilleri et al., 2024). Moreover, opaque “black-box” models reduce transparency and could erode user trust in these machine learning technologies (Kordzadeh & Ghasemaghaei, 2022). Unclear accountability structures may obscure responsibility for service failures or might facilitate unintended harmful outcomes (Novelli et al., 2024). These challenges are particularly evident in service contexts where human–AI interactions are frequent, relational and consequential.

Such concerns are clearly illustrated in healthcare services (Procter et al., 2023), where AI-driven diagnostic and triage systems are increasingly used to support clinical decision-making. When these technologies rely on biased or unrepresentative training data, they may systematically underdiagnose or misclassify specific demographic groups. Given the high-stakes and the relational nature of healthcare encounters, limited transparency and explainability can significantly diminish patient trust while raising serious ethical and accountability concerns.

Similar issues arise in financial and insurance services (Oke & Cavus, 2025), where automated credit scoring, loan approval and underwriting systems directly influence individuals’ financial inclusion and long-term economic prospects. Algorithmic opacity makes it difficult for customers to understand, question or contest adverse decisions. Therefore, biased models may perpetuate or amplify socioeconomic inequalities. Such an outcome is particularly problematic in service relationships characterised by long-term dependency and trust.

Ethical challenges are also conspicuous in customer service and frontline interactions (Han et al., 2023), where chatbots and virtual assistants handle large volumes of customer inquiries across retail, telecommunications and travel services (Lv et al., 2022). Although these systems offer efficiency and scalability benefits, there are instances where they fail to recognise emotional distress, cultural differences, or exceptional circumstances. Excessive automation can therefore undermine relational service quality, especially when customers are unable to escalate complex or sensitive issues to human agents (Yang et al., 2022).

In public service contexts, governments are progressively deploying AI systems (Willems et al., 2023) to allocate welfare benefits, assess eligibility and detect fraud. In such settings, automated decisions can have profound implications for citizens’ livelihoods and their inclusion in cohesive societies. Ethical concerns become particularly acute when accountability is diffused between public agencies and technology providers, as well as when affected individuals lack meaningful mechanisms for appeal, explanation or redress.

Likewise, platform-based and gig economy services are increasingly relying on algorithmic management systems to assign tasks, evaluate performance and compute remunerations (Kadolkar et al., 2025). These systems often operate as “black boxes,” leaving workers uncertain about how ratings, penalties or income calculations are determined. The resulting lack of transparency and of clear accountability structures can weaken trust, exacerbate power asymmetries and intensify worker vulnerability within ongoing service relationships.

Furthermore, more human resource management and recruitment specialists are adopting AI-enabled tools for résumé screening and for assessing candidates’ credentials (Soleimani et al., 2025). Possible bias embedded within these systems may disadvantage certain social groups. Their limited transparency can prevent applicants from understanding how hiring decisions are made. Such practices raise important ethical questions concerning fairness, informed consent and procedural justice within professional service contexts.

This special issue seeks to advance novel insights into the above ethical implications of AI and automation in service industries. The guest editors look forward to receiving original, interdisciplinary contributions that critically examine how ethical principles can be embedded into the design, governance, implementation and evaluation of AI-enabled service systems.

Aims and scope

The special issue aims to:

·        Deepen understanding of ethical risks and dilemmas associated with AI and automation in service industries.

·        Explore mechanisms for bias detection, mitigation and governance in service algorithms.

·        Examine transparency, explainability and accountability in AI-enabled service encounters.

·        Advance responsible, human-centered and sustainable approaches to AI-driven service innovation.

Conceptual, theoretical and empirical contributions are all welcome, including qualitative, quantitative, mixed-methods, experimental, design science, as well as critical and/or reflexive approaches.

Indicative themes and topics

Submissions may address, but are not limited to, the following topics:

·        Algorithmic bias and discrimination in service delivery;

·        Ethical design of AI-enabled service systems;

·        Transparency and explainability in automated service decisions;

·        Accountability and responsibility in human–AI service interactions;

·        AI ethics governance, regulation, and standards in service industries;

·        Trust, legitimacy and customer perceptions of AI-driven services;

·        Ethical implications of service robots and conversational agents;

·        Human oversight and hybrid human–AI service models;

·        Data privacy, surveillance and consent in digital service platforms;

·        Fairness and inclusion in AI-based personalisation and targeting;

·        Responsible AI and ESG considerations in service organisations;

·        Cross-cultural and institutional perspectives on AI ethics in services;

·        Ethical failures, service recovery and crisis communication involving AI;

·        Methodological advances for studying ethics in AI-enabled services.

References

Camilleri, M. A., Zhong, L., Rosenbaum, M. S., & Wirtz, J. (2024). Ethical considerations of service organizations in the information age. The Service Industries Journal, 44(9-10), 634-660.

Camilleri, M. A. (2024). Artificial intelligence governance: Ethical considerations and implications for social responsibility. Expert Systems, 41(7), e13406.

Hu, Y., & Min, H. K. (2023). The dark side of artificial intelligence in service: The “watching-eye” effect and privacy concerns. International Journal of Hospitality Management, 110, 103437.

Kadolkar, I., Kepes, S., & Subramony, M. (2025). Algorithmic management in the gig economy: A systematic review and research integration. Journal of Organizational Behavior, 46(7), 1057-1080.

Kordzadeh, N., & Ghasemaghaei, M. (2022). Algorithmic bias: Review, synthesis, and future research directions. European Journal of Information Systems, 31(3), 388-409.

Lv, X., Yang, Y., Qin, D., Cao, X., & Xu, H. (2022). Artificial intelligence service recovery: The role of empathic response in hospitality customers’ continuous usage intention. Computers in Human Behavior, 126, 106993.

Novelli, C., Taddeo, M., & Floridi, L. (2024). Accountability in artificial intelligence: What it is and how it works. AI & Society, 39(4), 1871-1882.

Procter, R., Tolmie, P., & Rouncefield, M. (2023). Holding AI to account: Challenges for the delivery of trustworthy AI in healthcare. ACM Transactions on Computer-Human Interaction, 30(2), 1-34.

Soleimani, M., Intezari, A., Arrowsmith, J., Pauleen, D. J., & Taskin, N. (2025). Reducing AI bias in recruitment and selection: An integrative grounded approach. The International Journal of Human Resource Management, 1-36.

Willems, J., Schmid, M. J., Vanderelst, D., Vogel, D., & Ebinger, F. (2023). AI-driven public services and the privacy paradox: Do citizens really care about their privacy? Public Management Review, 25(11), 2116-2134.

Yang, Y., Liu, Y., Lv, X., Ai, J., & Li, Y. (2022). Anthropomorphism and customers’ willingness to use artificial intelligence service agents. Journal of Hospitality Marketing & Management, 31(1), 1-23.

Submission Instructions

Manuscripts should be prepared according to The Service Industries Journal’s author guidelines and submitted via the journal’s online submission system. During submission, authors should select the special issue title:

“Ethical implications of artificial intelligence (AI) and automation in service industries: Addressing algorithmic bias, opacity and unclear accountability mechanisms”.

All submissions will undergo a double-blind peer review process in accordance with the journal’s standards and policies of Taylor & Francis.

Important dates

  • Full paper submission deadline: 31st January 2027
  • First round of reviews: 31st March 2027
  • Revised manuscript submission: 31st May 2027
  • Final acceptance: 31st August 2027
  • Expected publication: 30th November 2027

Contact Information: For informal enquiries regarding the fit of manuscripts or the scope of the special issue, please contact the Leading Guest Editor via Mark.A.Camilleri@um.edu.mt.

Filed under Analytics, artificial intelligence, Big Data, Call for papers, chatbots, ChatGPT, customer service, digital media, digital transformation, ethics, Generative AI, Industry 4.0, innovation, Marketing, technology

The use of Generative AI for travel and tourism planning

📣📣📣 Published via Technological Forecasting and Social Change.

👉 Very pleased to share this timely article that examines the antecedents of the users’ trust in Generative AI’s recommendations, related to travel and tourism planning.

🙏 I would like to thank my colleagues (and co-authors), namely, Hari Babu Singu, Debarun Chakraborty, Ciro Troise and Stefano Bresciani, for involving me in this meaningful research collaboration. It’s been a real pleasure working with you on this topic!

https://doi.org/10.1016/j.techfore.2025.124407

Highlights

  • The study focused on the enablers and the inhibitors of generative AI usage
  • It adopted 2 experimental studies with a 2 × 2 between-subjects factorial design
  • The impact of the cognitive load produced mixed results
  • Personalized recommendations explained each responsible AI system construct
  • Perceived controllability was a significant moderator

Abstract

Generative AI models, which generate new text, image, video, and code content according to users’ needs, are increasingly adopted in tourism marketing. The potential uses of generative AI are promising; nonetheless, it also raises ethical concerns that affect various stakeholders. Therefore, this research, which comprises two experimental studies, aims to investigate the enablers and the inhibitors of generative AI usage. Studies 1 (n = 403 participants) and 2 (n = 379 participants) applied a 2 × 2 between-subjects factorial design in which cognitive load, personalized recommendations, and perceived controllability were independently manipulated. The initial study examined the effect of reducing or increasing the cognitive load associated with the manual search for tourism information. The second study considered the effect of receiving personalized recommendations through generative AI features on tourism websites. Perceived controllability was treated as a moderator in each study. The impact of the cognitive load produced mixed results (i.e., predicting perceived fairness and environmental well-being), with no responsible AI system constructs explaining trust within Study 1. In Study 2, personalized recommendations explained each responsible AI system construct, though only perceived fairness and environmental well-being significantly explained trust in generative AI. Perceived controllability was a significant moderator in all relationships within Study 2. Hence, to design and execute generative AI systems in the tourism domain, professionals should incorporate ethical concerns and user-empowerment strategies to build trust, thereby supporting the responsible and ethical use of AI that aligns with the interests of users and society. From a practical standpoint, the research provides recommendations on increasing user trust through the incorporation of controllability and transparency features in AI-powered platforms within tourism. From a theoretical perspective, it enriches the Technology Threat Avoidance Theory by incorporating ethical design considerations as fundamental factors influencing threat appraisal and trust.

Introduction

Information and communication technologies have been playing a key role in enhancing the tourism experience (Asif and Fazel, 2024; Salamzadeh et al., 2022). The tourism industry has evolved into a content-centric industry (Chuang, 2023). That is, the growth of the tourism sector is attributed to the creation, distribution, and strategic use of information. The shift from the traditional demand-driven model to the content-centric model represents a transformation in user behaviour (Yamagishi et al., 2023; Hosseini et al., 2024). Modern travellers are increasingly dependent on user-generated content to inform their choices and travel planning (Yamagishi et al., 2023; Rahaman et al., 2024). The content-focused marketing approach in tourism emphasizes the role of digital tools and storytelling in creating a holistic experience (Xiao et al., 2022; Jiang and Phoong, 2023). From planning a trip to sharing cherished memories, content adds value for travellers and tourism businesses (Su et al., 2023). For example, MakeMyTrip (MMT) integrated a generative AI trip-planning assistant that facilitates conversational bookings, assisting users with destination exploration, in-trip needs, personalized travel recommendations, summaries of hotel reviews based on user content, and voice navigation support, thereby making the MMT platform more inclusive for its users. The content marketing landscape is changing due to the introduction of generative AI models that help generate text, images, videos, and code for users (Wach et al., 2023; Salamzadeh et al., 2025). These models express language, creativity, and aesthetics much as humans do, and they enhance user experience in various industries, including travel and tourism (Binh Nguyen et al., 2023; Chan and Choi, 2025; Tussyadiah, 2014).

Gen AI enhances the natural flow of interactions by offering personalized experiences that align with consumer profiles and preferences (Blanco-Moreno et al., 2024). Gen AI is gaining significant momentum for its transformative impact within the tourism sector, revolutionizing marketing, operations, design, and destination management (Duong et al., 2024; Rayat et al., 2025). Accordingly, empirical studies suggest that Generative AI has the potential to transform tourists’ decision-making process at every stage of their journey, representing a significant disruption to conventional tourism models (Florido-Benítez, 2024). Nonetheless, concerns have been raised about the potential implications of generative AI models, as their generated content might contain inaccurate or deceptive information that could adversely impact consumer decision-making (Kim et al., 2025a, Kim et al., 2025b). In its report titled “Navigating the future: How Generative Artificial Intelligence (AI) is Transforming the Travel Industry”, Amadeus highlighted key concerns and challenges in implementing Gen AI, such as data security concerns (35 %), lack of expertise and training in Gen AI (34 %), data quality and inadequate infrastructure (33 %), ROI concerns and lack of clear use cases (30 %), and difficulty in connecting with partners or vendors (29 %). Therefore, the present study argues that, with intuitive design, travel agents could tackle the lack of expertise and the unclear use cases of Gen AI. The study suggests that for travel and tourism companies to build trust in Gen AI, they must tackle the root causes of user apprehension. This means addressing what makes users fear the unknown, ensuring they understand the system’s purpose, and fixing problems with biased or poor data.
Also, previous studies highlighted how the integration of Gen AI and tourism raises certain issues, such as misinformation and hallucinations, data privacy and security, human disconnection, and inherent algorithmic biases (Christensen et al., 2025; Luu et al., 2025). Moreover, if Gen AI provides biased recommendations, the implications are adverse. If users perceive that the recommendations are biased, they avoid using them, leading to high churn and platform abandonment (Singh et al., 2023). Users’ satisfaction will decline, replaced by frustration and anger, as biased output undermines the promise of personalized services. This negatively impacts brand reputation and erodes competitive advantage (Wu and Yang, 2023). Such scenarios will likely lead to stricter regulations, mandatory algorithmic audits, and new consumer protection laws, forcing the industry to prioritize fairness as well as explainability to avoid serious consequences. Interestingly, research studies draw attention to an interesting paradox: consumers are heavily relying on AI-generated travel itineraries, even when they are aware of Gen AI’s occasional inaccuracies (Osadchaya et al., 2024). This reliance might stem from a belief in AI’s perceived objectivity and capacity for personalized recommendations, indicating a significant transformation of trust between human and non-human agents in the travel decision-making process (Kim et al., 2023a, Kim et al., 2023b). Empirical findings indicate that AI implementation in travel planning contributes to the objectivity of the results, effectively mitigates cognitive load, and supports higher levels of personalization aligned with user preferences (Kim et al., 2023a, Kim et al., 2023b).
Despite the growing body of literature explaining the role of trust in Gen AI acceptance and its influence on travellers’ decision making and behavioural intentions, the potential biases in AI-generated content continue to pose challenges to users’ confidence (Kim et al., 2021a, Kim et al., 2021b). Therefore, this research aims to examine the influence of generative AI in tourism on consumers’ trust in AI technologies, particularly the balance between technological progress and ethical responsibility, concerning the future of tourism (Dogru et al., 2025).

Existing research has focused more on the technology of AI as a phenomenon rather than translating those theories into studies on how the ethics involved affect perceptions and trust (Glikson and Woolley, 2020). In addition, there is still the black box phenomenon, that is, the inability of users to understand how an AI system arrives at its outputs. This emphasizes the need for more integrative studies of morally sound AI development, user trust, and design in tourism (Tuo et al., 2024).

Moreover, scant research has examined the factors that inhibit tourists from embracing Generative AI technologies, resulting in a limited understanding of travellers’ reluctance to adopt Generative AI for travel planning (Fakfare et al., 2025). Despite a growing body of literature examining the antecedents and outcomes of Generative AI (GAI) adoption, a large body of research has been based on established frameworks such as the Information Systems Success (ISS) model (Nguyen and Malik, 2022), the Technology Acceptance Model (TAM) (Chatterjee et al., 2021), and the Unified Theory of Acceptance and Use of Technology (UTAUT) (Venkatesh, 2022).

However, the extensive reliance on traditional acceptance models risks ignoring critical socio-technical aspects, which are paramount in the context of GAI (Yu et al., 2022). While most studies explore the overarching effects of user acceptance and use of GenAI using TAM, UTAUT, and the DeLone and McLean IS success model, there has been a lack of consideration of ethical factors as well as responsible AI systems. Addressing these gaps could significantly broaden our theoretical understanding of how individuals evaluate and adopt generative AI technologies from an ethical-behaviour and socio-technical perspective.

Therefore, this research aims to fill this gap by investigating factors that facilitate or inhibit trust in generative AI systems, considering responsible AI and Technology Threat Avoidance Theory, and advancing the following research questions:

RQ1

How does the customer experience of using generative AI in tourism reflect the impact of enablers (such as responsible AI systems) and inhibitors (such as ambiguity and anxiety) on trust in generative AI?

RQ2

Does perceived controllability moderate the enablers and inhibitors of trust in generative AI in tourism?

This research includes responsible AI principles and the Technology Threat Avoidance Theory to explicate the relationship between generative AI and trust in tourism. Seen through the conceptual lens of ethical behaviours, responsible AI principles are crucial for enhancing trust in Gen AI within tourism (Law et al., 2024). When users perceive Gen AI recommendations as fair, transparent, and bias-free, they are more likely to perceive the systems as trustworthy, which in turn mitigates user skepticism and promotes trust (Ali et al., 2023). Also, when Gen AI promotes sustainable and environmentally friendly practices, it demonstrates ethical responsibility and enhances trust in alignment with shared social values (Díaz-Rodríguez et al., 2023). By operationalizing responsible AI principles like transparency, fairness, and sustainability, Gen AI transforms from a black-box tool into a more trustworthy and responsible system for travel decisions (Kirilenko and Stepchenkova, 2025). From the socio-technical perspective, the Technology Threat Avoidance Theory (TTAT) explains how perceived ambiguity and perceived anxiety act as inhibitors of trust. In tourism, the users’ experience holds paramount importance (Torkamaan et al., 2024). When users encounter Gen AI content that is difficult to comprehend, recommendations that are unstable or ambiguous, or exposure of their data to privacy risks, these apprehensions turn into perceived threats of using Gen AI (Bang-Ning et al., 2025). According to TTAT, when users perceive a greater threat, they are more inclined to engage in avoidance behaviours, which also erodes trust in the system. Hence, TTAT explains why users might hesitate or avoid using Gen AI tools, even if they offer functional benefits such as personalized recommendations and reduced cognitive load (Shang et al., 2023).

The study adopted an experimental research design that helps explore the independent phenomenon (the use of Gen AI for content generation) and observe and explain its role in establishing a cause-and-effect relationship between factors of responsible AI systems and TTAT (Leung et al., 2023). The experimental setting helps us understand the empirical differences between human and non-human generated content from the perspective of users’ travel decision-making towards destinations. The study enriches the literature on both the ethical and environmental aspects (perceived fairness and environmental well-being) and the perceived risks (perceived ambiguity and perceived anxiety) in the tourism context. The role of perceived controllability as a moderator is also tested, offering guidance to managers on how to develop responsible AI systems that lower user fear and build trust. The study also helps practitioners understand how the personalized recommendations and reduced cognitive load facilitated by Gen AI content generation affect tourists’ trust in Gen AI.
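The moderated experimental logic described above can be sketched in code. The following is a minimal, hypothetical illustration, not the authors' actual materials or analysis: the variable names, sample size, effect sizes, and simulated data are all assumptions. It randomly assigns participants to a 2 × 2 between-subjects design and probes the moderating role of perceived controllability via an interaction term in an ordinary least squares regression.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical 2 x 2 between-subjects assignment (n is illustrative).
n = 400
load = rng.integers(0, 2, n)    # factor 1: 0 = reduced, 1 = increased cognitive load
perso = rng.integers(0, 2, n)   # factor 2: 0 = generic, 1 = personalized recommendations
control = rng.normal(0, 1, n)   # measured moderator: perceived controllability (standardized)

# Simulate trust scores with an assumed controllability x personalization
# interaction, i.e. controllability moderates the personalization -> trust path.
trust = (0.5 * perso - 0.3 * load + 0.4 * control
         + 0.6 * perso * control + rng.normal(0, 1, n))

# Moderated regression: trust ~ load + perso + control + perso:control.
# A non-zero interaction coefficient indicates moderation.
X = np.column_stack([np.ones(n), load, perso, control, perso * control])
beta, *_ = np.linalg.lstsq(X, trust, rcond=None)
labels = ["intercept", "load", "perso", "control", "perso_x_control"]
print({k: round(float(b), 2) for k, b in zip(labels, beta)})
```

In practice, researchers would replace the simulated columns with the observed experimental data and report significance tests; the sketch only shows how an interaction term operationalizes a moderation hypothesis.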

Section snippets

Responsible AI systems

Responsible AI adequately incorporates ethical aspects of AI system design and implementation and ensures that the systems are transparent, fair, and responsible (Díaz-Rodríguez et al., 2023). Responsible AI includes ethical, transparent, and accountable use of artificial intelligence systems, ensuring they are fair, secure, and aligned with societal values. It is also an approach to design, develop, and deploy AI systems so that they are ethical, safe, and trustworthy. It is a system that

Cognitive load, personalized recommendations, and perceived fairness

Cognitive load is the mental effort required to process and choose information (Islam et al., 2020). Cognitive load can also be high when people interact with complex systems such as AI. Thus, high cognitive load may affect the ability of users to judge whether AI-based decisions can be considered fair, since they may not grasp enough of the workings of the system and its specific decisions (Westphal et al., 2023). On the other hand, perceived fairness refers to the users’ feelings about

Research methods and analysis

The experiments adopted in this study are scenario-based. Participants’ emotions cannot be manipulated easily in an ethical manner (Anand and Gaur, 2018). Also, the scenario-based approach helps test the causal relationships between the constructs used for experimentation in a given scenario. This approach also minimizes interference from extraneous variables. In this method, respondents answered questions based on the hypothetical scenarios developed for each study. Therefore, scenarios

Discussion

Study 1 shows that cognitive load is detrimental to an individual’s notion of justice or environmental wellbeing, indicating that such factors may be difficult for a user to rate properly based on expending greater cognitive effort. However, cognitive load can also limit the extent of open-mindedness and critical evaluation of AI-assisted communication (T. Li et al., 2024), which could leave people resorting to mental shortcuts or simple fairness and environmental fairness issues. Under such

Theoretical implications

Trust is an important element in the design of organizations and systems, and the current study’s theoretical implications extend the understanding of trust in generative AI systems by integrating constructs of responsible AI and Technology Threat Avoidance Theory. This research underscores the significance of moral factors in creating and using AI systems by exploring relationships between perceived justice, environmental concern, and trust. In this context, the study notes that the degree of

Practical implications

To develop and retain users’ confidence, professionals in the field should observe responsible AI principles, particularly perceived equity and ecological sustainability. Consumers are more likely to engage with, and trust, AI recommendations that are perceived as fair. This involves developing algorithms that align with users’ interests while promoting green aspects in AI. It also becomes important for management to note that during AI interface design, cognitive load should be considered so

Limitations and future research

This study has certain limitations. First, the use of self-reported measures could pose certain biases, as the participants’ experiences with generative AI or social desirability could affect their judgment. The reliance on self-reported data introduces potential biases from participants’ prior engagements with generative AI, social desirability bias, or limited technological competence. Secondly, focusing on a particular context (i.e., tourism) can be seen as a limitation when it comes to

Conclusion

A thorough examination of advancing artificial intelligence in the tourism industry draws attention to the fact that the issue of encouraging responsible AI use cannot be avoided. User satisfaction with AI-based recommendations suggests that user perceptions are shaped not only by the quality of the recommendations but also by the ethical implications of the system and users’ affective states. A range in the effect of personalized suggestions on some parameters that influenced

Filed under Marketing

Cocreating Value Through Open Circular Innovation Strategies

This is an excerpt from one of my papers published through Wiley’s Business Strategy and the Environment.

Suggested citation: Camilleri, M.A. (2025). Cocreating Value Through Open Circular Innovation Strategies: A Results-Driven Work Plan and Future Research Avenues, Business Strategy and the Environment. https://doi.org/10.1002/bse.4216

This research raises awareness of practitioners’ crowdsourcing initiatives and collaborative approaches, such as sharing ideas and resources with external partners, expert consultants, marketplace stakeholders (like suppliers and customers), university institutions, research centers, and even competitors, as the latter can help them develop innovation labs and foster industrial symbiosis (Calabrese et al. 2024; Sundar et al. 2023; Triguero et al. 2022). It reports that open innovation networks would enable them to work in tandem with other entities to extend the life of products and their components. It also indicates how and where circular open innovations would facilitate the sharing of unwanted materials and resources that can be reused, repaired, restored, refurbished, or recycled through resource recovery systems and reverse logistics approaches. In addition, it postulates that circular economy practitioners could differentiate their business models by offering product-service systems, sharing economies, and/or leasing models to increase resource efficiencies and to minimize waste.

Arguably, the cocreation of open innovations can contribute to improving the financial performance of practitioners as well as of their partners who support them in fostering closed-loop systems and sharing economy practices. They enable businesses and their stakeholders to minimize externalities like waste and pollution that can ultimately impact the long-term viability of our planet. Figure 1 presents a conceptual framework that clarifies how open innovation cocreation approaches can be utilized to advance circular, closed-loop models while adding value to the businesses’ financial performance.

The collaborative efforts between organizations, individuals, and various stakeholders can lead to sustainable innovations, including the advancement of circular economy models (Jesus and Jugend 2023; Tumuyu et al. 2024). Such practices are not without their own inherent challenges and pitfalls. For example, resource sharing, the recovery of waste and by-products from other organizations, and industrial symbiosis involve close partnership agreements among firms and their collaborators, as they strive to optimize resource use and to minimize waste (Battistella and Pessot 2024; Eisenreich et al. 2021). While the open innovation strategies mentioned in this article can lead to significant efficiency gains and waste reductions, practitioners may encounter several difficulties and hurdles in implementing the required changes (Phonthanukitithaworn et al. 2024). Different entities will have their own organizational culture, strategic goals, and modus operandi, which may result in coordination challenges among stakeholders.

Organizations may become overly reliant on sharing resources or on their symbiotic relationships, leading to vulnerabilities related to stakeholder dependencies (Battistella and Pessot 2024). For instance, if one partner experiences disruptions, such as operational issues or financial difficulties, it can adversely affect the feasibility of the entire network. Notwithstanding, organizations are usually expected to share information and resources when they are involved in corporate innovation hubs and clusters. Their openness can lead to concerns about knowledge leakages and intellectual property theft, which may deter companies from fully engaging in resource-sharing initiatives, as they pursue outbound innovation approaches.

Other challenges may arise from resource recovery, reverse logistics, and product-life extension strategies (Johnstone 2024). The implementation of reverse logistics systems can be costly, especially for small and micro enterprises. The costs associated with the collection, sorting, and processing of returned products and components may outweigh the benefits, particularly if the market for recovered materials is not well established (Panza et al. 2022; Sgambaro et al. 2024). Moreover, the effectiveness of resource recovery methodologies and of product-life extension strategies is highly dependent on the stakeholders’ willingness to return products or to participate in recycling programs. Circular economy practitioners may have to invest in promotional campaigns to educate their stakeholders about sustainable behaviors. There may be instances where existing recovery and recycling technologies are not sufficiently advanced or widely available in certain contexts, thereby posing significant barriers to the effective implementation of open circular innovations. Notwithstanding, even responsible practitioners and sustainability champions may struggle to find reliable partners with appropriate technological solutions that could help them close the loop of their circular economy.

In some scenarios, emerging circular economy enthusiasts may be eager to shift from traditional product sales models to innovative product-service systems. Yet, such budding practitioners can face operational challenges in their transitions to such circular business models. They may have to change certain business processes, reformulate supply chains, and redefine their customer relationships to align with the new modus operandi. These dynamic aspects can be time-consuming, costly, and resource intensive (Eisenreich et al. 2021). For instance, customers who are accustomed to owning tangible assets may resist shifting to a product-service system model. Their reluctance to accept the service providers’ revised terms and conditions can hinder the adoption of circular economy practices. Service providers may struggle to convince their consumers to change the status quo by accessing products as a service rather than owning them (Sgambaro et al. 2024). In addition, practitioners adopting product-as-a-service systems may find it difficult to quantify their performance outcomes related to resource savings and customer satisfaction levels, and to evaluate the success of their product-service models accurately, due to a lack of established metrics.

In a similar vein, the customers of sharing economies and leasing systems ought to trust the quality standards and safety features of the products and services they use (Sergianni et al. 2024). Any negative incidents reported through previous consumers’ testimonials and reviews can undermine the prospective customers’ confidence in the service provider or in the manufacturer who produced the product in the first place. Notwithstanding, several sharing economy models rely on community participation and localized networks, which can pose challenges for scalability. As businesses seek to expand their operations, it may prove hard for them to consistently maintain the same level of trust and quality in their service delivery. Moreover, many commentators argue that the rapid growth of sharing economies often outpaces existing regulatory frameworks. The lack of regulations in this regard, in certain jurisdictions, can create uncertainties and gray areas for businesses as well as for their consumers.

This open access paper can also be accessed via ResearchGate: https://researchgate.net/publication/389267075_Cocreating_Value_Through_Open_Circular_Innovation_Strategies_A_Results-Driven_Work_Plan_and_Future_Research_Avenues #CSR #CircularEconomy #OpenInnovation

Filed under academia, circular economy, innovation, Open Innovation

My contribution as foreign expert reviewer

I have just returned to base after a productive two-day foreign expert meeting.

Once again, it was a positive experience to connect with European academic colleagues, to review and discuss research proposals worth thousands of Euros.

My big congratulations go to the successful scholars who passed the shortlisting phase, based on our evaluation scores.

The best proposals will eventually receive national government funds for transformative projects that will add value to society and the natural environment.

#Academia #AcademiaService #ForeignExpert #ForeignExpertReviewer #Review #AcademicReviewer #ResearchProposal #ResearchProjects

Filed under academia, Business, education technology, Market Research, Marketing, performance appraisals, Stakeholder Engagement, Strategic Management, Strategy, Sustainability, technology, tourism

Scaling up small enterprises: What’s the growth formula?

Pleased to share that I have recently coauthored an open-access article about the growth hacking capabilities of small and medium-sized enterprises (SMEs). It has been published in collaboration with my Italian colleagues from the University of Turin, via the Journal of Business Research.

Our research confirms that SMEs can leverage their growth potential through return-generating investments in disruptive innovations and by harnessing big data analytics. In sum, it suggests that core competencies, resources, and capabilities in these areas can enhance the SMEs’ financial and operational performance.

READ FURTHER: The full paper can be accessed here: https://www.sciencedirect.com/science/article/pii/S0148296325001110

Suggested citation: Giordino, D., Troise, C., Bresciani, S. & Camilleri, M.A. (2025). Growth hacking capability: Antecedents and performance implications in the context of SMEs, Journal of Business Research, 192, https://doi.org/10.1016/j.jbusres.2025.115288 

Filed under Analytics, Big Data, Business, digital, innovation, Marketing, online, Small Business, technology

Leveraging Industry 4.0 technologies for sustainable value chains and responsible operations management

Featuring a few snippets from one of my latest co-authored papers on the use of digital technologies for lean and sustainable value chains. A few sections have been adapted to be presented as a blog post.

Suggested citation: Strazzullo, S., Cricelli, L., Troise, C. & Camilleri, M.A. (2024). Leveraging Industry 4.0 technologies for sustainable value chains: Raising awareness on digital transformation and responsible operations management, Sustainable Development, https://doi.org/10.1002/sd.3211

Abstract

Practitioners, policy makers and scholars are increasingly focusing their attention on the promotion of sustainable practices that reduce businesses’ impacts on the environment. In many cases, they are well aware that manufacturers and their suppliers are resorting to lean management processes and Industry 4.0 (I4.0) technologies such as big data, the internet of things (IoT), and artificial intelligence (AI), among others, to implement sustainable production models in their operational processes. This research utilizes an inductive approach to better understand how I4.0 technologies could result in increased organizational performance in terms of resource efficiencies, quality assurance and environmentally sustainable outcomes, in the context of the automotive industry. The findings shed light on the relationship between I4.0 technologies and the sustainable and lean practices of automakers producing combustion-engine, hybrid and/or electric vehicles (EVs). In conclusion, this contribution puts forward an original conceptual framework that clearly explains how practitioners can avail themselves of disruptive technologies to foster continuous improvements in their value chains.

Keywords: Industry 4.0, digital transformation, lean management, sustainable supply chain, responsible operations management, resource efficiencies.

Introduction

The manufacturing industries are characterized by their increased emphasis on the development of sustainable practices that are facilitated by digital technologies. Companies are under pressure from a wide range of stakeholders, including regulatory institutions and individual customers, among others (Wellbrock et al., 2020). In parallel, in recent years, most businesses have gradually introduced Industry 4.0 (I4.0) technologies in their manufacturing processes, as they shifted to smart factory models (Atif, 2023; Choi et al., 2022; Varriale et al., 2024). However, they cannot disregard their corporate responsibilities on economic, environmental and social aspects (Sunar & Swaminathan, 2022). Many researchers contend that sustainability behaviors ought to be integrated with I4.0 processes (Ghobakhloo, 2020), in order to enhance the effectiveness, efficiencies and economies of their supply chains (SCs) (Núñez-Merino et al., 2020). To be competitive in this context, SCs are implementing lean management models to improve their operations.

The sustainability of SCs is related to the notion of Lean Supply Chain Management (LSCM), which refers to the elimination of non-value-added activities in order to enhance the manufacturing firms’ performance (Centobelli et al., 2022; Núñez-Merino et al., 2020). The proponents of LSCM suggest that the generation of waste can be reduced through responsible management strategies (Deshpande & Swaminathan, 2020). Arguably, the minimization of externalities can ultimately affect all stakeholders of SCs, ranging from the business itself to its suppliers and consumers (Khorasani et al., 2020). Notwithstanding, the stakeholders’ pressures on organizations have led them to change their operational approaches to comply with new environmental regulations and to respond to the growing demands of customers for sustainable products and services (Adomako et al., 2022; Camilleri et al., 2023).

As a result, many commentators are also raising awareness of the Sustainable Supply Chain Management (SSCM) concept (Sonar et al., 2022; Yadav et al., 2020). Very often, they claim that SSCM is an important organizational model that can increase corporate profits and boost market shares. The SSCM proposition is based on the reduction of risks from unwanted environmental impacts, thereby improving the overall efficiency of SCs (Negri et al., 2021). Previous contributions have clearly demonstrated how LSCM and SSCM are closely related to one another (Azevedo et al., 2017). More recent studies have examined the link between the lean management paradigm and I4.0 in greater depth (Oliveira-Dias et al., 2022; Tissir et al., 2022). The integration of these two concepts has led to the formulation of new definitions such as “Lean 4.0” and “Digital Lean Manufacturing”, among others.

Given the increased complexity of operations, many researchers argue that the introduction of lean practices may not be enough to address extant competitive pressures. Although lean management can improve the operational efficiencies of SCs and may add value to their organization, there is still scope for practitioners to continue ameliorating their extant processes. Lean initiatives are reaching a point where they are becoming common practice in different contexts. Many manufacturers are adopting them to reduce their costs. However, the success of lean production practices relies on the management’s strategic decisions and on the operational changes they are willing to undertake. Arguably, both SSCM and LSCM are aimed at fostering more flexible, fast, customized, and transparent operations management in manufacturing and distribution systems. Some studies have already clarified how digital technologies can help practitioners achieve these objectives (Ghobakhloo, 2020; Varriale et al., 2024).

Several academic studies have not considered the fact that SCs are becoming more technologically savvy. As technologies continue to evolve, they are transforming the modus operandi of many businesses. Today’s organizational processes are increasingly utilizing different types of innovative solutions. Undoubtedly, manufacturers ought to keep up with the latest advances in technology and with the changing market conditions. Besides, a number of firms are opting to outsource their manufacturing processes to low-cost developing countries. In this light, this research builds on theoretical underpinnings focused on the link between SSCM and LSCM. However, it differentiates itself from previous contributions, as it clarifies how these two paradigms can be connected to I4.0.

Notwithstanding, for the time being, there is still a lack of agreement among academia, policy makers and expert practitioners about what constitutes lean, sustainable systems in today’s manufacturing landscape. Although a number of stakeholders are already engaging in LSCM and SSCM practices to meet the new challenges and opportunities presented by I4.0 and the digital age, others are still lagging behind, or are treating SSCM, LSCM and digital technologies as silos, as they see no link between these approaches (Narkhede et al., 2024).

For example, at the time of writing, several automotive manufacturers claim that they are integrating lean and sustainable practices. Very often, they indicate that they utilize I4.0’s disruptive technologies. Yet, a number of academic commentators argue that some of these practitioners’ unsustainable manufacturing processes and waste management behaviors are contributing to the degradation of the natural environment, thereby accelerating climate change (Liu & Kong, 2021; Sonar et al., 2022).

Lately, academic colleagues have sought to highlight the synergies between I4.0 technologies, lean management principles and sustainable practices (Centobelli et al., 2022; Cerchione, 2024). The majority of contributions provide a conceptual study of the potential relationship between I4.0, sustainable and lean SCs. However, to date, limited research has integrated lean SC, SSC and I4.0 technologies. This paper represents one of the first attempts to investigate the connection between the SSCM, LSCM and I4.0 paradigms, in depth and breadth, in the context of the automotive industry. For the time being, there is still limited research that raises awareness of sustainable and lean supply chain systems that are benefiting from disruptive technologies (Cerchione, 2024; Guo et al., 2022). Hence, this contribution addresses this knowledge gap. Specifically, it seeks to explore these research questions (RQs):

RQ1: Which I4.0 technologies are supporting manufacturing businesses in their adoption of sustainable and lean management practices, and to what extent?

RQ2: How is the automotive industry’s SC benefiting from the utilization of disruptive technologies, as well as from sustainable and lean management practices?

The underlying goal of this contribution is to raise awareness of how manufacturing businesses, including automotive corporations, utilize I4.0 technologies and implement lean management as well as sustainable practices to improve their SCs’ performance. An inductive approach is utilized to address the above RQs. Rich qualitative data were captured through semi-structured interviews with expert practitioners who hold relevant experience in planning, organizing, leading and controlling responsible operations management initiatives in the automotive industry, and who are already deploying a wide array of I4.0 technologies in their manufacturing processes.

The researchers adopt a hermeneutic approach to outline the thematic analysis (TA) of their interpretative findings. They identify the main intersections between SSCM, LSCM and I4.0 paradigms. Moreover, they provide a conceptual framework that clearly explicates how practitioners can avail themselves of I4.0 technologies to advance sustainable and lean management practices in different phases of the supply chain, including in the sourcing of materials, inventory control, manufacturing processes, logistics/distribution of products, as well as in their after sales services.

Literature review

Companies can create value when they have the competences, capabilities and resources to create products (Khan et al., 2016). They ought to be flexible and responsive to their customers’ needs, particularly in a competitive environment like the automotive industry. Indeed, customers tend to evaluate companies based on the products they sell and on their unique selling propositions (Kumar Singh & Modgil, 2020). The lean management principles can therefore help manufacturers to implement the philosophy of continuous improvement in their operational performance (Marodin et al., 2016), in order to add value for their customers, and to increase the likelihood of repeat business (Liker, 2004; Papadopoulou & Özbayrak, 2005).

Such ongoing improvements are not only relevant during production (e.g. within the automotive workshops) but may also be implemented throughout the entire SC, including in customer-facing environments (Cagliano et al., 2006). There are a number of lean management approaches that can be taken on board by different manufacturers, including automakers. Table 1 provides a list of lean practices (that could also be adopted within the automotive industry):

Table 1. A non-exhaustive list of lean management terms

Andon: Andon is a quality control signaling system that provides notifications on issues relating to the maintenance of certain operational processes. An alert can be activated automatically through automated systems or manually by employees. As a result, Andon systems can pause production so that operational issues can be rectified. (Saurin et al., 2011)

Heijunka: Heijunka is intended to improve operational flows by reducing unevenness in production processes and by minimizing the chance of overburden. It can be used to process orders according to fluctuations in demand, and to respond to changes by levelling production by volume or by type, thereby utilizing existing capacity in the best possible way. (Nordin et al., 2010)

Jidoka: Jidoka refers to automated systems that are monitored and supervised by humans. It is used to improve product quality and to prevent malfunctions during manufacturing processes. (Liker & Morgan, 2006)

Just in time (JIT): A JIT system is an inventory management strategy that is based on forecasted demand. It aligns purchasing and procurement tasks with production schedules. Companies employ this lean strategy to increase their efficiency by reducing overproduction, unnecessary waiting times, excessive inventory, product defects and unwanted waste. JIT is evidenced when materials and goods are ordered only when they are required. (Mayr et al., 2018; Sanders et al., 2016)

Kaizen: Kaizen is a lean production management approach that promotes continuous improvements in manufacturing processes on a day-by-day basis. This notion is based on the idea that ongoing positive changes will gradually result in significant improvements in the long run. Organizations adopting Kaizen motivate their employees to consistently boost their productivity, reduce waste, lower defects and be accountable in their jobs. (Valamede & Akkari, 2020)

Kanban: Kanban involves a scheduling system that can improve operational efficiencies in lean manufacturing environments. One of its main advantages is to limit the buildup of excess materials and resources at any point during operational processes. Practitioners ought to ensure that they maintain a predefined inventory level for production purposes. (Valamede & Akkari, 2020)

Pull Production (PP): PP is a lean management methodology that is intended to control production processes in order to limit overproduction, reduce surpluses and minimize warehouse costs. PP can be used to determine the optimal quantity that should be produced. Production occurs when and where it is needed, according to demand. (Sanders et al., 2017b)

Total Productive Maintenance (TPM): TPM is a holistic maintenance approach that is used to improve operational efficiency and product quality by eliminating failures and defects. Moreover, it promotes a safe working environment to prevent accidents from happening. It also aims to motivate employees to improve their job satisfaction, productivity and organizational performance. (Mayr et al., 2018; Valamede & Akkari, 2020)

Value Stream Mapping (VSM): VSM (also known as material- and information-flow mapping) is a lean management method that involves the analysis of extant operations to better plan operational procedures for the future. It is a visual tool that describes (in detail) all critical steps in specific manufacturing processes. (De Raedemaecker et al., 2017; Wagner et al., 2017)
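As a rough illustration of the pull-production and JIT logic described in Table 1, the short Python sketch below computes a reorder point from forecasted demand and a supplier’s lead time. It is a didactic example rather than a tool drawn from the paper; the function names and demand figures are hypothetical.

```python
# Minimal JIT-style reorder-point sketch (illustrative only).
# A part is reordered only when stock would be consumed during the
# supplier's lead time, plus a safety buffer -- so materials arrive
# "just in time" rather than accumulating as excess inventory.

def reorder_point(daily_demand: float, lead_time_days: float,
                  safety_stock: float) -> float:
    """Stock level at which a new order should be triggered."""
    return daily_demand * lead_time_days + safety_stock

def should_reorder(on_hand: float, daily_demand: float,
                   lead_time_days: float, safety_stock: float) -> bool:
    """True when on-hand stock has fallen to the reorder point."""
    return on_hand <= reorder_point(daily_demand, lead_time_days, safety_stock)

# Example: 40 units/day demand, 3-day lead time, 20 units of safety stock.
rop = reorder_point(40, 3, 20)         # 140 units
print(should_reorder(150, 40, 3, 20))  # False -- stock still above trigger
print(should_reorder(120, 40, 3, 20))  # True  -- place the order now
```

In a real pull system the demand rate would itself come from downstream consumption signals (e.g. Kanban cards) rather than a fixed constant.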

Table 2 describes some of the most prevalent sustainability practices that are being employed in the automotive industry, as well as in other manufacturing contexts.

Table 2 Sustainable practices adopted by manufacturing businesses

Sustainable Total Quality Management (STQM): STQM is a management approach that relies on the participation of all members of staff to create long-term value for their organization and for society at large, by considering the triple bottom line objectives in terms of profit, people and planet. (Yadav et al., 2020)

Local sourcing: Local sourcing relates to the procurement of products, resources or materials from producers and suppliers located in close proximity to the manufacturing facility, rather than acquiring them from international sources. This approach encourages companies to purchase their requirements from local suppliers to reduce costs and to minimize their impact on the environment. (Zailani et al., 2015)

Sustainable cooperation with customers: “Sustainable cooperation with customers” involves the businesses’ engagement activities with customers. Organizations can increase their customers’ awareness about socially responsible issues and environmentally sustainable initiatives. (Eltayeb et al., 2011; Purba Rao, 2018)

Sustainable employee engagement: “Sustainable employee engagement” is associated with the organization’s relationship with its employees. Employers are expected to treat their employees with dignity and respect. It is in their interest to foster an organizational climate that rewards their hard work in a commensurate manner. (Robinson et al., 2003)

Supplier certification to the International Standards Organization’s (ISO’s) Environmental Management Standard (ISO 14001): ISO 14001 is one of the most widely used environmental management standards. It encourages manufacturing practitioners to continuously improve their operations to minimize their impact on the environment. It clearly recommends that environmental management issues ought to be embedded within the organizations’ strategic planning processes and that business leaders should pledge their commitment to implement sustainable initiatives that are aimed at protecting the environment and mitigating climate change. (Camilleri, 2022; Potoski & Prakash, 2005)

Waste and emissions reductions: Waste and emissions reductions constitute one of the most important aspects of sustainable production. Manufacturing businesses ought to reduce the generation of externalities, including the accumulation of waste and emissions resulting from their operations. They are expected to strictly comply with the relevant legislation to protect the environment and to prevent any detrimental effects of waste and emissions on ecosystems. (Vijayvargy & Agarwal, 2014)

Table 3 sheds light on some of I4.0 technologies that are being employed within the automotive industry.

Table 3. I4.0 technologies that are utilized in the automotive industry

Three-Dimensional (3D) printing: 3D printing is based on additive technology that can create solid objects from computer-aided design (CAD) software, or via 3D models. (Kamble et al., 2018)

Artificial Intelligence (AI): AI is concerned with computers and machines that are capable of mimicking human reasoning, human learning and even human behaviors. Basically, it involves a set of machine learning and deep learning technologies that can be used to analyze, predict and forecast data, to categorize objects, to process natural language, to make recommendations, and to perform intelligent data retrieval. (Chae & Goh, 2020; Ghobakhloo, 2020)

Augmented Reality (AR): AR enables its users to view virtual content that comprises multiple sensory modalities, which may include visual, vocal, haptic, olfactory, and other somatosensory stimuli, in a real-world environment. (Mayr et al., 2018; Rüßmann et al., 2015)

Big data: Big data refers to data sets that are too large or complex to be dealt with via conventional data processing software. Big data software can rapidly handle large volumes as well as a variety of information. (Swaminathan, 2018; Vaidya et al., 2018)

Blockchain: A blockchain is a distributed ledger technology that allows its users to track and store records (blocks). The blocks hold transactional data that are securely linked together via timestamped cryptographic hashes. Each block is linked to the previous one. (Pun et al., 2021)

Cloud computing: Cloud computing refers to on-demand computer resources that can be utilized to share and store data in an agile and flexible manner, beyond company boundaries, across multiple locations. (Tao & Qi, 2019; Vaidya et al., 2018)

Cyber Physical Systems (CPSs): CPSs relate to physical and software systems that are deeply intertwined and operate across spatial and temporal scales. They are controlled and/or monitored by algorithms and interact with each other in ways that change with context. They exhibit multiple and distinct behavioral modalities. (Adamides & Karacapilidis, 2020; Kamble et al., 2018; Wang et al., 2016)

Internet of Things (IoT): The IoT comprises physical objects (or groups of objects) with sensors that enable them to process and exchange data with other devices and systems via the Internet or other communications networks. (He & Xu, 2014)

Virtual simulation (VS): VS refers to computational system-based modeling that relies on real-time data to mirror the physical world. Virtual models can include machines, products, and humans. A simulation provides a preliminary analysis of the different processes (and phases) that make up operational processes, thereby presenting performance estimates for production management. (Li et al., 2018)
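To give a hedged sense of how the IoT and CPS technologies in Table 3 can feed Andon-style quality signals, the Python sketch below flags a machine whose smoothed sensor readings drift above a tolerance limit. The threshold, window size and readings are invented for illustration; a real system would stream data from networked sensors.

```python
# Illustrative sketch: IoT temperature readings feed an Andon-style
# signal that flags (and could pause) a machine drifting out of spec.
from statistics import mean

TEMP_LIMIT_C = 75.0   # hypothetical tolerance for a machining station
WINDOW = 3            # readings averaged to smooth sensor noise

def andon_status(readings: list[float]) -> str:
    """Return 'OK' or 'ALERT' from the rolling mean of recent readings."""
    recent = readings[-WINDOW:]
    return "ALERT" if mean(recent) > TEMP_LIMIT_C else "OK"

stream = [70.2, 71.5, 72.0, 74.8, 77.1, 79.3]
print(andon_status(stream[:4]))  # OK
print(andon_status(stream))      # ALERT -- maintenance can intervene early
```

Averaging over a short window is one simple way to avoid spurious alerts from single noisy readings; predictive-maintenance systems typically apply far richer models to the same kind of sensor stream.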

Discussion

This research sought to examine the role of I4.0 technologies in supporting sustainable and lean initiatives in SCs. To this end, an inductive study involving a thematic analysis was conducted to answer the underlying RQs. Interestingly, the findings clearly indicate that the utilization of I4.0 technologies is opening up new opportunities in the automotive industry. They confirm that carmakers are changing their modus operandi in terms of their procurement of resources, their production practices, and how they service their customers. The research shows that a myriad of digital technologies (including big data, simulation and IoT, among others) are facilitating the implementation of lean programs, thereby improving productivity outcomes whilst decreasing operational costs.

Moreover, it reported that certain disruptive technologies can be utilized to create value for environmental sustainability in terms of waste minimization practices through recycling procedures, reductions in CO2 emissions, lower energy consumption levels, et cetera, thereby diminishing the businesses’ impact on the natural environment. This research noted that the automakers’ implementation of sustainable practices is not as conspicuous as their lean management practices in the academic literature, even though most of them are increasingly producing sustainable vehicles, including hybrids and EVs.

In addition, the findings indicate that there is still scope for manufacturing firms to avail themselves of I4.0 systems to consistently improve their operations in SCs. The results reported that big data can be used to pursue continuous improvements and Kaizen approaches to improve efficiencies, lower costs and reduce waste. They revealed that practitioners are collaborating with marketplace stakeholders and utilizing JIT systems to responsibly source materials and resources when they are required. Moreover, they found that organizations are availing themselves of Andon and Jidoka automated systems to monitor and control different manufacturing processes in the supply chain, to ensure the smooth running of operations.

Theoretical implications

This contribution converges Industry 4.0 and responsible supply chain practices with lean management approaches. It raises awareness of how manufacturers, including those operating in the automotive industry, can improve their quality standards through specific tools (e.g. Andon and Jidoka) and techniques (like Kaizen and Kanban, among others), to enhance their efficiencies, reduce costs and eliminate non-value-added activities. It explains that there is scope for sustainable businesses to invest in disruptive technologies and long-term cultural change to achieve continuous improvements in their supply chains. It clarifies that the intersection of LSCM, SSCM and I4.0 can potentially revolutionize operations management, as practitioners can benefit from digital technologies like real-time data, cloud computing, AI, CPSs and blockchain to consistently ameliorate their production systems in a sustainable manner.

Arguably, businesses can avail themselves of big data analytics, simulations and digital twins to anticipate demand fluctuations, optimize inventory levels and reduce lead times. These data-driven innovations enable them to proactively respond to changing market conditions, identify potential disruptions early, and mitigate risks. In addition, they could invest in blockchain digital ledger technologies to trace materials, components and products, to ensure the responsible sourcing of goods, increase the sustainability of their operations and reduce the businesses’ environmental impact.
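The traceability argument above can be illustrated with a toy hash-chained ledger in Python: each supply-chain record commits to the hash of its predecessor, so tampering with an earlier sourcing entry invalidates every later link. This is a minimal sketch of the blockchain principle, not a production distributed ledger; all record fields are hypothetical.

```python
# Toy hash-chained ledger sketch: each supply-chain record commits to
# the hash of the previous record, so tampering becomes detectable.
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 digest of a record."""
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, payload: dict) -> None:
    """Append a record that links back to the previous one."""
    prev = record_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "payload": payload})

def chain_valid(chain: list) -> bool:
    """Recompute every back-link; any mismatch means tampering."""
    return all(chain[i]["prev"] == record_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger: list = []
append_record(ledger, {"part": "battery-cell", "source": "supplier-A"})
append_record(ledger, {"step": "assembly", "plant": "plant-1"})
print(chain_valid(ledger))                     # True
ledger[0]["payload"]["source"] = "supplier-B"  # tamper with sourcing record
print(chain_valid(ledger))                     # False
```

A real blockchain adds distribution, consensus and timestamping on top of this hash-linking idea, which is what makes the shared record trustworthy across independent supply-chain partners.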

Alternatively, they can utilize CPSs to automate tasks, improve quality control and reduce errors in their production processes. These approaches would probably lead to better resource utilization, waste management and circular economy approaches, like the recyclability, reusability and repairability of assets, to extend their lifecycles. Hence, practitioners can align the I4.0 paradigm with the lean principles of pull production and just-in-time systems, as well as with sustainable supply chain management. For the time being, few researchers have delved into these promising areas of study. Even fewer contributions have investigated these issues in the automotive industry context. This contribution addresses these knowledge gaps in academia. It advances a comprehensive theoretical framework that clearly sheds light on the link between I4.0, strategic lean management approaches and sustainability outcomes, including improved resource efficiencies and reduced externalities, among others.

Managerial implications

Regarding the implications for practitioners, this contribution raises awareness of the importance of using technologies to improve the efficiency, economy and effectiveness of SCs in a sustainable manner. The interpretative findings of this research identified a set of I4.0 technologies and practices that can improve the performance of SCs in the automotive industry. Among the various I4.0 technologies, the informants identified IoT, simulation, cloud computing and big data as some of the most effective tools to enhance the organizational performance of manufacturing businesses. Generally, they indicated that their companies were relying on insights from big data to continuously improve their operations. Evidently, they captured data as they tracked different processes of their operations in real time. Subsequently, the gathered data were analyzed to discover any areas for improvement. For example, big data could reveal that modifications may be required if certain processes and procedures are not adding value to the company, or if they are translating to operational inefficiencies and/or to unwanted waste.
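The data-driven improvement loop described above can be sketched as a simple analysis of tracked cycle times: stages whose average time runs well above target are flagged as candidates for Kaizen-style review. The stage names, targets and readings below are purely illustrative, not data from the study.

```python
# Illustrative sketch: flag process stages whose tracked cycle times
# run well above target, as candidates for Kaizen-style review.
from statistics import mean

# Hypothetical real-time tracking data: minutes per unit, per stage.
cycle_times = {
    "stamping": [2.1, 2.0, 2.2],
    "welding":  [3.9, 4.4, 4.1],
    "painting": [2.9, 3.0, 3.1],
}
targets = {"stamping": 2.0, "welding": 3.0, "painting": 3.0}

def improvement_candidates(times: dict, targets: dict,
                           tolerance: float = 0.10) -> list:
    """Stages whose mean cycle time exceeds target by more than tolerance."""
    return [stage for stage, obs in times.items()
            if mean(obs) > targets[stage] * (1 + tolerance)]

print(improvement_candidates(cycle_times, targets))  # ['welding']
```

The tolerance band keeps normal variation from triggering reviews; in practice the flagged stage would then be investigated on the shop floor to find the root cause of the delay.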

Most interviewees indicated that they utilized simulations, cloud systems and IoT to adopt JIT, Kaizen, Jidoka, local sourcing, and waste reduction initiatives. They explained how they benefitted from these technologies to optimize their operations, in terms of their procurement of materials, as well as in other areas including distribution and marketing activities. For instance, the findings clearly reported that IoT can support the local sourcing of resources, by minimizing the vulnerabilities and logistical costs associated with long SCs, and could improve efficiency by providing valuable information about machine health, including predictive maintenance requirements, at logistics centers or warehouses.

This research reported that these tools enabled practitioners to monitor operational performance in all phases of their SC, from the selection of suppliers to the delivery of after-sales services to their valued customers. As mentioned above, the utilization of systems such as big data, analytics and cloud technologies for data storage is adding value to the companies’ SCs. Data-driven technologies facilitate the exchange of information between marketplace stakeholders (e.g. with intermediaries). They can foster lean management approaches by increasing throughput, addressing bottlenecks, streamlining processes and reducing delays, resulting in improved productivity, operational efficiencies, better time management and lower risks for SCs.

Macroenvironmental factors, including political, economic, social, and technological issues could also impact on the businesses’ I4.0 digital transformation and implementation of sustainable operations management. The transition towards a zero-waste model could prove to be a costly, long-term investment for businesses including those operating in the automotive industry. Although financial investments in new technologies could possibly improve operational efficiencies (Camilleri, 2019), there could still be a low demand for them, particularly if I4.0 systems require behavioural changes by their users.

The full list of references is included in the last part of this open-access article: https://doi.org/10.1002/sd.3211

This research is also available via Researchgate: https://www.researchgate.net/publication/384191949_Leveraging_Industry_40_technologies_for_sustainable_value_chains_Raising_awareness_on_digital_transformation_and_responsible_operations_management


Filed under Business, digital transformation, Industry 4.0, lean management, Operations Management, Sustainability, sustainable supply chains

Metaverse education: Opportunities and challenges for immersive learning

The following content was adapted from one of my latest contributions on the Metaverse’s immersive technology.


Suggested citation: Camilleri, M.A. (2023), “Metaverse applications in education: a systematic review and a cost-benefit analysis”, Interactive Technology and Smart Education, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/ITSE-01-2023-0017

Online users are connecting to simulated virtual environments through various digital games like Fortnite, Minecraft, Roblox, and World of Warcraft, among others. Very often, gamers are utilizing virtual reality (VR) and augmented reality (AR) technologies to improve their gaming experiences. In many cases, they are engaging with other individuals in cyberspace and participating in an extensive virtual economy. New users are expected to create electronic personas, called avatars, that represent their identity in these games. They are allowed to move their avatars around virtual spaces and to use them to engage with other users when they are online. Therefore, interactive games are enhancing their users’ immersive experiences, particularly those that work with VR headsets.

Academic researchers as well as technology giants like Facebook (Meta), Google and Microsoft, among others, anticipate that the Metaverse will shortly change the way we experience the Internet. Whilst on the internet online users interact with other individuals through websites, including games and social media networks (SNSs), in the Metaverse they engage with the digital representations of people (through their avatars), places, and things in a simulated universe. Hence, the Metaverse places its users in the middle of the action. In plain words, it can be described as a combination of multiple elements of interactive technologies, including VR and AR, where users can experience a digital universe. Various industry practitioners, including Meta (Facebook), argue that this immersive technology will reconfigure the online users’ sensory inputs, definitions of space, and points of access to information.

AR and VR devices can be used to improve the students’ experiences when they engage with serious games. Many commentators noted that these technologies encourage active learning approaches, as well as social interactions among students and/or between students and their teachers. Serious games can provide “gameful experiences” if they share the immersive features of entertaining games that captivate their players. If they do so, it is very likely that students will enjoy their game play (and game-based learning). Similarly, the Metaverse can be used to increase the students’ motivations and learning outcomes.

For the time being, there is no universal definition that encapsulates the word “Metaverse”. The term was used in a 1992 science fiction novel, Snow Crash. Basically, it is a blend of two words, “meta” and “universe”, which were combined to create the “Metaverse” notion. While “meta” means beyond, the term “Metaverse” is typically used to describe an iteration of the internet that consists of persistent, immersive 3D virtual spaces that are intended to emulate physical interactions in perceived virtual worlds (like a universe).

Although various academic contributions have explored the utilization of online educational technologies, including AR and VR, in different contexts, currently only a few researchers have evaluated the latest literature on this contemporary topic to reveal the benefits and costs of using this disruptive innovation in the context of education. Therefore, this contribution closes this gap in the academic literature. The underlying objective of this research is to shed light on the opportunities and challenges of using this immersive technology with students.

Opportunities

    Immersive multi-sensory experiences in 3D environments

    The Metaverse could provide a smooth interaction between the real world and virtual spaces. Its users can engage in activities that are very similar to what they do in reality. However, it could also provide opportunities for them to experience things that would be impossible in the real world. Sensory technologies enable users to employ their five senses of sight, touch, hearing, taste and smell to immerse themselves in a virtual 3D environment. VR tools are interactive and entertaining, and provide captivating and enjoyable experiences to their users. In past years, a number of educators and students have been using 3D learning applications (e.g. Second Life) to visit virtual spaces that resemble video games. Many students are experienced gamers who are lured by 3D graphics. They learn when they are actively involved. Therefore, learning applications should be as meaningful, engaging, socially interactive and entertaining as possible.

    There is scope for educators and content developers to create digital domains like virtual schools, colleges and campuses, where students and teachers can socialize and engage in two-way communications. Students could visit the premises of their educational institutions on online tours, from virtually anywhere. A number of universities are replicating their physical campuses with virtual ones. The design of virtual campuses may result in improved student services and shared interactive content that could improve learning outcomes, and could even reach wider audiences. Previous research confirms that it is more interesting and appealing for students to learn academic topics through the virtual world.

    Equitable and accessible space for all users

    Like other virtual technologies, the Metaverse could be accessed from remote locations. Educational institutions can use its infrastructure to deliver courses (whether free of charge or against tuition fees). Metaverse education may enable students from different locations to use its open-source software to pursue courses anywhere, anytime. Hence, its democratized architecture could reduce geographic disparities among students and increase their chances of continuing education through higher educational institutions in different parts of the world.

    In the future, students including individuals with different abilities, may use the Metaverse’s multisensory environment to immerse themselves in engaging lectures.

    Interactions with virtual representations of people and physical objects

    Currently, individual users can utilize AR and VR applications to communicate with others and to exert their influence on objects within the virtual world. They can organize virtual meetings with geographically distant users, attend conferences, et cetera. Various commentators argued that the Metaverse can be used in education to learn academic subjects in real-time sessions in a VR setting and to interact with peers and course instructors. Students and their lecturers will probably use avatars that represent their identities in the virtual world. Many researchers noted that avatars facilitate interactive communications and are a good way to personalize the students’ learning experiences.

    Interoperability

    Unlike other VR applications, the Metaverse will enable its users to retain their identities, as well as the ownership of their digital assets, across different virtual worlds and platforms, including those related to the provision of education. This means that Metaverse users can communicate and interact with other individuals in a seamless manner through different devices or servers, across different platforms. They can use the Metaverse to share data and content in different virtual worlds that will be accessed through Web 3.0.

    Challenges

      Infrastructure, resources and capabilities

      The use of Metaverse technology will necessitate substantial investment in hardware to operate university virtual spaces. The Metaverse requires intricate devices, including high-performance infrastructures, to achieve the retina display accuracy and pixel density needed for realistic virtual immersion. These systems rely on fast internet connections with good bandwidth, as well as computers with adequate processing capabilities that are equipped with good graphics cards. For the time being, VR, MR and AR hardware may be considered bulky, heavy and cost-prohibitive in some contexts.

      The degree of freedom in a virtual world

      The Metaverse offers higher degrees of freedom than what is available through the World Wide Web and Web 2.0 technologies. Its administrators are not in a position to anticipate the behaviors of all persons using their technologies. Therefore, Metaverse users may be exposed to positive as well as negative influences, as other individuals can disguise themselves in the vast virtual environments through anonymous avatars.

      Privacy and security of users’ personal data

      The users’ interactions with the Metaverse, as well as their personal or sensitive information, can be tracked by the platform operators hosting this service, as they continuously record, process and store their virtual activities in real time. Like the preceding World Wide Web and Web 2.0 technologies, the Metaverse can raise users’ concerns about the security of their data and of their intellectual property. They may be wary about data breaches, scams, et cetera. Public blockchains and other platforms can already trace users’ sensitive data, so users are not anonymous to them. Individuals may decide to use one or more avatars to explore the Metaverse’s worlds. They may risk exposing their personal information, particularly when they are porting from one Metaverse to another and/or when they share transactional details via NFTs. Some Metaverse systems do not require their users to share personal information when they create their avatar. However, they could capture relevant information from sensors that detect users’ brain activity and monitor their facial features, eye motion and vocal qualities, along with other ambient data pertaining to the users’ homes or offices.

      Platform operators may have legitimate reasons to capture such information, in order to protect users against objectionable content and/or the unlawful conduct of other users. In many cases, users’ personal data may be collected for advertising and/or communication purposes. Currently, different jurisdictions have not regulated their citizens’ behaviors within Metaverse contexts. Work is still in progress in this regard.

      Identity theft and hijacking of user accounts

      There may be malicious persons or groups who may try to use certain technologies to obtain personal information and digital assets from Metaverse users. Recently, deepfake artificial intelligence software has produced short audio content that mimicked and impersonated a human voice.

      Other bots may easily copy human beings’ verbal, vocal and visual data, including their personality traits. They could duplicate avatars’ identities to commit fraudulent activities, including unauthorized transactions and purchases, or other crimes with their disguised identities. Roblox users reported that they experienced avatar scams in the past. In many cases, criminals could try to avail themselves of the digital identities of vulnerable users, including children and senior citizens, among others, to access their funds or cryptocurrencies (as these may be linked to Metaverse profiles). As a result, Metaverse users may become victims of identity theft. Evolving security protocols and digital ledger technologies like the blockchain are expected to increase the transparency and cybersecurity of digital assets. However, users still have to remain vigilant about their digital footprint to continue protecting their personal information.

      As the use of virtual environments is expected to increase in the foreseeable future, particularly with the emergence of the Metaverse, it is imperative that new ways are developed to protect all users, including students. Individuals ought to be informed about the risks to their privacy. Various validation procedures, including authentication methods such as face scans, retina scans and speech recognition, may be integrated into such systems to prevent identity theft and the hijacking of Metaverse accounts.

      Borderless environment raises ethical and regulatory concerns

      For the time being, a number of policy makers as well as academics are raising questions about the content that can be presented in the Metaverse’s virtual worlds, as well as about the conduct and behaviors of Metaverse users. Arguably, it may prove difficult for the regulators of different jurisdictions to enforce their legislation in the Metaverse’s borderless environment. For example, European citizens are well acquainted with the European Union’s (EU) General Data Protection Regulation. Other countries have their own legal frameworks and/or principles that are intended to safeguard the rights of data subjects as well as those of content creators. For example, the United States government has been slower than the EU to introduce its privacy-by-design policies. Recently, the South Korean Government announced a set of laudable, non-binding ethical guidelines for the provision and consumption of Metaverse services. However, there is no set of formal rules that applies to all Metaverse users.

      Users’ addictions and mental health issues

      Although many AR and VR technologies have already been tried and tested in the past few years, the Metaverse is still getting started. For the time being, it is difficult to determine the effects of the Metaverse on users’ health and well-being. Many commentators anticipate that excessive exposure to the Metaverse’s immersive technologies may result in negative side-effects for the psychological and physical health of human beings. They suggest that individuals may easily become addicted to a virtual environment where the limits of reality are their own imagination. They are lured to it “for all the things they can do” and will be willing to stay “for all the things they can be” (i.e. excerpts from the Ready Player One movie).

      Past research confirms that spending excessive time on the internet, on social media or playing video games can increase the chances of mental health problems like attention deficit disorders, eating disorders, as well as anxiety, stress or depression, among others. Individuals play video games to achieve their goals and to advance to the next level. Their gameplay releases dopamine. Similarly, their dopamine levels can increase when they are followed on social media, or when they receive likes, comments or other forms of online engagement.

      Individuals can easily develop an addiction to this immersive technology, as they seek stimulating and temporarily pleasurable experiences in its virtual spaces. As a result, they may become dependent on it. Their interpersonal communications via social media networks are not as authentic or satisfying as real-life relationships, as they are not interacting in person with other human beings. In the case of the Metaverse, their engagement experiences may appear to be real. Yet again, in the Metaverse, users are located in a virtual environment; they are not physically present near other individuals. Human beings need to build honest and trustworthy relationships with one another. The users of the Metaverse can create avatars that could easily conceal their identity.

      Read further! The full paper can be accessed and downloaded from:

      The University of Malta: https://www.um.edu.mt/library/oar/handle/123456789/110459

      Researchgate: https://www.researchgate.net/publication/371275481_Metaverse_applications_in_education_A_systematic_review_and_a_cost-benefit_analysis

      Academia.edu: https://www.academia.edu/102800696/Metaverse_applications_in_education_A_systematic_review_and_a_cost_benefit_analysis

      SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4490787


      Filed under digital games, Digital Learning Resources, digital media, Education, education technology, Metaverse

      Metaverse keywords for dummies

      Individuals can use the Metaverse for leisure, entertainment, socializing, as a marketplace to buy items, and for education, among other purposes. Currently, technology giants including Meta, Microsoft, Nvidia, Roblox, Snap and Unity, among others, are building the infrastructure of the Metaverse. At the time of writing, many commentators envisage that the Metaverse’s virtual environments will replicate the real world. For instance, the Metaverse’s virtual reality (VR) environment can be used to deliver lectures to students in remote locations. Course instructors can utilize its immersive 3D capabilities in synchronous and asynchronous learning environments. They can interact with their students’ avatars in real time to provide immediate feedback. In addition, they may avail themselves of the Metaverse’s virtual settings to catapult their students into learning scenarios that are limited only by their own imagination rather than by the constraints of reality, enabling them to learn in a practical, yet safe environment. Table 1 features a clear (and comprehensible) definition of some of the most popular terms related to the ‘Metaverse’.

      Table 1. Key terms related to the adoption of the Metaverse

      Avatar: An avatar represents a human figure with a fictitious, animated character in electronic games, as well as on the internet’s websites, including in social media and in the Metaverse. Avatars may appear similar in their physical features and expressions to their real-world counterparts. However, online users may want to customize their avatars to disguise themselves by creating very imaginative characters.

      Digital twin: A digital twin refers to a virtual representation of a real-world product, system, or process that spans its lifecycle. It can be considered a digital counterpart. A digital twin can be utilized for practical purposes including monitoring, the testing of simulations, maintenance, et cetera. Its underlying objective is to generate useful insights on how to improve real-life objects and their systems. It is intended to mimic the lifecycle of the physical entity it represents (from its inception up to its disposal). However, the digital twin can exist before the physical entity does. The initial stages of a digital twin (in the creation phase) enable the intended entity’s entire lifecycle to be simulated and tested. Hence, the development of digital twins involves continuous improvements in product designs, operational processes and engineering activities, as they acquire new capabilities through trial-and-error phases, simulations and machine learning. The rationale of digital twins is to increase the efficiency of products and systems, to enhance their performance outcomes.

      Extended reality (XR): XR is an umbrella term that incorporates augmented reality (AR), virtual reality (VR) and mixed reality (MR), which mirror the physical world or a digital twin. It refers to the combination of real and virtual environments that can comprise different objects and systems, each with their own roles, features and attributes. A multisensory XR system conveys signals to the human nervous system through visual, auditory, olfactory and haptic cues that are very similar to real-life feelings and experiences (Yu et al., 2023). Such technologies could be designed to support their users’ well-being. They may involve digital therapeutics that can affect individuals’ perceptions, state of mind and behaviors.

      Mixed reality (MR): MR is an inter-reality system comprising a physical reality as well as 3D digital worlds, where real and virtual objects can co-exist and interact in real time. MR integrates AR and VR technologies to provide holographic representations of objects in a virtuality continuum (Yoo et al., 2022). It is being used for different applications, including for educational purposes, to deliver experiential learning. Students can benefit from natural and intuitive 3D representations, as the latest advancements in input systems, sensors, processing power, display technologies, graphics processing and cloud computing are creating elaborate mixed-reality experiences.

      Non-fungible tokens (NFTs): Non-fungible tokens are crypto assets whose data is digitally stored in a blockchain. NFTs are considered a unique modality of digital, non-interchangeable (i.e. non-fungible) assets that are authenticated and certified to a specific owner. NFTs may represent electronic content, including video games’ audiovisual material, collectibles, avatars, et cetera, that can be acquired, sold or traded. The blockchain technology ensures that the digital assets cannot be replicated in any way. However, owners of NFTs can trade and sell their NFTs. The blockchain allows prospective buyers to confirm the provenance of the virtual content and to clearly track and establish the ownership of the tokens. Hence, owners can monetize them with other customers through the Metaverse.

      Virtual reality (VR): While AR uses the existing real-world environment and incorporates virtual information into it, virtual reality completely immerses its users in a simulated environment comprising sensory modalities including auditory and video feedback, as well as haptic sensations. VR relies on pose tracking and on 3D near-eye displays to give users an immersive feel of a virtual world. It enables users to experience sights and sounds that are similar to or totally different from the real world. Individuals can use VR helmets and headsets like Meta Quest, PlayStation VR, HTC Vive, or HP Reverb, among others, which place a small screen in front of the eyes to situate them in a virtual environment. A person using virtual reality equipment may experience a synthetic world by moving around and by interacting with virtual objects that may be present in specially designed 3D rooms or even in outdoor environments. For example, medical students can use VR to practice heart surgeries.

      Web 3.0: Web 3.0 represents the evolution of the web into a decentralized network. Many commentators anticipate that online users will be in a position to access their own data, including documents, applications and multimedia, in a secure, open-source environment facilitated by blockchain’s distributed ledger technology. They envisage that online users will probably rely on the services of Decentralized Autonomous Organizations (DAOs), which will be entrusted to provide a secure digital ledger that tracks their customers’ digital interactions across the internet via a network of openly available smart contracts stored in a decentralized blockchain. Therefore, smart contracts could provide increased security, scalability and privacy (e.g. as online users can protect their intellectual property through non-fungible tokens).

      (Developed by Camilleri & Camilleri, 2023).

      Read the full paper here:

      Suggested citation: Camilleri, M.A. & Camilleri, A.C. (2023). Metaverse education: Opportunities and challenges for immersive learning in virtual environments. 2023 The 4th Asia Conference on Computers and Communications (ACCC 2023), IOP Publishing, Bristol, United Kingdom (Scopus).


      Filed under Marketing

      CALL FOR PAPERS: The circular economy of surplus food (in the hospitality industry)

      A SPECIAL ISSUE entitled ‘Responsible consumption and production of food: Opportunities and challenges for hospitality practitioners’ will be published through the Journal of Sustainable Tourism.

      Special Issue Editor(s)

      Mark Anthony Camilleri, University of Malta, Malta, and Northwestern University, United States of America.

      mark.a.camilleri@um.edu.mt

      Antonino Galati, Universita’ degli studi di Palermo, Italy.

      antonino.galati@unipa.it

      Demetris Vrontis, University of Nicosia, Cyprus.

      vrontis.d@unic.ac.cy

      Previous research explored the circular economy practices of different businesses in various contexts; however, limited contributions have focused on the responsible production and consumption of food (Huang et al., 2022; Van Riel et al., 2021). Even fewer articles sought to explore environmental, social and governance (ESG) dimensions relating to the sustainable supply chain management of food and beverages in the tourism context.

      This special issue will shed light on responsible practices in all stages of food preparation and consumption in the tourism and hospitality industry. It raises awareness of sustainable behaviors that are aimed at reducing the businesses’ externalities, including the effects of food waste generation on the natural environment. It shall put forward relevant knowledge and understanding of good industry practices that curb food loss. It will identify the strengths and weaknesses of extant food supply chains, as well as of waste management systems adopted in the sector. It is hoped that prospective contributors identify laudable and strategic initiatives, in terms of preventative and mitigating measures, relating to procurement and inventory practices, recycling procedures and waste reduction systems involving circular economy approaches.

      Academic researchers are invited to track the progress of tourism businesses on the United Nations’ Sustainable Development Goal SDG12 – Responsible Consumption and Production. They are expected to investigate, in depth and breadth, how tourism businesses are planning, organizing, implementing and measuring the effectiveness of their responsible value chain activities. They may utilize different methodologies to do so. Submissions can feature theoretical and empirical contributions, as well as case studies of organizations that are: (i) reusing and recycling surplus food, (ii) utilizing sharing economy platforms and mobile apps (that are intended to support business practitioners and prospective consumers in reducing food loss and waste), (iii) contributing to charitable institutions and food banks through donations of surplus food, and/or (iv) recycling inedible foods into compost, among other options.

      The contributing authors could clarify how, where, when and why tourism businesses are measuring their ESG performance on issues relating to the supply chain of food and beverages. They may refer to international regulatory instruments and guidelines (Camilleri, 2022), including the International Organization for Standardization (ISO) and Global Reporting Initiative (GRI) standards, among others, to evaluate the practitioners’ ESG performance through: a) Environmental Metrics: the businesses’ circularity; recycling and waste management; and/or water security; b) Social Metrics: corporate social responsibility; product safety; responsible sourcing; and/or sustainable supply chains; and c) Governance Metrics: accounting transparency; environmental sustainability reporting and disclosures.

      They could rely on the GRI Standards 2020, as well as on GRI 204: Procurement Practices 2016; GRI 303: Water and Effluents 2018; GRI 306: Effluents and Waste 2016; GRI 306: Waste 2020; GRI 308: Supplier Environmental Assessment 2016; and GRI 403: Occupational Health and Safety 2018, to assess the businesses’ ESG credentials.

      Prospective submissions ought to clearly communicate the positive multiplier effects of their research (Ahn, 2019). They can identify responsible production and consumption behaviors that may result in operational efficiencies and cost savings in business operations (Camilleri, 2019). At the same time, such behaviors enable businesses to improve their corporate image among stakeholders (hence they can increase their financial performance). Submissions can examine specific supply chain management initiatives involving open innovation, stakeholder engagement and circular economy approaches that may ultimately enhance the businesses’ legitimacy in society. More importantly, authors are urged to elaborate on the potential pitfalls and to discuss possible challenges for an effective implementation of a sustainable value chain for food-related products and their packaging in the tourism and hospitality industry (Galati et al., 2022).

      It is anticipated that the published articles shall put forward practical implications for a wide array of tourism stakeholders, including food manufacturers and distributors, airlines, cruise companies, international hotel chains, hospitality enterprises, and consumers themselves. At the same time, they will draw these stakeholders’ attention to the business case for the responsible consumption and production of food through strategic behaviors.

      Potential topics may include but are not limited to:

      –  Responsible food production for tourism businesses

      –  Responsible food consumption practices in the hospitality industry

      –  Circular economy and closed-loop systems adopted in restaurants, pubs and cafes

      –  Open innovation and circular economy approaches for a sustainable tourism industry

      –  Recycling of inedible food waste into compost

      –  Measuring the performance of responsible food production/sustainable consumption

      –  Digitalisation and the use of sharing economy platforms to reduce food waste

      –  Artificial intelligence for sustainable food systems

      –  Sustainable food supply chain management

      –  Food waste and the social acceptance of circular approaches

      –  Stakeholders’ roles in minimizing food waste in the hospitality industry

      –  Food donation initiatives to decrease food loss and waste

      References

      Ahn, J. (2019). Corporate social responsibility signaling, evaluation, identification, and revisit intention among cruise customers. Journal of Sustainable Tourism, 27(11), 1634-1647.

      Camilleri, M. A. (2019). The circular economy’s closed loop and product service systems for sustainable development: A review and appraisal. Sustainable Development, 27(3), 530-536.

      Camilleri, M. A. (2022). The rationale for ISO 14001 certification: A systematic review and a cost–benefit analysis. Corporate Social Responsibility and Environmental Management, 29(4), 1067-1083.

      Galati, A., Alaimo, L. S., Ciaccio, T., Vrontis, D., & Fiore, M. (2022). Plastic or not plastic? That’s the problem: Analysing the Italian students purchasing behavior of mineral water bottles made with eco-friendly packaging. Resources, Conservation and Recycling, 179. https://doi.org/10.1016/j.resconrec.2021.106060

      Huang, Y., Ma, E., & Yen, T. H. (2022). Generation Z diners’ moral judgements of restaurant food waste in the United States: a qualitative inquiry. Journal of Sustainable Tourism, https://doi.org/10.1080/09669582.2022.2150861

      Van Riel, A. C., Andreassen, T. W., Lervik-Olsen, L., Zhang, L., Mithas, S., & Heinonen, K. (2021). A customer-centric five actor model for sustainability and service innovation. Journal of Business Research, 136, 389-401.


      Filed under academia, Call for papers, Circular Economy, environment, food loss, food waste, Hospitality, hotels, responsible consumption, responsible production, responsible tourism, restaurants, Shared Value, sharing economy, Stakeholder Engagement, Strategy, Sustainability, Sustainable Consumption, sustainable development, sustainable production, sustainable tourism, tourism