CFP - HICSS-59 Minitrack - Human-AI Collaborations and Ethical Issues

danjongkim at gmail.com
Mon Mar 17 12:18:14 EDT 2025


Call for Papers

HICSS-59 Minitrack - Human-AI Collaborations and Ethical Issues

 

This mini-track is organized to draw attention to a wide variety of ethical
issues relevant to human-AI collaborations and to encourage more intensive
research on this emergent topic. It welcomes theoretical, methodological,
and empirical research addressing a variety of technical, social, and
ethical issues relevant to the complex and multifaceted challenges of AI systems
in interaction with human stakeholders (e.g., users, developers, and
competitors). The topics relevant to this minitrack include, but are not
limited to:

 

Human-AI Synergy in Online Platforms: While many online platforms are
actively embracing AI (Su et al., 2023) - particularly generative AI
technologies - to enhance user experience, the human-AI synergy on these
platforms requires careful examination. This synergy includes how platform
users perceive AI as an alternative and how other stakeholders, such as
service providers already operating on these platforms, react to this new
entrant. In particular, the impact of introducing AI on platform
sustainability should be examined, as AI may pose a threat to existing
business relationships.

 

Trust in AI Systems: As AI becomes increasingly capable of performing a wide
range of tasks, both user engagement and perceived reliability of AI systems
have seen significant growth, thereby amplifying their influence on human
decisions and actions. However, given that AI optimization can progress
rapidly and in unforeseen ways, ensuring that AI's objectives and behaviors
are consistent with human values and goals poses a considerable challenge
(Rezwana & Maher, 2022). Consequently, it is crucial to conduct thorough
research to ascertain when, where, and to what extent human users should
place their trust in AI systems and integrate their outputs into
decision-making processes. 

 

Multi-Agent Systems for Human-AI Collaboration: Multi-agent systems allow AI
agents to coordinate with one another, enabling AI to work with humans in
more complex domains. As multi-agent systems play a critical role in
advancing the effectiveness and efficiency of human-AI collaboration, we
call for more attention to their design, impact, and implications.

 

Transparency and Explainability: AI systems, which are commonly
characterized as "black boxes", can be difficult to understand or explain,
making it harder to earn human users' trust. The opacity of AI systems has
led to a growing need for research on methods to increase the transparency
and explainability of AI systems (Samek et al., 2019). Amidst the rise of
generative AI, attention should also be directed to how this emerging
technology can support transparency and interpretability with its remarkable
capabilities for language understanding and generation (Schneider, 2024). 

 

Bias and Fairness: AI systems can perpetuate and amplify biases present in
the data and the computation architecture used to train them, which can lead
to unfair and discriminatory outcomes. This has led to a growing need for
research on methods to increase fairness and reduce bias in AI systems
(Daugherty et al., 2019). 

 

Autonomy: As AI systems become more advanced, there are concerns about their
potential to make decisions without human oversight or control. This has led
to a growing need for research on methods to ensure the safety and
accountability of autonomous AI systems (Baum, 2020). 

 

Privacy and Data Breaches: The use of AI can raise concerns about the
collection, storage, and processing of large amounts of sensitive personal
data, making AI systems a target for data breaches and other forms of
cybercrime (Osenl et al., 2021). Methods for protecting individuals'
privacy, and the implications of data breaches resulting from the misuse of
AI systems, need to be studied.

 

Security and Vulnerabilities: AI systems can be vulnerable to adversarial
attacks (e.g., hacking, malware, and other forms of cyber-attack), which can
compromise their security and that of the systems and networks they are
connected to. Attackers may also manipulate input data or use other
techniques to trick a system into making incorrect decisions. These threats
raise growing concern about the confidentiality and integrity of AI systems
that interact directly with human users, calling for research on methods to
increase the robustness and security of AI systems against adversarial
attacks (Tariq et al., 2020).

 

Copyrights and Intellectual Property Rights: AI systems can be used to
create and distribute unauthorized copies of copyrighted and trademarked
material, making it difficult to enforce and protect such rights (Craig,
2022). In addition, the models of AI systems are trained on large amounts
of data, which are valuable assets and can be stolen or replicated (Oliynyk
et al., 2022). Furthermore, in a co-creative system, the role of AI can be
complex and substantial, making it difficult to answer "who owns the product
in a human-AI co-creation?" (Rezwana & Maher, 2022). These questions need to
be investigated.

 

Weaponization: AI systems are increasingly being used in autonomous weapon
systems, which raises ethical questions about human-AI collaborations in
warfare and the possibility of AI being used to create autonomous weapons
(Duberry, 2022). 

 

Generative AI and Large Language Models: The last few years have witnessed
remarkable progress in Generative AI (GAI) technologies, such as ChatGPT and
Stable Diffusion. These advancements have tremendously enhanced AI systems
and accelerated their adoption. At the same time, integrating this new
technology calls for more research addressing its various issues, including
design methodologies, development challenges, innovative applications, and
ethical concerns.

 

Multimodal Human-AI Collaboration: Large pre-trained foundation models are
increasingly equipped with multimodal input and output capabilities. These
advancements unlock fresh user experiences and pave the way for more
adaptive human-AI collaborations. However, the wide-ranging implications of
multimodal applications, along with the potential for misuse - particularly
through the generation of highly expressive and convincing multimodal
content - highlight the urgent need for careful consideration of how
multimodal AIs communicate with users.

 

Natural Language Processing and Text Analytics: A wide variety of ML/DL
methods, along with NLP, have been used to analyze voice and text in
conversation. Nonetheless, existing approaches suffer from technical
limitations, calling for more research to advance the state of the art in
voice/text analytics.

 

Job Displacement: While AI creates new job opportunities in the IT sector,
it has also rendered some jobs obsolete, profoundly influencing the skills
and competencies required for future employment. Does AI replace human
laborers, potentially exacerbating economic inequality, or does it prepare
job candidates for more productive roles, thereby promoting occupational
well-being? The potential impact of human-AI collaborations on human
employment should be carefully examined.

 

AI for Vulnerable Populations (e.g., people with mental disorders, people
with disabilities, minors): While affording universal accessibility, AI's
interaction with vulnerable groups may raise concerns due to inadequate
design or data contamination. Communication generated by AI for the general
population may appear insensitive or inappropriate for these groups.
Therefore, it is essential to explore design principles for AI systems that
specifically accommodate the needs of vulnerable populations.

 

Unintended Consequences of Human-AI Collaborations: Unintended consequences
can emerge from the complex interplay between human users and AI.
Recognizing these unintended consequences is crucial in guiding the
development of AI systems toward more equitable, responsible, and beneficial
outcomes for society.

 

Important dates (www.hicss.hawaii.edu):

 

April 15: Paper submission commences

June 15: Paper submission deadline

August 17: Notification of Acceptance/Rejection

September 22: Deadline for authors to submit final manuscript

October 1: Deadline for at least one author to register for HICSS

January 6-9, 2026: HICSS Conference

 

Conference Website: http://hicss.hawaii.edu/

Author Guidelines: http://hicss.hawaii.edu/tracks-and-minitracks/authors/

 

Minitrack Co-Chairs:

Dr. Dan J. Kim (Primary)

Professor, Information Technology & Decision Sciences

G. Brint Ryan College of Business

University of North Texas

Email: dan.kim at unt.edu

Dr. Victoria Yoon

Professor, Information Systems

School of Business

Virginia Commonwealth University

Email: vyyoon at vcu.edu 

Dr. Xunyu Chen

Assistant Professor, Information Systems

School of Business

Virginia Commonwealth University

Email: chenx at vcu.edu

Babak Abedin 

Professor, Business Analytics and Information Systems

Macquarie Business School

Macquarie University

Email: Babak.Abedin at mq.edu.au

 
