Poster Description: We will share findings from the AI Trust Journal Project, a grassroots effort founded by Ms. Tanya Bass to raise awareness of, and develop solutions for, the double-edged sword that AI poses when used to plan, develop, and implement education for clinicians and patients: AI tools hold immense potential to enhance efficiency and personalization in healthcare education, yet they also carry unseen perils, because AI outputs are products of algorithms that reflect existing, often biased, patterns in their training data. The poster will show why this matters to the Alliance community, emphasizing the profound implications for patient care and clinician trust when AI-influenced educational content is biased, incorrect, or inappropriate. It will highlight the ethical responsibility of CPD professionals to recognize and actively challenge this problem, underscoring the indispensable role of human expertise in reviewing, refining, and validating AI outputs. It will also outline the fallacy of omission: the risk of unwittingly perpetuating bias by failing to question AI outputs. This poster aims to raise awareness among healthcare education professionals of the inherent biases in AI outputs and to equip them with practical strategies to identify, challenge, and mitigate those biases so they can produce fair, balanced, and equity-led education for clinicians and patients.
Learning Objectives:
Identify potential sources of bias in AI-generated content relevant to healthcare education
Implement critical evaluation strategies to scrutinize AI outputs for accuracy, fairness, and inclusivity
Evaluate the Alliance AI Committee's statement on AI best practices in healthcare education