Advances in Health Information Science and Practice
Research Article · Artificial Intelligence · AHISP Vol. 1, Issue 2, 11/3/2025

Assessing Artificial Intelligence Literacy: What do Health Information Professionals Know about AI?

Diane Dolezel, PhD, MSCS, BSHIM; Elize Lambert, PhD; Valerie Watzlaf, PhD; Mary Morton, PhD; Karima Lalani, PhD; Jaime Sand, PhD; Susan Fenton, PhD
Health Informatics and Information Management, Texas State University (Dolezel, Lambert); School of Health and Rehabilitation Sciences, University of Pittsburgh (Watzlaf); Health Informatics and Information Management, University of Mississippi Medical Center (Morton); Department of Health Systems and Population Health, University of Washington (Lalani); School of Public and Population Health, Boise State University (Sand); Department of Clinical & Health Informatics, The University of Texas Health Science Center at Houston (Fenton)
DOI: 10.63116/KLVV3078 · eLocation ID: KLVV3078

Abstract

Background

Previous AI literacy studies have been limited to clinical students and professionals and included subjective reporting. This survey study explored the extent of AI knowledge in health information professionals with subjective and objective questions. The objective of the study was to inform the International Federation of Health Information Management Associations (IFHIMA) and national health information bodies about current AI literacy levels and the education needs of their members.

Methods

A descriptive survey was adapted from two validated, previously-published study instruments. Survey data were collected between December 5, 2024, and February 28, 2025 using a self-administered Qualtrics (Seattle, WA) online survey link distributed by email to IFHIMA members. The survey link was also distributed on the LinkedIn professional networking platform (LinkedIn Corp; Sunnyvale, CA) by multiple IFHIMA members. Results were analyzed using Chi-square, ANOVA, and Tukey HSD post-hoc tests to assess the associations between the categorical response variables and the subjective survey question measures.

Results

A total of 176 participants began the survey. Data were cleaned to exclude 48 incomplete responses, leaving 128 complete and valid responses for analysis. AI knowledge varied by demographics; country of employment or residence and professional association membership were shown to influence familiarity with AI. Many health information professionals reported limited or no AI experience, and those with practical AI experience performed better on foundational AI knowledge questions, suggesting that experiential learning scaffolds AI literacy. Most respondents understood emerging AI-related threats. However, regardless of experiences with everyday AI tools, they struggled with AI modeling and product development.

Conclusions

The study identified major gaps in AI knowledge. The authors provide input for educators aiming to align educational programs with job market demand by increasing AI knowledge content and by addressing gaps through targeted curriculum development and educator training.

Introduction

Artificial Intelligence (AI) refers to software that imitates aspects of intelligent human behavior, including technologies such as machine learning (ML), natural language processing (NLP), and generative AI, which is focused on creating new content.1–3 AI usage is prevalent in healthcare, where it is used to reduce costs, increase access, improve outcomes, and support innovations in disease diagnosis and treatment, personalized medicine, and predictive modeling.4 AI applications include detecting atrial fibrillation, predicting cardiovascular disease, and image-based diagnosis.5 The rapid diffusion of AI into healthcare has led to health information professionals focusing on developing AI literacy, which is the ability to understand, use, and evaluate AI systems.6,7

Recent advancements in generative AI have transformed healthcare, particularly in electronic health records (EHRs), medical language processing, and personalized care. By using speech recognition and NLP, generative AI summarizes patient conversations for integration into EHRs, improving care accessibility and continuity.8 Its benefits include enhanced diagnosis and treatment accuracy, improved patient care support, increased engagement, advanced medical training, task automation, and streamlined radiology reports, which reduce human errors, redundancy, and healthcare costs, impacting the Health Information (HI) field.4

Clinical Decision Support Systems (CDSSs) improve outcomes by providing real-time, individualized medical advice and alerts, demonstrating early adoption of AI technologies in health information (HI). Computer Assisted Coding (CAC) uses NLP to assign billing codes from documentation, and after 20 years, more advanced models can enhance this process.9 CAC’s rule-based algorithms analyze medical records and suggest codes, allowing coders to focus on auditing and accuracy, thus improving efficiency and patient care by accelerating the revenue cycle and reducing the number of coding errors.9 AI systems that are more advanced than CAC can potentially enable autonomous clinical coding.

To address AI’s benefits in healthcare and education, HI students and professionals need basic knowledge of AI concepts.6 HI is an interdisciplinary field that applies computer science, information technology, healthcare research, public health, and patient care to improve outcomes.10 HI educators could incorporate AI into curricula to teach data mining for cancer detection11 and predictive analytics for improving medication compliance, with advanced classes focused on creating and testing AI applications.12

As AI adoption in healthcare increases, concerns arise that healthcare education programs may not be aligned with job market needs, leaving HI professionals lacking the skills to support AI adoption and to assess the clinical effects of AI.13 Low AI literacy, which includes social and technical skills, can be a barrier to the adoption and implementation of clinical AI applications.12–14 Assessing AI literacy in HI professionals is essential for identifying knowledge gaps in AI education, which can inform curriculum design. Addressing AI challenges, especially ethical, privacy, bias, and quality issues, offers opportunities for HI professionals to mitigate those challenges and improve AI systems.

Recent research has offered self-reported assessments of medical students’ and professionals’ understanding of generative AI, primarily focused on clinicians using Likert-scale surveys, leaving a gap in understanding the perspectives of HI students and professionals.3,13,15 Our study addresses this gap by combining self-reported questionnaires with objective AI assessments to identify discrepancies between perceived and actual AI understanding across various domains, including socio-technical aspects.

As such, this study was undertaken to inform the International Federation of Health Information Management Associations (IFHIMA) about AI knowledge among HI professionals and to provide data for AI training planning, with the understanding that the results would also be applicable to national health information bodies. For educators, we sought to identify knowledge gaps and strategies for aligning educational programs with job market demands so that students are well prepared to enter the workforce. For professionals and employers, we assessed AI comprehension to tailor professional development materials and ensure workforce readiness.

Methods

Study Design

An exploratory, descriptive, adapted online survey was conducted to measure the AI literacy of IFHIMA HI professionals. We conducted Chi-square, ANOVA, and Tukey honestly significant difference (HSD) post-hoc tests to assess the associations between the categorical response variables and the subjective survey question measures. Effect sizes were measured by Cramer’s V for Chi-square tests and Cohen’s d for some of the ANOVAs.
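As a concrete illustration of the first of these procedures, the sketch below runs a chi-square test of independence and computes Cramér's V in Python. The contingency table is synthetic example data, not the study's, and the code is illustrative rather than the authors' actual analysis.

```python
# Illustrative sketch of a chi-square test with a Cramér's V effect size,
# as used to relate categorical demographics to subjective responses.
# The contingency table below is synthetic, not survey data.
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table: np.ndarray) -> float:
    """Cramér's V for an r x c contingency table: sqrt(chi2 / (n * (min(r, c) - 1)))."""
    chi2 = chi2_contingency(table)[0]
    n = table.sum()
    r, c = table.shape
    return float(np.sqrt(chi2 / (n * (min(r, c) - 1))))

# Rows: education level (lower/higher); columns: AI experience (disagree/neutral/agree)
table = np.array([[20, 10, 8],
                  [12, 14, 30]])
chi2, p, dof, expected = chi2_contingency(table)
v = cramers_v(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}, Cramér's V={v:.3f}")
```

Cramér's V rescales the chi-square statistic to a 0–1 range, which is what makes effect sizes comparable across tables of different dimensions.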

Survey Instrument

The questionnaires and assessments used in this study were adapted from two validated, previously published instruments: the AI Literacy Scale from Pinski and Benlian (2023)16 and the objective AI literacy scale from Weber, Pinski, and Baum (2023).17 The Pinski and Benlian AI literacy scale16 has thirteen items that measure five dimensions of human socio-technical competency (technology knowledge, actors in AI, steps knowledge, AI usage experience, and AI design experience), which together constitute AI literacy. The objective AI literacy scale from Weber, Pinski, and Baum17 has 16 multiple-choice items, each with four response choices.

The online web-based questionnaire for this study had three sections. Section 1 asked demographic questions on association membership, participants’ country of residence and country of work, age, education level, job title, credentials, and certifications. Section 2 collected self-assessed knowledge and perceptions of AI using five-point Likert-scale questions (strongly disagree, disagree, neutral, agree, strongly agree) presented in a rating matrix. Section 3 collected objective assessments of AI knowledge using multiple-choice questions with four response choices, plus one open-ended question inviting comments related to AI education for HI professionals. The survey questions are presented in Tables 2-4.

Data Collection

Data were collected with convenience sampling. IRB approval was obtained on November 27, 2024 with exempt status (HSC-SBMI-24-1140 - Global AI Literacy Survey for Health Information Professionals) from the Committee for the Protection of Human Subjects of the study organization. Data were collected between December 5, 2024 and February 28, 2025, using a self-administered Qualtrics (Seattle, Washington) online survey link distributed by email to HI professionals who were members of an IFHIMA organization. The survey link was also posted on LinkedIn by multiple IFHIMA members. A total of 176 participants began the survey. Data were cleaned to exclude 48 incomplete responses in which the demographic and AI self-assessment questions were not all answered. After data cleaning, 128 complete and valid responses were retained for analysis.

Data Analysis

Data were analyzed with Microsoft Excel (version 16.95.1; Redmond, Washington) and Python statistical software (version 1.96.2) for Macintosh computers.
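A minimal sketch of the ANOVA and Tukey HSD workflow in Python follows. The group scores are synthetic placeholders, not survey data, and `scipy.stats.tukey_hsd` stands in for whatever routine the authors actually used.

```python
# Illustrative one-way ANOVA with Tukey HSD post-hoc comparisons, mirroring
# the analysis of objective knowledge scores across groups. Scores are synthetic.
from scipy.stats import f_oneway, tukey_hsd

group_a = [8, 9, 7, 10, 9, 8, 7, 9]   # e.g., knowledge scores for one country
group_b = [12, 11, 13, 12, 11]
group_c = [11, 12, 10, 13, 12, 11]

f_stat, p_val = f_oneway(group_a, group_b, group_c)
print(f"ANOVA: F={f_stat:.2f}, p={p_val:.4f}")

# Pairwise post-hoc comparisons with family-wise error control.
res = tukey_hsd(group_a, group_b, group_c)
print(res.pvalue)  # adjusted p-value matrix, shape (3, 3)
```

The ANOVA answers whether any group differs; the Tukey HSD step identifies which specific pairs differ while controlling the family-wise error rate.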

Results

Sample descriptive statistics

Table 1 displays the respondents’ demographics. Most respondents (45; 35.2%) were aged 50 to 59 years. The respondents’ predominant country of residence was the United States (60; 46.9%), followed by Australia (15; 11.7%). Most had education levels of ISCED 7 (Master’s) (53; 41.4%) or ISCED 8 (Doctorate) (37; 28.9%). Most respondents reported association membership in AHIMA (57; 44.5%) or Other (47; 36.7%). The most frequently reported job titles were Academics / Education (70; 54.7%), Coder (44; 34.4%), and Information Systems and Technologies (43; 33.6%). Fewer than half (53; 41.4%) of respondents reported holding a non-HI credential or certification.

Table 1.Respondent Demographics
Characteristics Number Percent (%)
Age
18-19 2 1.6
20-29 3 2.3
30-39 16 12.5
40-49 28 21.9
50-59 45 35.2
60-69 28 21.9
70+ 6 4.7
Country
United States 60 46.9
Australia 15 11.7
Canada 10 7.8
Saudi Arabia 10 7.8
Oman 5 3.9
Indonesia 5 3.9
Nigeria 4 3.1
Spain 4 3.1
Saint Lucia 2 1.6
Japan 2 1.6
United Republic of Tanzania 2 1.6
South Korea 2 1.6
Kenya 2 1.6
Qatar 1 0.8
Philippines 1 0.8
Ghana 1 0.8
Barbados 1 0.8
New Zealand 1 0.8
Do you work in a country that is different from the one in which you reside?
Yes 6 4.7
No 122 95.3
Country where you reside (if you work in a country other than where you reside)    
Australia 1 16.7
India 1 16.7
New Zealand 1 16.7
Oman 1 16.7
Qatar 1 16.7
Saudi Arabia 1 16.7
Country where you work
Saudi Arabia 2 33.3
Oman 2 33.3
Qatar 1 16.7
Samoa 1 16.7
Country where employer is located
Australia 2 33.3
Oman 2 33.3
Qatar 1 16.7
Saudi Arabia 1 16.7
Hold non-health informatics and information management (HIIM) credentials or certifications
Yes 53 41.4
No 75 58.6
Education Level
ISCED 3 2 1.6
ISCED 4 2 1.6
ISCED 5 5 3.9
ISCED 6 29 22.7
ISCED 7 53 41.4
ISCED 8 37 28.9
National or International Association Member
American Health Information Management Association 57 44.5
Other 47 36.7
Australian Health Information Management Association 17 13.3
Canadian Health Information Management Association 10 7.8
Indian Health Information Management and Health Informatics Association 2 1.6
Institute of Health Records and Information Management 2 1.6
Barbados Health Information Management Association 1 0.8
La Sociedad Española de Documentación Médica (SEDOM) 1 0.8
Pacific Health Information Network (PHIN) 1 0.8
Job Categories
Academics / Education (including full-time, part-time, adjunct, administration) 70 54.7
(Coder) Health data management, structure, content, and standards, including clinical classification systems, standard terminologies, and methodologies 44 34.4
Information systems and technologies 43 33.6
Healthcare analytics, statistics, decision support, epidemiology, and clinical research 41 32.0
Organizational development and resource management, including project and operations management 40 31.3
Clinical documentation quality and standards 38 29.7
Legal, regulatory, ethical, privacy and confidentiality issues of healthcare systems, including data and information governance 36 28.1
Healthcare systems and services, including health insurance, reimbursement, and healthcare funding methodologies 27 21.1
Clinical quality management and performance improvement 22 17.2
Currently retired (If retired, please select your previous job area/title prior to retirement) 3 2.3

Descriptive Analysis of Survey Questions

Table 2 displays the subjective survey question responses for Section 2, which collected self-assessed knowledge and perceptions of AI. Respondents’ experience and familiarity with AI in healthcare were evaluated with Likert-scale items rated on a five-point scale ranging from strongly disagree to strongly agree. When asked about their experience interacting with different types of AI systems (“I have experience in interaction with different types of AI…”), 46.1% agreed (40; 31.3%) or strongly agreed (19; 14.8%), 35.2% strongly disagreed (17; 13.3%) or disagreed (28; 21.9%), and 18.8% (24) were neutral. Most (64.1%; 82/128) agreed (54; 42.2%) or strongly agreed (28; 21.9%) that they had frequent interactions with AI. A substantial majority (92; 71.9%) reported limited experience in AI model design, responding strongly disagree (57; 44.5%) or disagree (35; 27.3%) to the statement, “I have experience in designing AI models, for example, a neural network.” Most respondents (99; 77.3%) reported limited experience in AI product development, responding strongly disagree (60; 46.9%) or disagree (39; 30.5%) to the statement, “I have experience in the development of AI products.” In response to the statement, “In general, I know the unique facets of AI and humans and their potential roles in human-AI collaboration within healthcare,” a high proportion reported knowledge (agree, 50 [39.1%]; strongly agree, 16 [12.5%]), while 27.3% (35) indicated a lack of understanding by disagreeing or strongly disagreeing.

Table 2.Subjective Survey Question Responses
Item statements Strongly Disagree n (%) Disagree n (%) Neutral n (%) Agree n (%) Strongly Agree n (%)
I have experience in interaction with different types of AI, like Computer-assisted coding (CAC), clinical decision support systems (CDSS), ambient documentation, etc. 17 (13.3) 28 (21.9) 24 (18.8) 40 (31.3) 19 (14.8)
I have experience in the usage of AI through frequent interactions in my everyday life. 8 (6.3) 18 (14.1) 20 (15.6) 54 (42.2) 28 (21.9)
I have experience in designing AI models, for example, a neural network. 57 (44.5) 35 (27.3) 17 (13.3) 13 (10.2) 6 (4.7)
I have experience in the development of AI products. 60 (46.9) 39 (30.5) 12 (9.4) 9 (7.0) 8 (6.3)
In general, I know the unique facets of AI and humans and their potential roles in human-AI collaboration within healthcare. 13 (10.2) 22 (17.2) 27 (21.1) 50 (39.1) 16 (12.5)

Table 3 displays responses to Section 3, the general, objective survey questions regarding AI. When asked about the historical emergence of AI, fewer than half of the respondents (60; 46.9%) correctly identified the 1950s as the era when AI was first formally mentioned. A majority of respondents (93; 72.7%) correctly identified that humans and AI have distinct strengths and weaknesses. A substantial majority (97; 75.8%) accurately acknowledged that AI research is interdisciplinary and encompasses multiple technologies. When assessing awareness of AI-related risks, 61.7% (79) correctly identified “deep fakes rendering videos unattributable” as a significant concern. Other selected responses—such as digital assistants controlling self-driving cars (18; 14.1%) or AI image and voice generation affecting art and language (13 [10.2%] and 18 [14.1%], respectively)—reflect secondary or less widely accepted risks.

Table 3.General Objective AI questions
Question and Response Options n (%)
AI was first mentioned in  
The 1880s 11 (8.6)
The 1950s 60 (46.9)
The 1980s 31 (24.2)
The 2000s 26 (20.3)
How are human and artificial intelligence (AI) related  
Their strengths and weaknesses converge 25 (19.5)
They predict each other 7 (5.5)
They are the same concerning strengths and weaknesses 3 (2.3)
They are different, each has its own strengths and weaknesses 93 (72.7)
Artificial intelligence (AI) research …  
Is only fiction at this point in time 2 (1.6)
Happens in an interdisciplinary field including multiple technologies 97 (75.8)
Revolves predominantly around optimization 22 (17.2)
Refers to one specific AI technology 7 (5.5)
What is a possible risk for humans of artificial intelligence (AI) technology?  
Deep fakes render videos unattributable 79 (61.7)
Digital assistants take over self-driving cars 18 (14.1)
Image generators break the rules of art 13 (10.2)
Voice generators make people unlearn natural languages 18 (14.1)
What is the central distinction between supervised and unsupervised learning?  
Unsupervised learning may happen anytime 53 (41.4)
Supervised learning uses labeled datasets 48 (37.5)
Supervised learning is performed by supervised personnel 11 (8.6)
Supervised learning supersedes unsupervised learning 16 (12.5)
Which of the following statements is true?  
ML and AI are mutually exclusive 6 (4.7)
AI is a part of ML 17 (13.3)
AI and ML are the same 4 (3.1)
Machine learning (ML) is a part of artificial intelligence (AI) 101 (78.9)
Which is a typical application of artificial intelligence (AI) at which it is usually better than non-AI?  
Creating annual reports 20 (15.6)
Undefined processes 24 (18.8)
Image recognition 71 (55.5)
Hardware space analysis 13 (10.2)
Running the same request with the same data on the same artificial intelligence (AI)  
Increases the computing speed 29 (22.7)
Never gives different results 17 (13.3)
Could give different results 78 (60.9)
Doubles the computing time 4 (3.1)

Understanding of supervised versus unsupervised learning varied, with only 37.5% (48) correctly identifying that supervised learning involves labeled datasets. A larger portion (53; 41.4%) chose “unsupervised learning may happen anytime.” A strong majority (101; 78.9%) correctly identified that ML is a subset of AI, while 13.3% (17) believed the inverse (that AI is a part of ML). The most correctly identified application where AI outperforms traditional methods was image recognition (71; 55.5%). Undefined processes (24; 18.8%), annual report creation (20; 15.6%), and hardware space analysis (13; 10.2%) demonstrated a mix of respondents’ overestimation and underestimation of AI’s capabilities in real-world scenarios. In evaluating the understanding of stochasticity in AI models, 60.9% (78) correctly recognized that running the same input through an AI system could yield different results. Others held misconceptions, believing that it never gives different results (17; 13.3%), increases computing speed (29; 22.7%), or doubles computing time (4; 3.1%).
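The labeled-versus-unlabeled distinction behind the supervised/unsupervised item can be made concrete with a toy sketch. All data below are synthetic, and `predict_1nn` is a hypothetical helper, not part of the survey instrument.

```python
# Supervised learning trains on examples paired with labels; unsupervised
# learning receives the same inputs without labels and must infer structure.
# Synthetic toy data for illustration only.
from math import dist

# Supervised setting: each training example carries a label.
labeled = [((1.0, 1.0), "low"), ((1.2, 0.8), "low"),
           ((4.0, 4.2), "high"), ((3.8, 4.0), "high")]

def predict_1nn(point):
    """Predict by copying the label of the nearest labeled example."""
    nearest = min(labeled, key=lambda ex: dist(ex[0], point))
    return nearest[1]

print(predict_1nn((1.1, 0.9)))  # nearest neighbors are "low" examples
print(predict_1nn((3.9, 4.1)))  # nearest neighbors are "high" examples

# Unsupervised setting: the same points with the labels stripped away;
# any grouping (e.g., clustering) must be discovered from the data alone.
unlabeled = [features for features, _ in labeled]
```

The key point tested by the survey item is visible in the data structures themselves: the supervised set pairs every input with a label, while the unsupervised set does not.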

Table 4 displays the assessment of technical understanding of AI systems and their regulatory, ethical, and computational dimensions. A set of technical objective questions was administered to the 22 of 128 respondents who stated they were involved in programming, creating, developing, or evaluating AI models. When asked to identify an objective that is not part of current AI regulation, 31.8% (7) correctly selected “Enforcing a no-bias policy to ban all potential biases that can arise from AI.” Nearly half (10; 45.5%) incorrectly selected “Ensuring legal certainty to facilitate investment and innovation in AI.” Just over half of respondents (12; 54.5%) correctly identified the European Commission’s AI Act as a major regulatory initiative, while 36.4% (8) mistakenly attributed regulatory leadership to the United Nations. Most respondents (17; 77.3%) correctly identified that working with AI leads to a shift in tasks performed by humans. A high percentage of respondents (16; 72.7%) accurately identified diversity, bias, and transparency as central ethical concerns in AI development and deployment.

Table 4.Technical Objective AI Questions for Programmers, Creators, Developers, or Evaluators of AI (n=22)
Question and Response Options n (%)
Which is not an objective of current artificial intelligence (AI) regulation?  
Ensuring that AI systems placed on the market are safe and respect existing laws on fundamental rights 4 (18.2)
Enforcing a no-bias policy to ban all potential biases that can arise from AI 7 (31.8)
Facilitating the development of a market for lawful, safe, and trustworthy AI 1 (4.5)
Ensuring legal certainty to facilitate investment and innovation in AI 10 (45.5)
Which is a major regulation that has been passed specifically for artificial intelligence (AI)?  
European Commission’s Act for Artificial Intelligence 12 (54.5)
European Regulation for Responsible AI 1 (4.5)
United Nations Framework for the Ethical Use of AI 8 (36.4)
American Regulations on the Usage of AI 1 (4.5)
Which potential consequence can working with artificial intelligence (AI) have on humans that interact with it?  
Shift of evaluation periods 1 (4.5)
Debiasing of human literacy 1 (4.5)
Shift tasks performed by humans 17 (77.3)
Debiasing of result interpretation 3 (13.6)
Key ethical issues surrounding artificial intelligence (AI) include:  
Artificial neural network (ANN), genetic algorithm (GA), and simulations Annealing 2 (9.1)
Future predictions and past overfitting 2 (9.1)
Cold start problem, omitted variable trap, and sunk cost fallacy 2 (9.1)
Diversity, bias, and transparency 16 (72.7)
What always distinguishes decision trees from support vector machines?  
Decision trees are more interpretable 9 (40.9)
Decision trees generate more predictions 5 (22.7)
Decision trees are more implicit 5 (22.7)
Decision trees are trained faster 3 (13.6)
Which is a typical split of testing and training data for development purposes?  
40% training, 40% testing, 20% test-training together 10 (45.5)
80% training, 20% testing 10 (45.5)
95% training, 5% testing 0 (0)
It does not matter 2 (9.1)
Which is not a strictly necessary part of a single artificial intelligence (AI) system’s development process?  
Training/learning 4 (18.2)
Data preprocessing 6 (27.3)
Model definition 1 (4.5)
Benchmarking 11 (50.0)
Which is not a strictly necessary part of an artificial neural network (ANN)?  
Input layer 3 (13.6)
Output layer 0 (0)
User layer 16 (72.7)
Hidden layer 3 (13.6)

Responses were more varied on model interpretability. While 40.9% (9) correctly selected “Decision trees are more interpretable,” others believed they generate more predictions (5; 22.7%), are more implicit (5; 22.7%), or train faster (3; 13.6%). Equal proportions of respondents (10; 45.5%) selected “80% training, 20% testing” (correct) and a flawed option describing a “40/40/20” split that included a test-training mix. Half of the respondents (11; 50.0%) accurately identified benchmarking as not strictly necessary in every AI system development process, while others incorrectly excluded essential steps such as data preprocessing (6; 27.3%) or training/learning (4; 18.2%). A clear majority (16; 72.7%) correctly identified the “User layer” as not a structural component of an artificial neural network (ANN), demonstrating a solid grasp of ANN architecture. However, small proportions mistakenly excluded essential layers such as the input and hidden layers (3; 13.6% each).
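The conventional 80/20 partition named in the correct option can be sketched in a few lines of Python (placeholder record IDs, purely illustrative):

```python
# Illustrative 80/20 train/test split: shuffle once, then slice, so every
# record lands in exactly one partition. Record IDs are placeholders.
import random

records = list(range(100))          # stand-ins for dataset rows
rng = random.Random(42)             # fixed seed for reproducibility
rng.shuffle(records)

cut = int(0.8 * len(records))       # 80% boundary
train, test = records[:cut], records[cut:]
print(len(train), len(test))        # 80 20
```

Shuffling before slicing is what keeps the held-out 20% representative; the flawed “test-training together” option in the survey item violates exactly this separation.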

Inferential Statistics

Chi-Square

Chi-square tests were used to examine associations between demographic characteristics and subjective survey responses, as well as between subjective questions and individual objective knowledge questions. Four chi-square tests showed significant demographic associations at p < 0.05. First, educational level was significantly associated with subjective AI experience, with higher educational attainment correlating to more AI tool interaction, likely due to the greater access to AI technologies or curricular exposure within higher education settings (V=0.266, p = 0.014).

Second, educational level was associated with understanding of human–AI roles, with academic/educator background influencing perceptions of AI’s collaborative function in healthcare and academic educators reporting more frequent AI use (V=0.261, p = 0.02). Third, job type was significantly associated with involvement in AI product development, with a large effect size (V=0.792, p = 0.022). Fourth, academic/educator status was associated with frequent AI use (V=0.386, p < 0.001) and with AI experience (V=0.274, p = 0.047), suggesting educators may have more exposure to AI technologies through teaching, research, or academic institutions. Post-hoc tests with Bonferroni corrections indicated that no individual pairwise comparisons remained significant after correction, likely due to modest differences in proportions and limited subgroup sizes, which reduce statistical power under correction.

Additionally, chi-square tests between individual subjective survey questions and objective knowledge items showed specific patterns of association. For instance, frequent AI usage and correct identification of the historical origin of AI were strongly associated (V = 0.38, p < 0.001). Respondents who reported experience in designing AI models were significantly more likely to answer two key items correctly: the timeline of AI’s emergence (V = 0.273, p = 0.0486) and the difference between supervised and unsupervised learning (V = 0.275, p = 0.0460).

Experience in developing AI products was significantly linked to an improved understanding of supervised versus unsupervised learning paradigms (V = 0.284, p = 0.0352), reinforcing the idea that applied, hands-on development fosters comprehension of core AI concepts. When testing associations between subjective questions and technical AI knowledge, the analysis revealed significant relationships between self-reported AI experience and respondents’ ability to correctly answer technical questions related to AI development and regulation. First, respondents who reported greater overall experience with AI were significantly more likely to correctly identify benchmarking as not being a key part of the AI development process. Additionally, a significant relationship was found between frequent AI usage and correctly identifying the European Commission’s AI Act as a major regulatory framework (V = 0.615, p = 0.0396).

Chi-Square Educators vs. Non-Educators

Table 5 presents Chi-square test results comparing educators with non-educators on general AI literacy questions, while Table 6 focuses on technical AI literacy. Given that educators represented the largest job category in our sample (19.2%) and reported significantly higher AI experience and usage, we separated the data for educators and non-educators to examine whether differences in AI literacy were statistically significant.

Table 5.Chi Square Results for Educators vs Non-Educators on General AI Literacy
Question Educators Correct % Non-Educators Correct % p-value Cramér's V
ai_first_mentioned 45.71 48.28 0.91 0.01
human_ai_relationship 77.14 67.24 0.29 0.09
ai_research_scope 80.00 70.69 0.31 0.09
ai_risk 58.57 65.52 0.53 0.05
supervised_vs_unsupervised 38.57 36.21 0.93 0.01
ml_vs_ai 75.71 82.76 0.45 0.07
ai_application 55.71 55.17 1.00 0.00
ai_consistency 68.57 51.72 0.08 0.16
Table 6.Chi Square Results for Educators vs Non-Educators on Technical AI Literacy
Question Educators Correct % Non-Educators Correct % p-value Cramér's V
ai_regulation 38.46 22.22 0.73 0.07
ai_major_regulation 53.85 55.56 1.00 0.00
ai_human_impact 76.92 77.78 1.00 0.00
ai_ethical_issues 76.92 66.67 0.96 0.01
dt_vs_vector 46.15 33.33 0.87 0.03
training_testing_ratio 53.85 33.33 0.61 0.11
ai_development_process 61.54 33.33 0.39 0.18
ann_structure 76.92 66.67 0.96 0.01

In the general AI literacy section, educators performed slightly better on some questions—such as human-AI relationship and AI research scope—but none of the differences reached statistical significance. All effect sizes were weak, suggesting that both groups demonstrated similar levels of general AI knowledge.

Similarly, the technical AI Chi-square tests showed no statistically significant differences, and effect sizes were small; the largest were for AI development process (V=0.185) and training/testing ratio (V=0.110), which may warrant further analysis. Notably, the sample size for this comparison was small (22), which limits the statistical power of the analysis.

ANOVA

ANOVA analyses found significant differences by country, professional association, self-reported AI knowledge, and AI usage. Total objective knowledge scores, which reflect respondents’ conceptual understanding of AI, differed significantly by country for both general AI knowledge (p < 0.005) and technical AI knowledge (p < 0.004). Professional association membership (p = 0.01) and educational level (p = 0.04) were significantly associated with general AI knowledge. Tukey HSD post-hoc results revealed that respondents from Nigeria (p = 0.02, Cohen’s d = 0.008) and Saudi Arabia (p = 0.03, Cohen’s d = -0.090) outperformed their US counterparts, although both effect sizes are negligible by conventional benchmarks.

Discussion

Previous studies on AI education were limited to clinical students and professionals and included subjective reporting.1–3 This study assessed AI knowledge among HI professionals, offering insights to guide education, training, and professional development by identifying knowledge gaps. This knowledge is critical in addressing workforce shortages in digital health and in developing curricula to meet the needs of an AI-digitized world.18

Among our respondents, many HI professionals reported limited or no AI experience, and those with practical AI experience performed better on foundational AI knowledge questions, suggesting that experiential learning scaffolds AI literacy. Most respondents understood emerging AI-related threats; however, regardless of experience with everyday AI tools, they struggled with AI modeling and product development. AI knowledge varied by demographics; country of employment or residence and professional association membership influenced familiarity with AI, likely due to differences in national digital health strategies, training, and access to professional resources. Job type significantly influenced involvement in AI product development, with hands-on engagement varying across roles such as academia, coding, and data analytics. AI experience increased with education level; educators, especially academics, reported the highest use, likely due to greater exposure in research settings. Respondents showed limited understanding of supervised and unsupervised learning and of the stochastic behavior of AI and ML models, but greater familiarity with AI tools and human-AI interaction.

Comparison to Previous Research

Our results aligned with prior studies of AI knowledge and perceptions, which found limited practical exposure to AI applications among healthcare professionals: a majority reported little or no exposure to AI applications in their work environment, even as organizations recognized the need for employees with AI-related skills.19–23 Our respondents struggled with AI models and with the development and use of AI products, reflecting a lack of hands-on experience with AI products and curricula development and a low level of general AI understanding, which supports other studies.24–27

In this study, responses related to the human-AI relationship varied. Over half of respondents recognized the distinct roles of humans and AI, and most identified their respective strengths and weaknesses and how these differences shift the task distribution. However, 27.4% lacked an understanding of this collaboration, which agrees with other research.28 In contrast, only a few respondents explicitly equated AI and ML, yet most failed to identify the misconception that AI and ML are interchangeable terms.

In our study, most respondents correctly recognized diversity, bias, and transparency as key ethical issues in AI. In contrast, another study identified the misconception that AI is unbiased.28 Similar to our results, a systematic review found that bias was the most frequently discussed ethical issue, alongside justice, fairness, and transparency29; interviews with nurses highlighted transparency and privacy challenges and the risk that AI algorithms perpetuate biases and disparities, which could result in unequal treatment or misdiagnosis.30

Our study found that professional association membership correlated with higher AI knowledge, suggesting that access to resources and training increases AI literacy. This supports our suggestion that HI colleges and universities provide faculty AI training and instruction on developing AI curricula, aligning with other research on integrating AI content into classroom instruction.31 Academic institutions should offer faculty development opportunities such as workshops, online certificate programs, and communities of practice to enhance AI literacy and pedagogical readiness.32 Effective integration of AI into the curriculum requires multidisciplinary collaboration, particularly between health informatics and computer science faculties, to co-design syllabi and develop relevant course content.33

Limitations

This study had some limitations. First, we were unable to conduct a power analysis because the total population of IFHIMA members is uncertain; IFHIMA has membership numbers for individual member associations, but it does not report a total membership count. Therefore, the sample may not be truly representative of the HI population, although we believe it is a good starting point for identifying HI professionals’ strengths and needs in AI. Second, a small subsample (n = 22) involved in AI development was used to examine technical, ethical, computational, and regulatory understanding. For the reasons stated above, this subsample is also not a true representation of the population under study and is not fully generalizable, which could affect our results. However, our findings could provide a starting point for future research addressing AI knowledge gaps through targeted curriculum development and educator training. Third, the study had a high dropout rate: 176 respondents started the survey and only 128 completed it, a 27% attrition rate. This may introduce non-response bias, so the results may not accurately reflect our target population. Fourth, we recognize the limitations of self-reported data and relied on respondents to answer in good faith that they were health informatics professionals. Finally, recruitment through the IFHIMA membership list and member outreach on LinkedIn introduces potential sampling bias: it may have attracted respondents who are professionally employed, have higher digital literacy, and have HI expertise, limiting generalizability to the broader HI population.

Conclusions

The survey analysis confirmed that while a considerable proportion of respondents reported some level of experience with AI systems, several reported limited or no such experience. Despite frequent use of everyday AI tools, many respondents struggled with AI models and with the development and use of AI products. Respondents with lower educational levels reported less AI experience, identifying a group to target for additional education and training. This study identifies a major gap in AI knowledge and provides input for educators aiming to align educational programs with job market demand by increasing AI content.


Author contributions

SF, DD, EL, and VW designed the study; SF, DD, EL, VW, MM, KL, and JS researched the literature; SF, DD, and EL contributed to the survey; SF, DD, EL, and VW provided statistical advice; SF, EL, and VW extracted and analyzed the data; SF, DD, EL, and VW reviewed the analyses; SF, DD, and EL contributed to the manuscript drafts; and SF, DD, EL, VW, MM, KL, and JS finalized the manuscript.

Disclosures

The authors have nothing to disclose.

Funding

The authors received no funding for this research.

Bibliography

  • 1.
    Merriam-Webster. AI. Accessed January 9, 2025. https:/​/​www.merriam-webster.com/​dictionary/​
  • 2.
    Baig M, Hobson C, GholamHosseini H, Ullah E, Afifi S. Generative AI in Improving Personalized Patient Care Plans: Opportunities and Barriers Towards Its Wider Adoption. Applied Sciences. 2024;14(23):10899. doi:10.3390/​app142310899
  • 3.
    Russell RG, Novak LL, Patel M, et al. Competencies for the use of artificial intelligence–based tools by health care professionals. Academic Medicine. 2023;98(3):348-356. doi:10.1097/​ACM.0000000000004963
  • 4.
    Moulaei K, Yadegari A, Baharestani M, Farzanbakhsh S, Sabet B, Afrash MR. Generative artificial intelligence in healthcare: A scoping review on benefits, challenges and applications. International Journal of Medical Informatics. 2024;188(4). doi:10.1016/​j.ijmedinf.2024.105474
  • 5.
    Briganti G, Le Moine O. Artificial Intelligence in Medicine: Today and Tomorrow. Frontiers in Medicine. 2020:7. doi:10.3389/​fmed.2020.00027
  • 6.
    Hornberger M, Bewersdorff A, Nerdel C. What do university students know about Artificial Intelligence? Development and validation of an AI literacy test. Computers and Education: Artificial Intelligence. 2023;5:100165. doi:10.1016/​j.caeai.2023.100165
  • 7.
    Ng DTK, Leung JKL, Chu SKW, Qiao MS. Conceptualizing AI literacy: An exploratory review. Computers and Education: Artificial Intelligence. 2021;2:100041. doi:10.1016/​j.caeai.2021.100041
  • 8.
    Nova K. Generative AI in healthcare: advancements in electronic health records, facilitating medical languages, and personalized patient care. Journal of Advanced Analytics in Healthcare Management. 2023;7(1):115-131.
  • 9.
    Campbell S, Giadresco K. Computer-assisted clinical coding: A narrative review of the literature on its benefits, limitations, implementation and impact on clinical coding professionals. Health Information Management Journal. 2020;49(1):5-18. doi:10.1177/​1833358319851305
  • 10.
    American Medical Association. Artificial Intelligence in Medicine. Accessed January 9, 2024. https:/​/​www.ama-assn.org/​practice-management/​digital/​augmented-intelligence-medicine
  • 11.
    Haue AD, Hjaltelin JX, Holm PC, Placido D, Brunak SR. Artificial intelligence-aided data mining of medical records for cancer detection and screening. Lancet Oncology. 2024;25(12):e694-e703. doi:10.1016/​S1470-2045(24)00277-8
  • 12.
    Sharma P. Leveraging predictive analytics to target payer-led medication adherence interventions. Am J Manag Care. 2024;30(10):SP756-SP758. doi:10.37765/​ajmc.2024.89610
  • 13.
    Charow R, Jeyakumar T, Younus S, et al. Artificial intelligence education programs for health care professionals: scoping review. JMIR Medical Education. 2021;7(4):e31043. doi:10.2196/​31043
  • 14.
    Brock JKU, von Wangenheim F. Demystifying AI: What digital transformation leaders can teach you about realistic artificial intelligence. California Management Review. 2019;61(4):110-134.
  • 15.
    Sapci AH, Sapci HA. Teaching hands-on informatics skills to future health informaticians: a competency framework proposal and analysis of health care informatics curricula. JMIR medical informatics. 2020;8(1):e15748. doi:10.2196/​15748
  • 16.
    Pinski M, Benlian A. AI literacy - towards measuring human competency in artificial intelligence. Accessed September 1, 2024. https:/​/​scholarspace.manoa.hawaii.edu/​items/​b53359f1-217d-45de-9378-c8cc55cbbd31
  • 17.
    Weber P, Pinski M, Baum L. Toward an Objective Measurement of AI Literacy. PACIS 2023 Proceedings. Published online 2023.
  • 18.
    World Health Organization. Global strategy on human resources for health: workforce 2030. Accessed April 25, 2025. https:/​/​www.who.int/​publications/​i/​item/​9789241511131
  • 19.
    QuantumBlack. The state of AI: How organizations are rewiring to capture value. Accessed April 9, 2025. https:/​/​www.mckinsey.com/​capabilities/​quantumblack/​our-insights/​the-state-of-ai
  • 20.
    Heredia-Negrón F, Tosado-Rodríguez E, Meléndez-Berrios J, Nieves B, Amaya-Ardila C, R-LA RLA. Assessing the Impact of AI Education on Hispanic Healthcare Professionals’ Perceptions and Knowledge. Education Sciences. 2024;14(4):339. doi:10.3390/​educsci14040339
  • 21.
    Hoffman J, Hattingh L, Shinners L, et al. Allied health professionals’ perceptions of artificial intelligence in the clinical setting: cross-sectional survey. JMIR Formative Research. 2024;8:e57204. doi:10.2196/​57204
  • 22.
    Catalina QM, Fuster-Casanovas A, Vidal-Alaball J, et al. Knowledge and perception of primary care healthcare professionals on the use of artificial intelligence as a healthcare tool. Digital Health. 2023;9:20552076231180511. doi:10.1177/​20552076231180511. PMID:37361442
  • 23.
    Wood E, Ange B, Miller D. Are we ready to integrate artificial intelligence literacy into medical school curriculum: students and faculty survey. Journal of Medical Education and Curricular Development. 2021;8:1-5. doi:10.1177/​23821205211024078
  • 24.
    Mah D, Grob N. Artificial intelligence in higher education: exploring faculty use, self-efficacy, distinct profiles, and professional development needs. International Journal of Educational Technology in Higher Education. 2024;21(58). doi:10.1186/​s41239-024-00490-1
  • 25.
    Nashwan A, Cabrega J, Othman M, et al. The evolving role of nursing informatics in the era of artificial intelligence. International Nursing Review. 2025;72(1):e13084. doi:10.1111/​inr.13084
  • 26.
    Lomis K, Jeffries P, Palatta A, et al. Artificial Intelligence for Health Professions Educators. NAM Perspectives. 2021. doi:10.31478/​202109a
  • 27.
    Estrada-Araoz EG, Manrique-Jaramillo YV, Díaz-Pereira VH, et al. Assessment of the level of knowledge on artificial intelligence in a sample of university professors: a descriptive study. Data and Metadata. 2024;3(285). doi:10.56294/​dm2024285
  • 28.
    Antonenko P, Abramowitz B. In-service teachers’ (mis)conceptions of artificial intelligence in K-12 science education. Journal of Research on Technology in Education. 2023;55(1):64-78. doi:10.1080/​15391523.2022.2119450
  • 29.
    Li F, Ruijs N, Lu Y. Ethics & AI: A systematic review on ethical concerns and related strategies for designing with AI in healthcare. AI. 2022;4(1):28-53. doi:10.3390/​ai4010003
  • 30.
    Rony MKK, Numan SM, Akter K, et al. Nurses’ perspectives on privacy and ethical concerns regarding artificial intelligence adoption in healthcare. Heliyon. 2024;10(17):e36702. doi:10.1016/​j.heliyon.2024.e36702
  • 31.
    Esmaeilzadeh P. Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: A perspective for healthcare organizations. Artif Intell Med. 2024;151:102861. doi:10.1016/​j.artmed.2024.102861
  • 32.
    Zawacki-Richter O, Marín VI, Bond M, et al. Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education. 2019;16:39.
  • 33.
    Tolentino R, Baradaran A, Gore G, et al. Curriculum Frameworks and Educational Programs in AI for Medical Students, Residents, and Practicing Physicians: Scoping Review. JMIR Med Educ. 2024;10:e54793.

KEYWORDS

education   artificial intelligence   health informatics   literacy   healthcare

Advances in Health Information Science and Practice is the quarterly peer-reviewed research journal of AHIMA.


