"Our Mission is to Build on Theories of Learning and Instruction to Create Innovative Learning Environments that Maximize Learner Capacity to Achieve Learning Goals"
Welcoming our visiting scholar: Sumin Hong!
June 24, 2024
We are excited to welcome Sumin Hong to our lab as a visiting scholar!
Sumin Hong has joined our lab as a visiting scholar for summer 2024. She is currently a PhD candidate at Seoul National University, South Korea. Her research interests center on technology-integrated instructional design, including artificial intelligence, virtual reality, virtual worlds, and collaborative learning tools for meaningful learning. During her visit, she is exploring AI-integrated education and immersive learning for adult learners.
Presentations at AI-ALOE's 2024 Annual Review Meeting
June 21, 2024
Director Dr. Min Kyu Kim and our graduate associate Jinho Kim attended and presented at AI-ALOE's 2024 Annual Review Meeting on June 21, 2024. The AI-ALOE team, comprising scholars, researchers, scientists, and student researchers, presented its progress to the NSF evaluation team.
At the meeting, Jinho presented on Fostering Understanding and Knowledge Acquisition, and Dr. Kim chaired and presented in the Panel on Personalization.
Fostering Understanding and Knowledge Acquisition
In the Fostering Understanding segment of the Core Research: Performance Measurement and Evaluation session, Jinho introduced our Year 3 research focus: assessing the real impact of SMART on adult learning and online education through a combined summative and midterm evaluation approach, and employing a longitudinal design to examine the impact of SMART on learners' ability to transfer learning to subsequent course tasks. Along with SMART's issue hypothesis tree and design strategies, we shared data analyses and results from three years of SMART deployment, as well as our next steps.
Panel on Personalization
In the afternoon, Dr. Kim chaired the Panel on Personalization, introducing our efforts to conceptualize personalization and build a design framework for personalized learning with AI. He first shared ALOE's strategy for developing a multi-dimensional design guideline for personalized learning. Dr. Kim then showcased how SMART delivers personalized concept learning, discussed where SMART lies on the theory-laden design dimensions for personalization, and introduced personalization strategies and feedback on SMART, along with experiments and results.
For more information about the Annual Review Meeting, please visit the following links:
AI2RL at the 2024 ISLS Annual Meeting
June 14, 2024
Our AI2RL members attended and presented three short papers and two posters at the 2024 International Society of the Learning Sciences (ISLS) Annual Meeting in Buffalo, New York, which took place from June 10th to 14th.
A study on AI-augmented concept learning: Impact on learner perceptions and outcomes in STEM education
Tuesday, June 11th, 2:30 to 3:30 PM, Jacobs 1225 B - AI and Tech-Enhanced Learning Environments
Abstract: This study explores the efficacy of AI-enhanced concept learning among adult learners, aiming to bolster their comprehension and facilitate the transition to embracing technology, refining metacognitive reading strategies, and improving subsequent knowledge test scores. Leveraging an AI-driven formative assessment feedback system, named SMART, AI integration was implemented in pre-class activities within a Biology course. Learners demonstrated enhanced mental models of STEM readings, and while the levels of technology acceptance were not statistically significant, we observed numerical increases in perceived AI usefulness. However, no significant relations were found with perceived ease of use and metacognitive awareness. The impact of concept learning through SMART on knowledge test scores demonstrated partial visibility. This research underscores the holistic integration of AI tools, highlighting the importance of educators to align instructional methods such as AI with learning objectives, content, assessment tests, and learners’ AI literacy levels, particularly within the domain of online STEM education.
Investigating the influence of AI-augmented summarization on concept learning, summarization skills, argumentative essays, and course outcomes in online adult education
Tuesday, June 11th, 4:00 to 5:30 PM, Jacobs 2nd Floor Atrium - Posters
Abstract: This study aims to explore the influence of concept learning facilitated by an AI-augmented summarization feedback tool, the Student Mental Model Analyzer for Research and Teaching (SMART), on various learning outcomes within an undergraduate English course using linear mixed-effects (LME) modeling and Bayesian correlations with data from 22 participants. Significant improvements in learners’ mental models and associations of concept learning with subsequent learning activities suggest the potential of such tools in improving learning performance.
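For readers curious what such an analysis looks like in practice, here is a minimal sketch of a Bayesian correlation of the kind the abstract mentions, relating a SMART concept-learning gain to a later essay score. The data file and column names are hypothetical illustrations, not the study's materials.

```python
# Hypothetical sketch: Bayesian (Pearson) correlation between a SMART
# concept-learning gain and a later essay score, one row per learner.
# With small samples (n = 22 in the study), the Bayes factor (BF10)
# quantifies the strength of evidence for an association.
import pandas as pd
import pingouin as pg

df = pd.read_csv("course_outcomes.csv")  # hypothetical per-learner export

result = pg.corr(df["concept_gain"], df["essay_score"], method="pearson")
print(result[["n", "r", "p-val", "BF10"]])  # pingouin reports BF10 for Pearson r
```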
A comparison between AI and human evaluation with a focus on generative AI
Thursday, June 13th, 10:45 to 11:45 AM, Jacobs 2134 - Learning Feedback and Assessment
Abstract: This study aims to explore the utility of generative AI in providing formative assessment and feedback. Using data from 43 learners in an instructional technology class, we assessed generative AI’s evaluative indices and feedback capabilities by comparing them to human-rated scores. To do this, this study employed Linear Mixed-Effects (LME) models, correlation analyses, and a case study methodology. Our findings suggest an effective generative AI model that generates reliable evaluation for detecting learners’ progress. Moderate correlations were found between generative AI-based evaluations and human-rated scores, and generative AI demonstrated potential in providing formative feedback by identifying strengths and gaps. These findings suggest the potential of utilizing generative AI to provide different insights as well as automate formative feedback that can offer learners detailed scaffolding for summary writing.
How AI evaluates learner comprehension: A comparison of knowledge-based and large language model (LLM)-based AI approaches
Thursday, June 13th, 1:15 to 2:15 PM, Jacobs 2220 B - Large Language Models and Learning
Abstract: This study investigated two AI techniques for evaluating learners’ summaries and explored the relationship between them: the SMART knowledge-based AI tool, which generated multidimensional measures representing knowledge models derived from learner summaries, and a Large Language Model (LLM) fine-tuned for summary scoring. The LLM model incorporated both the summary and source texts in the input sequence to calculate two component scores related to text content and wording. Summary revisions from 172 undergraduates in English and Biology classes were analyzed. The results of linear mixed-effects models revealed that both AI techniques detected changes during revisions. Several SMART measures were positively associated with an increase in the LLM’s Content scores. These findings support the notion that the LLM model excels at broad and comprehensive assessment, while SMART measures are more effective in providing fine-grained feedback on specific dimensions of knowledge structures.
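To make the LLM setup above concrete, the following is a minimal sketch of how a summary and its source text can share one input sequence feeding a two-output regression head, in the spirit of the model the abstract describes. The base model, head configuration, and function name are illustrative assumptions rather than the authors' released implementation, and such a head would still need fine-tuning on human-scored summaries.

```python
# Hypothetical sketch: pair-encode (summary, source) into one input sequence
# and regress two component scores (Content, Wording), echoing the LLM setup
# described above. Untrained as-is; it would need fine-tuning on scored data.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2,               # two regression targets: Content, Wording
    problem_type="regression",
)

def score_summary(summary: str, source: str) -> dict[str, float]:
    # Pair encoding places the model's separator token between the two texts.
    inputs = tokenizer(summary, source, truncation=True,
                       max_length=512, return_tensors="pt")
    with torch.no_grad():
        scores = model(**inputs).logits.squeeze(0)
    return {"content": scores[0].item(), "wording": scores[1].item()}
```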
Evaluating private artificial intelligence (AI) curriculum in computer science (CS) education: Insights for advancing student-centered CS learning
Thursday, June 13th, 4:00 to 5:30 PM, Jacobs 2nd Floor Atrium - Posters
Abstract: This study was undertaken to pilot a Private AI curriculum designed with a problem-centered instruction (PCI) approach for post-secondary Computer Science (CS) education. To this end, a condensed version of one of the ten curricular modules was implemented in a two-hour workshop. The mixed-method data analysis revealed participants' positive motivation and interest in the curriculum, while also pinpointing opportunities to further improve the design strategies of the curriculum.
Introducing AI-ALOE and Demonstrating AI-ALOE's Technologies at the ISLS 2024 Workshop
June 9, 2024
Dr. Min Kyu Kim and our graduate associate, Jinho Kim, attended the ISLS 2024 full-day workshop, conducted jointly by five national AI institutes: AI-ALOE, EngageAI, iSAT, AI4ExceptionalEd, and INVITE. They represented AI-ALOE, introduced the institute, and demonstrated AI-ALOE's technologies, including SMART.
In the morning, Dr. Kim presented an overview of AI-ALOE, covering the institute's interests, organization, testbeds, AI technologies and their deployment results, data architecture, visualization, and next steps. After a short break, the workshop's demo session followed, in which Dr. Kim and Jinho showcased AI-ALOE technologies, including a detailed working demonstration of SMART.
From the workshop description: As Artificial Intelligence (AI) becomes increasingly powerful, it is imperative for the general public to learn more about AI and how it can be utilized to address society’s daily challenges. The National AI Institutes represent a cornerstone of the U.S. government’s commitment to fostering long-term fundamental research in AI. This workshop will introduce the National AI Institutes program to the Learning Sciences community and, in particular, will focus on five such AI Institutes related to the learning sciences community: the National AI Institute for Adult Learning and Online Education (AI-ALOE), the National AI Institute for Engaged Learning (EngageAI), the National AI Institute for Student-AI Teaming (iSAT), the National AI Institute for Exceptional Education (AI4ExceptionalEd), and the National AI Institute for Inclusive Intelligent Technologies for Education (INVITE). The objectives are to introduce the learning sciences community to the various education- and learning-related use cases being addressed by these AI Institutes, their AI research activities, the current status of AI advancement and limitations, and, more importantly, how the learning sciences community can engage with these AI Institutes to shape their research programs to align more strongly with ongoing and emerging research in the field. Key research leaders from the AI Institutes will be invited to speak at the workshop along with other key players.
Presentations at the AI-ALOE Year 3 Executive Advisory Board Meeting
May 10, 2024
Dr. Min Kyu Kim, two of our AI2 graduate associates, and our visiting scholar attended the AI-ALOE Year 3 Executive Advisory Board (EAB) Meeting on May 8th. About 30 AI-ALOE colleagues attended and shared their research. During the EAB meeting, Jinho presented on the topic of Fostering Deeper Understanding through Text Summarization, while Dr. Kim participated in a Panel on Personalization with a presentation titled Design Dimensions for AI-Augmented Personalized Learning.
Fostering Deeper Understanding through Text Summarization
During the Text Summarization segment of the Use-Inspired AI session, Jinho presented findings from a summative evaluation of SMART implementations. She shared analyses using linear mixed-effects models, focusing on the relation between revisions in SMART and concept learning, and presented the relation between SMART's concept-learning indices and subsequent cognitive tasks (i.e., later writing activities in English classes and problem-solving tasks in associated exams). Jinho also shared preliminary results on engagement with SMART from the A/B experiments conducted in Fall 2023 and Spring 2024.
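As a rough illustration of the analysis described above, here is a minimal linear mixed-effects sketch testing whether a concept-learning index improves across SMART revision rounds, with a random intercept per learner. The file and column names are hypothetical placeholders, not our actual dataset.

```python
# Hypothetical sketch: linear mixed-effects model of a SMART concept-learning
# index across revision rounds, with a random intercept for each learner.
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per learner per revision round.
df = pd.read_csv("smart_revisions.csv")  # columns: learner_id, revision, concept_score

model = smf.mixedlm(
    "concept_score ~ revision",   # fixed effect: revision round
    data=df,
    groups=df["learner_id"],      # random intercept per learner
)
result = model.fit()
print(result.summary())           # fixed-effect slope = average per-revision gain
```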
Design Dimensions for AI-Augmented Personalized Learning
During the afternoon Panel on Personalization, Dr. Kim proposed ten design dimensions to consider when designing AI-augmented personalized learning. He first outlined key theories underpinning personalized learning (such as CoI, ICAP, CoP, and Self-Determination) and then shared the ten design dimensions, which fall into four clusters: task characteristic-related, learning domain-related, motivation-related, and AI role-related. As an illustration, he demonstrated how SMART fits into each of these dimensions. Dr. Kim also highlighted two further considerations: cognitive load and learners' feedback literacy. Following his presentation, he took part in the panel discussion on personalization.
For more information about the EAB meeting, please visit the following links:
Congratulations, Dr. Ali Heidari!
May 6, 2024
Congratulations to our graduate associate, Ali Heidari, who received his Ph.D. degree on May 8, 2024! Ali has worked tirelessly under the guidance of Dr. Kim over the past five years. In a symbolic moment, his Ph.D. candidate title gave way to the title of Dr. Heidari as Dr. Kim hooded him during the ceremony. Let's all join in celebrating this significant achievement and Ali's well-deserved success! His dissertation abstract follows the photo below.
EXAMINING LEARNER’S EVALUATIVE JUDGMENT SUPPORTED BY TECHNOLOGY-ENABLED FEEDBACK INFORMATION
Abstract:
Evaluative judgment is the capacity to discern and assess the quality of work using established criteria (Sadler, 1989), a critical skill for fostering self-regulation and continuous improvement in learning environments (Boud & Falchikov, 2006). This study investigates the effects of self-assessment versus peer assessment and technology versus non-technology settings on evaluation scores, evaluative judgment quality, and rating confidence of undergraduate college students. Utilizing a linear mixed-effects model, the research explores these impacts while accounting for individual participant differences (Gao et al., 2019; Panadero et al., 2016; Shore et al., 1992). The study indicated that peer assessments consistently yielded higher evaluation scores across technological and non-technological contexts. However, no significant differences were observed in the quality of evaluative judgment between assessment types or settings, suggesting a more complex interplay of cognitive and affective processes than previously assumed (Sadler, 1998). Unexpectedly, peer assessment was associated with greater rating confidence, challenging the notion that self-assessment, particularly when augmented by technology, would enhance confidence levels (McCarthy, 2017; Panadero et al., 2016). These results underline the importance of peer interaction and the provision of clear evaluative criteria in enhancing evaluative practices. The study recommends integrating structured peer-assessment activities into educational curricula to promote critical feedback and reflective learning (Falchikov & Goldfinch, 2000; Hanrahan & Isaacs, 2001). The findings contribute to our understanding of assessment practices, emphasizing further research to explore the long-term development of evaluative judgment and the optimal integration of technology in assessment (Ecclestone, 2001; O’Donovan et al., 2004).
Lia Haddadian, our graduate associate, has achieved two significant milestones.
April 25, 2024
We are thrilled to announce that our graduate associate, Lia Haddadian, has achieved two significant milestones. During her three years of working in the lab, Lia has made outstanding contributions. Her paper was published in the reputable open-access journal The Journal of Applied Instructional Design, and she was awarded the prestigious AACE Award at the Society for Information Technology & Teacher Education (SITE) conference on March 25, 2024, in Las Vegas, Nevada. The award was given to only five of the 411 papers presented, making it a remarkable achievement.
Congratulations to Lia on her well-deserved success! Take a look at the details below.
1. Golnoush Haddadian had a manuscript published in The Journal of Applied Instructional Design. Building on her interest in English language education, she leveraged Grammarly feedback to enhance English language learners' speaking skills. Her research suggests that Grammarly feedback substantially improved learners' speaking abilities. Moreover, learners held positive perceptions of Grammarly and showed signs of motivated use of its feedback in their everyday lives; they also reported broadened perceptions of feedback and gained significant experiential value from using Grammarly.
Haddadian, G. & Haddadian, N. (2024). Innovative Use of Grammarly Feedback for Improving EFL Learners’ Speaking: Learners’ Perceptions and Transformative Engagement Experiences in Focus. The Journal of Applied Instructional Design, 13(2).
2. Golnoush Haddadian presented her paper at the Society for Information Technology & Teacher Education (SITE) conference on March 25, 2024, in Las Vegas, Nevada. Her paper, titled “An Investigation of ELT Teachers’ Online Self-efficacy: Does Teachers’ Level of Agency Matter?”, received the AACE Award, which was selectively given to five of the 411 papers presented this year.
Her research explored the online self-efficacy of sixty English language teachers in Iran based on their agency level. The findings suggested that teachers with high levels of agency significantly outperformed teachers with low levels of agency. The results identified five themes related to high agency levels and four related to low agency levels: “confidence, sense of control, willingness to take risks, positive attitude, and adaptability” for the former, and “lack of confidence, resistance to change, overwhelmed by technology, and the need for support” for the latter. These themes help explain the factors influencing teachers’ high and low online self-efficacy levels.
Haddadian, G. & Haddadian, N. (2024). An Investigation of ELT Teachers’ Online Self-efficacy: Does Teachers’ Level of Agency Matter?. In J. Cohen & G. Solano (Eds.), Proceedings of Society for Information Technology & Teacher Education International Conference (pp. 1607-1615). Las Vegas, Nevada, United States: Association for the Advancement of Computing in Education (AACE). Retrieved April 25, 2024 from https://www.learntechlib.org/primary/p/224179/.
Dr. Kim was invited as a guest speaker at the School of Nursing faculty meeting.
April 15, 2024
Dr. Kim was honored to be invited as a guest speaker at the School of Nursing faculty meeting at Georgia State University (GSU) on April 15, 2024. His presentation, titled "AI-Supported Nursing Education: Potentials and Showcases," covered:
An introduction to the AI² Research Lab and its current AI-related projects
A SMART Technology Demonstration based on a user scenario
Showcases of SMART-applied classrooms
Research findings on the impacts of SMART technology
The potential of AI to generate NCLEX exams for practice
Dr. Kim's presentation highlighted the exciting possibilities of AI in nursing education and sparked important discussions among the faculty.
Five papers accepted to ISLS 2024 Annual Meeting
April 12, 2024
We're excited to announce that five of our papers have been accepted for presentation at the 2024 International Society of the Learning Sciences (ISLS) Annual Meeting in Buffalo, New York, taking place from June 10th to 14th. These papers, stemming from our NSF AI-ALOE and NSF SaTC projects, cover our work on AI-augmented concept learning, a private AI curriculum in computer science education, AI-augmented summarization, and the evaluation of learner comprehension through AI techniques. We look forward to sharing our findings!
Bae, Y., Kim, J., Davis, A., & Kim, M. (accepted). A study on AI-augmented concept learning: Impact on learner perceptions and outcomes in STEM education. Proceedings of the 18th International Conference of the Learning Sciences/Computer-Supported Collaborative Learning (ICLS/CSCL-2024). Buffalo, NY: International Society of the Learning Sciences.
Abstract: This study explores the efficacy of AI-enhanced concept learning among adult learners, aiming to bolster their comprehension and facilitate the transition to embracing technology, refining metacognitive reading strategies, and improving subsequent knowledge test scores. Leveraging an AI-driven formative assessment feedback system, named SMART, AI integration was implemented in pre-class activities within a Biology course. Learners demonstrated enhanced mental models of STEM readings, and while the levels of technology acceptance were not statistically significant, we observed numerical increases in perceived AI usefulness. However, no significant relations were found with perceived ease of use and metacognitive awareness. The impact of concept learning through SMART on knowledge test scores demonstrated partial visibility. This research underscores the holistic integration of AI tools, highlighting the importance of educators to align instructional methods such as AI with learning objectives, content, assessment tests, and learners’ AI literacy levels, particularly within the domain of online STEM education.
Haddadian, G., Panzade, P., Takabi, D., & Kim, M. (accepted). Evaluating private artificial intelligence (AI) curriculum in computer science (CS) education: Insights for advancing student-centered CS learning. Proceedings of the 18th International Conference of the Learning Sciences/Computer-Supported Collaborative Learning (ICLS/CSCL-2024). Buffalo, NY: International Society of the Learning Sciences.
Abstract: This study was undertaken to pilot a Private AI curriculum designed with a problem-centered instruction (PCI) approach for post-secondary Computer Science (CS) education. To this end, a condensed version of one of the ten curricular modules was implemented in a two-hour workshop. The mixed-method data analysis revealed participants' positive motivation and interest in the curriculum, while also pinpointing opportunities to further improve the design strategies of the curriculum.
Kim, J., Bae, Y., Stravelakis, J., & Kim, M. (accepted). Investigating the influence of AI-augmented summarization on concept learning, summarization skills, argumentative essays, and course outcomes in online adult education. Proceedings of the 18th International Conference of the Learning Sciences/Computer-Supported Collaborative Learning (ICLS/CSCL-2024). Buffalo, NY: International Society of the Learning Sciences.
Abstract: This study aims to explore the influence of concept learning facilitated by an AI-augmented summarization feedback tool, the Student Mental Model Analyzer for Research and Teaching (SMART), on various learning outcomes within an undergraduate English course using linear mixed-effects (LME) modeling and Bayesian correlations with data from 22 participants. Significant improvements in learners’ mental models and associations of concept learning with subsequent learning activities suggest the potential of such tools in improving learning performance.
Kim, J., Lee, T., Bae, Y., & Kim, M. (accepted). A comparison between AI and human evaluation with a focus on generative AI. Proceedings of the 18th International Conference of the Learning Sciences/Computer-Supported Collaborative Learning (ICLS/CSCL-2024). Buffalo, NY: International Society of the Learning Sciences.
Abstract: This study aims to explore the utility of generative AI in providing formative assessment and feedback. Using data from 43 learners in an instructional technology class, we assessed generative AI’s evaluative indices and feedback capabilities by comparing them to human-rated scores. To do this, this study employed Linear Mixed-Effects (LME) models, correlation analyses, and a case study methodology. Our findings suggest an effective generative AI model that generates reliable evaluation for detecting learners’ progress. Moderate correlations were found between generative AI-based evaluations and human-rated scores, and generative AI demonstrated potential in providing formative feedback by identifying strengths and gaps. These findings suggest the potential of utilizing generative AI to provide different insights as well as automate formative feedback that can offer learners detailed scaffolding for summary writing.
Kim, M., Kim, J., Bae, Y., Morris, W., Holmes, L., & Crossley, S. (accepted). How AI evaluates learner comprehension: A comparison of knowledge-based and large language model (LLM)-based AI approaches. Proceedings of the 18th International Conference of the Learning Sciences/Computer-Supported Collaborative Learning (ICLS/CSCL-2024). Buffalo, NY: International Society of the Learning Sciences.
Abstract: This study investigated two AI techniques for evaluating learners’ summaries and explored the relationship between them: the SMART knowledge-based AI tool, which generated multidimensional measures representing knowledge models derived from learner summaries, and a Large Language Model (LLM) fine-tuned for summary scoring. The LLM model incorporated both the summary and source texts in the input sequence to calculate two component scores related to text content and wording. Summary revisions from 172 undergraduates in English and Biology classes were analyzed. The results of linear mixed-effects models revealed that both AI techniques detected changes during revisions. Several SMART measures were positively associated with an increase in the LLM’s Content scores. These findings support the notion that the LLM model excels at broad and comprehensive assessment, while SMART measures are more effective in providing fine-grained feedback on specific dimensions of knowledge structures.
Dr. Min Kyu Kim presented the theoretical underpinnings of the SMART project
April 1, 2024
Dr. Min Kyu Kim, our director, presented the theory-driven development and research behind SMART at today's AI-ALOE Foundational and Use-Inspired AI Meeting.
Dr. Kim began by outlining the project's aim of helping learners understand key concepts. He elaborated on how theories such as Personalization, Community of Inquiry (CoI), and ICAP (Interactive, Constructive, Active, Passive) have influenced SMART's design. Additionally, Dr. Kim shared our research questions and data collection methods within the SMART project. He also raised the question of developing shareable instruments among the ALOE research teams and proposed shareable theoretical frameworks to facilitate collaboration and research involving multiple AI tools. Dr. Kim also led discussions on external threats and opportunities relevant to our work within AI-ALOE.