Thao, Lombardino, Tibi, and Gleghorn: Multimodal learning: How task types affect learning of students with reading difficulties

Abstract

Purpose

The prevalence of multimedia technology in the current educational settings has increased the need for investigating how individual differences such as reading difficulties moderate multimedia learning. Given that instructional designs closely interact with task characteristics and requirements, we aimed to explore how students with reading difficulties perform various types of learning tasks within the context of multimedia instruction.

Methods

College students with and without reading difficulties viewed a multimedia lesson on stem cell research while their eye movements were recorded. After viewing the multimedia slides, students completed four tasks pertaining to the content provided in the lesson.

Results

Students with reading difficulties performed more poorly than their peers on the procedural task, while the groups’ performance did not differ on the definitional, infographic, and argumentative tasks. Further, there was no difference in eye fixation counts or gaze durations between the two groups.

Conclusions

Our findings underscore the importance of an awareness that the types of tasks used to assess students’ knowledge may impact their performance. Information of this nature may be useful in determining optimal learning experiences for students with learning difficulties.

INTRODUCTION

Multimedia presentations, particularly when text is accompanied by images, have been proven to lead to better learning outcomes than single media presentations across age groups including children [1], college students [2], and older adults [3]. Mayer’s cognitive theory of multimedia learning [4,5] addressed multimedia effects by positing that when text and images are presented together, learners actively organize and integrate verbal and visual information thereby increasing learning efficiency. In the present study, one of the widely used types of multimedia, written text and images [6], was used to explore the relationship between task characteristics and reading abilities.
It is a commonly held assumption that multimedia materials aid learning of students with reading difficulties [7], yet only recently have studies empirically demonstrated that multimedia instruction is more beneficial than single media instruction for students who have reading difficulties. Wang et al. [8] examined how college students with specific reading disabilities performed in multimedia (text and picture) vs. single media (text-only) conditions and found that students answered comprehension questions more accurately in the multimedia condition than in the single media condition. Similarly, Kim and Lombardino [9] reported that college students with reading deficits benefitted more from multimedia (narration-picture) than single media (narration-only) instruction. In another study, Knoop-van Campen and colleagues [10] compared the performance of students with and without specific reading difficulties in two multimedia conditions (i.e., text and picture vs. text and picture with audio) and reported that quality of learning and learning efficiency were higher in the text and picture condition than in the text and picture with audio condition across the two groups, showing that the addition of redundant audio to text media did not improve learning for students with reading difficulties. Collectively, these studies show that multimedia presentations, particularly those combining two modalities (i.e., words and pictures), facilitate learning for students with reading difficulties.
Furthermore, recent research in multimedia instruction suggests that multimedia effects on learning are moderated by the types of tasks used. For example, van Genuchten and colleagues [11] showed that multimedia presentations were more beneficial for procedural (i.e., depicting procedural steps of a method or a technique) than for conceptual (i.e., describing features of an object or an event) or causal (i.e., explaining the cause-and-effect chain of events) learning tasks and emphasized that without considering types of tasks, incorrect conclusions may be drawn regarding the benefits of multimedia presentations. Chiu and Churchill [12] found that multimedia instruction increased performance of secondary-level students on algebraic graphical representation and concept association tasks but was less effective in algebraic tasks requiring analysis and reasoning. Taken together, research suggests that multimedia presentations have differential effects on learning which are moderated by task types and characteristics. However, previous research on task types in multimedia learning has mostly focused on mathematical learning or has been limited to learners without learning difficulties. The aim of this study was to extend this line of research to explore how different task characteristics (i.e., the types of tasks used to assess learning) moderate performance of students with reading difficulties when they are instructed in multimedia learning environments.
For students with reading difficulties, the effect of task type has been studied mainly in a single media (i.e., text) condition. Cain and Oakhill [13] and Oakhill and colleagues [14] reported that students with reading difficulties performed comparably to their peers with typical reading skills on comprehension questions seeking literal information from text, but experienced difficulty with inference-making questions. They concluded that poor readers are less likely than their peers to integrate information within and across texts to solve inference-making questions. Similarly, Nicolielo-Carrilho and colleagues [15] reported that students with learning disabilities performed worse on the inferential questions than on the literal questions when compared to peers without learning difficulties. These studies provided insights into the relationship between task types and reading difficulties; however, they have been limited to the use of binary formats for comparing responses to questions (i.e., literal vs. inferential questions; retention vs. transfer questions).
By measuring eye gaze patterns during reading, investigators have compared how students with typical reading abilities and students with reading difficulties respond to various tasks. To date, studies show that students with reading difficulties tend to have longer and more numerous eye fixations on words during reading compared to their peers without reading difficulties [16]. Furthermore, investigators have begun to explore how eye gaze patterns compare for students with and without reading difficulties in multimedia learning environments. Olander and colleagues [17] compared viewing patterns of students with and without reading disabilities when pictures were added to text. Students with reading disabilities had more total eye fixations on the entire stimuli and fewer eye fixations on picture areas than those without reading disabilities. On the other hand, Jian [18] found no difference in eye movement patterns of students with high and low reading ability when comparing fixation durations, proportions of total fixation durations, and numbers of saccades between text and pictures in illustrated science texts. These divergent findings underscore the need for more comparative studies on eye gaze patterns of students with high and low reading ability when reading multimedia texts [19].
Our goal was to design an integrative study to compare the effects of multimedia instruction in college students with and without reading difficulties, using varying types of tasks to assess knowledge. In doing so, we applied Bloom’s [20] learning process taxonomy for assessing four types of knowledge - remembering, understanding, applying, and evaluating - to explore students’ responses to four tasks - definitional, infographic, procedural, and argumentative. Definitional tasks are designed to elicit students’ recall of basic facts and knowledge. Infographic and procedural tasks assess students’ ability to understand and interpret information. While infographic tasks target quantitative and graphic information, procedural tasks target qualitative information. Finally, argumentative tasks assess students’ application and evaluation of information.
The present study addressed the following experimental questions: (1) Do different task types (i.e., definitional, infographic, procedural, and argumentative tasks), used within the context of multimedia instruction, moderate learning outcomes of students with reading difficulties compared to peers with typical reading skills? and (2) Do the eye gaze patterns differ between groups when viewing slides related to the four tasks? Based on the previous studies showing that less skilled readers have difficulty building connections between information across texts and that their performance decreases as task demands increase [21,22], we expected that students with reading difficulties would perform less accurately and exhibit more eye fixations and longer eye gaze durations when answering more complex tasks (procedural and argumentative tasks).

METHODS

Participants

Sixty-seven college students, including 24 students with reading difficulties (RD) and 43 students with typical reading skills (TR), were selected for participation in the study. Data from two students in the RD group and three students in the TR group were excluded from the study due to incomplete profiles or technical problems during data collection, leaving a total of 62 students (22 students with RD and 40 students with TR) in the final sample. Students with RD were recruited through advertising on university websites and announcements in classes. Students who reported a history of reading problems beginning in childhood were administered the Test of Word Reading Efficiency [23] to document their reading difficulties. Students with TR were recruited through the university research participation website and administered the Test of Word Reading Efficiency [23] to document reading abilities within the expected range. As shown in Table 1 below, while the two groups did not differ in age, education, prior knowledge, visual memory, learning style preference, and topic interest (ps>0.01), they differed significantly in word-level reading ability (M=78.32, SD=4.80 for the RD group and M=106.30, SD=4.70 for the TR group; p<0.001).

Experimental measures

Pre- and post-experimental questionnaire

Prior to the multimedia experiment, students completed a pre-experimental questionnaire about their prior knowledge and experience with stem cell research (e.g., “Rate your knowledge of stem cell,” “Are you regularly reading science journals or magazines?”). Each question was answered using a five-point Likert scale ranging from very low (1) to very high (5). The maximum score was 20. Additionally, after completing the multimedia learning task, students completed a post-experimental questionnaire about their learning style preference (e.g., “I prefer to learn visually”) and topic interest (e.g., “Please rate how appealing this lesson was for you”). Each question was answered using a five-point Likert scale ranging from very low/strongly disagree (1) to very high/strongly agree (5). The maximum score was 10 for learning style preference and 30 for topic interest. Students’ responses from the pre- and post-experimental questionnaires were used to assess whether there was any difference in prior knowledge and/or interest in the topic between the two reading groups.

Experimental stimuli

The multimedia learning material consisted of 26 PowerPoint slides describing stem cells. Each slide contained one to five sentences with a picture below the text. The text provided a definition of a stem cell, a description of stem cell development, a discussion of the pros and cons of stem cell research, and infographics regarding people’s opinions on stem cell research (see Figure 1). The text contained a total of 695 words. The Flesch Reading Ease score [24] of the text, one of the most widely used and validated measures of text readability [25], was 62.8. A score of 60–69 corresponds to a high school level of ‘standard’ written text. The text was presented in PowerPoint slides using a 36-point Calibri font. Images, presented at 511×347 pixels, were excerpted from the stem cell education outreach program at the California Stem Cell Education Initiative.
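For reference, the Flesch Reading Ease score cited above is computed from the average sentence length and the average number of syllables per word, using the standard formula introduced by Flesch [24]; higher scores indicate easier text, and the obtained value of 62.8 falls within the 60–69 ‘standard’ band.

$$\mathrm{FRE} = 206.835 - 1.015\left(\frac{\text{total words}}{\text{total sentences}}\right) - 84.6\left(\frac{\text{total syllables}}{\text{total words}}\right)$$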

Eye gaze tracking apparatus

As students viewed the multimedia learning slides on the computer screen, their eye gaze was recorded using an LC Technologies head-free Eyegaze EDGE® EyeFollower binocular system 2.0 with the gaze point sampling rate of 120 Hz and the gaze point tracking accuracy of 0.45° throughout the operational head range. A minimum fixation duration was 100 ms and a spatial dispersion threshold was 1.5°. Each student was placed at a viewing distance of 23.62 inches (60 cm) in front of a 24-inch (61 cm) light-emitting diode (LED) monitor with resolution set to 1,920×1,080 pixels. The R software [26] within the Rstudio environment [27] and SPSS version for 25.0 for windows [28] were used for behavioral data analysis and the NYAN 2.0 software from Interactive Minds Eyetracking Solutions for eye data analysis.
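For illustration only, the sketch below shows how a dispersion-based fixation filter with the parameters reported above (100-ms minimum duration, 1.5° dispersion threshold) could be implemented in R; fixation detection in this study was performed by the NYAN 2.0 software, so the function and its input format are hypothetical.

```r
# Minimal sketch of a dispersion-based (I-DT) fixation filter using the
# parameters reported above. Illustrative only; the study used NYAN 2.0, so
# detect_fixations() and its input format are hypothetical.
detect_fixations <- function(gaze, min_dur = 0.100, disp_thresh = 1.5) {
  # gaze: data frame with columns t (seconds) and x, y (gaze position in degrees)
  dispersion <- function(idx) {
    (max(gaze$x[idx]) - min(gaze$x[idx])) + (max(gaze$y[idx]) - min(gaze$y[idx]))
  }
  fixations <- list()
  n <- nrow(gaze)
  start <- 1
  while (start <= n) {
    # Grow a window until it spans at least the minimum fixation duration
    end <- start
    while (end < n && gaze$t[end] - gaze$t[start] < min_dur) end <- end + 1
    if (gaze$t[end] - gaze$t[start] < min_dur) break  # not enough samples left
    if (dispersion(start:end) <= disp_thresh) {
      # Extend the window while the dispersion stays within the threshold
      while (end < n && dispersion(start:(end + 1)) <= disp_thresh) end <- end + 1
      fixations[[length(fixations) + 1]] <- data.frame(
        x = mean(gaze$x[start:end]),
        y = mean(gaze$y[start:end]),
        duration = gaze$t[end] - gaze$t[start])
      start <- end + 1
    } else {
      start <- start + 1  # slide the window forward by one sample
    }
  }
  do.call(rbind, fixations)  # one row per detected fixation
}
```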

Experimental procedures

Students took part in the experiment individually, in sessions lasting one to one and a half hours. After completing the informed consent process, students completed the pre-experimental questionnaire, a reading test, and a visual memory test. Prior to viewing the multimedia stem cell lesson slides, students were shown four tasks (definitional, infographic, procedural, and argumentative) that pertained to the information presented in the stem cell lesson slides. This was done to familiarize students with the content that they were expected to address after reading the stem cell lesson. Students were told that they could advance the slides at their own pace by pressing the spacebar key to move to the next slide; however, they were not able to return to previous slides.
After viewing the multimedia slides, students answered four written questions with no time limit. The four questions were presented on the same page, and the participants were able to decide the order in which they answered them. They typed their answers under each question in a word processor and were instructed to answer all questions with complete sentences. The first question asked students to define a stem cell (“What is a stem cell?”). The second question asked them to interpret an infographic regarding how people in the U.S. view stem cell research (“Based on the infographic in the slide you just viewed, what do Americans think about the embryonic stem cell research?”). The third question asked them to describe the steps of stem cell development (“Eight steps of stem development were described in the slides. Describe the steps of stem cell development as specific as possible. Do your best and type all that you do remember.”). The fourth question asked students to express their opinion about stem cell research (“State your opinion on stem cell research. Do we need to continue the stem cell research? Your opinion should be supported by the evidence included in the slides. Your statements should include both the pros and cons of stem cell research. You want to state why pros (or cons) compensate cons (or pros).”). Finally, students completed a post-experimental questionnaire.

Scoring procedures

Two raters, blind to the goal of the study, independently determined the experiment scores for each participant. Inter-rater reliability exceeded 90%, and discrepancies in ratings were arbitrated by an independent third rater. For the definitional task (maximum two points), each student’s answer was divided into two units (‘a cell has the ability to divide’ and ‘various kinds of cell tissues’) with one point for each unit. For the infographic task (maximum two points), the answer was divided into two units (‘58% of people who responded to the survey favored the stem cell research’ and ‘29% of people were opposed’) with one point for each unit. A correct answer without the correct number information (i.e., 58% and 29%) received 0.5 points instead of one full point.
For the procedural task (maximum eight points), one point was awarded for the description of each of the eight steps of stem cell development. A point was given for correct answers worded similarly to the text. For example, a student’s answer “both of those cells split making four cells that are all same” for “each of those two cells divided, making four identical cells” received one point. Correct temporal order without contiguity was accepted; for example, students who wrote step 3 and step 5 without step 4 received two points (points for steps 3 and 5). Incorrect temporal order was not accepted; for example, students who wrote step 1, step 3, step 2, and step 4 received three points (points for steps 1, 3, and 4). For the argumentative task (maximum eight points), one point was awarded if a claim was introduced, a maximum of three points were awarded if the claim was supported by evidence (one point per piece of evidence), a maximum of three points were awarded if the opposing argument and its evidence were addressed (one point per piece of evidence), and one point was awarded if the student’s claim refuted the opposing argument.
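One way to operationalize the temporal-order rule described above is a greedy scan that credits a reported step only when it follows the last credited step; the helper below is a hypothetical illustration of that reading of the rubric (scoring in the study was done by human raters).

```r
# Hypothetical illustration of the temporal-order rule for the procedural task:
# a reported step earns a point only if it comes after the last credited step,
# so skipped steps are allowed but out-of-order steps are not.
score_procedural <- function(reported_steps) {
  last_credited <- 0
  points <- 0
  for (step in reported_steps) {
    if (step > last_credited) {
      points <- points + 1
      last_credited <- step
    }
  }
  points
}

score_procedural(c(3, 5))        # 2 points (steps 3 and 5, order preserved)
score_procedural(c(1, 3, 2, 4))  # 3 points (steps 1, 3, and 4; step 2 is out of order)
```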

RESULTS

Learning outcomes

Before comparing the content of the written responses between the two reading groups, the amount of writing and the writing mechanics and grammar were assessed. The two reading groups did not differ significantly in the quantity of writing across the four tasks (ps>0.01). However, for writing mechanics and grammar, the RD group had a significantly lower score than the TR group (p=0.007). Because these measures were not part of our research questions, we did not analyze this difference further. Descriptive information is presented in Appendix 1.
Table 2 shows a group comparison of the learning outcomes for each of the four tasks. Because the data were not normally distributed, the Wilcoxon rank-sum test was used, with reading group as the independent variable and scores on the definitional, infographic, procedural, and argumentative tasks as the dependent variables. Due to the small sample size and non-normal distribution of the data, p values less than 0.01 were considered statistically significant [29]. For the definitional task, the two reading groups were not significantly different, p=0.04. Similarly, for the infographic task, there was no significant difference between the two groups, p=0.03. In contrast, the RD group had significantly lower scores on the procedural task than the TR group, p=0.001, r=0.40 (moderate to strong effect). On the argumentative task, the two groups did not significantly differ, p=0.06.
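As an illustration of this analysis, the R sketch below runs the Wilcoxon rank-sum test for one task and recovers an effect size r = Z/√N from the normal approximation; the data frame `scores` and its column names are hypothetical.

```r
# Illustrative sketch of the group comparison, assuming a data frame `scores`
# with a two-level factor `group` (RD, TR) and one column of scores per task;
# the object and column names are hypothetical.
res <- wilcox.test(procedural ~ group, data = scores, exact = FALSE)

# Recover |Z| from the two-sided p value and compute the effect size r = Z / sqrt(N)
z <- qnorm(res$p.value / 2, lower.tail = FALSE)
r <- z / sqrt(nrow(scores))  # r = 0.40 was reported for the procedural task

res$p.value
r
```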

Eye gaze analysis

We analyzed total fixation count (the sum of the number of fixations within an area of interest) and total gaze duration (the sum of the durations of fixations within an area of interest, in seconds), obtained from the experimental slides. Eye data from two students in the RD group were excluded due to technical problems with the eye-tracking device on more than half of the screens. Data from the remaining 20 students in the RD group and 40 students in the TR group were used for the analysis.
For the definitional and infographic tasks, the fixation count and gaze duration from the single slide related to each task (see Figure 1) are presented in Table 3. For the procedural task, the average count and duration data from nine slides were calculated. For the argumentative task, the average count and duration data from six slides were calculated. Each slide was segmented into two subregions (i.e., text area and picture area). Multivariate analyses of variance (MANOVAs) were conducted with reading group as the independent variable and the total fixation count and the total gaze duration on the text and picture subareas as the dependent variables. Assumptions of homogeneity of covariance matrices and homogeneity of error variances were tested before conducting the MANOVAs. Box’s M tests [30] and Levene’s tests [31] indicated that the two assumptions were not violated for all data except the data from the procedural-task slides. Therefore, we reported the conservative Pillai’s trace statistic for the multivariate F value and used a more conservative alpha level of 0.01. As presented in Table 3, the two groups did not show any significant differences in eye gaze patterns on the slides related to the four tasks (ps>0.01).
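A minimal sketch of this analysis in R is shown below for one set of task-related slides, assuming a data frame `eye` with a factor `group` and the dependent measures per subregion (column names are hypothetical); Box’s M and Levene’s tests are available in the heplots and car packages, respectively.

```r
# Minimal sketch of the reported MANOVA for one measure (e.g., fixation counts)
# on the text and picture subregions, assuming a data frame `eye` with a factor
# `group` (RD, TR); column names are hypothetical.
library(car)      # leveneTest()
library(heplots)  # boxM()

dvs <- cbind(eye$fix_text, eye$fix_picture)

# Assumption checks: homogeneity of covariance matrices and of error variances
boxM(dvs, eye$group)
leveneTest(fix_text ~ group, data = eye)
leveneTest(fix_picture ~ group, data = eye)

# One-way MANOVA reported with Pillai's trace; with two groups, two dependent
# variables, and N = 60, the F degrees of freedom are (2, 57), as in Table 3
fit <- manova(dvs ~ group, data = eye)
summary(fit, test = "Pillai")
```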

DISCUSSION

In the current study, we were interested in determining how different types of tasks moderate the learning accuracy and eye gaze patterns in students with and without reading difficulties within the context of multimedia instruction. Students with reading difficulties performed significantly less accurately than their typical reading peers on a task requiring procedural knowledge. In contrast, no difference in eye fixations or gaze duration for either the text or the picture areas was found between the two groups.
The learning outcomes moderated by task type for students with reading difficulties are in line with previous studies showing that task types and requirements differentially affect the performance of less skilled readers [15,21]. One explanation for the less accurate performance of students with reading difficulties on the procedural task may be working memory overload. Previous research has shown that students with reading difficulties experience problems with working memory [32,33]. Specifically, Hachmann et al. [34] compared the performance of students with and without specific learning disabilities on various memory tasks and found that students with specific learning disabilities performed comparably to their peers on an individual item recall task but worse than their peers on a serial order recall task. Similarly, in our study, compared to the definitional or infographic tasks, for which students needed to process the information on a single slide, the procedural task required students to process, update, and maintain sequential information presented across eight consecutive slides.
Our students’ low performance on the procedural task may also be related to motivational factors. Engaged learners actively participate in the learning process and successfully complete a given task [35]. Even struggling readers, when motivated, devote more cognitive resources to comprehending and learning from texts, leading to improved reading performance [36]. In the current study, students might have been more interested in the argumentative aspects of stem cell research than in the steps of stem cell development, resulting in similar performance on the argumentative task but lower performance of the students with reading difficulties on the procedural task. We measured students’ overall topic interest at the posttest and did not find any difference between the two groups, but we did not measure their interest in individual tasks. Future methodologies could be employed to more precisely evaluate how motivation is related to specific tasks and impacts students’ performance.
Contrary to our expectation, there was no difference in the eye fixations and gaze durations between the two groups even for the procedural task for which students with reading difficulties were significantly less accurate than their peers. Even though students were given the experimental questions prior to viewing the slides and they were allowed to view each slide for as long as needed, students with reading difficulties did not extend their reading time to view relevant text or picture areas. A possible explanation for this finding may lie in poor readers’ lower metacognitive abilities [37]. Students with reading difficulties may be less likely to monitor and evaluate their ongoing comprehension of text while reading [38,39]. Further study is needed to clarify the role of metacognitive knowledge on reading time in relation to varying topics and comprehension task types for students with and without reading difficulties.

Educational Implication

While multimedia instruction and learning have been considered among the most effective assistive technology methods for struggling readers over the past decade [40,41], our study demonstrates that task characteristics are critical factors to consider when designing multimedia instruction for assessment, interventions, and accommodations for students with learning difficulties. In the current study, within the context of multimedia instruction, the procedural task, requiring sequential information processing, was particularly difficult for students with reading difficulties. This finding highlights the need to evaluate students’ reading comprehension skills under varying task constraints to determine the specific nature of the task that may be problematic. Shin’s recent study [42] showed that task types affected reading comprehension of second language learners. This type of task microanalysis may be necessary for developing optimal instructional strategies to aid students with reading difficulties as well as second language learners.
Biard and colleagues [43] found that novice college students learned from procedural lessons more effectively when procedural information was pre-segmented according to key points than when learners controlled the pacing and pauses themselves. They suggested that when learners are unable to identify key information, instructors should divide information into small, meaningful chunks, making it easier for learners to process units of information by reducing cognitive load. Other studies have shown that metacognitive monitoring and self-regulated learning strategies are beneficial in enhancing procedural learning [44]. By monitoring and regulating their processing of novel information, students can evaluate their ongoing comprehension and employ repair strategies, if needed [45]. Lau [46] compared reading comprehension strategies of two groups of secondary students; one group was taught through text-based instruction and another group was taught self-regulated learning principles including goal-setting, monitoring, and self- and peer-evaluation activities. The self-regulated learning group not only exhibited improvements in reading comprehension and content knowledge, but also expressed positive attitudes toward self-regulated learning strategies and attributed their progress to the strategies they learned.

Limitation and future research

The present study has some limitations that future research should take into account. First, we assessed only the word-level reading abilities of the students with and without reading difficulties for group assignment. Future studies could measure various reading skills (e.g., sentence-level reading and reading comprehension) and include various subgroups based on students’ reading difficulty characteristics. Second, we included only four types of tasks - definitional, infographic, procedural, and argumentative. Further studies could include a wider variety of task types (e.g., critical analysis) and texts (e.g., narrative and argumentative), as well as participants with different language backgrounds [47,48], to increase the generalizability of the findings. Third, even though we found that task types influenced students’ comprehension, we do not know how students approached these tasks. Think-aloud methodologies could provide insights into students’ ongoing cognitive processes during specific tasks and might shed light on why group differences were not found for the eye gaze data. Finally, our learning materials were delivered as static images. Recent studies [49] found that animations are more beneficial than static visualizations for students comprehending expository text. Future studies could use dynamic visualizations to explore the relationship between task types and reading abilities.

CONCLUSION

Compared to their peers, students with reading difficulties who pursue postsecondary education have a greater risk of failing to graduate [50,51]. Across a wide variety of subjects, college students encounter expository texts that often include procedural information presented with both text and images. We hope that an awareness of the potential challenges faced by college students with reading difficulties when engaged in multimedia comprehension activities will prompt researchers to continue to explore how task characteristics influence the learning of students with reading difficulties within multimedia contexts and to use these observations to explore ways to facilitate more effective instruction.

Figure 1
Slide examples used for the experiment. (A) definitional section slide example. (B) infographic section slide example. (C) procedural section slide example. (D) argumentative section slide example.
Table 1
Descriptive characteristics of students with and without reading difficulties
RD group M (SD) (N=22) TR group M (SD) (N=40) F statistics (1, 58) p value
Age (yr) 19.82 (1.36) 20.95 (2.22) 1.85 0.17
Education (years post high school) 2.13 (1.08) 2.22 (1.11) 0.08 0.77
Prior knowledge questionnaire (maximum score=20) 7.64 (2.25) 7.08 (2.69) 0.68 0.41
TOWRE phonemic decoding efficiency (SS) (average=100) 78.32 (4.80) 106.30 (4.70) 496.96 <0.001
TOMAL-2 visual sequential memory (SS) (average=10) 8.64 (3.36) 9.78 (2.33) 2.45 0.12
Learning style preference (maximum score=10) 6.4 (1.3) 6.6 (1.4) 0.26 0.61
Topic interest (maximum score=30) 20.2 (2.9) 19.1 (3.6) 1.4 0.24

RD, reading difficulties; TR, typical reading skills; M, mean; SD, standard deviation; TOWRE, Test of Word Reading Efficiency; TOMAL-2, Test of Memory and Learning, second edition; SS, standardized score.

Table 2
Comparison of learning outcomes in students with and without reading difficulties
RD group TR group Wilcoxon W statistics Z value p value
Definitional task (maximum score=2.0) Median 1.00 1.00 561.50 2.04 0.04
Mean rank 25.52 34.79

Infographic task (maximum score=2.0) Median 1.00 1.00 555.00 2.20 0.03
Mean rank 25.23 34.95

Procedural task (maximum score=8.0) Median 3.25 4.50 483.00 3.11 0.001**
Mean rank 21.95 36.75

Argumentative task (maximum score=8.0) Median 4.00 4.00 660.00 2.67 0.62
Mean rank 30.00 32.33

RD, reading difficulties; TR, typical reading skills.

** p<0.01.

Table 3
Comparison of total fixation count and total gaze durations (in seconds) on screen in students with and without reading difficulties
Picture: RD group M (SD), TR group M (SD); Text: RD group M (SD), TR group M (SD); MANOVA F statistics (2, 57); p value
Definition screen Fixation count 1.45 (2.16) 1.67 (1.83) 60.40 (30.14) 46.00 (21.58) 3.81 0.03
Gaze duration 0.33 (0.54) 0.50 (0.74) 13.82 (7.73) 10.16 (5.19) 4.48 0.02

Infographic screen Fixation count 43.55 (17.98) 46.02 (24.77) 10.85 (5.85) 10.42 (6.16) 0.13 0.87
Gaze duration 10.64 (4.71) 10.17 (4.88) 2.06 (1.41) 2.05 (1.47) 0.06 0.94

Procedural screen (a) Fixation count 2.29 (1.16) 2.56 (2.00) 44.83 (13.94) 45.64 (12.40) 0.15 0.85
Gaze duration 0.66 (0.38) 0.78 (0.71) 10.94 (4.33) 10.42 (3.66) 0.63 0.53

Argumentative screen (a) Fixation count 1.54 (0.96) 1.34 (1.61) 66.25 (18.13) 65.44 (22.09) 0.14 0.86
Gaze duration 0.50 (0.37) 0.44 (0.56) 19.19 (6.44) 17.45 (6.62) 0.49 0.61

RD, reading difficulties; TR, typical reading skills; M, mean; SD, standard deviation.

(a) For the procedural and argumentative screens, average fixation counts and mean gaze durations per screen are presented.

Appendix 1
Comparison of number of words and spelling and writing mechanics in students with and without reading difficulties
RD group M (SD); TR group M (SD); F value; p value
Number of words Definitional question 13.63 (6.14) 16.67 (10.66) 1.40 0.24
Infographic question 23.40 (10.64) 26.15 (15.44) 0.55 0.46
Procedural question 42.00 (18.01) 53.22 (19.67) 4.89 0.03
Argumentative question 78.40 (30.78) 95.40 (35.04) 3.62 0.06

Grammar and writing mechanics (a,b) (maximum score=2.0) Argumentative question 1.14 (0.72) 1.64 (0.48) 7.80 0.007**

RD, reading difficulties; TR, typical reading skills; M, mean; SD, standard deviation.

(a) We assessed grammar and writing mechanics of students’ responses to the argumentative question. Responses to the other questions were short (definitional and infographic questions) or consisted of bullet points (procedural question) and were therefore not adequate for grammatical assessment;

(b) For grammar and writing mechanics, the Grammarly program (Grammarly Inc., 2017) was first used to detect grammatical errors, and then two reviewers manually graded students’ answers for correct sentence function, punctuation, capitalization, grammar usage, and spelling. An answer with 0–1 errors received two points, with 2–4 errors one point, and with more than 5 errors zero points (maximum two points).

** p<0.01.

REFERENCES

1. Schnotz W, Wagner I, Ullrich M, Horz H, McElvany N. Development of students’ text-picture integration and reading competence across grades 5 to 7 in a three-tier secondary school system: A longitudinal study. Contemp Educ Psychol. 2017;51:152–169.

2. Bolkan S. Facilitating student attention with multimedia presentations: examining the effects of segmented PowerPoint presentations on student learning. Commun Educ. 2019;68:61–79.
3. Chiu wu CH, Perng SJ, Shi CK, Lai HL. Advance care planning and advance directives: A multimedia education program in community-dwelling older adults. J Appl Gerontol. 2019;39:1–19.

4. Mayer RE, editor. The Cambridge handbook of multimedia learning. New York, NY: Cambridge University Press, 2014.

5. Mayer RE. How multimedia can improve learning and instruction. The Cambridge handbook of cognition and education. New York, NY, US: Cambridge University Press, 2019. p. 460–479.
6. Kim S, Wiseheart R, Walden P. Do multimedia instructional designs enhance comprehension in college students with dyslexia? J Postsecond Educ Disabil. 2018;31:351–365.

7. Wissick C. Multimedia: Enhancing instruction for students with learning disabilities. J Learn Disabil. 1996;29:494–503.
8. Wang J, Dawson K, Saunders K, Ritzhaupt AD, Antonenko P, Lombardino L, et al. Investigating the effects of modality and multimedia on the learning performance of college students with dyslexia. J Spec Educ Technol. 2018;33:182–193.
9. Kim S, Lombardino LJ. Exploring the effects of narration and pictures on learning for students with reading deficits. Clin Arch Commun Disord. 2017;2:116–127.

10. Knoop-van Campen CAN, Segers E, Verhoeven L. Effects of audio support on multimedia learning processes and outcomes in students with dyslexia. Comput Educ. 2020;150:1–14.

11. van Genuchten E, Scheiter K, Schüler A. Examining learning from text and pictures for different task types: Does the multimedia effect differ for conceptual, causal, and procedural tasks? Comput Hum Behav. 2012;28:2209–2218.
12. Chiu TKF, Churchill D. Design of learning objects for concept learning: effects of multimedia learning principles and an instructional approach. Interact Learn Environ. 2016;24:1355–1370.
13. Cain K, Oakhill JV. Inference making ability and its relation to comprehension failure in young children. Read Writ. 1999;11:489–503.

14. Oakhill JV, Berenhaus MS, Cain K. Children’s reading comprehension and comprehension difficulties. The Oxford handbook of reading. New York, NY, US: Oxford University Press, 2015. p. 344–360. (Oxford library of psychology).

15. Nicolielo-Carrilho AP, Crenitte PAP, Lopes-Herrera SA, Hage SR, de V. Relationship between phonological working memory, metacognitive skills and reading comprehension in children with learning disabilities [Internet]. J Appl Oral Sci. 2018. [cited 2020 Jul 29];26. Available from: http://www.scielo.br/scielo.php?script=sci_abstract&pid=S1678-77572018000100486&lng=en&nrm=iso&tlng=en .

16. Moiroud L, Gerard CL, Peyre H, Bucci MP. Developmental eye movement test and dyslexic children: A pilot study with eye movement recordings. PloS One. 2018;13:e0200907.
17. Olander MH, Brante EW, Nyström M. The effect of illustration on improving text comprehension in dyslexic adults. Dyslexia. 2017;23:42–65.
18. Jian YC. The immediate and delayed effects of text-diagram reading instruction on reading comprehension and learning processes: evidence from eye movements [Internet]. Read Writ. 2020. Sep 5; [cited 2020 Nov 24]. Available from: https://doi.org/10.1007/s11145-020-10089-3 .
19. Bar-Zvi Shaked K, Shamir A, Vakil E. An eye tracking study of digital text reading: a comparison between poor and typical readers. Read Writ. 2020;33:1925–1944.
20. Anderson LW, Krathwohl DR. A taxonomy for learning, teaching, and assessing: a revision of Bloom’s taxonomy of educational objectives [Internet]. Longman, 2001. [cited 2021 Apr 30]. Available from: https://eduq.info/xmlui/handle/11515/18345 .

21. Denton CA, Enos M, York MJ, Francis DJ, Barnes MA, Kulesz PA, et al. Text-processing differences in adolescent adequate and poor comprehenders reading accessible and challenging narrative and informational Text. Read Res Q. 2015;50:393–416.
22. Greer DL, Crutchfield SA, Woods KL. Cognitive theory of multimedia learning, instructional design principles, and students with learning disabilities in computer-based and online learning environments. J Educ. 2013;193:41–50.
23. Torgesen JK, Wagner RK, Rashotte CA. Test of Word Reading Efficiency. second edition. Austin, TX: PRO-ED, 2012.

24. Flesch R. A new readability yardstick. J Appl Psychol. 1948;32:221–233.
25. Bernstam EV, Shelton DM, Walji M, Meric-Bernstam F. Instruments to assess the quality of health information on the World Wide Web: what can our patients actually use? Int J Med Inf. 2005;74:13–19.
26. R Development Core Team. R: A language and environment for statistical computing [Internet]. Vienna, Austria: R Foundation for Statistical Computing, 2018. Available from: http://www.R-project.org/ .

27. Rstudio Team. RStudio: Integrated development for R [Internet]. Boston, MA: RStudio, Inc, 2015. Available from: http://www.rstudio.com .

28. IBM Corp. IBM SPSS Statistics for Windows. Armonk NY: IBM Corp, 2017.

29. Thiese MS, Ronna B, Ott U. P value interpretations and considerations. J Thorac Dis. 2016;8:E928–31.
30. Box GEP. A general distribution theory for a class of likelihood criteria. Biometrika. 1949;36(3/4):317–346.
31. Levene H. Robust tests for equality of variances. In : Olkin I, editor. Contributions to Probability and Statistics. Palo Alto, CA: Stanford University Press, 1960.

32. Gathercole SE, Woolgar F, Kievit RA, Astle D, Manly T, Holmes J. How common are WM deficits in children with difficulties in reading and mathematics? J Appl Res Mem Cogn. 2016;5:384–394.
33. Salimi M, Naghi Zade P. Reading disabilities in children: A selective meta-analysis of the cognitive literature. Except Educ J. 2019;3:51–62.

34. Hachmann WM, Bogaerts L, Szmalec A, Woumans E, Duyck W, Job R. Short-term memory for order but not for item information is impaired in developmental dyslexia. Ann Dyslexia. 2014;64:121–136.
35. Wigfield A, Gladstone J, Turci L. Beyond cognition: Reading motivation and reading comprehension. Child Dev Perspect. 2016;10:190–195.
36. Wolters CA, Barnes MA, Kulesz PA, York M, Francis DJ. Examining a motivational treatment and its impact on adolescents’ reading comprehension and fluency. J Educ Res. 2017;10:98–109.
37. Flavell JH. Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. Am Psychol. 1979;34:906–911.
38. Chevalier TM, Parrila R, Ritchie KC, Deacon SH. The role of metacognitive reading strategies, metacognitive study and learning strategies, and behavioral study and learning strategies in predicting academic success in students with and without a history of reading difficulties. J Learn Disabil. 2017;50:34–48.
39. Nash-Ditzel S. Metacognitive reading strategies can improve self-regulation. J Coll Read Learn. 2010;40:45–63.
40. Perelmutter B, McGregor KK, Gordon KR. Assistive technology interventions for adolescents and adults with learning disabilities: An evidence-based systematic review and meta-analysis. Comput Educ. 2017;114:139–163.
41. van Daal VHP, Sandvik JM, Adèr HJ. A meta-analysis of multimedia applications: How effective are interventions with e-books, computer-assisted instruction and TV/video on literacy learning? In : Kim JE, Hassinger-Das B, editors. Reading in the digital age: young children’s experiences with e-books: international studies with e-books in diverse contexts [Internet]. Cham: Springer International Publishing, 2019. [cited 2020 Jun 5]. p. 259–296. (Literacy Studies). Available from: https://doi.org/10.1007/978-3-030-20077-0_14 .
42. Shin J. A meta-analysis of the relationship between working memory and second language reading comprehension: Does task type matter? Appl Psycholinguist. 2020;41:873–900.
43. Biard N, Cojean S, Jamet E. Effects of segmentation and pacing on procedural learning by video. Comput Hum Behav. 2018;89:411–417.
44. Chatzipanteli A, Digelidis N, Papaioannou AG. Self-regulation, motivation and teaching styles in physical education classes: An intervention study. J Teach Phys Educ. 2015;34:333–344.
45. Nejadihassan S, Arabmofrad A. A review of relationship between self-regulation and reading comprehension. Theory Pract Lang Stud. 2016;6:835–842.
46. Lau K. The effectiveness of self-regulated learning instruction on students’ classical Chinese reading comprehension and motivation. Read Writ. 2020;33:2001–2027.
47. Hamdan JM, Smadi AM. Comprehension of Idioms by Jordanian Arabic-Speaking Children. J Psycholinguist Res. 2021;50:985–1008.
48. Zarei AA, Moftakhari Rezaei G. The effect of task type and task orientation on L2 vocabulary learning. Issues Lang Teach. 2017;5:278–255.

49. Strømme TAa, Mork SM. Students’ conceptual sense-making of animations and static visualizations of protein synthesis: a sociocultural hypothesis explaining why animations may be beneficial for student learning [Internet]. Res Sci Educ. 2020. May 4; [cited 2020 Jun 24]. Available from: https://doi.org/10.1007/s11165-020-09920-2 .
50. de Beer J, Engels J, Heerkens Y, van der Klink J. Factors influencing work participation of adults with developmental dyslexia: a systematic review. BMC Public Health. 2014;14:77.
51. Meredith S. Reading disabilities in adolescents and adults. Lang Speech Hear Serv Sch. 2018;49:787–797.