Education
Multimodality in the 21st century has caused educational institutions to consider changing the traditional aspects of classroom education. With the rise of digital and Internet literacy, new modes of communication are needed in the classroom in addition to print, from visual texts to digital e-books. Rather than replacing traditional literacy values, multimodality augments and extends literacy for educational communities by introducing new forms. According to Miller and McVee, authors of Multimodal Composing in Classrooms, “These new literacies do not set aside traditional literacies. Students still need to know how to read and write, but new literacies are integrated."[1] The learning outcomes of the classroom stay the same, including, but not limited to, reading, writing, and language skills. However, these outcomes are now presented in new forms, as multimodality in the classroom suggests a shift from traditional media such as paper-based texts to more modern media such as screen-based texts. The choice to integrate multimodal forms in the classroom remains controversial within educational communities. The idea of learning has changed over the years and, some argue, must now adapt to the personal and affective needs of new students. For classroom communities to be legitimately multimodal, all members must share expectations about what can be done through integration, requiring a "shift in many educators’ thinking about what constitutes literacy teaching and learning in a world no longer bound by print text."[2]
Multiliteracy
Multiliteracy is the concept of understanding information through various methods of communication and being proficient in those methods. With the growth of technology, there are more ways to communicate than ever before, making it necessary for the definition of literacy to change to better accommodate these new technologies. These new technologies include tools such as text messaging, social media, and blogs.[3] These modes of communication often employ multiple media simultaneously, such as audio, video, images, and animation, making their content multimodal.
The combination of these different media is called content convergence, which has become a cornerstone of multimodal theory. Within modern digital discourse, content has become widely accessible, remixable, and easily spreadable, allowing ideas and information to be consumed, edited, and improved by the general public. Wikipedia is one example: the platform allows free consumption and authorship of its work, which in turn facilitates the spread of knowledge through the efforts of a large community. It creates a space in which authorship is collaborative and the product of that authorship is improved by the collaboration. As the distribution of information has grown through this process of content convergence, it has become necessary for our understanding of literacy to evolve with it.[4]
The shift away from written text as the sole mode of nonverbal communication has caused the traditional definition of literacy to evolve.[5] While text and image may exist separately, digitally, or in print, their combination gives birth to new forms of literacy and thus a new idea of what it means to be literate. Text, whether academic, social, or for entertainment, can now be accessed in a variety of ways and edited by several individuals on the Internet; in this way, texts that would typically be fixed become amorphous through the process of collaboration. The spoken and written word are not obsolete, but they are no longer the only ways to communicate and interpret messages.[5] Many media can be used separately and individually, and combining and repurposing one for another has contributed to the evolution of different literacies.
Communication is spread across media through content convergence, as when a blog post is accompanied by images and an embedded video. This combining of media gives new meaning to the concept of translating a message: the accumulation of varying forms of media allows content to be either reiterated or supplemented by its parts. This reshaping of information from one mode to another is known as transduction.[5] As information changes from one mode to the next, our comprehension of its message is attributed to multiliteracy. Xiaoli Bao defines three successive learning stages that make up multiliteracy: the Grammar-Translation Method, the Communicative Method, and the Task-Based Method. Simply put, these are the fundamental understanding of syntax and its function, the practice of applying that understanding to verbal communication, and the application of those textual and verbal understandings to hands-on activities. In an experiment conducted by the Canadian Center of Science and Education, students were placed either in a classroom with a multimodal course structure or in a classroom with a standard course structure as a control group. Tests were administered throughout both courses, and the multimodal course produced a higher rate of learning success and a reportedly higher rate of satisfaction among students. This indicates that applying multimodality to instruction yields better results in developing multiliteracy than conventional forms of learning when tested in real-life scenarios.[6]
Classroom literacy
Multimodality in classrooms has brought about the need for an evolving definition of literacy. According to Gunther Kress, a prominent theorist of multimodality, literacy usually refers to the combination of letters and words to make messages and meaning, and the term can be attached to other words to express knowledge of separate fields, such as visual or computer literacy. However, as multimodality becomes more common, not only in classrooms but in work and social environments, the definition of literacy extends beyond the classroom and beyond traditional texts. Instead of referring only to reading and alphabetic writing, or being extended to other fields, literacy and its definition now encompass multiple modes. Literacy has become more than just reading and writing, and now includes visual, technological, and social uses, among others.[7]
A university writing and communication program created a definition of multimodality based on the acronym WOVEN. The acronym describes how communication can be written, oral, visual, electronic, and nonverbal. Communication has multiple modes that can work together to create meaning and understanding. The goal of the program is to ensure students are able to communicate effectively in their everyday lives using various modes and media.[8]
As classroom technologies become more prolific, so do multimodal assignments. Students in the 21st century have more options for communicating digitally, be it texting, blogging, or using social media.[9] This rise in digital communication has required classes to become multimodal in order to teach students the skills required in the 21st-century work environment.[9] In the classroom setting, however, multimodality is more than just combining multiple technologies; it is creating meaning through the integration of multiple modes. Students learn through a combination of these modes, including sound, gesture, speech, image, and text. For example, digital components of lessons often include pictures, videos, and sound bites alongside the text to help students gain a better understanding of the subject. Multimodality also requires that teachers move beyond teaching with text alone, as the printed word is only one of many modes students must learn and use.[7][9][10]
The application of visual literacy in the English classroom can be traced back to 1946, when the instructor's edition of the popular Dick and Jane elementary reader series suggested teaching students to "read pictures as well as words" (p. 15).[11] During the 1960s, a couple of reports issued by the NCTE suggested using television and other mass media, such as newspapers, magazines, radio, motion pictures, and comic books, in the English classroom. The situation is similar in postsecondary writing instruction: since 1972, visual elements have been incorporated into some popular twentieth-century college writing textbooks, such as James McCrimmon's Writing with a Purpose.[11]
Higher education
Multimodality in the college setting is examined in an article by Teresa Morell, who discusses how teaching and learning elicit meaning through modes such as language, speaking, writing, gesturing, and space. The study observes an instructor who conducts a multimodal group activity with students. Previous studies had observed different classes using modes such as gestures, classroom space, and PowerPoint slides; the current study observes an instructor's combined use of multiple modes in teaching to see its effect on student participation and conceptual understanding. Morell explains the different spaces of the classroom, including the authoritative space, interactional space, and personal space. The analysis shows how an instructor's multimodal choices shape student participation and understanding. On average, the instructor used three to four modes at a time, most often a combination of gaze, gesture, and speech, and he engaged students by having them formulate a group definition of cultural stereotypes. The study found that those learning a second language depend on more than just the spoken and written word for conceptual learning, suggesting that multimodal education has benefits.[12][10]
Multimodal assignments involve many aspects other than written words, which may be beyond an instructor's training. Educators have been taught how to grade traditional assignments, but not those that utilize links, photos, videos, or other modes. Dawn Lombardi is a college professor who admitted to her students that she was a bit "technologically challenged" when assigning a multimodal essay using graphics. The most difficult part of these assignments is the assessment. Educators struggle to grade them because the meaning conveyed may not be what the student intended. They must return to the basics of teaching to determine what they want their students to learn, achieve, and demonstrate in order to create criteria for multimodal tasks. Lombardi created grading criteria based on creativity, context, substance, process, and collaboration, which were presented to the students prior to beginning the essay.[10]
Another type of visuals-related writing task is visual analysis, especially advertising analysis, which began in the 1940s and has been prevalent in postsecondary writing instruction for at least 50 years. This pedagogical practice of visual analysis did not, however, focus on how visuals, including images, layout, or graphics, are combined or organized to make meaning.[11]
In the following years, the application of visuals in the composition classroom continued to be explored, and the emphasis shifted to the visual features of composition—margins, page layout, font, and size—and its relationship to graphic design, web pages, and digital texts, which involve images, layout, color, font, and the arrangement of hyperlinks. In line with the New London Group, George (2002) argues that both visual and verbal elements are crucial in multimodal designs.[11]
Acknowledging the importance of both language and visuals in communication and meaning making, Shipka (2005) further advocates for a multimodal, task-based framework in which students are encouraged to use diverse modes and materials—print texts, digital media, videotaped performances, old photographs—and any combinations of them in composing their digital/multimodal texts. Meanwhile, students are provided with opportunities to deliver, receive, and circulate their digital products. In so doing, students can understand how systems of delivery, reception, and circulation interrelate with the production of their work.[13]
- ^ Miller, Suzanne M.; McVee, Mary B. (2012). Multimodal Composing in Classrooms: Learning and Teaching for the Digital World. New York: Routledge. ISBN 9780415897488. OCLC 714730484.
- ^ April, Kurt (2012-06-25). "Performance Through Learning". doi:10.4324/9780080479927.
- ^ Selfe, Richard J.; Selfe, Cynthia L. (2008-04-23). ""Convince me!" Valuing Multimodal Literacies and Composing Public Service Announcements". Theory Into Practice. 47 (2): 83–92. doi:10.1080/00405840801992223. ISSN 0040-5841.
- ^ IIEA1 (2012-05-24). Henry Jenkins on How Content Gains Meaning and Value in a Networked Culture. Retrieved 2019-04-12.
- ^ a b c Kress, Gunther (2004-01-14). Literacy in the New Media Age. Taylor & Francis. ISBN 9780203299234.
- ^ Bao, Xiaoli (2017-08-29). "Application of Multimodality to Teaching Reading". English Language and Literature Studies. 7 (3): 78. doi:10.5539/ells.v7n3p78. ISSN 1925-4776.
- ^ a b Kress, Gunther R. (2003). Literacy in the new media age. London: Routledge. ISBN 020329923X. OCLC 53016783.
- ^ "Guiding Principles | Writing and Communication Program". wcprogram.lmc.gatech.edu. Retrieved 2019-04-08.
- ^ a b c Vaish, Viniti; Towndrow, Phillip A. (2010-12-31), "12. Multimodal Literacy in Language Classrooms", Sociolinguistics and Language Education, Multilingual Matters, p. 319, ISBN 9781847692849, retrieved 2019-04-08
- ^ a b c Lombardi, Dawn (2018-01-19), "Braving Multimodality in the College Composition Classroom", Designing and Implementing Multimodal Curricula and Programs, Routledge, pp. 15–34, ISBN 9781315159508, retrieved 2019-04-09
- ^ a b c d George, Diana (2002). "From Analysis to Design: Visual Communication in the Teaching of Writing". College Composition and Communication. 54 (1): 11. doi:10.2307/1512100. ISSN 0010-096X – via JSTOR.
- ^ Morell, Teresa (2018). "Multimodal competence and effective interactive lecturing". System. 77: 70–79. doi:10.1016/j.system.2017.12.006. ISSN 0346-251X.
- ^ Shipka, Jody, "Including, but Not Limited to, the Digital", Multimodal Literacies and Emerging Genres, vol. 57, University of Pittsburgh Press, pp. 277–306, ISBN 9780822978046, retrieved 2019-04-08