User:Luluanonymous/sandbox

file:///Users/ensleymason/Downloads/70285-257429-1-SM.pdf

Application of Multimodality to Teaching Reading - Canadian Center of Science and Education: Notes

I chose this article as a reference not only because of its scholarly source, the Canadian Center of Science and Education, but also because of its content on the function of multimodality in teaching. I enjoyed how it discussed the significance of multimodality in maintaining the interest of students learning to read, write, or speak another language.

The article details an experiment run on two contrasting yet parallel classrooms. In one classroom, the educator took a multimodal approach to instruction, incorporating elements of the concept into all reading and writing assignments, while in the control classroom, the teacher led the course through a typical media-based approach. Students were given a pre-test, several check-up tests throughout the experiment, and a post-test to monitor the progress of the class. The findings at the end of the experiment indicated that the multimodal strategy not only produced better grades, but also generally appealed more to students when they were asked how much they enjoyed their guided-learning process. According to the students in the "multimodality" class, more fun was had between them and the instructor than in the control class.

The article mentioned something else that I hadn't really considered: most conservative approaches to teaching are centered more around the teacher's own abilities and preferences, and are less involved in crafting an approach that caters to the attention span of the student. Curricula are too often based on what the instructor deems efficient enough and on how much time the instructor has to accommodate such a curriculum. The article also discusses the psychology behind learning in this context. Knowing how people receive and process text and language is essential to understanding how using different modes of learning can be successful. The article defines multimodality as "the mixture of several semiotic modes" (Bao, 2017). Obviously, multimodality is a scientific study, but witnessing it put through a documented, experimental process really got me thinking about it more in terms of successful application.

Three Centralized Modes of Teaching:

-"Grammar-Translation Method" - The process of learning to understand and translate the fundamental structure and function of grammar. (Bao, 2017)

-"Communicative Method" - The process of learning the functions of grammar for the purpose of clear and concise communication. (Bao, 2017)

-"Task-Based Method" - The process of learning to use language through task-oriented applications of communication. (Bao, 2017)


Essentially, these points can be interpreted as a progression: from language structure, to language structure applied to communication, to communication applied to hands-on learning.

Questions and Musings:

-We may understand multimodality for what it means in the most general sense, and we may successfully observe its effects on education, but what is at the root of a favorable outcome of the concept? How does multimodality work from a scientific, psychological standpoint? Why does the practice click in our brains more so than other methods of learning?

-What are old methods of teaching defined by and why are they considered outdated?

-How was multimodality created as a concept or method? What was the process of integrating it into the educational community like?

-Is multimodality effective for all ages and genres of learning?

-How can I paraphrase content from a video successfully? (Jenkins)

-How can I translate the information gleaned from the above experiment in terms of objectively defining multiliteracy in our coming article?

Things to do:

-Put together a solid paragraph on multiliteracy (including citations) that incorporates the paraphrasing of my chosen article as well as the definitions from Henry Jenkins' video lecture.

-Make sure to describe any aspects of the Canadian Center of Science and Education's findings from an objective point of view that appreciates the legitimacy of the experiment as a scientific trial.

-Be cautious when describing the students' documented responses about how much they enjoyed their course. This could easily drift into opinion rather than staying neutral.

Multiliteracy

Multiliteracy is the translating of information through various methods of communication and being proficient in those methods. With the growth of technology, there are now more ways to convey a message to others than ever before. Literacy changes in order to incorporate new processes of communication, stemming from new advances or approaches in communication tools such as text messaging, social media, and blogs.[32] These methods consist of more than just text. Modes such as audio, video, pictures, and animation can now be incorporated into communication simultaneously,[32] a practice that theorist Henry Jenkins calls media convergence. The shift from page-based text found in print to screen-based text found on the Internet is inciting a redefinition of literacy.[30] The shift away from written text as the sole mode of nonverbal conveyance has been the catalyst for the traditional definition of literacy to evolve. While text and image may exist separately, digitally, or in print, their combination gives birth to new forms of literacy and thus a new idea of what it means to be literate and to communicate in efficient ways. Text, whether it is academic, social, or for entertainment purposes, can now be accessed in a variety of different ways and edited by several individuals on the Internet; in this way, texts that would typically be concrete become amorphous through the process. The spoken and written word are not obsolete, but they are no longer the only way to communicate and interpret messages.[30]

With the continual growth of new media and the adaptation of old media, there are now numerous mediums to use when communicating.[31] Many mediums can be used individually, but combining and repurposing one for another has contributed to the evolution of different literacies. Communication is spread across a medium using different modes, like a blog post accompanied by images and an embedded video. These modes all work to construct meaning through the concept of multimodality. With the introduction of these modes comes the notion of transforming the message. This metamorphosis is accomplished by taking the message of one mode and displaying it in or with another, such as taking a text and incorporating it into a video.[33] However, the message may change as it goes from one medium to the next. The video could now act as a supplement to the text, much like special features on a DVD, or it could become a piece that reiterates or supports the text in a different format. This reshaping of information from one mode to another is known as transduction.[30] As information changes from one mode to the next, the comprehension of the message is attributed to multiliteracy, as the text is understood from a variety of different angles.

A key purpose of multiliteracies is to engage the diverse perspectives of students, facilitating progressively broadened, multicultural groups.[34] Another function of multiliteracies is to help shift content design from being primarily the instructor's responsibility to a more cooperative effort between teacher and learner.[34] Students are able to take a more proactive role in their learning and are in a position to consciously evaluate how their lessons may impact others. Such extrinsic thought permits an evolution of the content and context of lessons, advancing the idea of teaching (and learning) relevant material.[34]

Xiaolo Bao of the Canadian Center of Science and Education defines three successive stages that make up multiliteracy: the Grammar-Translation Method, the Communicative Method, and the Task-Based Method (Bao, 2017). Simply put, they can be described as the fundamental understanding of syntax and its function, the practice of applying that understanding to verbal communication, and lastly, the application of those textual and verbal understandings to everyday, hands-on activities. In an experiment conducted by the Canadian Center of Science and Education, students were placed either in a classroom with a multimodal course structure or in a classroom with a standard course structure as a control group. Tests were administered throughout the length of the two courses, with the multimodal course concluding with a higher learning success rate and a reportedly higher rate of satisfaction among students. This implies that applying multimodality to instruction yields overall better results than conventional forms of literacy-learning when tested in real-life scenarios.

Notes: -Try to limit use of word "transforming/ed"

Last paragraph of Multiliteracy: A key purpose for multiliteracies is to engage the diverse perspectives of students, facilitating progressively broadened and multicultural groups.[34] (eliminate "and")

Peer review questions:

-What strategies did you use to maintain neutrality?

-Do you feel like you achieved a good balance between sounding professional and writing accessibly?

-How confident do you feel in your wording and punctuation?

-What are some things you feel still need work?

-If you were an outside reader, would the writing seem clear and concise to you?

Second Draft:

Multiliteracy is the concept of understanding information through various methods of communication and being proficient in those methods. With the growth of technology, there are more ways to communicate than ever before, making it necessary for our definition of literacy to change in order to better accommodate these new technologies. These new technologies consist of tools such as text messaging, social media, and blogs.[32] However, these modes of communication often employ multiple mediums at once, such as audio, video, pictures, and animation, thus making content multimodal. The shift away from written text as the sole mode of nonverbal communication has caused the traditional definition of literacy to evolve.[30] While text and image may exist separately, digitally, or in print, their combination gives birth to new forms of literacy and thus a new idea of what it means to be literate. Text, whether it is academic, social, or for entertainment purposes, can now be accessed in a variety of different ways and edited by several individuals on the Internet; in this way, texts that would typically be concrete become amorphous through the process of collaboration. The spoken and written word are not obsolete, but they are no longer the only way to communicate and interpret messages.[30]

Many mediums can be used individually, but combining and repurposing one for another has contributed to the evolution of different literacies. Communication is spread across a medium through content convergence, such as a blog post accompanied by images and an embedded video. These three separate mediums all work together to create new meaning; this is the concept of multimodality. With the introduction of these modes comes the notion of translating the message. This transformation is accomplished by taking the message of one mode and displaying it in or with another, such as taking a text and incorporating it into a video.[33] However, the message may be reinterpreted or changed as it goes from one medium to the next. The video could now act as a supplement to the text, much like special features on a DVD, or it could become a piece that reiterates or supports the text in a different format. This reshaping of information from one mode to another is known as transduction.[30] As information changes from one mode to the next, how that message is comprehended is attributed to multiliteracy, as the text is understood across a variety of different means.

(Add in study, find a good flow)

Multiliteracy - Final paragraph (final edit)

Communication is spread across a medium through content convergence, such as a blog post accompanied by images and an embedded video. This idea of combining mediums gives new meaning to the concept of translating a message. The culmination of varying forms of media allows for content to be either reiterated or supplemented by its parts. This reshaping of information from one mode to another is known as transduction.[30] As information changes from one mode to the next, our comprehension of its message is attributed to multiliteracy. Xiaolo Bao defines three successive learning stages that make up multiliteracy: the Grammar-Translation Method, the Communicative Method, and the Task-Based Method. Simply put, they can be described as the fundamental understanding of syntax and its function, the practice of applying that understanding to verbal communication, and lastly, the application of those textual and verbal understandings to hands-on activities. In an experiment conducted by the Canadian Center of Science and Education, students were placed either in a classroom with a multimodal course structure or in a classroom with a standard course structure as a control group. Tests were administered throughout the length of the two courses, with the multimodal course concluding in a higher learning success rate and a reportedly higher rate of satisfaction among students. This indicates that applying multimodality to instruction yields overall better results in developing multiliteracy than conventional forms of learning when tested in real-life scenarios.

[1]


Possible extra sources:

http://neamathisi.com/literacies/chapter-7-literacies-as-multimodal-designs-for-meaning/kress-and-van-leeuwen-on-multimodality

https://link.springer.com/chapter/10.1007/978-1-4020-9964-9_2

https://kcwritingcenter.weebly.com/multimodal-projects.html

https://www.coursera.org/lecture/multimodal-literacies/11-8-multimodal-pedagogy-in-practice-a00Fk

https://pdfs.semanticscholar.org/4ff2/4821d0da1117c0c3acc519b49cdb5567e083.pdf

https://books.google.com/books?id=8y0MqdHN2r0C&printsec=frontcover&dq=multimodality&hl=en&sa=X&ved=0ahUKEwjFxPfgnsrhAhUTsp4KHcJaB4MQ6AEIODAC#v=onepage&q=multimodality&f=false

Final final draft:

Education

Multimodality in the 21st century has caused educational institutions to consider changing the forms of traditional classroom education. With a rise in digital and Internet literacy, new modes of communication are needed in the classroom in addition to print, from visual texts to digital e-books. Rather than replacing traditional literacy values, multimodality augments and increases literacy for educational communities by introducing new forms. According to Miller and McVee, authors of Multimodal Composing in Classrooms, "These new literacies do not set aside traditional literacies. Students still need to know how to read and write, but new literacies are integrated."[27] The learning outcomes of the classroom stay the same, including, but not limited to, reading, writing, and language skills. However, these learning outcomes are now being presented in new forms, as multimodality in the classroom suggests a shift from traditional media such as paper-based texts to more modern media such as screen-based texts. The choice to integrate multimodal forms in the classroom is still controversial within educational communities. The idea of learning has changed over the years and now, some argue, must adapt to the personal and affective needs of new students. In order for classroom communities to be legitimately multimodal, all members of the community must share expectations about what can be done through integration, requiring a "shift in many educators' thinking about what constitutes literacy teaching and learning in a world no longer bound by print text."[29]

Multiliteracy

Multiliteracy is the concept of understanding information through various methods of communication and being proficient in those methods. With the growth of technology, there are more ways to communicate than ever before, making it necessary for our definition of literacy to change in order to better accommodate these new technologies (add benefits). These new technologies consist of tools such as text messaging, social media, and blogs.[32] However, these modes of communication often employ multiple mediums simultaneously, such as audio, video, pictures, and animation, thus making content multimodal. The culmination of these different mediums is what is called content convergence, which has become a cornerstone of multimodal theory. Within our modern digital discourse, content has become accessible to many, remixable, and easily spreadable, allowing ideas and information to be consumed, edited, and improved by the general public. Wikipedia is one example: the platform allows free consumption and authorship of its work, which in turn facilitates the spread of knowledge through the efforts of a large community. It creates a space in which authorship has become collaborative and the product of that authorship is improved by the collaboration. As the distribution of information has grown through this process of content convergence, it has become necessary for our understanding of literacy to evolve with it. The shift away from written text as the sole mode of nonverbal communication has caused the traditional definition of literacy to evolve. While text and image may exist separately, digitally, or in print, their combination gives birth to new forms of literacy and thus a new idea of what it means to be literate. Text, whether it is academic, social, or for entertainment purposes, can now be accessed in a variety of different ways and edited by several individuals on the Internet; in this way, texts that would typically be concrete become amorphous through the process of collaboration. The spoken and written word are not obsolete, but they are no longer the only way to communicate and interpret messages.[30] Many mediums can be used individually, but combining and repurposing one for another has contributed to the evolution of different literacies.

Communication is spread across a medium through content convergence, such as a blog post accompanied by images and an embedded video. This idea of combining mediums gives new meaning to the concept of translating a message. The culmination of varying forms of media allows for content to be either reiterated or supplemented by its parts. This reshaping of information from one mode to another is known as transduction.[30] As information changes from one mode to the next, our comprehension of its message is attributed to multiliteracy. Xiaolo Bao defines three successive learning stages that make up multiliteracy: the Grammar-Translation Method, the Communicative Method, and the Task-Based Method. Simply put, they can be described as the fundamental understanding of syntax and its function, the practice of applying that understanding to verbal communication, and lastly, the application of those textual and verbal understandings to hands-on activities. In an experiment conducted by the Canadian Center of Science and Education, students were placed either in a classroom with a multimodal course structure or in a classroom with a standard course structure as a control group. Tests were administered throughout the length of the two courses, with the multimodal course concluding in a higher learning success rate and a reportedly higher rate of satisfaction among students. This indicates that applying multimodality to instruction yields overall better results in developing multiliteracy than conventional forms of learning when tested in real-life scenarios.

Classroom literacy

Multimodality in classrooms has brought about the need for an evolving definition of literacy. According to Gunther Kress, a popular theorist of multimodality, literacy usually refers to the combination of letters and words to make messages and meaning, and it can often be attached to other words in order to express knowledge of separate fields, such as visual or computer literacy. However, as multimodality becomes more common, not only in classrooms but also in work and social environments, the definition of literacy extends beyond the classroom and beyond traditional texts. Instead of referring only to reading and alphabetic writing, or being extended to other fields, literacy and its definition now encompass multiple modes. It has become more than just reading and writing, and now includes visual, technological, and social uses, among others.[1]

A university writing and communication program created a definition of multimodality based on the acronym WOVEN. The acronym explains how communication can be written, oral, visual, electronic, and nonverbal. Communication has multiple modes that can work together to create meaning and understanding. The goal of the program is to ensure students are able to communicate effectively in their everyday lives using various modes and media.[2]

As classroom technologies become more prolific, so do multimodal assignments. Students in the 21st century have more options for communicating digitally, be it texting, blogging, or through social media.[3] This rise in computer-mediated communication has required classes to become multimodal in order to teach students the skills required in the 21st-century work environment.[3] However, in the classroom setting, multimodality is more than just combining multiple technologies; rather, it is about creating meaning through the integration of multiple modes. Students learn through a combination of these modes, including sound, gestures, speech, images, and text. For example, digital components of lessons often include pictures, videos, and sound bites as well as text to help students grasp a better understanding of the subject. Multimodality also requires that teachers move beyond teaching with just text, as the printed word is only one of many modes students must learn and use.[1][3]

The application of visual literacy in the English classroom can be traced back to 1946, when the instructor's edition of the popular Dick and Jane elementary reader series suggested teaching students to "read pictures as well as words" (p. 15).[4] During the 1960s, a couple of reports issued by NCTE suggested using television and other mass media such as newspapers, magazines, radio, motion pictures, and comic books in the English classroom. The situation is similar in postsecondary writing instruction. Since 1972, visual elements have been incorporated into some popular twentieth-century college writing textbooks, like James McCrimmon's Writing with a Purpose.[4]

Higher Education

This can be seen in an article by Teresa Morell, where she discusses how teaching and learning elicit meaning through modes such as language, speaking, writing, gesturing, and space. The study observes an instructor who conducts a multimodal group activity with students. Previous studies observed different classes using modes such as gestures, classroom space, and PowerPoints. The current study observes an instructor's combined use of multiple modes in teaching to see its effect on student participation and conceptual understanding. She explains the different spaces of the classroom, including the authoritative space, interactional space, and personal space. The analysis displays how an instructor's multimodal choices affect student participation and understanding. On average, the instructor used three to four modes, most often some kind of gaze, gesture, and speech. He engaged students by having them formulate a group definition of cultural stereotypes. It was found that those who are learning a second language depend on more than just the spoken and written word for conceptual learning, meaning multimodal education has benefits.[5]

Another type of visuals-related writing task is visual analysis, especially advertising analysis, which began in the 1940s and has been prevalent in postsecondary writing instruction for at least 50 years. This pedagogical practice of visual analysis did not focus on how visuals, including images, layout, or graphics, are combined or organized to make meaning.[4] Through the following years, the application of visuals in the composition classroom has been continually explored, and the emphasis has shifted to the visual features of composition (margins, page layout, font, and size) and its relationship to graphic design, web pages, and digital texts, which involve images, layout, color, font, and arrangements of hyperlinks. In line with the New London Group, George (2002) argues that both visual and verbal elements are crucial in multimodal designs.[4] Acknowledging the importance of both language and visuals in communication and meaning making, Shipka (2005) further advocates for a multimodal, task-based framework in which students are encouraged to use diverse modes and materials (print texts, digital media, videotaped performances, old photographs) and any combination of them in composing their digital/multimodal texts. Meanwhile, students are provided with opportunities to deliver, receive, and circulate their digital products. In so doing, students can understand how systems of delivery, reception, and circulation interrelate with the production of their work.[6]

  1. ^ Bao, X. (2017). Application of Multimodality to Teaching Reading. Richmond Hill, Canada: Canadian Center of Science and Education.