User:Coffeekoala/sandbox
Copied from Negative evidence in language acquisition
In language acquisition, negative evidence is evidence concerning what is not possible in a language. There is debate among linguists about whether negative evidence can help child language learners determine the grammar of their language by disconfirming incorrect hypotheses about how their language works. Importantly, negative evidence does not show what is grammatical, but helps eliminate ungrammatical constructions by revealing what is not grammatical. There are two main types of negative evidence in language acquisition: direct negative evidence and indirect negative evidence.
Let's unpack the lead.
In language acquisition, negative evidence is evidence concerning what is not possible in a language. Importantly, negative evidence does not show what is grammatical, but helps eliminate ungrammatical constructions by revealing what is not grammatical. Direct negative evidence refers to comments made by a parent in response to an ungrammatical sentence that a child produces. Indirect negative evidence refers to the absence of ungrammatical sentences in the language that the child hears. There is debate among linguists about whether negative evidence can help children determine the grammar of their language. Negative evidence, if it is used, could help children rule out ungrammatical constructions in their language. Coffeekoala (talk) 18:47, 15 October 2019 (UTC)
Copied from Negative evidence in language acquisition
Indirect negative evidence in language acquisition
Indirect negative evidence in language acquisition is information about the grammatical structure of a language that the learner draws from what they do not hear in speech. Indirect negative evidence serves as a means for child language learners to constrain and refine hypotheses that they previously held about the structure of their language.[1]
Examples of indirect negative evidence
General examples
Indirect negative evidence can be used for more than just language learning. The calculation of probability from observed occurrences, known as Bayesian inference, serves many purposes. For example, when we see a dog bark, we are likely to conclude that dogs bark, not that every kind of animal barks. Because we have never seen a horse, a fish, or any other animal bark, our hypothesis becomes that only dogs bark.[2] We use the same inference to assume that the sun will rise tomorrow, having seen it rise every day so far. Nothing in the evidence rules out hypotheses such as that the sun fails to rise once every two thousand years, or that it rises in every year except 2086; but since all evidence seen so far is consistent with the universal generalization, we infer that the sun does indeed rise every day.[2]
Examples in language acquisition
Regier & Gahl argue that certain poverty-of-the-stimulus problems could be overcome by using indirect negative evidence. For example, syntacticians argue that children could not learn that "one" is anaphoric to phrasal categories, given that the evidence they are exposed to is equally consistent with "one" being anaphoric only to individual words. This is because the meanings produced by the phrasal hypothesis are a subset of those produced by the single-word hypothesis. Children could thus use the fact that all of their data are consistent with both hypotheses as evidence in favor of the more restricted hypothesis.[2]
Utility of indirect negative evidence
Indirect negative evidence and word learning
Child and adult speakers rely on 'suspicious coincidences' when learning a new word meaning.[3] When learning a new word, children and adults use only the first few instances of hearing that word to decide what it means, rapidly narrowing their hypothesis if the word appears only in a narrow context. In an experiment conducted by Xu and Tenenbaum, 4-year-old participants learning a novel word 'fep' readily decided that it referred only to Dalmatians when they heard it only while being shown pictures of Dalmatians; although they received no information that 'fep' could not refer to other kinds of dogs, the suspicious coincidence that they had never heard it in other contexts led them to restrict its meaning to just one breed.[3]
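The 'suspicious coincidence' reasoning above can be sketched as a small Bayesian comparison, sometimes called the size principle: under random sampling, n examples are more probable under a hypothesis with a small extension than under a broad one. This is only an illustrative sketch, not Xu and Tenenbaum's actual model; the hypothesis names, priors, and extension sizes below are invented for the example.

```python
# Illustrative sketch of the "size principle" behind suspicious
# coincidences: the likelihood of n independently sampled examples
# under a hypothesis with extension size |h| is (1/|h|)^n, so
# narrower hypotheses gain support rapidly as examples accumulate.

def posterior(hypotheses, n_examples):
    """Posterior over hypotheses after n consistent examples.

    hypotheses: dict mapping name -> (prior, extension_size).
    Assumes every observed example is consistent with each hypothesis.
    """
    scores = {name: prior * (1.0 / size) ** n_examples
              for name, (prior, size) in hypotheses.items()}
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# Invented numbers (not from the study): "dalmatian" covers fewer
# objects than "dog", so a few Dalmatian-only examples already
# favour the narrow hypothesis despite equal priors.
h = {"dalmatian": (0.5, 10), "dog": (0.5, 100)}
after_one = posterior(h, 1)    # narrow hypothesis ahead 10:1
after_three = posterior(h, 3)  # narrow hypothesis ahead 1000:1
```

With one example, both hypotheses remain live; by three Dalmatian-only examples, the narrow hypothesis dominates, mirroring the children's rapid restriction of 'fep' to one breed.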
Child language learners can use this same type of probabilistic inference to decide when and how verbs can be used. A word children hear often, like 'disappear', appears in the input far more frequently than a less common word with a similar meaning, like 'vanish'. Children in one study judged the sentence "*We want to disappear our heads" to be ungrammatical, but when given the same sentence with 'vanish', they were less sure of its ungrammaticality.[4] Given the frequency of 'disappear' in intransitive clauses, learners could infer that if it were possible in transitive clauses, they would have heard it in those contexts. Thus, the high frequency of the intransitive use leads to the inference that the transitive use is impossible.[5] This inference is less reliable with a less frequent verb like 'vanish'.
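The inference from absence described above can also be sketched numerically. The sketch below is hypothetical (it is not the model from any of the cited studies): it simply compares a grammar that allows the transitive frame with one that forbids it, given some number of attested uses, none of them transitive. The prior and the assumed transitive rate are invented parameters.

```python
# Hypothetical sketch of inference from absence: if transitive
# "disappear" were grammatical, we would expect to have heard it by
# now; each intransitive-only observation lowers the posterior
# probability that the transitive frame is allowed.

def p_transitive_allowed(n_uses, prior=0.5, trans_rate=0.3):
    """Posterior that the transitive frame is grammatical after
    n_uses observations of the verb, none of them transitive.

    trans_rate: assumed rate at which the transitive frame would
    appear if the grammar allowed it (invented for illustration).
    """
    # Likelihood of zero transitives in n_uses under each grammar
    like_allowed = (1.0 - trans_rate) ** n_uses
    like_forbidden = 1.0
    num = prior * like_allowed
    return num / (num + (1.0 - prior) * like_forbidden)

# A rare verb like 'vanish' yields few observations and a weak
# inference; a frequent verb like 'disappear' yields many
# observations and a near-certain one.
few = p_transitive_allowed(3)    # still fairly uncertain
many = p_transitive_allowed(50)  # near zero
```

This matches the frequency effect in the text: the same absence of evidence is far more informative for a high-frequency verb than for a low-frequency one.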
Indirect negative evidence and syntax
It has been argued that children use indirect negative evidence to make probabilistic inferences about the syntax of the language they are acquiring. A 2004 study by Regier & Gahl produced a computational model that provides support for this argument.[2] They assert that children can use the absence of particular patterns in the input to conclude that such patterns are illicit. According to Regier and Gahl, young language learners form hypotheses about what is and is not correct based on probabilistic inferences. As children are exposed to more and more examples of a certain phenomenon, their hypothesis space narrows. Notably, Regier and Gahl assert that this capacity for probabilistic inference can be used in all sorts of general learning tasks, not just linguistic ones. Regier and Gahl also present their model as evidence against an argument from the poverty of the stimulus, because their model illustrates that syntactic learning is possible from the input alone and does not necessarily require innate linguistic knowledge of syntax.
As everyone is suggesting, this can be reordered.
Indirect negative evidence in language acquisition
Let's put the explanation and the general example here (and take out Bayesian inference, because there could be other ways of calculating this, and, as Hyeonah says, it's not relevant)
New explanation:
Indirect negative evidence refers to using what is not in the input to infer what is not possible. For example, when we see a dog bark, we are likely to conclude that dogs bark, not that every kind of animal barks. Because we have never seen a horse, a fish, or any other animal bark, our hypothesis becomes that only dogs bark.[2] We use the same inference to assume that the sun will rise tomorrow, having seen it rise every day so far. Nothing in the evidence rules out hypotheses such as that the sun fails to rise once every two thousand years, or that it rises in every year except 2086; but since all evidence seen so far is consistent with the universal generalization, we infer that the sun does indeed rise every day.[2] In language acquisition, indirect negative evidence may be used to constrain a child's grammar; if a child never hears a certain construction, the child may conclude that it is ungrammatical.
(actually not sure if the sun example exactly fits--i'll read what they're citing and see)
Utility of indirect negative evidence
word learning
syntax
and let's combine the two paragraphs about the Regier and Gahl study here
Copied from Negative evidence in language acquisition
Furthermore, Gary Marcus argues that implicit direct negative evidence in the input is insufficient for children to learn the correct grammar of their language. He criticizes negative evidence because it does not explain why sentences are ungrammatical, thus making it difficult for children to learn what their correct grammar is. He also argues that for children even to be able to use implicit direct negative evidence, they would need to repeat a sentence 85 times and receive negative feedback in order to eliminate that sentence from their vocabulary. He then shows that children typically do not repeat ungrammatical utterances in such high quantities; therefore, language learners would not be able to benefit from implicit evidence. Marcus also contends that implicit evidence is largely unavailable because the feedback differs from parent to parent, and is inconsistent in both the frequency with which it is offered and the kinds of errors it corrects. Other studies demonstrate that implicit negative evidence decreases over time, so that as children get older there is less feedback, making it less available and, consequently, less likely to account for children's unlearning of grammatical errors.
- let's make this clearer as Remo suggested:
Furthermore, Gary Marcus argues that implicit direct negative evidence in the input is insufficient for children to learn the correct grammar of their language. He asserts that negative evidence does not explain why sentences are ungrammatical, thus making it difficult for children to learn why these sentences should be excluded from their grammar. He also argues that for children even to be able to use implicit direct negative evidence, they would need to receive negative feedback on a sentence 85 times in order to eliminate it from their vocabulary, but children do not repeat ungrammatical sentences nearly that often. Marcus also contends that implicit evidence is largely unavailable because the feedback differs from parent to parent, and is inconsistent in both the frequency with which it is offered and the kinds of errors it corrects. Other studies demonstrate that implicit negative evidence decreases over time, so that as children get older there is less feedback, making it less available and, consequently, less likely to account for children's unlearning of grammatical errors.
Coffeekoala (talk) 16:57, 10 October 2019 (UTC)
Dear Coffeekoala
I absolutely agree with every part that you added in your sandbox. In particular, as you pointed out, the general examples in the section "Examples of indirect negative evidence" rather distract readers. Moreover, I am not sure whether Bayesian inference helps readers understand 'indirect negative evidence' better, because Bayesian inference seems to be irrelevant to this section. Let me add just a few minor comments.
1) Both sections 2 and 3 could be subsections of section 1, since they are both part of direct negative evidence.
2) A 2004 study by Regier & Gahl produced a computational model which provides support for this argument.[16]
- It seems awkward to start a sentence with "A 2004 study…". Why don't you change this to "Regier & Gahl (2004)'s study"?
Hyeonah
Hi Lucy:
It seems to me that there is no clear reason why sections 4 and 5 should be separate; they are both thin in content and closely related, so maybe it would be sensible to fuse them? I don't think the non-linguistic example of indirect negative evidence is necessarily bad; it might need to go in its own little paragraph that just explains what indirect negative evidence is. Perhaps there could be a section at the very beginning of the article that just explains the difference between direct negative evidence and indirect negative evidence in principle, so the reader has an easier time translating that into linguistic terms.
In Section 2 there is that example of direct negative evidence. The authors then surmise that this shows that "children are seemingly unable to detect differences between their ungrammatical sentences and the grammatical sentences that their parents produce". I don't think that's quite true. It may just as well show that children cannot instantly alter their *production* after negative evidence, or simply that one instance of correction is not enough to sufficiently alter already established subconscious grammar. In any case, the judgement here seems a bit absolute to me.
In Section 3.2, the paragraph on Gary Marcus' work, there is the following sentence: "He then shows that children typically do not repeat ungrammatical utterances in such high quantities; therefore, language learners would not be able to benefit from implicit evidence." I'm not sure what this is supposed to mean. Is the idea to say that children don't say enough ungrammatical things to be corrected enough times for those corrections to actually make a difference? Is there a way to phrase this so people who haven't read this paper can understand what is going on here? (Assuming you understand what they mean here.) RemoLing (talk) 04:05, 26 September 2019 (UTC)
copied from Negative evidence in language acquisition
Indirect negative evidence in language acquisition
Indirect negative evidence in language acquisition is information about the grammatical structure of a language that the learner draws from what they do not hear in speech. Indirect negative evidence serves as a means for child language learners to constrain and refine hypotheses that they previously held about the structure of their language.
- Let's add more to this because it's very short, and I'm not sure the average reader will understand what indirect negative evidence is based on these 2 sentences.
copied from Negative evidence in language acquisition
Examples of indirect negative evidence
General examples
Indirect negative evidence can be used for more than just language learning. The calculation of probability from observed occurrences, known as Bayesian inference, serves many purposes. For example, when we see a dog bark, we are likely to conclude that dogs bark, not that every kind of animal barks. Because we have never seen a horse, a fish, or any other animal bark, our hypothesis becomes that only dogs bark. We use the same inference to assume that the sun will rise tomorrow, having seen it rise every day so far. Nothing in the evidence rules out hypotheses such as that the sun fails to rise once every two thousand years, or that it rises in every year except 2086; but since all evidence seen so far is consistent with the universal generalization, we infer that the sun does indeed rise every day.
Examples in language acquisition
Regier & Gahl argue that certain poverty-of-the-stimulus problems could be overcome by using indirect negative evidence. For example, syntacticians argue that children could not learn that "one" is anaphoric to phrasal categories, given that the evidence they are exposed to is equally consistent with "one" being anaphoric only to individual words. This is because the meanings produced by the phrasal hypothesis are a subset of those produced by the single-word hypothesis. Children could thus use the fact that all of their data are consistent with both hypotheses as evidence in favor of the more restricted hypothesis.
- Reorganize this! Why start with non-linguistic examples when this is an article about language acquisition?
- After Regier and Gahl add info about this citation:
- ^ Rohde, Douglas L. T; Plaut, David C (1999-08-25). "Language acquisition in the absence of explicit negative evidence: how important is starting small?". Cognition. 72 (1): 67–109. doi:10.1016/S0010-0277(99)00031-1. ISSN 0010-0277.
- Talk about how keeping statistics over the input could overcome the lack-of-negative-evidence problem? Need to read more about this
- Papers from Ling 533 to cite:
Romberg & Saffran 2010
Yang 2004
- ^ Lust, Barbara (2007). Child Language: Acquisition and Growth. Cambridge University Press. pp. 30–31. ISBN 978-0521449229.
- ^ a b c d e f Regier, Terry; Gahl, Susanne (2004). "Learning the unlearnable: the role of missing evidence". Cognition. 93 (2): 147–155. doi:10.1016/j.cognition.2003.12.003. PMID 15147936.
- ^ a b Xu, Fei; Tenenbaum, Joshua (2007). "Word Learning as Bayesian Inference". Psychological Review. 114 (2): 245–272. CiteSeerX 10.1.1.57.9649. doi:10.1037/0033-295X.114.2.245. PMID 17500627.
- ^ Ambridge, Ben; Pine, J.M.; Rowland, C.F.; Young, C.R. (2008). "The effect of verb semantic class and verb frequency (entrenchment) on children's and adults' graded judgements of argument-structure overgeneralization errors". Cognition. 106 (1): 87–129. doi:10.1016/j.cognition.2006.12.015. hdl:11858/00-001M-0000-002B-4C4F-7. PMID 17316595.
- ^ Bowerman, Melissa (1988). "The 'no negative evidence' problem: How do children avoid constructing an overly general grammar?". In Hawkins, John A. (ed.). Explaining Language Universals (PDF). Oxford: Basil Blackwell. pp. 73–101.