<p>SUPPORT PRESENTATION</p><p>NEUROSCIENCE, PERCEPTION, AND CHILDHOOD</p><p>A study, drawing on psychology and the neurosciences, of the development of perception in childhood, and of how music, reading, and the arts affect child development.</p><p>Course syllabus</p><p>A leading philosopher and cognitive scientist. He is Professor of Philosophy and Samuel Rhea Gammon Professor of Liberal Arts at Texas A&M University, where he previously served as Dean of the College of Liberal Arts. Born in Bogotá, Colombia, he was educated at the University of Cambridge and has held positions at Cambridge, at the University of Stirling, and at Washington University in St Louis. Dr. Bermúdez's first book, “The Paradox of Self-Consciousness” (MIT Press, 1998), analyzed the nature of self-consciousness. “Thinking without Words” (Oxford UP, 2003) offered a model for thinking about the cognitive achievements and abilities of pre-linguistic infants and non-linguistic animals. “Decision Theory and Rationality” (Oxford UP, 2009) explores the tensions in how the concept of rationality is defined and formalized across different academic disciplines. Continuing his work on self-consciousness and self-reference, “Understanding “I”: Language and Thought” was published by Oxford UP in 2017 and “The Bodily Self: Selected Essays” by MIT Press in 2018.</p><p>JOSÉ LUIS BERMÚDEZ</p><p>Guest professor</p><p>THIAGO WENDT VIOLA</p><p>PUCRS professor</p><p>Holds a PhD (2018) and a Master's degree (2015) in Medical Sciences, in the area of Pediatrics and Child Health, from the Pontifical Catholic University of Rio Grande do Sul (PUCRS). He completed a sandwich (visiting) doctorate in the Cognitive Neuroepigenetics laboratory of the Department of Neurobiology and Behavior at the University of California, Irvine (USA). He is currently a postdoctoral researcher in the Graduate Program in Health Sciences, in the area of Neurosciences, at the PUCRS School of Medicine, with funding from the National Institutes of Health (NIH). His work centers on epigenetic investigation and genome-wide studies in neuropsychiatry. He has experience in research with humans and with animal models, as his focus has always been translational research on development.</p><p>Professors</p><p>Sessions and course summary</p><p>JOSÉ LUIS BERMÚDEZ</p><p>Guest professor</p><p>THIAGO WENDT VIOLA</p><p>PUCRS professor</p><p>What you actually perceive in a given situation is a function of the context in which it is embedded.</p><p>Learning is a process of developing theories.</p><p>JOSÉ LUIS BERMÚDEZ</p><p>Guest professor</p><p>The study of languages is very important to the development of cognitive science.</p><p>Repetition is the key to learning.</p><p>A single mechanism is responsible both for experiencing a certain kind of mental state and for recognizing it in other people.</p><p>Executive functioning also plays an important role in regulating our emotions.</p><p>The sensory pathways that provide the basis for the perceptual pathways need to develop faster in order to support other cognitive processes.</p><p>CLASS 1</p><p>CLASS 2</p><p>CLASS 3</p><p>OVERVIEW</p><p>José Luis Bermúdez</p><p>Texas A&M University</p><p>AIM OF THE CLASS</p><p>This class will explore two very different models in the cognitive science of development and learning</p><p>The first model derives from the computational model of the mind, associated with traditional artificial intelligence</p><p>The second model derives from connectionist modeling, 
loosely derived from the operation and organization of the brain</p><p>TOPICS</p><p>We will begin by looking at the two different models of mind and cognition</p><p>We will then look at how they each explain the process of cognitive development</p><p>We will work through the very different explanations they give of well-documented developmental progressions in early childhood</p><p>Language learning</p><p>Specifically: How children learn the past tenses of English verbs</p><p>Emergence of “folk physics” or “naïve physics”</p><p>Specifically:</p><p>(a) How children learn that objects continue to exist when unperceived</p><p>(b) How children learn to reason about weight, distance, and force</p><p>Emergence of “folk psychology”</p><p>Specifically: How do children learn to reason about other people’s beliefs, desires, and other mental states?</p><p>OUTLINE</p><p>Class 1</p><p>The computational model of mind</p><p>Class 2</p><p>Neurally-inspired models of mind</p><p>Class 3</p><p>Learning and development</p><p>Class 4</p><p>Learning the past tense</p><p>Class 5</p><p>Naïve physics</p><p>Class 6</p><p>Mindreading</p><p>THE COMPUTATIONAL MODEL OF MIND</p><p>CLASS 1</p><p>OUTLINE</p><p>Class 1</p><p>The computational model of mind</p><p>Class 2</p><p>Neurally-inspired models of mind</p><p>Class 3</p><p>Learning and development</p><p>Class 4</p><p>Learning the past tense</p><p>Class 5</p><p>Naïve physics</p><p>Class 6</p><p>Mindreading</p><p>THE BASIC IDEA OF COGNITIVE SCIENCE</p><p>Cognitive science starts off from the basic assumption that cognition is a form of information-processing</p><p>The assumption governs all levels of organization (from neurons upwards) and almost all explanatory models and hypotheses within the individual cognitive sciences</p><p>THREE QUESTIONS</p><p>In what format does a cognitive system carry information?</p><p>How does that system transform and process information?</p><p>How is the mind as a whole organized into information-processing sub-systems?</p><p>TWO MODELS OF INFORMATION-PROCESSING</p><p>• Turing machine model of information-processing</p><p>• associated with classical, symbolic AI</p><p>Computational models</p><p>• connectionism/artificial neural networks</p><p>• used to model cognitive/perceptual abilities that have posed problems for classical AI</p><p>Neurally-inspired models</p><p>1975 TURING AWARD</p><p>• Given by the Association for Computing Machinery to Allen Newell and Herbert Simon – pioneers of AI</p><p>Logic Theory Machine (1956)</p><p>General Problem Solver (1957)</p><p>• Newell and Simon used their Turing lecture to deliver a manifesto about the basic principles for studying intelligent information-processing</p><p>LAWS OF QUALITATIVE STRUCTURE</p><p>Basic principles governing individual sciences</p><p>Biology:</p><p>The cell is the basic building block of organisms</p><p>Geology:</p><p>Geological activity results from the movement of a small number of huge plates</p><p>AI/Cognitive Science:</p><p>The physical symbol system hypothesis</p><p>THE PHYSICAL SYMBOL SYSTEM HYPOTHESIS</p><p>• A physical symbol system has the necessary and sufficient means for intelligent action</p><p>Necessity: Anything capable of intelligent action is a physical symbol system</p><p>Sufficiency: Any (sufficiently sophisticated) PSS is capable of intelligent action</p><p>FOUR BASIC IDEAS</p><p>Symbols are physical patterns</p><p>Symbols can be combined to form complex symbol structures</p><p>The 
system contains processes for manipulating symbol structures</p><p>These can themselves be symbolically represented within the system</p><p>THINKING AND THE PSSH</p><p>The essence of intelligent thinking is the ability to solve problems</p><p>Intelligence is the ability to identify which option best matches certain requirements and constraints</p><p>Problem-solving is relative to a problem-space</p><p>SPECIFYING A PROBLEM IN AI</p><p>Basic components of a representation</p><p>• a description of the given situation</p><p>• operators for changing the situation</p><p>• a goal situation</p><p>• tests to determine whether the goal has been reached</p><p>Problem space = branching tree of achievable situations defined by potential application of operators to the initial situation</p><p>[e.g. chess]</p><p>PROBLEM-SOLVING</p><p>• Problem-spaces are generally too large to be searched exhaustively (by brute-force algorithms)</p><p>• Search must be selective – heuristic search rules</p><p>• effectively close off branches of the tree</p><p>• e.g. in chess: “ignore branches that start with a piece being lost without compensation”</p><p>COMBINATORIAL EXPLOSION!</p><p>• With n connected cities there are (n – 1)! 
possible paths through the search space</p><p>• This can be reduced to 2^n</p><p>• But it would take a computer processing 1,000,000 possibilities per second over 30 years to solve a 50-city TSP problem by brute-force search</p><p>HEURISTIC SEARCH HYPOTHESIS</p><p>• Problems are solved by generating and modifying symbol structures until a solution structure is reached</p><p>• GPS starts with symbolic descriptions of the start state and the goal state</p><p>• aims to find a sequence of admissible transformations that will transform the start state into the goal state</p><p>ILLUSTRATION: MISSIONARIES AND CANNIBALS</p><p>• Famous logic problem</p><p>Three missionaries and three cannibals are on one side of a river.</p><p>All must cross in a boat that can only take two passengers</p><p>If the cannibals outnumber the missionaries on either bank then the mission has failed</p><p>FUNDAMENTAL QUESTION</p><p>If intelligence is a process of solving problems through heuristic search, then what constraints does that place on how we think about the mind?</p><p>THE COMPUTATIONAL MODEL OF MIND</p><p>(1) Heuristic problem-solving is basically a process of hypothesis formation and testing</p><p>(2) Hypotheses can only be formulated in a language – they are linguistic symbol structures</p><p>(3) Hypothesis-testing works by applying rules to hypotheses and assessing the results</p><p>THE COMPUTATIONAL MODEL OF MIND</p><p>(4) The rules must also be linguistically formulated</p><p>(5) So – thinking is a process of manipulating linguistic symbol structures in a rule-governed way</p><p>(6) Direct analogy with formal logic and with the operation of digital computers</p><p>NEURALLY-INSPIRED MODELS OF MIND</p><p>José Luis 
Bermúdez</p><p>Texas A&M University</p><p>CLASS 2</p><p>OUTLINE</p><p>Class 1</p><p>The computational model of mind</p><p>Class 2</p><p>Neurally-inspired models of mind</p><p>Class 3</p><p>Learning and development</p><p>Class 4</p><p>Learning the past tense</p><p>Class 5</p><p>Naïve physics</p><p>Class 6</p><p>Mindreading</p><p>CHALLENGES FOR THE COMPUTATIONAL MODEL</p><p>(1) Massive challenges for rule-based and serial models of simple cognitive abilities</p><p>(2) Gap in the neuroscientist’s toolkit – the missing level of analysis</p><p>(3) Worries about the biological plausibility of physical symbol systems</p><p>CHALLENGE 1</p><p>• Many cognitive abilities are difficult to model in a rule-based way</p><p>Context effects in perception (explosion of rules)</p><p>Pattern completion/visual recognition</p><p>Language (particularly languages such as English with many irregular forms)</p><p>RULES VS SOFT CONSTRAINTS</p><p>• Much of cognition is governed by “soft” constraints</p><p>• Constraints that hold “for the most part” (ceteris paribus) – they don’t fully determine outcomes</p><p>• These are often in conflict with each other</p><p>• Hard to formulate rules governing which constraints should take precedence in which circumstances</p><p>• Many background constraints are never made explicit</p><p>PONZO ILLUSION</p><p>Conflicting constraints:</p><p>size constancy</p><p>depth cues</p><p>The visual system estimates size by multiplying retinal size by assumed distance</p><p>CHALLENGE 2</p><p>Tools</p><p>A Psychological techniques</p><p>B Neuroimaging</p><p>C ???</p><p>D Single neuron recording</p><p>THE MISSING LINK . . .</p><p>• Neuroimaging gives information about large-scale neural activity but not about how the activity of individual neurons generates that activity</p><p>• Likewise for activity measured through ERPs</p><p>• Single- and multi-unit recordings give information about the activity of individual neurons, but not about how populations of neurons function as information-processors</p><p>CHALLENGE 3</p><p>• There are major issues concerning the biological plausibility of the computational model of the mind</p><p>• The speed of serial processing</p><p>• Graceful degradation vs brittleness</p><p>• Learning</p><p>100-STEP RULE (FELDMAN & BALLARD)</p><p>• Many complicated information-processing tasks can be performed in a few hundred milliseconds</p><p>• E.g. visual recognition within 500 ms</p><p>• But the time it takes a neuron to generate an action potential is typically 3-5 ms</p><p>• There are not very many interesting programs that can run in 100 steps or so . . .</p><p>SERIAL PROCESSING</p><p>• Processing in physical symbol systems and digital computers is step-by-step</p><p>• a Turing machine can only read one cell at a time</p><p>• each line of computer code only applies one instruction</p><p>• Can the brain actually work this slowly?</p><p>• Distinction between computability in principle and practical computability</p><p>• Practical computability imposes real-time constraints</p><p>GRACEFUL DEGRADATION</p><p>• Brains respond in distinctive and flexible ways to damage and impairment</p><p>• characteristic partial breakdown patterns in response to local damage/lesions</p><p>• graceful degradation when cognitive abilities slowly deteriorate</p><p>• Symbolic computer programs either work or they don’t – they are brittle</p><p>LEARNING</p><p>• GOFAI and computational approaches typically model completed cognitive abilities</p><p>• There is a field of machine learning, but it is very specialized and narrow (e.g. data mining)</p><p>• Not easy to see how GOFAI programs could emerge in the normal course of human development</p><p>• This is connected to the lack of flexibility and the lack of a real-time dimension</p><p>MEETING THE CHALLENGES</p><p>• Artificial neural networks (AKA connectionist networks) are a way of modeling cognition that meets these challenges</p><p>• They rose to prominence during the 1980s</p><p>• The Deep Learning tools that are revolutionizing AI today are descendants of the original connectionist networks.</p><p>FEATURES OF CONNECTIONIST NETWORKS</p><p>• Processing is parallel, not serial</p><p>• Can be used to model multiple satisfaction of soft constraints</p><p>• Do not feature explicit (content-specific) rules</p><p>• Exhibit graceful degradation</p><p>• Capable of learning</p><p>Neurons 
and network units</p><p>Fig. 2</p><p>An artificial neuron:</p><p>Ii – input i</p><p>Wi – the weight attached to input i</p><p>T – the threshold of the neuron</p><p>X – the total input to the neuron</p><p>S – the output signal</p><p>LEARNING IN NEURAL NETWORKS</p><p>• Neural networks are important because they allow us to model how information-processing capacities are learnt</p><p>• Two types of learning</p><p>Supervised [requires feedback]</p><p>Unsupervised [no feedback]</p><p>UNSUPERVISED LEARNING</p><p>• The simplest algorithms for unsupervised learning are forms of Hebbian learning</p><p>• Basic principle: Neurons that fire together, wire together</p><p>• “When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.”</p><p>SUPERVISED LEARNING</p><p>• Basic distinction</p><p>Single-layer networks</p><p>Multilayer networks</p><p>• Different learning rules</p><p>• Only multilayer networks have hidden units</p><p>PERCEPTRON CONVERGENCE RULE</p><p>• Also known as the delta rule</p><p>• Distinct from Hebbian learning in that training depends upon the discrepancy between actual output and intended output</p><p>δ = error measure (intended output – actual output)</p><p>• The δ is used to update the weights and thresholds to reduce error</p><p>THE BASIC PROBLEM</p><p>• Multilayer networks can be constructed to compute any function that can be computed by a Turing machine, but they cannot be trained using the perceptron convergence rule</p><p>• Single-unit networks can be trained, but there are certain types of basic function that they cannot compute</p><p>• What was needed was an algorithm for training multilayer networks</p><p>BACKPROPAGATION</p><p>• Information is transmitted forwards through the network</p><p>• Error is propagated backwards through the network</p><p>• The backpropagated error signal is used to adjust the weights to/from the hidden units</p><p>BACKPROP</p><p>• The algorithm needs to find a way of calculating error in hidden units that do not have target activation levels</p><p>• It does this by calculating for each hidden unit its degree of “responsibility” for error at the output units</p><p>• This error value is used to adjust the weights of the hidden units</p><p>GRADIENT DESCENT LEARNING</p><p>• GD learning rules calculate the slope of the error curve at a particular point</p><p>• A negative slope means that the weight needs to be increased, etc.</p><p>• Learning stops when the slope is zero</p><p>BACKPROP AND LOCAL MINIMA</p><p>• The basic problem with backpropagation is that multilayer networks are not guaranteed to have error curves with a single error minimum (i.e. 
zero slope)</p><p>• The network can get stuck in a local minimum</p><p>BACKPROP: BIOLOGICAL PLAUSIBILITY</p><p>• No evidence that backpropagation takes place in the brain</p><p>• Setting the number of hidden units is crucial – how would the brain determine this?</p><p>• No evidence that individual neurons receive error signals from all the neurons to which they are connected</p><p>BACK TO THE CHALLENGES</p><p>CHALLENGES FOR THE COMPUTATIONAL MODEL</p><p>(1) Massive challenges for rule-based and serial models of simple cognitive abilities – predominantly met by ANNs</p><p>(2) Gap in the neuroscientist’s toolkit and the missing level of analysis – ANNs fill the gap from a modeling perspective</p><p>(3) Worries about the biological plausibility of physical symbol systems – partially met by ANNs</p><p>LEARNING AND DEVELOPMENT</p><p>José Luis Bermúdez</p><p>Texas A&M University</p><p>CLASS 3</p><p>OUTLINE</p><p>Class 1</p><p>The computational model of mind</p><p>Class 2</p><p>Neurally-inspired models of mind</p><p>Class 3</p><p>Learning and development</p><p>Class 4</p><p>Learning the past tense</p><p>Class 5</p><p>Naïve physics</p><p>Class 6</p><p>Mindreading</p><p>TWO MODELS OF INFORMATION-PROCESSING</p><p>• Turing machine model of information-processing</p><p>• associated with classical, symbolic AI</p><p>Computational models</p><p>• neurally-inspired models of information-processing</p><p>• used to model cognitive/perceptual abilities that have posed problems for classical AI</p><p>Neurally-inspired models</p><p>TWO MODELS OF LEARNING AND DEVELOPMENT</p><p>• Information processing is symbolic and rule-governed</p><p>• Knowledge is fundamentally theory-like</p><p>• Learning is a process of 
developing theories</p><p>Computational model</p><p>• Information-processing is not symbolic</p><p>• Learning is a process of gradually changing neural firing patterns (weights and thresholds)</p><p>• Networks (and brains) act in accordance with rules, but not by consulting them</p><p>Neurally-inspired models</p><p>LANGUAGE AND RULES</p><p>• Speaking and understanding a natural language is a paradigm of a rule-governed activity</p><p>– Grammatical rules that govern how words can be put together to form meaningful sentences</p><p>– Rules that govern the deep structure of languages in Chomskyan linguistics</p><p>THE DEFAULT HYPOTHESIS</p><p>Mastering a language is at bottom a process of mastering the basic rules that govern a language</p><p>Phonological rules</p><p>Syntactic rules</p><p>Semantic rules</p><p>DEVELOPING THE DEFAULT HYPOTHESIS</p><p>• Everything depends upon what counts as mastering a rule</p><p>Maximalist conception: Mastering a rule requires explicitly representing the rule</p><p>Minimalist conception: There is no more to mastering a rule than using words in accordance with the rule</p><p>THE MAXIMALIST CONCEPTION</p><p>• Most explicitly developed by the philosopher and cognitive scientist Jerry Fodor (1935 – 2017)</p><p>• Closely connected for Fodor (and others) with versions of innatism/nativism</p><p>• Fodor argued that linguistic rules (and all other representations) need to be represented in an innate language of thought</p><p>FODOR’S ARGUMENT FOR LOT (1)</p><p>• It is impossible to understand a language without understanding what it is for sentences to be true – their truth conditions</p><p>(TC1) The sentence “Austin is the capital of Texas” is true if and only if Austin is the capital of Texas</p><p>(TC2) The sentence “Snow is white” is true if and only if 
snow is white</p><p>FODOR’S ARGUMENT FOR LOT (2)</p><p>• But the truth conditions as stated cannot explain what it is to learn a language</p><p>(TC1) The sentence “Austin is the capital of Texas” is true if and only if Austin is the capital of Texas</p><p>• Somebody cannot understand TC1 unless they already understand the sentence “Austin is the capital of Texas”</p><p>FODOR’S ARGUMENT FOR LOT (3)</p><p>• In order to explain how learning is possible, we need to invoke not truth conditions but what Fodor calls truth rules</p><p>(TR1) The sentence “Austin is the capital of Texas” is true if and only if X is F</p><p>where ‘X’ is another name for Austin and ‘F’ is a synonym for ‘– is the capital of Texas’</p><p>FODOR’S ARGUMENT FOR LOT (4)</p><p>• The language in which the truth rule is stated cannot be the same as the language being learnt</p><p>• Nor can it be any public language that has to be learned</p><p>• So, truth rules must be stated in an innate language of thought</p><p>• Fodor thinks of Mentalese as a quasi-logical language</p><p>THE BIG PICTURE</p><p>• Fodor thinks of language-learning as a process of hypothesis-testing</p><p>• Children formulate hypotheses about semantic and syntactic rules in Mentalese</p><p>• This idea of children as “little scientists” generalizes beyond language-learning</p><p>GOPNIK: ‘THE CHILD AS SCIENTIST’</p><p>This paper argues that there are powerful similarities between cognitive development in children and scientific theory change. These similarities are best explained by postulating an underlying abstract set of rules and representations that underwrite both types of cognitive abilities. 
In fact, science may be successful largely because it exploits powerful and flexible cognitive devices that were designed by evolution to facilitate learning in young children. Both science and cognitive development involve abstract, coherent systems of entities and rules – theories. In both cases, theories provide predictions, explanations, and interpretations. In both, theories change in characteristic ways in response to counterevidence.</p><p>PARALLEL IDEAS</p><p>(1) We should think of different types of knowledge in terms of different sets of rules and representations.</p><p>(2) These rules and representations are organized in theory-like structures</p><p>(3) The process of learning is a quasi-theoretical process of hypothesis formation and testing</p><p>MODULARITY</p><p>• These bodies of knowledge are sometimes called domain-specific modules</p><p>• The idea is that we have specialized modules for distinctive types of problems and interactions</p><p>• Some cognitive scientists think that at least some modules are supported by specialized neural systems</p><p>CANDIDATE MODULES</p><p>• The language module/language faculty</p><p>• Folk physics</p><p>• Folk psychology</p><p>• Folk biology</p><p>TWO MODELS OF INFORMATION-PROCESSING</p><p>• Turing machine model of information-processing</p><p>• associated with classical, symbolic AI</p><p>Computational models</p><p>• neurally-inspired models of information-processing</p><p>• used to model cognitive/perceptual abilities that have posed problems for classical AI</p><p>Neurally-inspired models</p><p>TWO MODELS OF LEARNING AND DEVELOPMENT</p><p>• Information processing is symbolic and rule-governed</p><p>• Knowledge is fundamentally theory-like</p><p>• Learning is a process of developing 
theories</p><p>Computational model</p><p>• Information-processing is not symbolic</p><p>• Learning is a process of gradually changing neural firing patterns (weights and thresholds)</p><p>• Networks (and brains) act in accordance with rules, but not by consulting them</p><p>Neurally-inspired models</p><p>AN ALTERNATIVE PICTURE</p><p>• Neural network models support a view on which there are specialized bodies of knowledge and cognitive abilities, but these should not be understood as theory-like</p><p>• Learning is not a process of internalizing rules</p><p>• More general learning mechanisms can explain both the cognitive abilities and the process by which those abilities are acquired in the normal course of development.</p><p>LEARNING</p><p>• On the alternative picture, we can understand much of what goes on in learning and development in terms of more general-purpose learning mechanisms</p><p>• The mechanisms are applied to particular domains, but not in a way that involves mastering domain-specific rules/theories</p><p>• Learning mechanisms are analogous to the processes whereby weights and thresholds change in neural networks</p><p>TWO MODELS OF HOW YOUNG CHILDREN LEARN</p><p>1) The child as a little scientist, testing hypotheses and constructing theories</p><p>2) The child as a pattern detector, learning to become sensitive to subtle differences in input as a function of feedback</p><p>THE IMPORTANCE OF MODELING</p><p>• There is no direct evidence for any particular model of the information-processing</p><p>• It is perfectly possible (perhaps likely) that children are both little scientists and pattern associators, depending on the domain</p><p>• All we can do is consider particular domains/abilities and see which model seems to fit best with the documented developmental 
trajectory</p><p>LOOKING AHEAD</p><p>We will be comparing the two different models of learning in different areas:</p><p>Language learning</p><p>Folk physics</p><p>Folk psychology</p><p>LANGUAGE LEARNING</p><p>• Here we will look at the special case of how children learn the past tense of English verbs</p><p>• There is a very specific developmental trajectory that young children go through, which at first sight seems to support the rule-based, theoretical model</p><p>• But artificial neural networks can replicate the developmental trajectory without having any rules explicitly coded into them.</p><p>FOLK PHYSICS</p><p>• The same pattern appears in studies of how infants learn about how objects behave</p><p>• There are well-understood developmental trajectories that have been experimentally demonstrated</p><p>• Again, they seem initially to support a rule-based approach, but neural networks can replicate the trajectory</p><p>FOLK PSYCHOLOGY</p><p>• This is the area where the “theory theory” has been most developed</p><p>• There is a huge body of research on the acquisition of “mindreading” skills, widely interpreted to support the idea that children acquire a “theory of mind”</p><p>• The counterbalance here comes from versions of the simulation theory of mindreading</p>
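<p>The perceptron convergence ("delta") rule described in Class 2 can be made concrete in a few lines of code. The sketch below is illustrative only: the single-unit network, the AND task, the learning rate, and the epoch count are choices made for the example, not taken from the lectures. It shows the core idea on the slides: weights and the threshold are nudged by δ, the difference between intended and actual output, until the unit's errors disappear.</p>

```python
# Minimal sketch of the perceptron convergence ("delta") rule.
# The task (logical AND), learning rate, and epoch count are illustrative
# assumptions, not details from the lecture slides.

def step(x, weights, threshold):
    """Fire (output 1) if the weighted input sum reaches the threshold."""
    total = sum(w * xi for w, xi in zip(weights, x))
    return 1 if total >= threshold else 0

def train_perceptron(samples, epochs=20, rate=0.1):
    """Train a single-layer unit with the delta rule on (input, target) pairs."""
    weights, threshold = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            delta = target - step(x, weights, threshold)  # intended - actual
            weights = [w + rate * delta * xi for w, xi in zip(weights, x)]
            threshold -= rate * delta  # pushing output up means lowering the threshold
    return weights, threshold

# Logical AND is linearly separable, so the delta rule converges on it;
# XOR is not, which is the "basic problem" that motivated multilayer nets.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, t = train_perceptron(AND)
print([step(x, w, t) for x, _ in AND])  # the trained unit reproduces the AND targets
```

<p>A useful exercise is to replace the AND targets with XOR targets and observe that the errors never disappear, no matter how many epochs are run; this is exactly the limitation of single-layer networks noted on the "THE BASIC PROBLEM" slide.</p>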