How philosophers and scholars see cognitive artificial intelligence
Authors:
(1) Raphaël Millière, Department of Philosophy, Macquarie University ([email protected]);
(2) Cameron Buckner, Department of Philosophy, University of Houston ([email protected]).
Table of Links
Abstract and 1. Introduction
2. A primer on LLMs
2.1. Historical foundations
2.2. Transformer-based LLMs
3. Interfaces with classic philosophical issues
3.1. Compositionality
3.2. Nativism and language acquisition
3.3. Language understanding and grounding
3.4. World models
3.5. Cultural knowledge transmission and linguistic scaffolding
4. Conclusion, Glossary, and References
3. Interfaces with classic philosophical issues
Artificial neural networks, including earlier NLP architectures, have long been objects of philosophical inquiry, particularly among philosophers of mind, language, and science. Much of the philosophical discussion surrounding these systems concerns their adequacy as models of human cognition; specifically, whether they constitute better models of core cognitive processes than their classical, symbolic counterparts. Here, we review the main philosophical questions that have arisen concerning the role of artificial neural networks as models of intelligence, rationality, or cognition, with a focus on their current incarnation in ongoing debates about the capacities of Transformer-based LLMs.
Recent discussions have been dominated by a fallacious pattern of inference, which we call the "Redescription Fallacy." This fallacy arises when critics argue that a system cannot possibly model a given cognitive capacity simply because its operations can be described in less abstract, seemingly deflationary terms. In the present context, the fallacy manifests in claims that LLMs cannot be good models of some cognitive capacity 𝜙 because their operations merely consist in a series of statistical calculations, or linear algebra operations, or next-token predictions. Such arguments are valid only if accompanied by evidence showing that a system describable in these terms is thereby incapable of implementing 𝜙. To see why, consider the flawed logic of insisting that a piano cannot produce harmony because it can be described as a series of hammer strikes on strings, or that a brain cannot produce harmony because it can be described as a series of neural firings. The decisive question is not whether LLMs' operations can be described in a deflationary, non-mentalistic way, but whether the operations so described can implement the capacity in question.
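To make the deflationary description concrete, here is a minimal sketch of the final step of next-token prediction written in that very vocabulary. All dimensions and weights below are random, illustrative stand-ins rather than anything drawn from a real trained model; only the form of the computation matters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): vocabulary of 50 tokens, hidden size 16.
vocab_size, d_model = 50, 16

# Random stand-ins for quantities a trained LLM would actually compute:
h = rng.normal(size=d_model)                  # final hidden state at the current position
W_U = rng.normal(size=(d_model, vocab_size))  # unembedding (output projection) matrix

# Next-token prediction, described "deflationarily":
logits = h @ W_U                              # a single matrix-vector product
probs = np.exp(logits - logits.max())         # exponentiate (shifted for numerical stability)
probs /= probs.sum()                          # normalize: this is just softmax
next_token = int(np.argmax(probs))            # the most probable next token

print(next_token, float(probs[next_token]))
```

Every line here is "just" a matrix product or a normalized exponential. Whether a trained model composed of billions of such operations can implement a capacity 𝜙 is precisely the question this description leaves open.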
The Redescription Fallacy is a symptom of a broader tendency to treat substantive philosophical questions about artificial neural networks as purely theoretical, leading to sweeping in-principle claims. Hypotheses here should instead be guided by empirical evidence about the capacities of artificial neural networks such as LLMs and their adequacy as cognitive models (see Table 1). Indeed, considerations about the architecture, learning objective, model size, and training data of LLMs are not by themselves sufficient to arbitrate these issues. Our contention is that many core philosophical debates about the capacities of neural networks in general, and of LLMs in particular, hinge at least partly on empirical evidence concerning their internal mechanisms and the knowledge they acquire over the course of training. In other words, many of these debates cannot be settled a priori by considering general characteristics of untrained models; rather, we must take into account experimental findings about the behavior and inner workings of trained models.
In this section, we examine long-standing debates about the capacities of artificial neural networks that have been revived and transformed by the development of deep learning and the recent success of LLMs in particular. Behavioral evidence obtained from targeted benchmarks and experiments bears importantly on these debates. We note from the outset, however, that such evidence is not sufficient to paint the full picture; in connection with the concerns about Blockheads reviewed in Section 1, we must also consider evidence about how LLMs process information internally in order to bridge the gap between claims about their performance and claims about their competence. Sophisticated experimental methods have been developed to identify, and intervene on, the representations and computations that trained LLMs acquire. These methods hold great promise for arbitrating some of the philosophical issues reviewed here, beyond initial hypotheses supported by behavioral evidence alone. We leave a more detailed discussion of these methods and the corresponding experimental findings to Part II.
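To give a flavor of such methods, the sketch below shows the logic of a linear "probe" (can a property be read off a model's hidden states linearly?) followed by a simple representational intervention. The hidden states and labels are random stand-ins, so this illustrates the method's structure under stated assumptions, not any actual finding; real work extracts hidden states from a trained model, evaluates on held-out data, and then tests whether editing the representation changes the model's behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: n hidden states of dimension d, each paired with a binary property
# label (e.g., whether a sentence's subject is plural). In real work, H would be
# extracted from a trained LLM; here it is random, so only the logic matters.
n, d = 200, 32
H = rng.normal(size=(n, d))
w_true = rng.normal(size=d)                   # direction that, by construction, encodes the property
y = (H @ w_true > 0).astype(float)

# 1) Probing: fit a linear map from hidden states to the property by least
#    squares, then check how accurately the property can be decoded.
w, *_ = np.linalg.lstsq(H, y - y.mean(), rcond=None)
probe_acc = np.mean(((H @ w) > 0) == (y > 0))
print(f"probe accuracy: {probe_acc:.2f}")

# 2) Intervention: project the probe direction out of the hidden states. An
#    interventionist experiment would run the model on these edited states and
#    test whether its *behavior* changes, not merely its representations.
H_edited = H - np.outer(H @ w / (w @ w), w)
print(f"residual signal along probe: {np.abs(H_edited @ w).max():.2e}")  # ~0: direction removed
```

The division of labor matters: probing alone shows only that information is linearly decodable, while the interventionist step asks whether the model actually uses that information, which is why the causal variants of these methods carry most of the evidential weight discussed in Part II.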