

Results

Our main interest is in the argument that AI, by reducing the cost of accessing certain kinds of information, can leave us worse off. Unlike the model-collapse literature, we consider conditions under which strategic humans may seek out the inputs that preserve the full distribution of knowledge. We therefore begin by examining different discount rates. First, we present kernel density estimates of public knowledge at the end of 100 rounds (Figure 3). As a baseline, when there is no discount on using AI (a discount rate of 1), public knowledge converges to the true distribution.[9] As AI reduces the cost of truncated knowledge, the distribution of public knowledge collapses toward the center, with tail knowledge underrepresented. Under these conditions, over-reliance on AI-generated content over time erodes the rare and unusual viewpoints that sustain a comprehensive picture of the world.

Figure 3: Knowledge collapse: the greater the reliance on AI-generated content, the more public knowledge collapses toward the center.
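As a rough illustration of this sampling process (a sketch, not the paper's exact implementation), the Python snippet below pools draws that come either from the full true distribution, here assumed standard normal, or from a cheaper, truncated "AI" source confined to within sigma_tr of the mean. The function names and the fixed AI-reliance share p_ai are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_sample(p_ai, sigma_tr):
    """One knowledge sample: a truncated 'AI' draw with probability p_ai,
    otherwise a draw from the full true distribution N(0, 1)."""
    if rng.random() < p_ai:
        # Cheap AI-generated source: rejection-sample the central region only
        x = rng.normal()
        while abs(x) > sigma_tr:
            x = rng.normal()
        return x
    return rng.normal()  # full-cost draw from the complete distribution

def public_knowledge(rounds, p_ai, sigma_tr=0.75):
    """Pool of samples accumulated over `rounds`, standing in for public knowledge."""
    return np.array([draw_sample(p_ai, sigma_tr) for _ in range(rounds)])

# Heavier reliance on the truncated source narrows the pooled distribution
print(public_knowledge(5000, p_ai=0.0).std())   # ~1.0: full distribution retained
print(public_knowledge(5000, p_ai=0.8).std())   # noticeably smaller: tails lost
```

In this toy version the narrowing shows up as a shrinking standard deviation of the pooled samples; in the results above it shows up as the collapse of the kernel density toward the center.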

Fixing specific parameters, we can get a sense of the magnitude of the effect of reliance on AI. For example, in our default model,[10] after nine generations with no discount on AI, the public distribution is at a distance of 0.09 from the true distribution.[11] When AI-generated content is 20% cheaper (a discount rate of 0.8), the distance exceeds 0.22, while a 50% discount increases it to 0.40. Thus, while the availability of cheap AI approximations might appear only to add to public knowledge, under these conditions public knowledge ends up 2.3 or 3.2 times further from the truth as a result of reliance on AI.
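The distances quoted here are Hellinger distances between the public distribution and the true one. A minimal way to compute this, assuming both distributions are discretized onto the same bins (the binning choices below are illustrative), is:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions over the same bins.

    p and q are probability vectors summing to 1; the distance lies in [0, 1]."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Example: compare a histogram of pooled samples with a discretized N(0, 1)
bins = np.linspace(-4, 4, 81)
centers = 0.5 * (bins[:-1] + bins[1:])
true_p = np.exp(-centers**2 / 2)
true_p /= true_p.sum()

samples = np.random.default_rng(1).normal(size=10_000)
hist, _ = np.histogram(samples, bins=bins)
emp_q = hist / hist.sum()

print(round(hellinger(true_p, emp_q), 3))  # small: the empirical pool is close to the truth
```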

For the subsequent results illustrating the trade-offs across different parameters, we plot the Hellinger distance between public knowledge at the end of the 100th round and the true distribution. First, in Figure 4, we examine the importance of updating on the relative value of samples and its relationship with the discount factor; the fastest updating shown is lr = 0.1. As noted above, the cheaper the AI-generated content (discount rates shown in color), the more public knowledge collapses toward the center. At the same time, the more slowly individuals update on the relative value of learning from AI (further to the left in the figure), the more public knowledge collapses. We also observe a trade-off: faster updating on the relative value of AI-generated content can compensate for steeper discount rates. Conversely, if the discount rate is not too severe, slower updating on relative values is not especially harmful.

Figure 4: Discount rate and learning rate
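The "updating on relative value" in Figure 4 can be read as a simple learning-rate rule. The exponential-moving-average form below is an assumed illustration (the paper's exact update rule may differ), with lr = 0.05 as the default from footnote [10] and lr = 0.1 as the fastest rate shown.

```python
def update_relative_value(estimate, observed_payoff, lr=0.05):
    """Move the agent's estimate of the relative value of AI-generated
    samples a fraction `lr` of the way toward the payoff just observed."""
    return (1 - lr) * estimate + lr * observed_payoff

# Faster updating (larger lr) corrects an overly optimistic prior sooner,
# which is why it can offset a steeper discount in Figure 4.
estimate = 1.0             # start out treating AI samples as fully valuable
for payoff in [0.4] * 10:  # repeatedly observe that they are worth less
    estimate = update_relative_value(estimate, payoff, lr=0.1)
print(round(estimate, 3))  # ~0.609: well on its way from 1.0 toward 0.4
```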

Figure 5: Discount rate and truncation threshold

In Figure 5, we consider how variation in the severity of truncation of AI-generated content affects knowledge collapse. Intuitively, severe truncation (small values of σtr) corresponds to the case where the AI summarizes, for example, only the most mainstream or widely shared perspective. Less severe truncation corresponds to an AI that can represent a variety of perspectives and excludes only very rare or obscure views. Naturally, in the latter case (for example, if the distribution is truncated only far out in the tails), the effect is minimal. If the AI truncates knowledge beyond 0.25 standard deviations from the mean, the effect is large, although this too is at least moderated when the discount is smaller (especially if there is no generational effect).
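For concreteness, truncated "AI" draws at different σtr values can be generated with scipy's truncnorm, assuming a standard-normal base distribution (the helper name ai_sample is illustrative):

```python
from scipy.stats import truncnorm

def ai_sample(sigma_tr, size, seed=None):
    """Draws from N(0, 1) truncated to |x| <= sigma_tr: smaller sigma_tr
    means the AI returns only the most central, mainstream perspectives."""
    return truncnorm.rvs(-sigma_tr, sigma_tr, size=size, random_state=seed)

print(ai_sample(0.25, 3, seed=1))  # harsh truncation: values all near 0
print(ai_sample(2.00, 3, seed=1))  # mild truncation: only the far tails removed
```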

We compare the effect of compounding errors across generations in Figure 6. Without generational turnover, the distribution is stable rather than "collapsing"; that is, the problem does not grow gradually worse over time. We see a jump from this baseline to the situation with generational turnover, although the frequency of turnover (every 3, 5, 10, or 20 rounds) makes little difference.
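The compounding effect of generational turnover can be illustrated with a toy loop in which, every gen_every rounds, the source distribution is re-fit to the current pool of samples, so truncation losses accumulate. This is an assumed simplification for illustration, not the paper's exact mechanism, and the parameter names mirror the sketches above.

```python
import numpy as np

rng = np.random.default_rng(3)

def run(rounds=100, gen_every=10, sigma_tr=0.75, p_ai=0.5):
    """Toy illustration of compounding across generations."""
    mu, sigma = 0.0, 1.0  # what the current generation treats as "the truth"
    pool = []
    for t in range(1, rounds + 1):
        if rng.random() < p_ai:  # truncated, AI-like draw
            x = rng.normal(mu, sigma)
            while abs(x - mu) > sigma_tr * sigma:
                x = rng.normal(mu, sigma)
        else:                    # full-distribution draw
            x = rng.normal(mu, sigma)
        pool.append(x)
        if t % gen_every == 0:   # generational turnover: refit to the pooled samples
            mu, sigma = np.mean(pool), max(np.std(pool), 1e-6)
    return np.array(pool)

print(run().std())  # tends to fall well below 1 as truncation compounds
```

Without the refitting step, each round draws from the same fixed truth and the pooled distribution stabilizes, matching the stable baseline described above.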


[9] Even with no discount, there is some sampling from the truncated distribution, but only enough to learn that such samples are relatively less valuable than samples from the full distribution.

[10] Truncation at σtr = 0.75 standard deviations from the mean, generations every 10 rounds, and a learning rate of 0.05.

[11] Even here there is some sampling from the truncated distribution, enough to learn that these samples are less valuable than samples from the full distribution.
