- Title: Revealed Economic Preference and Intelligence of (Chat)GPT
- Abstract: As recommendation systems built on large language models such as (Chat)GPT become increasingly prevalent, it is essential to investigate their properties. In this paper, we assess the economic preferences and quality of GPT by conducting a portfolio choice experiment under risk. From the choice data generated by the experiment, we measure: 1) the severity of violations of the generalized axiom of revealed preference (GARP), 2) deviations from expected utility theory, 3) elation-seeking parameters, and 4) risk-aversion parameters. Our findings indicate that the choice data exhibits only minor violations of GARP and minor deviations from expected utility theory. Interestingly, it reveals elation-seeking choices, which are rare in choice data from human subjects. Moreover, GPT's choices are less risk-averse than those of human subjects. Next, we investigate whether GPT can effectively learn from choice data and provide recommendations that help individuals maximize their utility. Based on both simulated and human-subject datasets, GPT does not appear to learn well enough from the data to generate better recommendations. Finally, we examine whether a larger dataset or a higher GPT version improves recommendation quality. We also discuss the implications of our findings for AI regulation and preference aggregation.
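The first measure in the abstract, severity of GARP violations, can be illustrated with a minimal sketch. The function below is a hypothetical implementation, not the paper's actual procedure: it builds the directly-revealed-preferred relation from prices and chosen bundles, takes its transitive closure, and flags pairs where bundle `i` is revealed preferred to bundle `j` while `j` is strictly directly revealed preferred to `i` (a GARP violation). The tolerance `1e-9` guards against floating-point ties.

```python
import numpy as np
from itertools import product

def garp_violations(prices, bundles):
    """Return the list of pairs (i, j) that violate GARP.

    prices, bundles: arrays of shape (T, n_goods), where row t holds
    the prices faced and the bundle chosen in observation t.
    """
    p = np.asarray(prices, dtype=float)
    x = np.asarray(bundles, dtype=float)
    T = len(p)
    # cost[i, j] = expenditure on bundle x_j at prices p_i
    cost = p @ x.T
    # Direct revealed preference: x_i R0 x_j iff p_i.x_i >= p_i.x_j
    R = cost.diagonal()[:, None] >= cost - 1e-9
    # Transitive closure of R0 (Warshall's algorithm)
    for k in range(T):
        R |= R[:, k][:, None] & R[k, :][None, :]
    # Strict direct revealed preference: p_i.x_i > p_i.x_j
    strict = cost.diagonal()[:, None] > cost + 1e-9
    # GARP: x_i R x_j must rule out x_j strictly preferred to x_i
    return [(i, j) for i, j in product(range(T), repeat=2)
            if i != j and R[i, j] and strict[j, i]]
```

For example, choosing `(2, 2)` at prices `(1, 2)` and `(4, 1)` at prices `(2, 1)` yields a violation, since each bundle is revealed (weakly or strictly) preferred to the other. An empty list indicates consistency with GARP; severity indices such as Afriat's efficiency index would then measure how much budgets must be relaxed to remove all violations.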