50 Interdisciplinary AI Research Papers Worth Reading in 2024



  1. Choice engines and paternalistic AI.

    Sunstein, C. R. (2024). Choice engines and paternalistic AI. Humanities and Social Sciences Communications, 11(1), 1-4.


  2. Unveiling the Mind of the Machine.

    Clegg, M., Hofstetter, R., de Bellis, E., & Schmitt, B. H. (2024). Unveiling the Mind of the Machine. Journal of Consumer Research, 51(2), 342-361.

    We often try to understand how consumers respond to humans vs. algorithms. But a really intriguing question is: how do we respond to different algorithms?


  3. When combinations of humans and AI are useful: A systematic review and meta-analysis.

    Vaccaro, M., Almaatouq, A., & Malone, T. (2024). When combinations of humans and AI are useful: A systematic review and meta-analysis. Nature Human Behaviour, 1-11.


  4. Studying and improving reasoning in humans and machines.

    Yax, N., Anlló, H., & Palminteri, S. (2024). Studying and improving reasoning in humans and machines. Communications Psychology, 2(1), 51.


  5. Deploying artificial intelligence in services to AID vulnerable consumers.

    Hermann, E., Williams, G. Y., & Puntoni, S. (2023). Deploying artificial intelligence in services to AID vulnerable consumers. Journal of the Academy of Marketing Science, 1-21.

    AI could have an impact in helping different people facing different kinds of problems: vulnerable consumers. In this paper, the authors illustrate the ways in which this can happen.


  6. The consequences of generative AI for online knowledge communities.

    Burtch, G., Lee, D., & Chen, Z. (2024). The consequences of generative AI for online knowledge communities. Scientific Reports, 14(1), 10413.



  7. The potential of generative AI for personalized persuasion at scale.

    What happens if more and more AI-generated messages start spreading? Can they persuade us? And how?

    Matz, S. C., Teeny, J. D., Vaid, S. S., Peters, H., Harari, G. M., & Cerf, M. (2024). The potential of generative AI for personalized persuasion at scale. Scientific Reports, 14(1), 4692.


  8. Algorithmic management diminishes status: An unintended consequence of using machines to perform social roles.

    Jago, A. S., Raveendhran, R., Fast, N., & Gratch, J. (2024). Algorithmic management diminishes status: An unintended consequence of using machines to perform social roles. Journal of Experimental Social Psychology, 110, 104553.


  9. AI-induced dehumanization.

    Kim, H. Y., & McGill, A. L. (2024). AI‐induced dehumanization. Journal of Consumer Psychology.


  10. AI Companions Reduce Loneliness.

    De Freitas, J., Uğuralp, A. K., Uğuralp, Z., & Puntoni, S. (2024). AI companions reduce loneliness. (working paper)

    I talked about this study in a dedicated issue! You can find it here.


  11. Psychology of AI: How AI impacts the way people feel, think, and behave.

    Williams, G. Y., & Lim, S. (2024). Psychology of AI: How AI impacts the way people feel, think, and behave. Current Opinion in Psychology, 101835.


  12. Generative AI enhances individual creativity but reduces the collective diversity of novel content.

    Doshi, A. R., & Hauser, O. P. (2024). Generative AI enhances individual creativity but reduces the collective diversity of novel content. Science Advances, 10(28), eadn5290.

    I talked about this study in a dedicated issue! You can find it here.


  13. Climate-invariant machine learning.

    Beucler, T., Gentine, P., Yuval, J., Gupta, A., Peng, L., Lin, J., ... & Pritchard, M. (2024). Climate-invariant machine learning. Science Advances, 10(6), eadj7250.


  14. Applying AI to Rebuild Middle Class Jobs.

    Autor, D. (2024). Applying AI to rebuild middle class jobs (No. w32140). National Bureau of Economic Research. (working paper).


  15. Imperfectly Human: The Humanizing Potential of (Corrected) Errors in Text-Based Communication.

    Bluvstein, S., Zhao, X., Barasch, A., & Schroeder, J. (2024). Imperfectly Human: The Humanizing Potential of (Corrected) Errors in Text-Based Communication. Journal of the Association for Consumer Research, 9(3), 000-000.



  16. Artificial Intelligence in Marketing: From Computer Science to Social Science.

    Puntoni, S. (2024). Artificial Intelligence in Marketing: From Computer Science to Social Science. Journal of Macromarketing, 44(4), 883-885.


  17. AI is Changing the World: For Better or for Worse?

    Grewal, D., Guha, A., & Becker, M. (2024). AI is Changing the World: For Better or for Worse?. Journal of Macromarketing, 02761467241254450.

    What are the grand challenges that AI poses? This paper will enlighten you with a broad perspective (and it has been followed by several interesting commentaries).


  18. The health risks of generative AI-based wellness apps.

    De Freitas, J., & Cohen, I. G. (2024). The health risks of generative AI-based wellness apps. Nature Medicine, 1-7.


  19. Chatbots and mental health: Insights into the safety of generative AI.

    De Freitas, J., Uğuralp, A. K., Oğuz‐Uğuralp, Z., & Puntoni, S. (2024). Chatbots and mental health: Insights into the safety of generative AI. Journal of Consumer Psychology, 34(3), 481-491.


  20. Frontiers: Can Large Language Models Capture Human Preferences?

    Goli, A., & Singh, A. (2024). Frontiers: Can Large Language Models Capture Human Preferences?. Marketing Science.


  21. Large language models can infer psychological dispositions of social media users.

    Peters, H., & Matz, S. C. (2024). Large language models can infer psychological dispositions of social media users. PNAS nexus, 3(6), pgae231.


  22. Does thinking about God increase acceptance of artificial intelligence in decision-making?

    Moore, D. A., Schroeder, J., Bailey, E. R., Gershon, R., Moore, J. E., & Simmons, J. P. (2024). Does thinking about God increase acceptance of artificial intelligence in decision-making?. Proceedings of the National Academy of Sciences, 121(31), e2402315121.

    How does religiosity influence our acceptance of AI?


  23. Theorizing with Large Language Models.

    Tranchero, M., Brenninkmeijer, C. F., Murugan, A., & Nagaraj, A. (2024). Theorizing with large language models (No. w33033). National Bureau of Economic Research. (working paper)

    For people who, like me, really enjoy digging deep into theories, this paper will be a delight.


  24. Human vs. Generative AI in Content Creation Competition: Symbiosis or Conflict?

    Yao, F., Li, C., Nekipelov, D., Wang, H., & Xu, H. (2024). Human vs. Generative AI in Content Creation Competition: Symbiosis or Conflict?. arXiv preprint arXiv:2402.15467.


  25. Durably reducing conspiracy beliefs through dialogues with AI.

    Costello, T. H., Pennycook, G., & Rand, D. G. (2024). Durably reducing conspiracy beliefs through dialogues with AI. Science, 385(6714), eadq1814.


  26. How developments in natural language processing help us in understanding human behaviour.

    Mihalcea, R., Biester, L., Boyd, R. L., Jin, Z., Perez-Rosas, V., Wilson, S., & Pennebaker, J. W. (2024). How developments in natural language processing help us in understanding human behaviour. Nature Human Behaviour, 8(10), 1877-1889.


  27. AI can help humans find common ground in democratic deliberation.

    Tessler, M. H., Bakker, M. A., Jarrett, D., Sheahan, H., Chadwick, M. J., Koster, R., ... & Summerfield, C. (2024). AI can help humans find common ground in democratic deliberation. Science, 386(6719), eadq2852.


  28. Promises and challenges of generative artificial intelligence for human learning.

    Yan, L., Greiff, S., Teuber, Z., & Gašević, D. (2024). Promises and challenges of generative artificial intelligence for human learning. Nature Human Behaviour, 8(10), 1839-1850.


  29. Perils and opportunities in using large language models in psychological research.

    Abdurahman, S., Atari, M., Karimi-Malekabadi, F., Xue, M. J., Trager, J., Park, P. S., ... & Dehghani, M. (2024). Perils and opportunities in using large language models in psychological research. PNAS nexus, 3(7), pgae245.

    Can AI really help psychological research effectively and responsibly? If so, how?


  30. The Caring Machine: Feeling AI for Customer Care.

    Huang, M. H., & Rust, R. T. (2024). The caring machine: Feeling AI for customer care. Journal of Marketing, 00222429231224748.

    An enlightening, holistic view of the relationship between consumers and chatbots with increasingly advanced response capabilities.


  31. Avoiding embarrassment online: Response to and inferences about chatbots when purchases activate self‐presentation concerns.

    Jin, J., Walker, J., & Reczek, R. W. (2024). Avoiding embarrassment online: Response to and inferences about chatbots when purchases activate self‐presentation concerns. Journal of Consumer Psychology.


  32. Bright and dark imagining: How creators navigate moral consequences of developing ideas for artificial intelligence.

    Hagtvedt, L. P., Harvey, S., Demir-Caliskan, O., & Hagtvedt, H. (2024). Bright and dark imagining: How creators navigate moral consequences of developing ideas for artificial intelligence. Academy of Management Journal, (ja), amj-2022.

    Do creators integrate their feelings about AI into what they actually make?


  33. A new sociology of humans and machines.

    Tsvetkova, M., Yasseri, T., Pescetelli, N., & Werner, T. (2024). A new sociology of humans and machines. Nature Human Behaviour, 8(10), 1864-1876.


  34. Quantifying the use and potential benefits of artificial intelligence in scientific research.

    Gao, J., & Wang, D. (2024). Quantifying the use and potential benefits of artificial intelligence in scientific research. Nature Human Behaviour, 1-12.


  35. We need to understand the effect of narratives about generative AI.

    Gilardi, F., Kasirzadeh, A., Bernstein, A., Staab, S., & Gohdes, A. (2024). We need to understand the effect of narratives about generative AI. Nature Human Behaviour, 1-2.


  36. Generative AI in innovation and marketing processes: A roadmap of research opportunities.

    Cillo, P., & Rubera, G. (2024). Generative AI in innovation and marketing processes: A roadmap of research opportunities. Journal of the Academy of Marketing Science, 1-18.


  37. How Can Deep Neural Networks Inform Theory in Psychological Science?

    McGrath, S. W., Russin, J., Pavlick, E., & Feiman, R. (2024). How Can Deep Neural Networks Inform Theory in Psychological Science?. Current Directions in Psychological Science, 33(5), 325-333.


  38. The inversion problem: Why algorithms should infer mental state and not just predict behavior

    Kleinberg, J., Ludwig, J., Mullainathan, S., & Raghavan, M. (2024). The inversion problem: Why algorithms should infer mental state and not just predict behavior. Perspectives on Psychological Science, 19(5), 827-838.

    Should Netflix recommend the Christmas comedy similar to the last movie you watched, or the indie film that has been sitting on your watchlist but that you can never bring yourself to start?


  39. Being Human in the Age of AI.

    Puntoni, S., & Wertenbroch, K. (2024). Being Human in the Age of AI. Journal of the Association for Consumer Research, 9(3), 000-000.


  40. On the Future of Content in the Age of Artificial Intelligence: Some Implications and Directions.

    Floridi, L. (2024). On the Future of Content in the Age of Artificial Intelligence: Some Implications and Directions. Philosophy & Technology, 37(3), 112.

    What does it mean to reflect deeply on the content and ethics of AI?


  41. People see more of their biases in algorithms.

    Celiktutan, B., Cadario, R., & Morewedge, C. K. (2024). People see more of their biases in algorithms. Proceedings of the National Academy of Sciences, 121(16), e2317602121.


  42. The Simple Macroeconomics of AI.

    Acemoglu, D. (2024). The Simple Macroeconomics of AI (No. w32487). National Bureau of Economic Research. (working paper)

    Considerations that open up a whole world of reflection.


  43. Navigating the Future of Work: Perspectives on Automation, AI, and Economic Prosperity.

    Brynjolfsson, E., Thierer, A., & Acemoglu, D. (2024). Navigating the Future of Work: Perspectives on Automation, AI, and Economic Prosperity.


  44. Cyborgs, Centaurs and Self Automators: Human-GenAI Fused, Directed and Abdicated Knowledge Co-Creation Processes and Their Implications for Skilling

    Randazzo, S., Lifshitz-Assaf, H., Kellogg, K., Dell'Acqua, F., Mollick, E. R., & Lakhani, K. R. (2024). Cyborgs, Centaurs and Self Automators: Human-GenAI Fused, Directed and Abdicated Knowledge Co-Creation Processes and Their Implications for Skilling (August 08, 2024).

    When using ChatGPT, Gemini, Claude etc., are you a Cyborg or a Centaur (or something else)?


  45. How artificial intelligence constrains the human experience.

    Valenzuela, A., Puntoni, S., Hoffman, D., Castelo, N., De Freitas, J., Dietvorst, B., ... & Wertenbroch, K. (2024). How artificial intelligence constrains the human experience. Journal of the Association for Consumer Research, 9(3), 000-000.

    One of the papers that inspired me the most this year.


  46. Protecting scientific integrity in an age of generative AI.

    Blau, W., Cerf, V. G., Enriquez, J., Francisco, J. S., Gasser, U., Gray, M. L., ... & Witherell, M. (2024). Protecting scientific integrity in an age of generative AI. Proceedings of the National Academy of Sciences, 121(22), e2407886121.



  47. Artificial intelligence and illusions of understanding in scientific research.

    Messeri, L., & Crockett, M. J. (2024). Artificial intelligence and illusions of understanding in scientific research. Nature, 627(8002), 49-58.


  48. Who Made This? Algorithms and Authorship Credit

    Jago, A. S., & Carroll, G. R. (2024). Who made this? Algorithms and authorship credit. Personality and Social Psychology Bulletin, 50(5), 793-806.

    I talked about this study in a dedicated issue! You can find it here.


  49. Giving AI a Human Touch: Highlighting Human Input Increases the Perceived Helpfulness of Advice from AI Coaches.

    Zhang, Y., Tuk, M. A., & Klesse, A. K. (2024). Giving AI a Human Touch: Highlighting Human Input Increases the Perceived Helpfulness of Advice from AI Coaches. Journal of the Association for Consumer Research, 9(3), 000-000.


  50. The impact of generative artificial intelligence on socioeconomic inequalities and policy making.

    Capraro, V., Lentsch, A., Acemoglu, D., Akgun, S., Akhmedova, A., Bilancini, E., ... & Viale, R. (2024). The impact of generative artificial intelligence on socioeconomic inequalities and policy making. PNAS nexus, 3(6).

    A broad perspective for really understanding, across crucial areas, the potential impact of AI on social and economic inequality.




This article is reposted from | 科技世代千高原
