References

1. Kak chelovechestvo vosprinimayet obman II v razlichnykh stsenariyakh i zachem roboty lgut [How humanity perceives AI deception in different scenarios and why robots lie]. Inc.journal. URL: https://incrussia.ru/news/kak-chelovechestvo-vosprinimaet-obman-ii-v-razlichnyh-stsenariyah-i-zachem-roboty-lgut/ [In Russian]
2. Moral' i eticheskiye tsennosti ChatGPT: yest' li u II chetkaya nravstvennaya pozitsiya? [ChatGPT morality and ethical values: does the AI have a clear moral position?]. Universitet Lobachevskogo: Institut filologii i zhurnalistiki [Lobachevsky University: Institute of Philology and Journalism]. URL: https://fil.unn.ru/does-ai-have-strong-moral-compass/ (accessed: 7 November 2024) [In Russian]
3. Sokovikova L. Neyroseti nauchilis' vrat' i delayut eto namerenno [Neural networks have learnt how to lie and they do it deliberately]. Hi-News.ru. URL: https://hi-news.ru/eto-interesno/nejroseti-nauchilis-vrat-i-delayut-eto-namerenno.html [In Russian]
4. Neyroseti umeyut lgat' ne khuzhe lyudey [Neural nets can lie just as well as humans can]. InvestFuture. URL: https://dzen.ru/b/ZFICwkRmbyrcJ34N [In Russian]
5. Ivanov A. Yavlyayetsya li plagiatom to, chto sozdano neyrosetyu? [Is what is created by a neural network plagiarised?]. Zakon.ru. URL: https://zakon.ru/blog/2023/06/26/yavlyaetsya_li_plagiatom_to_chto_sozdano_nejrosetyu [In Russian]
6. AI alignment. Wikipedia: The Free Encyclopedia. URL: https://en.wikipedia.org/wiki/AI_alignment#Misalignment (accessed: 7 November 2024).
7. Algorithmic bias. Wikipedia: The Free Encyclopedia. URL: https://en.wikipedia.org/wiki/Algorithmic_bias (accessed: 7 November 2024).
8. Hadar-Shoval D., Asraf K., Mizrachi Y., Haber Y., Elyoseph Z. Assessing the Alignment of Large Language Models With Human Values for Mental Health Integration: Cross-Sectional Study Using Schwartz's Theory of Basic Values. JMIR Mental Health. 2024. Vol. 11. e55988. URL: https://doi.org/10.2196/55988.
9. Hagendorff T. Deception abilities emerged in large language models. Proceedings of the National Academy of Sciences of the USA. 2024. Vol. 121. No. 24. e2317967121. URL: https://doi.org/10.1073/pnas.2317967121.
10. Floridi L., Sanders J. Artificial evil and the foundation of computer ethics. Ethics and Information Technology. 2001. Vol. 3. Pp. 55–66.
11. Krügel S., Ostermaier A., Uhl M. ChatGPT's inconsistent moral advice influences users' judgment. Scientific Reports. 2023. Vol. 13. Article number 4569. URL: https://doi.org/10.1038/s41598-023-31341-0.
12. Roff H. AI deception: When your artificial intelligence learns to lie. IEEE Spectrum. URL: https://spectrum.ieee.org/ai-deception-when-your-ai-learns-to-lie

Статья поступила в редакцию 15.11.2024
Статья допущена к публикации 25.11.2024
The article was received by the editorial staff 15.11.2024
The article is approved for publication 25.11.2024