Title: Artificial Intelligence and Ethics
Authors: Hilcenko, Christine; Taubman-Bassirian, Tara
Published: 2023-12 (deposited 2024-03-03)
Journal: Journal of Education, Technology and Computer Science 4(34)2023, s. 119-136
ISSN: 2719-6550 (print); 2719-7417 (online)
DOI: 10.15584/jetacomps.2023.4.12
URI: https://repozytorium.ur.edu.pl/handle/item/10298
Language: English
Type: article
License: Attribution-NonCommercial-NoDerivs 3.0 Poland, http://creativecommons.org/licenses/by-nc-nd/3.0/pl/
Keywords: Generative Artificial Intelligence; Large Language Models; ChatGPT; Ethics

Abstract: A more covert aspect of Artificial Intelligence (AI) pertains to the ethical quandaries surrounding the actions of machines. In the case of Large Language Models (LLMs), hidden beneath their seemingly impeccable automated outputs lies a colossal amalgamation of trillions of compiled data points, comprising copied blogs, articles, essays, books, and artworks. This raises profound questions about copyright ownership and remuneration for the original authors. But beyond intellectual property, another insidious facet of LLMs emerges: their propensity to harm individuals through what can only be described as hallucinatory outputs. Victims of these AI-generated delusions suffer defamation, and their plight remains largely unnoticed. Amidst the marvels of AI, the underpaid laborers who form the backbone of AI development are seldom acknowledged, a subject that warrants deeper discussion. Furthermore, as AI algorithms continue to permeate various aspects of society, they bring issues of bias to the fore. For instance, facial recognition technologies frequently exhibit skewed outcomes, leading to false accusations and grave consequences due to over-reliance on these technologies.