LLMs create more convincing misinformation than people do
Computer scientists have found that misinformation generated by large language models (LLMs) is more difficult to detect than false claims hand-crafted by humans. Researchers Canyu Chen, a doctoral student at Illinois Institute of Technology, and Kai Shu, assistant professor in its…