A team of researchers based at the University of Waterloo has created a new tool, nicknamed "RAGE," that reveals where large language models (LLMs) like ChatGPT are getting their information and whether that information can be trusted.
LLMs like ChatGPT rely on "unsupervised deep learning," making connections and absorbing information from across the internet in ways that can be difficult for their programmers and users to decipher. Furthermore, LLMs are prone to "hallucination" — that is, they write convincingly about concepts and sources that are either incorrect or nonexistent.
"You can't necessarily trust an LLM to explain itself," said Joel Rorseth, a Waterloo computer science PhD student and lead author on the study. "It might provide explanations or citations that it has also made up."
Read the full article from Waterloo News to learn more.