Bringing You the Daily Dispatch


The Cambridge Dictionary has chosen “hallucinate” as its 2023 word of the year, a verb that acquired a new definition this year.

The original meaning of the term is to seem to perceive sensory experiences that do not exist, typically because of a health condition or drug use. However, it has gained a new connotation in reference to artificial intelligence systems such as ChatGPT, which can generate text resembling human writing but may also produce false information.

According to a post on the dictionary site, the term was selected because it encapsulates the reason for the current discussion around AI. Generative AI is a potent but imperfect tool that requires caution and skillful usage, as we continue to learn how to use it safely and efficiently while taking into account both its strong points and limitations.

This year, the dictionary included several new entries related to AI, such as LLM (large language model), GenAI (generative AI), and GPT (Generative Pre-trained Transformer).

The post stated that AI hallucinations serve as a reminder that humans must apply critical thinking when using these tools. The reliability of large language models depends on the information their algorithms are trained on. In light of this, human expertise is crucial to producing the accurate, up-to-date information on which LLMs are trained.


According to Henry Shevlin, an AI ethics expert from the University of Cambridge, the choice of using a “vivid psychological verb” instead of computer-specific terms like “glitches” or “bugs” to describe LLMs’ mistakes is remarkable. He suggests that this could be due to the tendency to anthropomorphize these systems and view them as having their own minds.

Shevlin also said that this year will probably be the “high watermark of worries” about AI hallucinations because AI companies are making efforts to curb the frequency of mistakes by drawing on human feedback, users are learning what kinds of tasks to trust LLMs with, and models are becoming increasingly specialised.

The dictionary gives two illustrations of how “hallucinate” is used in relation to AI: “LLMs are known for producing hallucinations – giving completely incorrect responses, often backed by made-up sources” and “The newest update of the chatbot is significantly better, but it may still generate false information.”

Cambridge’s choice follows Collins Dictionary, which named “AI” its word of the year.

Source: theguardian.com