The real demon in ChatGPT

But perhaps the most convincing evidence that ChatGPT was parroting the language of Warhammer 40,000 is that it kept asking whether The Atlantic was interested in PDFs. The publishing arm of Games Workshop, the British company that owns the Warhammer franchise, regularly puts out updated rulebooks and guides. Buying all of these books can get expensive, so some fans look for pirated copies online.
The Atlantic and OpenAI declined to comment.
Earlier this month, the newsletter Garbage Day reported on a similar experience that a prominent tech investor appeared to be having with ChatGPT. On social media, the investor shared screenshots of his conversations with the chatbot, in which it described an ominous entity he called a “non-governmental system.” He seemed to believe that it had “negatively impacted” the lives of more than 7,000 people and “extinguished 12 lives, each fully pattern-traced.” Other tech-industry figures said the posts made them worry about the investor’s mental health.
According to Garbage Day, the investor’s conversations with ChatGPT closely resemble the writing of a collaborative science-fiction project that began in the late 2000s, called SCP, which stands for “Secure, Contain, Protect.” Participants invent different SCPs (essentially strange objects and mysterious phenomena) and then write fictional reports analyzing them. The reports often include details such as classification numbers and references to made-up scientific experiments, and details like these also appear in the investor’s chat logs. (The investor did not respond to a request for comment.)
There are many other, more mundane examples of what could be considered AI’s context problem. The other day, for example, I searched Google for “cavitation surgery,” a medical-sounding term I had come across in a random TikTok video. At the time, the top result was an automatically generated “AI Overview,” which explained that cavitation surgery “focuses on removing infected or dead bone tissue from the jawbone.”
I couldn’t find any reputable scientific research describing this condition, let alone research supporting the surgery as an effective way to treat it. The American Dental Association does not mention “cavitation surgery” anywhere on its website. It turned out that Google’s AI Overview was drawing on sources such as blog posts promoting “holistic” dentists across the United States. I learned this by clicking a small icon next to the AI Overview, which opens the list of links Google used to generate the answer.
These citations are obviously better than nothing. “We’ve prominently highlighted supporting links so that people can dig deeper and learn more about sources across the web,” said Google spokesperson Jennifer Kutz. But by the time the links appear, Google’s AI has, for many queries, already provided a satisfying answer, which diminishes the visibility of pesky details such as where the information came from and who wrote it.
What remains is AI-generated language that, stripped of any other context, understandably strikes many people as authoritative. Over the past few weeks, tech executives have repeatedly used rhetoric implying that generative AI is a source of expert information: Elon Musk claimed that his latest AI model is “better than a doctorate” in every discipline, with “no exceptions.” OpenAI’s CEO, Sam Altman, wrote that automated systems are now “smarter than people in many ways” and predicted that the world is “close to building digital superintelligence.”
No individual, however, has expertise across every field. To make decisions, we have to weigh not just the information itself but also where it comes from and how it is presented. I know nothing about the biology of the jawbone, but I also know not to turn to random marketing blogs when trying to learn about medicine. Yet AI tools often strip away the context people need to make those snap judgments about where to direct their attention.
The open internet is powerful because it connects people directly to the largest archive of human knowledge ever assembled, spanning everything from Italian Renaissance paintings to Pornhub reviews. Having ingested all of that content, AI companies now use the collective output of our species to build software that obscures its very richness and complexity. Relying too heavily on that software may deprive people of the chance to draw conclusions from the evidence themselves.