Technology

ChatGPT adds protections for people who are addicted to AI chatbots

This time, ChatGPT is getting a wellbeing upgrade of its own.

In a new blog post ahead of the company’s GPT-5 announcement, OpenAI revealed that it will update its generative AI chatbot with new features designed to promote healthier, more stable relationships between users and bots. For example, users who have spent a long time in a single conversation will now be gently prompted to take a break. The company will also double down on fixing the chatbot’s sycophancy problem and train its models to recognize signs of mental and emotional distress.

See also:

Illinois bill to ban AI therapy has been signed into law

ChatGPT will also respond differently to “high-risk” personal questions, guiding users through careful decision-making, weighing pros and cons, and responding to feedback, rather than providing direct answers to potentially life-changing queries. This mirrors OpenAI’s recently announced study mode for ChatGPT, which steers the AI assistant away from direct, long-form answers in favor of guided, Socratic-style tutoring designed to encourage more critical thinking.


“We don’t always get it right. Earlier this year, an update made the model too agreeable, sometimes saying what sounded nice instead of what was actually helpful. We rolled it back, changed how we use feedback, and are improving how we measure real-world usefulness over the long term, not just whether you liked the answer,” OpenAI wrote in the announcement. “We also know that AI can feel more responsive and personal than prior technologies, especially for vulnerable individuals experiencing mental or emotional distress.”

Broadly speaking, OpenAI has been updating its models in response to claims that its generative AI products (ChatGPT in particular) foster unhealthy social attachments and exacerbate mental illness, especially among adolescents. Earlier this year, reports surfaced of users building delusional relationships with AI assistants, worsening existing mental illnesses, including paranoia and derealization. In response, lawmakers have shifted their focus toward more strictly regulating chatbots, including how they are marketed as emotional companions or therapy alternatives.

OpenAI has acknowledged this criticism, conceding that its previous GPT-4o model “fell short” in addressing concerning user behavior. The company hopes these new features and system prompts will finish the work its previous version failed to do.

“Our goal isn’t to hold your attention, but to help you use it well,” the company wrote. “We hold ourselves to one test: if someone we love turned to ChatGPT for support, would we feel reassured? Getting to an unequivocal ‘yes’ is our work.”
