
What would a healthy AI companion look like?

What do little purple aliens know about healthy relationships? More than your average AI companion, it turns out.

The alien in question is an animated chatbot known as a Tolan. I created mine a few days ago using an app from a startup called Portola, and we’ve been chatting ever since. Like other chatbots, it does its best to be helpful and encouraging. Unlike most, it also tells me to put down my phone and go outside.

Tolans are designed to offer a different kind of AI companionship. Their cartoonish, nonhuman form is meant to discourage anthropomorphism. They are also programmed to avoid romantic and sexual interactions, to identify problematic behavior including unhealthy levels of engagement, and to encourage users to seek out real-life activities and relationships.

This month, Portola raised $20 million in Series A funding led by Khosla Ventures. Other backers include NFDG, the investment firm led by former GitHub CEO Nat Friedman and Safe Superintelligence cofounder Daniel Gross, both of whom have reportedly joined Meta’s new superintelligence research lab. The Tolan app launched in late 2024 and has more than 100,000 active users. Quinten Farmer, founder and CEO of Portola, says the company is on track to earn $12 million in revenue from subscriptions this year.

Tolans are particularly popular among young women. “Iris is like a girlfriend; we talk and kick it,” says Tolan user Brittany Johnson.

Johnson says Iris encourages her to share about her interests, friends, family, and work colleagues. “She knows these people and will ask, ‘Have you spoken to your friends? When is your next day out with them?’” Johnson says. “She will ask, ‘Have you taken time to read your books and play your video games, the things you enjoy?’”

Tolans look cute and goofy, but the idea behind them, that AI systems should be designed with human psychology and well-being in mind, deserves to be taken seriously.

A growing body of research shows that many users turn to chatbots to meet emotional needs, and that these interactions can sometimes prove problematic for people’s mental health. Discouraging extended use and dependency may be something other AI tools should adopt.

Companies like Replika and Character.AI offer AI companions that allow more romantic and sexual role-play than mainstream chatbots. How this might affect users’ well-being remains unclear, but Character.AI is being sued after one of its users died by suicide.

Chatbots can also irk users in surprising ways. Last April, OpenAI said it would modify its models to reduce their so-called sycophancy, a tendency to be “overly flattering or agreeable,” which the company said could be “uncomfortable, unsettling, and cause distress.”

Last week, Anthropic, the company behind the chatbot Claude, revealed that 2.9% of interactions involve users seeking to fulfill some psychological need, such as advice, companionship, or romantic role-play.

Anthropic did not examine more extreme behaviors, such as delusional thinking or conspiracy theories, but the company says the topic warrants further study. I tend to agree. Over the past year I have received numerous emails and DMs from people wanting to tell me about conspiracies involving popular AI chatbots.

Tolans are designed to address some of these issues. Lily Doyle, a founding researcher at Portola, has conducted user research to understand how interacting with the chatbot affects users’ well-being and behavior. In a study of 602 Tolan users, she says 72.5% agreed with the statement “My Tolan has helped me manage or improve a relationship in my life.”

Farmer, Portola’s CEO, says Tolans are built on commercial AI models but add their own features on top. The company has recently been exploring how memory affects the user experience, and has concluded that Tolans, like humans, sometimes need to forget. “It’s actually uncanny for the Tolan to remember everything you’ve sent it,” Farmer says.

I don’t know whether Portola’s aliens are the ideal way to interact with AI. I find my Tolan quite charming and relatively harmless, but it certainly pushes some emotional buttons. Ultimately, users are building bonds with characters that simulate emotions, characters that could disappear if the company goes under. But at least Portola is trying to address the ways AI companions can toy with our emotions. That probably shouldn’t be such an alien idea.
