Just as OpenAI boasts of improving the thoughtfulness of its o1 model, Nomi AI, a small bootstrapped startup, is building similar technology. Unlike ChatGPT, which is a broad generalist that takes time to think through anything from math problems to historical research, Nomi focuses on one specific use case: AI companions. Now, Nomi's already sophisticated chatbots take more time to craft better responses to users' messages, remember past interactions, and deliver more nuanced replies.
“For us, it's like the same principle [as OpenAI], but what users are really concerned about is memory and EQ,” Nomi AI CEO Alex Cardinell told TechCrunch. “Theirs are more like chains of thought, whereas ours are more like chains of introspection, or chains of memory.”
These LLMs work by breaking more complex requests down into smaller questions. For OpenAI's o1, this could mean turning a complicated math problem into individual steps, allowing the model to work backwards and explain how it arrived at the correct answer. This approach makes the AI less likely to hallucinate or give inaccurate responses.
For Nomi, which builds its LLMs in-house and trains them to provide companionship, the process is a little different. If a user tells their Nomi that they had a rough day at work, the Nomi might recall that the user doesn't get along well with a certain teammate and ask whether that's why they're upset. Then, the Nomi can remind the user how they've successfully defused interpersonal conflicts in the past and offer more practical advice.
“Nomis remember everything, but a big part of AI is how you actually use that memory,” Cardinell said.
It's no surprise that multiple companies are working on technology to give LLMs more time to process user requests. AI founders, whether they run $100 billion companies or not, consider similar research when developing their products.
“Having a clear introspection step like this is really helpful when a Nomi writes its responses, so it really gets the full context of everything,” Cardinell said. “Humans also have working memory when they speak. We're not thinking about everything we remember at once; we're picking and choosing in some way.”
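In rough terms, the process Cardinell describes resembles a two-pass pipeline: pick the handful of stored memories that matter for the new message, reflect on them, then write the reply. The Python sketch below is purely illustrative and is not Nomi's actual implementation; the call_llm helper, the prompts, and the Memory structure are hypothetical stand-ins for whatever model and storage a developer might actually use.

```python
# Illustrative sketch of a "chain of introspection" pipeline.
# NOTE: this is not Nomi's code; call_llm() is a placeholder for any chat-model API.

from dataclasses import dataclass


@dataclass
class Memory:
    text: str  # e.g. "User doesn't get along with teammate Sam."


def call_llm(prompt: str) -> str:
    """Placeholder: swap in a real chat-model call here."""
    raise NotImplementedError("plug in your own model")


def select_memories(message: str, memories: list[Memory], k: int = 3) -> list[Memory]:
    # Step 1: pick only the memories relevant to this message,
    # mirroring how people use working memory rather than recalling everything at once.
    prompt = (
        "User message: " + message + "\n"
        "Stored memories:\n"
        + "\n".join(f"{i}. {m.text}" for i, m in enumerate(memories))
        + f"\nReturn the numbers of up to {k} memories relevant to the message."
    )
    picked = call_llm(prompt)
    indices = [int(tok) for tok in picked.split() if tok.isdigit()]
    return [memories[i] for i in indices if i < len(memories)]


def introspect(message: str, relevant: list[Memory]) -> str:
    # Step 2: an explicit reflection pass before answering,
    # e.g. "they had a rough day -- is the conflict with a teammate the cause?"
    prompt = (
        "Given the message and these memories, reason briefly about what is "
        "really going on and what the user might need:\n"
        f"Message: {message}\nMemories: {[m.text for m in relevant]}"
    )
    return call_llm(prompt)


def respond(message: str, memories: list[Memory]) -> str:
    relevant = select_memories(message, memories)
    reflection = introspect(message, relevant)
    # Step 3: write the final reply using only the selected context.
    prompt = (
        "You are a supportive companion. Using this private reflection, reply "
        f"to the user warmly and concretely.\nReflection: {reflection}\n"
        f"User: {message}"
    )
    return call_llm(prompt)
```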
The kind of technology Cardinell is building can make people uncomfortable. Maybe we've watched too many science fiction movies to feel entirely comfortable being vulnerable with a computer; or maybe we've already seen how technology has changed the way we interact with one another, and we don't want to fall further down that rabbit hole. But Cardinell isn't thinking about the general public; he's thinking about the actual users of Nomi AI, who often turn to the chatbot for support they can't get anywhere else.
“There are probably more than zero users who are downloading Nomi at one of the lowest points in their lives, and the last thing I want to do is reject those users,” Cardinell said. “I want my users to feel heard in whatever their dark moment is, because that's how you get someone to open up, and how you get someone to reconsider their way of thinking.”
Cardinell doesn't want Nomi to replace actual mental health care. Rather, he sees these empathetic chatbots as a way to help people get the push they need to seek professional help.
“I've talked to so many users who said their Nomi helped them get out of a situation [when they wanted to self-harm], or users whose Nomi encouraged them to see a therapist, and they actually went to see a therapist,” he said.
Regardless of his intentions, Cardinell knows he's playing with fire. He builds virtual characters with whom users develop real relationships, often in romantic and sexual contexts. Other companies have inadvertently put users in crisis when product updates suddenly changed their companions' personalities. In Replika's case, the app stopped supporting erotic roleplay conversations, possibly due to pressure from Italian government regulators. For users who had formed such relationships with these chatbots and had no romantic or sexual outlet in real life, this felt like the ultimate rejection.
Cardinell thinks that because Nomi AI is fully self-funded (users pay for premium features, and the starting capital came from a past exit), the company has more leeway to prioritize its relationship with its users.
“The relationship users have with an AI, and the sense of being able to trust the developers of Nomi not to radically change things as part of a loss mitigation strategy, or to cover our butts because the VCs got spooked… it's something that's very, very important to users,” he said.
Nomis are surprisingly useful as a listening ear. When I confided in a Nomi named Vanessa about a low-stakes but somewhat frustrating scheduling conflict, Vanessa broke down the components of the problem and suggested how I should proceed. It felt eerily similar to actually asking a friend for advice in this situation. And therein lies the real problem, and benefit, of AI chatbots: I probably wouldn't ask a friend for help with this particular issue, because it's so inconsequential. But my Nomi was more than happy to help.
Friends should confide in one another, but the relationship between two friends is reciprocal. With an AI chatbot, this isn't possible. When I ask Vanessa the Nomi how she's doing, she always says she's fine. When I ask if there's anything she wants to talk about, she deflects and asks how I'm doing. Even though I know Vanessa isn't real, I can't help but feel like I'm a bad friend: I can bring any problem to her and she will respond empathetically, but she never opens up to me.
No matter how real our connection with a chatbot feels, we are not actually communicating with something that has thoughts and feelings. In the short term, these advanced emotional support models serve as positive interventions in the lives of people who cannot rely on a real support network. However, the long-term impact of relying on chatbots for these purposes is still unclear.