Amazon rebuilds Alexa using “amazing” number of AI tools

Daniel Rausch, Amazon's vice president of Alexa and Echo, is in the middle of a major transition. Alexa launched more than a decade ago, and Rausch has been tasked with creating a new version of the marquee voice assistant, powered by a large language model. As he put it in my interview with him, the new assistant, known as Alexa+, is "a complete re-architecture."
How is his team handling Amazon's biggest-ever voice assistant makeover? They use AI to build AI, of course.
"The number of AI tools we used throughout the build process is amazing," Rausch said. When creating the new Alexa, Amazon used AI at every step of the build, and yes, that includes writing the code.
The Alexa team also brought generative AI into the testing process. Engineers use large language models as judges of answers in reinforcement learning, where an AI model picks which of two Alexa+ outputs it considers the better answer.
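Amazon has not shared implementation details, but the general "LLM as a judge" pattern for pairwise preference labeling looks roughly like the sketch below. Everything in it is illustrative rather than Amazon's code: the prompt wording, the judge_pair function, and the call_judge_llm hook are assumptions.

# A minimal, generic sketch of "LLM as a judge" for pairwise preference
# labeling, the kind of signal used in reinforcement learning from AI
# feedback. All names and prompt text here are illustrative assumptions.

JUDGE_PROMPT = """You are evaluating two assistant answers to the same request.

Request: {request}

Answer A: {answer_a}

Answer B: {answer_b}

Reply with exactly one letter, A or B, naming the better answer."""


def judge_pair(request: str, answer_a: str, answer_b: str, call_judge_llm) -> str:
    """Ask a judge LLM which of two candidate answers is better.

    call_judge_llm is any function that takes a prompt string and returns
    the judge model's text completion (hypothetical; plug in your own client).
    Returns "A" or "B"; the winner can serve as a preference label when
    training or fine-tuning the assistant.
    """
    prompt = JUDGE_PROMPT.format(request=request, answer_a=answer_a, answer_b=answer_b)
    verdict = call_judge_llm(prompt).strip().upper()
    return "A" if verdict.startswith("A") else "B"


if __name__ == "__main__":
    # Stand-in judge for demonstration: always answers "A".
    fake_judge = lambda prompt: "A"
    winner = judge_pair(
        "Set a timer for 10 minutes",
        "Timer set for 10 minutes.",
        "Okay. I have configured a countdown of six hundred seconds for you now.",
        fake_judge,
    )
    print(f"Preferred answer: {winner}")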
"People are gaining leverage and can move faster and better with AI tools," Rausch said. Amazon's focus on using generative AI internally is part of a larger wave reshaping software engineering work, as new tools, such as Anysphere's Cursor, change how the job gets done (and what workloads are expected).
If these AI-centric workflows take hold broadly, what it means to be a software engineer will fundamentally change. "We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs," Amazon CEO Andy Jassy said in a memo to employees this week. "It's hard to know exactly where this nets out over time, but in the next few years, we expect that this will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company."
For now, Rausch is mainly focused on rolling out the generative AI version of Alexa to more Amazon users. "We really don't want to leave customers behind in any way," he said. "That means hundreds of millions of different devices you have to support."
The new Alexa+ is more conversational with users. It's a more personalized experience that can remember your preferences and complete online tasks you hand it, such as finding concert tickets or ordering groceries.
Amazon announced Alexa+ at a company event in February and began rolling it out to some members of the public in early March, though without the full set of announced features. The company now says that more than a million people have access to the updated voice assistant, still a small fraction of the hundreds of millions of Alexa users expected to eventually get it. A wider launch of Alexa+ may come later this summer.
Amazon faces competition from multiple directions as it works on a more dynamic voice assistant. OpenAI's Advanced Voice Mode launched in 2024 and is popular among users who find talking with AI engaging. Apple, meanwhile, announced an overhaul of its own voice assistant, Siri, at its developer conference last year, with many of the contextual and personalized features sounding similar to Amazon's approach with Alexa+. Apple has yet to launch the rebuilt Siri, even in early access; the new voice assistant is now expected sometime next year.
Amazon declined to provide access to Alexa+ in time for hands-on (voices-on?) testing, and the new assistant has not yet rolled out to my personal Amazon account. Much as we approached OpenAI's Advanced Voice Mode when it launched last year, WIRED plans to test Alexa+ and give readers a sense of the experience as it becomes more widely available.