OpenAI dives into audio: Is this the end for screens?
OpenAI is making a big move into audio AI, and it's not just about giving ChatGPT a nicer voice. Word on the street is they've been reshuffling their engineering and product teams to supercharge their audio models. Why? They're gearing up to launch an audio-centric personal device, supposedly within the next year.
The tech world seems to be leaning towards a future where screens take a backseat and audio steps into the spotlight. Think about it: smart speakers are already common in many homes. Meta's been playing with smart glasses that can isolate voices in noisy places. Google's experimenting with audio summaries for search results, and Tesla wants to put conversational AI in your car. It feels like everyone's trying to make our devices more talkative.
Of course, it's not only the big players who are convinced that audio is the future. Several startups are also betting big on this. Not all of them succeed, though. Remember the Humane AI Pin? It burned through a ton of cash before becoming a cautionary tale.
The goal seems to be for every space, from your living room to your car, to become a way to interact with technology. OpenAI's upcoming audio model, expected in 2026, should sound more natural and conversational. The company seems to imagine a family of devices, maybe even glasses or smart speakers, that feel more like companions than mere tools.
I don't think this is really a shock. After all, Jony Ive, formerly Apple's design guru, is involved in OpenAI's hardware projects. He's talked about wanting to reduce our reliance on screens, and he sees audio-first design as a way to fix some of the problems with today's gadgets. Maybe he's right; perhaps talking to our devices will be the new normal sooner than we think.
Source: TechCrunch