
Apple and OpenAI remind us there are varying paths to innovation

Not all tech takes the same path. It never has and never will, no matter how much the millennial tech experts may wail on social media. AI is merely the latest example of a varying approach. So it is with software, with Android and iOS at different ends of the usability, security, data privacy and ecosystem scales. So it has been with Windows and Linux, and subsequently Windows, Linux and Chrome OS. Despite the expected noise of misconceptions amplified on social media (often by those who aren’t speaking with product people at ground zero and tend to follow a wayward trend), which you will in all likelihood have come across, it is always important to weigh a particular methodology with context.
Context of what preceded it, a sense of how the contours are shaping up, and how that’s likely to evolve in the future. Rome wasn’t built in a day, and neither is any piece of tech. This week, the iPhone 16 series and everything else Apple announced at the keynote may seem like a minor step forward. An “s” cycle, if you will. Nothing could be further from the truth. But I guess, to understand that, you’d have to be privy to conversations at ground zero. Meanwhile, OpenAI’s first models with “reasoning”, called o1 and o1-mini, are making their way into the world.
First, the hardware that caught everyone’s attention. I’ll come straight to the point. Could Apple have done more to change the iPhone and AirPods designs? If nothing else, to assuage the demographic that has the pitchforks out? Indeed, but aren’t the slightly larger displays on the Pro phones quite useful? Apple execs confirmed to HT that the innards of the entire iPhone 16 series have been redone (they heard our collective complaints through this summer about the iPhone 15 series heating up), and that includes the placement of the A18 and A18 Pro chips. In my limited time with the phones, which included a lot of camera and AI usage (we’ll get to 4K 120fps in a bit), there is a positive shift.
Mind you, Samsung, Android’s most important brand, isn’t attempting to reinvent the wheel just for the sake of it either. The Galaxy S22 Ultra through to the latest Galaxy S24 Ultra have considerable similarities. Focus on what really matters? A resounding yes.
Don’t forget the Camera Control physical button, which contrasts with the otherwise minimalist approach, bringing back a DSLR-esque experience for those who’d like that. Let us talk about photography and videography, which have taken a significant step forward. 4K video shot at 120fps, for example. Some of you may already be up from your seats, claiming many an Android flagship has done this already. But hang on, can any of them, during recording or in post-recording editing, allow parts (or the entire video) to be tweaked to 60fps, 30fps or 24fps? The answer thus far has been a resounding no. Factor in the audio mixes too, and those are profoundly impressive. Now Android phone makers will (have to) find a way.
Then there’s Apple Intelligence (something we have detailed extensively), with a new addition called Visual Intelligence. Remember I mentioned multiple paths and different approaches? This is attempting to be a response to a mix of Google’s Circle to Search and Google Lens. I did wonder during the keynote: how many of us would actually use it regularly, if regularly means being deployed thrice on a vacation every year? With most AI we have already seen, and Apple Intelligence is the newest chapter, there is a philosophical approach to how machines are supposed to help humans, but how many of us are ready for it?
Click and hold the Camera Control button to open the camera, then click it again, and Apple Intelligence will deliver actions based on context. If the camera sees a restaurant, Apple says it’ll try pulling up reviews, reservation details and the menu. If what the camera sees are the details of an event, it’ll invoke the calendar. Within that realm, there is some flexing of the prowess muscles, and some capabilities you may use more often than others.
A few years down the road, consolidation of these approaches will happen.
OpenAI isn’t shy of saying that the o1 models are essentially still a preview, which is another word for being very, very early in the development stage. They are nowhere close to the hoped-for final structure, but they represent a different approach and perhaps even greater relevance in certain use cases. The o1 models essentially gain new reasoning skills, though the data sets from which they imbibe their knowledge are the same as OpenAI’s other GPT models. What has changed is the training method, called reinforcement learning (where there are rewards and penalties within the system), which gives them the ability (or at least that’s the hope) to chain together a conversation (and thought) as we do.
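For readers curious about what “rewards and penalties within the system” can look like, here is a deliberately simplified, hypothetical Python sketch of the reinforcement learning idea; the action names, reward values and learning rate are invented purely for illustration and have nothing to do with OpenAI’s actual training code or data.

```python
import random

# Toy illustration of reinforcement learning: an agent repeatedly picks one of
# two "behaviours" and nudges its preference up or down based on a reward signal.
# Purely hypothetical; real systems such as o1 are vastly more complex.

preferences = {"careful_reasoning": 0.5, "quick_guess": 0.5}
LEARNING_RATE = 0.05

def pick_action():
    # Choose an action in proportion to its current preference (a simple policy).
    total = sum(preferences.values())
    r = random.uniform(0, total)
    running = 0.0
    for action, weight in preferences.items():
        running += weight
        if r <= running:
            return action
    return action

def reward_for(action):
    # Pretend environment: careful reasoning earns a reward, guessing earns a penalty.
    return 1.0 if action == "careful_reasoning" else -1.0

for _ in range(1000):
    action = pick_action()
    reward = reward_for(action)
    # A reward increases the preference; a penalty decreases it (never below a floor).
    preferences[action] = max(0.01, preferences[action] + LEARNING_RATE * reward)

print(preferences)  # The preference for "careful_reasoning" grows over time.
```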
More than the generative AI that’s increasingly finding its way into smartphones, it is actually the behind-the-scenes AI that’s going to make a world of difference. Such as the audio mix smartness, or switching the complete packaging of a video recording as you’d want it. Another contrast is the potential usefulness of the Apple Watch tracking for signs of sleep apnea, and the AirPods Pro (the second generation has the necessary hardware) doubling up as hearing aids after you’ve taken a hearing test. Now that’s stretching the utility of wearables, and once regulatory approvals are done, Apple will flip the switch with a software update.
Samsung has done a good job of including sleep apnea detection on its latest Galaxy Watch Ultra line, which we have reviewed in detail, as well as blood pressure monitoring (the only caveat is that it works only when paired with a Samsung smartphone). Apple is yet to add blood pressure functionality to the Watch. Expected next year? There’s clearly a lot to look forward to. Not just the new iPhones and Apple Intelligence, but also OpenAI’s o1 models achieving some level of maturity, Google’s integration of Gemini in many more Android phones in the more affordable price bands, and just how AI companies build on the momentum of user interest.
Vishal Mathur is the technology editor for the Hindustan Times. Tech Tonic is a weekly column that looks at the impact of personal technology on the way we live, and vice-versa. The views expressed are personal.