Here’s how Apple used Google’s help to train its AI models

On stage Monday, Apple CEO Tim Cook announced a headline deal with OpenAI to build the startup’s powerful artificial intelligence model into Siri, Apple’s voice assistant.

However, in the fine print of a technical document Apple released after the event, the company makes clear that Alphabet’s Google has emerged as another winner in the Cupertino, California-based company’s race to catch up in AI.

To build Apple’s foundation AI models, the company’s engineers used its own framework software with a range of hardware, notably its own on-premises graphics processing units (GPUs) and chips available only in Google’s cloud, called tensor processing units (TPUs).
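
The document does not describe the framework’s internals, but as a rough illustration of what one codebase targeting several kinds of hardware looks like, here is a minimal, hypothetical sketch in JAX: the same compiled training step runs unchanged on CPUs, GPUs, or Cloud TPUs. The toy model, parameter names, and data are assumptions for illustration, not Apple’s code.

```python
# Hypothetical sketch only: a device-agnostic training step in JAX.
import jax
import jax.numpy as jnp

def loss_fn(params, batch):
    # Toy linear model: mean squared error of x @ w + b against y.
    preds = batch["x"] @ params["w"] + params["b"]
    return jnp.mean((preds - batch["y"]) ** 2)

@jax.jit  # XLA-compiles for whichever backend is present: CPU, GPU, or TPU
def train_step(params, batch, lr=1e-3):
    grads = jax.grad(loss_fn)(params, batch)
    # One step of plain gradient descent over the parameter pytree.
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

params = {"w": jax.random.normal(jax.random.PRNGKey(0), (8, 1)),
          "b": jnp.zeros((1,))}
batch = {"x": jnp.ones((32, 8)), "y": jnp.zeros((32, 1))}

print(jax.devices())  # e.g. [TpuDevice(...)] on a Cloud TPU VM
params = train_step(params, batch)
```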

Google has been developing TPUs for about a decade and has publicly discussed two variants of its fifth-generation chips that can be used for AI training. The performance version of that generation, the TPU v5p, delivers performance that Google says is competitive with Nvidia’s H100 AI chips.

At its annual developer conference in May, Google announced that the sixth generation will launch later this year.

The processors are designed specifically to run AI applications and train models, and Google has built a cloud computing hardware and software platform around them.
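
As a hedged illustration of how such accelerators are typically driven from user code, the hypothetical JAX sketch below fans a computation out across whatever local accelerator cores are attached; on a Cloud TPU VM those are TPU cores, while elsewhere the same code falls back to GPUs or the CPU. The shard shapes and axis name are illustrative assumptions.

```python
# Hypothetical sketch only: data parallelism across local accelerator cores.
import jax
import jax.numpy as jnp

n = jax.local_device_count()           # attached accelerator cores, e.g. TPU cores
x = jnp.arange(n * 4.0).reshape(n, 4)  # one shard of data per core

# pmap replicates the function across cores; psum sums the per-core results
# over the TPU interconnect (or across GPU/CPU devices elsewhere).
total = jax.pmap(
    lambda shard: jax.lax.psum(jnp.sum(shard ** 2), axis_name="cores"),
    axis_name="cores",
)(x)
print(n, total)  # the same grand total, replicated once per core
```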

Apple and Google did not immediately return requests for comment.

Apple did not comment on how much it relies on Google’s chips and software compared to hardware from Nvidia or other AI vendors.

However, using Google’s chips typically requires customers to purchase access through Google’s cloud division, similar to how customers purchase computing time from Amazon.com’s AWS or Microsoft’s Azure. (Reporting by Max A. Cherney; Editing by Sandra Maler)
