The next big trend in mobile hardware, or at least in mobile silicon, is bringing artificial intelligence directly to your smartphone by way of neural networks. Google has been improving its machine learning technology for a while now, but that technology lives in the cloud: it is more powerful, but it requires an internet connection and is slower for basic tasks. We have already seen Qualcomm and Huawei announce neural network components in their SoCs, and today Imagination Technologies has done the same with its new PowerVR Series 2NX Neural Network Accelerator (NNA).
Imagination Technologies made a number of announcements today, including the PowerVR Series 9XE and Series 9XM, but it is the Series 2NX NNA that has caught people's attention. The new design gives SoC designers dedicated low-power hardware for neural network computation and inferencing. It may not be something that can be used the moment it appears in a smartphone (beyond whatever features the OEM builds on top of it), but it will become far more useful once third-party developers start taking advantage of it.
Dedicated low-power neural network hardware like this lets software offload certain tasks to it, producing faster results for those specific workloads (think of using AI to identify objects in an image) while draining less battery in the process. It also allows the work to be done locally on the device, much like Apple's face scanning technology, so personal data never needs to be sent to a server.
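To make that concrete, here is a minimal, hypothetical sketch of how an Android developer might push an image classification model onto a phone's neural network hardware instead of the CPU, using TensorFlow Lite's NNAPI delegate. The model file name, input buffer, and output size are placeholders for illustration, not anything Imagination has shipped.

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.io.File
import java.nio.ByteBuffer

// Hypothetical example: the model file and the 1000-class output size
// are placeholders, not a model any vendor actually provides.
fun classifyOnDevice(modelFile: File, inputImage: ByteBuffer): FloatArray {
    // Ask TensorFlow Lite to route supported operations to the phone's
    // neural network hardware through Android's NNAPI, rather than the CPU.
    val nnApiDelegate = NnApiDelegate()
    val options = Interpreter.Options().addDelegate(nnApiDelegate)
    val interpreter = Interpreter(modelFile, options)

    val scores = Array(1) { FloatArray(1000) } // one score per class
    interpreter.run(inputImage, scores)        // inference runs locally on the device

    interpreter.close()
    nnApiDelegate.close()
    return scores[0]
}
```

The key point of a sketch like this is that the app never has to know which accelerator is underneath: NNAPI hands the work to whatever driver the SoC vendor exposes, which is how dedicated blocks such as the 2NX would be reached by third-party apps.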
Imagination Technologies also sees the PowerVR Series 2NX NNA as a good fit for security cameras (to help identify intruders), automotive applications, and set-top boxes. It is a crowded field, though, which will pit the company against the likes of Intel, NVIDIA, Apple, Huawei, Qualcomm, and others. Still, Imagination claims the 2NX NNA delivers the industry's highest inferencing performance, as well as the best performance per unit of power (inferences/mW) and per unit of silicon area (inferences/mm²).
Source: Imagination Via: AnandTech
from xda-developers http://ift.tt/2jLhymJ