Latent AI reduces latency by minimizing, and sometimes entirely bypassing, the need for a distant datacenter. Edge AI takes a different tack: it runs algorithms locally on chips and specialized hardware rather than in distant clouds and remote datacenters, even though user and edge devices have widely varying computational resources.
What is Edge AI, and should it be in your roadmap for 2020?
We have seen the shift from mainframes to personal computers to the cloud; now the cloud is moving to the edge, and so is AI. But that doesn't mean the cloud is becoming irrelevant. It is still relevant, and disruptive technologies like IoT will act as smart extensions of cloud computing. In Edge AI, the algorithms are processed locally on the hardware device, without requiring any network connection. It uses data generated on the device and processes it to give real-time insights within a few milliseconds.
For instance, the iPhone can register and recognize your face to unlock the phone in a fraction of a second. The same goes for self-driving cars, where the car drives on its own. Complex algorithms process the data right there in your phone or in the car, because in such scenarios there is no time to send the data to the cloud, process it, and wait for the insights. There are countless other examples where we are knowingly or unknowingly using Edge AI, from Google Maps notifying you about bad traffic to your smart refrigerator reminding you to buy missing dairy items.
The potential of Edge AI is vast. According to a report by Tractica, AI edge device shipments are expected to increase from 161.4 million units in 2018 to 2.6 billion units by 2025. The top AI-enabled devices include smart speakers, mobile phones, head-mounted displays, PCs/tablets, automotive sensors, robots, drones, and security cameras. Wearable health sensors will also see increased application of AI.
What is Latent AI? What changed and why so rapidly?
Latent AI brings Adaptive AI to the edge through its core technologies and platform tools, enabling efficient, adaptive AI optimized for compute, energy, and memory, with seamless integration into existing AI/ML infrastructure and frameworks.
There are three trends converging to create a perfect storm of opportunity for AI:
- Cloud computing is now ubiquitous and price-competitive to boot.
- Devices are proliferating: connected vehicles, personal phones, bio/fitness trackers, and industrial IoT sensors and data loggers. IDC estimates that the number of connected devices will increase from 23 billion in 2018 to 75 billion by 2025.
- Companies, having realized the value of their data, are transforming themselves digitally.
So now we have a situation with a lot of data, a need to store and analyze it, and a business imperative to extract value from it at minimum cost. That is the puzzle AI solves. From Cloud First to Cloud Only, to Core versus Edge: this is where the exact problem lies, and why the current approach to AI needs to evolve. While the cloud offers economies of scale, computing elasticity, and significant process automation, there are still pesky problems that don't disappear: physical constraints such as power consumption, memory capacity, and network connectivity limit what we can and can't do with this amazing cloud facility.
For example, if you want to display ads for personalized merchandise to a customer walking inside a retail store, you only have a few seconds to do so. You don't have the luxury of sending the data to a central repository, analyzing it, and then displaying the result back to the customer; by then the customer might have walked out. So it is clear that edge devices, constrained as they are in memory, power, and processing capacity, need to be made smarter so they can do the things that previously needed huge centralized servers. To do so, AI workflows need to be simplified to the point where models can be embedded into these devices without needing data scientists to work through multiple iterations of the same model before it can be successfully deployed.
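One common way to make a model small enough for a constrained device is post-training quantization: storing weights as 8-bit integers plus a scale factor instead of 32-bit floats, cutting storage by 4x. The sketch below is illustrative only (it is not Latent AI's actual tooling) and shows symmetric int8 quantization with NumPy:

```python
import numpy as np

def quantize_int8(weights):
    """Map float32 weights to int8 values plus one scale factor (symmetric)."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

# A made-up weight matrix standing in for one layer of a model.
w = np.random.randn(256, 256).astype(np.float32)
qw, s = quantize_int8(w)

# int8 storage is a quarter the size of float32, and the round-trip
# error per weight is bounded by half the scale factor.
error = np.abs(dequantize(qw, s) - w).max()
```

The trade-off is exactly the one the paragraph describes: a small, bounded loss of precision in exchange for a model that fits the device's memory budget.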
Why does Latent AI have an Edge over Edge AI?
Latent AI is designed to help companies add AI to edge devices and to empower users with new smart IoT applications. The challenges that Latent AI addresses with its technology include: enabling resource-constrained companies to train and deploy AI models for the edge efficiently and cost-effectively; democratizing AI development so that developers can build new edge computing applications without worrying about resource constraints on their target platforms, such as size, weight, power, or cost; and dynamically managing AI workloads, tuning performance and reducing compute requirements as conditions demand.
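As a loose illustration of what dynamically tuning a workload could mean, imagine a runtime that keeps several variants of the same model (full, pruned, quantized) and picks the most accurate one that fits the device's current power budget. The variant names and milliwatt figures below are invented for illustration; this is not Latent AI's actual API:

```python
# Hypothetical model variants with made-up accuracy and power costs.
MODEL_VARIANTS = [
    {"name": "full",      "accuracy": 0.95, "mw": 900},
    {"name": "pruned",    "accuracy": 0.92, "mw": 400},
    {"name": "quantized", "accuracy": 0.88, "mw": 150},
]

def pick_variant(power_budget_mw):
    """Return the most accurate variant that fits the power budget."""
    feasible = [m for m in MODEL_VARIANTS if m["mw"] <= power_budget_mw]
    if not feasible:
        raise ValueError("no model variant fits the power budget")
    return max(feasible, key=lambda m: m["accuracy"])

pick_variant(500)["name"]   # -> "pruned"
pick_variant(1000)["name"]  # -> "full"
```

On a device whose battery is draining, the same selection could be re-run with a lower budget, trading a little accuracy for longer runtime.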
Models generated by neural networks aren't limited to computer vision or voice recognition. For example, a vibration sensor running a model designed to detect catastrophic failures could order a device's shutdown. Or a hearing aid might run a model that separates a conversation from background noise.
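The vibration-sensor case can be sketched in a few lines of on-device logic: compute the vibration energy of a window of accelerometer readings and trigger a shutdown when it crosses a safe threshold. The threshold and sample values here are invented for illustration, and a real deployment would use a trained model rather than a fixed threshold:

```python
import math

def rms(samples):
    """Root-mean-square amplitude of a window of accelerometer readings."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def should_shut_down(samples, threshold=2.5):
    """Flag a shutdown when vibration energy exceeds the safe threshold."""
    return rms(samples) > threshold

# Made-up readings: a healthy machine vs. one shaking itself apart.
normal  = [0.1, -0.2, 0.15, -0.1]
failing = [3.0, -3.2, 2.9, -3.1]

should_shut_down(normal)   # -> False
should_shut_down(failing)  # -> True
```

The point of the example is the deployment pattern, not the math: the decision happens entirely on the device, with no round trip to a datacenter.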