If you like your hullabaloo to come in compute, storage, database/analytics, and devops-related flavours, then AWS’ annual re:Invent ‘educational conference’ event in Las Vegas last week didn’t disappoint. And that’s before the company revealed its new AI services and IoT on-device capabilities.
This year’s highlights included seven new Amazon EC2 instances; a ‘simple VPS’ called Amazon Lightsail; PostgreSQL compatibility for Amazon Aurora; the Amazon Athena serverless query service; a larger and compute-enabled version of last year’s AWS Snowball ruggedised data transfer appliance; a slew of tools and services to make building, deploying, and managing code easier; and AWS Snowmobile – a truck-load (literally) of ruggedised, tamper-resistant, storage in a 45′ shipping container… capable of shifting exabytes of data to the cloud (once it’s been driven off to an AWS datacentre).
Even at the Snowmobile’s 1Tb/sec transfer speed, a full exabyte is still at least a six-month round trip on and off the truck – but clearly that’s better than the years it’d otherwise take to transfer the same amount of data over the Internet. This is why AWS is now much better placed to go after prospective customers who’d previously protested that their data was just too darn heavy to push up to the cloud. Plus, nothing delights an AWS crowd like the sight of a big, shiny, white truck hauling a big, shiny, white shipping container on stage to mark the closing of a keynote address.
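The six-month figure is easy to sanity-check. A quick back-of-the-envelope calculation (assuming a sustained 1Tb/sec both loading and unloading the truck, and counting both legs as the ‘round trip’):

```python
# Sanity check of the Snowmobile transfer-time claim quoted above.
LINK_BITS_PER_SEC = 1e12   # 1 Tb/sec sustained transfer speed
EXABYTE_BITS = 8e18        # 1 exabyte = 8 x 10^18 bits
SECONDS_PER_DAY = 86_400

# One leg: loading the full exabyte onto the truck.
one_way_days = EXABYTE_BITS / LINK_BITS_PER_SEC / SECONDS_PER_DAY

# Round trip: load at the customer site, unload at the AWS datacentre.
round_trip_months = (2 * one_way_days) / 30.4  # average days per month

print(f"One-way transfer: {one_way_days:.0f} days")       # roughly 93 days
print(f"Round trip: {round_trip_months:.1f} months")       # roughly 6 months
```

So ‘at least six months’ holds up, and that’s before allowing for the drive itself.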
But arguably the announcements with the most bearing on MWD Advisors’ interests concerned the three new AI services (Amazon Rekognition, Amazon Polly, and Amazon Lex); and AWS Greengrass (pushing down AWS Lambda compute, messaging, data caching, and sync capabilities onto connected devices).
AWS AI services are starting to expose some of the machine learning that makes Amazon itself tick. Lex is so-called because it’s powered by the same deep learning algorithms that provide Amazon Alexa’s automatic speech recognition and natural language understanding, and it’s a fully-managed service, so developers need only pay for what they use. Amazon Polly’s lifelike speech is offered on the same basis (as a ‘pay per character’ service). Throw in the image and scene recognition of Amazon Rekognition and one worries that this ease of availability might just have fired the starting gun for the launch of 1,000 talking photo-recognition apps… But that’s not AWS’ intention.
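To give a flavour of how little code these services demand, here’s a minimal sketch of calling Polly and Rekognition via boto3. The operation and parameter names follow the boto3 API; the voice choice, sample text, and file name are illustrative assumptions, and the network calls themselves are shown commented out since they need AWS credentials:

```python
# Helpers that build the request parameters for two of the new AI services.
# Voice, text, and file names below are illustrative assumptions.

def polly_request(text, voice_id="Joanna"):
    """Parameters for a Polly SynthesizeSpeech call (billed per character)."""
    return {"Text": text, "OutputFormat": "mp3", "VoiceId": voice_id}

def rekognition_request(image_bytes, max_labels=10):
    """Parameters for a Rekognition DetectLabels call on raw image bytes."""
    return {"Image": {"Bytes": image_bytes}, "MaxLabels": max_labels}

# With credentials configured, the actual calls would look like:
#
#   import boto3
#   polly = boto3.client("polly")
#   audio = polly.synthesize_speech(**polly_request("Hello from re:Invent"))
#
#   rekognition = boto3.client("rekognition")
#   with open("scene.jpg", "rb") as f:
#       labels = rekognition.detect_labels(**rekognition_request(f.read()))

print(polly_request("Hello from re:Invent"))
```

A few lines per service is the whole point: the ‘heavy lifting’ of training and hosting the models stays on AWS’ side.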
The company’s mission may well be to bring bots to the masses, but this is only the start. Yes, the services are fledgling now, but AWS promises rapid updates and improvements (as Lex learns more about the art of conversation, as Rekognition gets charged with recognising a wider and more varied array of objects and scenes, and as feedback on Polly polishes up its accents and intonations).
If the AI suite doesn’t quite fit the bill for your use case yet, and you can’t wait until it might, then you are stuck (for now) with wrangling AWS Deep Learning Developer Resources. These leverage open source frameworks such as TensorFlow, Theano, Torch, CNTK, Caffe, and MXNet (the last of which Amazon CTO Werner Vogels recently declared the company’s “deep learning framework of choice”) on EC2 P2 or G2 compute instances. This approach doesn’t do quite as much of the ‘heavy lifting’ as the AWS AI services aim to do, but it at least brings flexibility, along with ease of integration with other AWS services, if you’re going all-in on developing within AWS.
AWS Greengrass provides compute, messaging and data caching on connected devices. This allows IoT applications built on AWS Lambda functions to run seamlessly across both AWS cloud services and local devices – even when those devices aren’t connected to the cloud – so they can cope with poor connectivity at ‘the edge’ of an IoT environment and use locally-collected data to respond quickly to events in situ.
AWS has added a new spin to its definition of ‘hybrid cloud’. No, not the pragmatic recognition that not all on-premises data will migrate to the cloud any time soon; instead it’s an IoT world that anticipates millions of data-deluging devices in everything from wearables and cars to planes, offices, and farms. Not all of these will be able to rely on cloud compute to respond to the data they collect, so it makes sense to embrace processing power at the edge (and, for AWS, to do so in a way that’s compatible with the functions that developers will apply through its hosted services). Devices running AWS Greengrass Core can act as a hub within a ‘Greengrass Group’ – with devices on the local network enabled with the AWS IoT Device SDK.
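The edge-processing idea above can be sketched as a Lambda function of the kind Greengrass Core would run locally: it reacts to a locally published sensor reading without a cloud round trip. Note this is a hedged illustration, not an AWS sample – the topic, threshold, and ‘pump’ scenario are all assumptions of ours; only the `handler(event, context)` entry-point signature is the standard Lambda convention.

```python
import json

# Hypothetical threshold for a temperature sensor at the edge (our assumption).
TEMP_THRESHOLD_C = 75.0

def handler(event, context):
    """Standard Lambda entry point. In a Greengrass deployment, Core would
    invoke this when a message arrives on a subscribed local MQTT topic,
    even with no connection to the AWS cloud."""
    reading = event.get("temperature_c", 0.0)
    if reading > TEMP_THRESHOLD_C:
        # In a real deployment this would publish to a local topic to
        # actuate a device immediately, in situ, while offline.
        return {"action": "shutdown_pump", "temperature_c": reading}
    return {"action": "none", "temperature_c": reading}

print(json.dumps(handler({"temperature_c": 80.2}, None)))
```

The appeal for developers is that this is the same Lambda programming model they already use against AWS’ hosted services, simply pushed down to the device.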
However, as customer demands increase for connected devices with more sophisticated autonomous action, so will the clamour for more than just Lambda functions to be available there to work with. What about AI services? How much of that can be embedded for local execution?
IBM’s tie-up with Cisco (announced in the summer) pushes Watson’s capabilities down to connected devices to bring cognitive insights to the edge. However, at the moment, if you want your AWS-enabled edge devices to benefit from machine learning techniques, you still have to connect them to AWS hosted services via the AWS IoT platform. But AWS has never been about being first with any new capability; what the company’s very good at instead is ‘mainstreaming’ them at scale – and its approach to learning systems is no different.
This does mean that you might not yet find what you need in AI and IoT on the AWS roster, if its minimum viable services aren’t yet viable enough for you. However, if you do tend to swim in AWS’ mainstream then the chances are that the product set will develop along lines that will suit your needs. But if you’re after something a bit more niche, or a bit more ahead of the masses, then you’ll find yourself looking at other platforms to do that particular heavy lifting for you (or you’ll continue to lift it yourself).