AWS 2018 Summit Learnings

To register or not to register, that was the question a week before Amazon's yearly AWS Summit conference in New York. While organizationally we have chosen to build our new home upon the pillowy platform of Salesforce, our history and future still show a strong presence of wisps of Amazon services. What could be new this year? The toolbox has admittedly been full for the past few years, but that never stopped their marketing geniuses. After some deliberation, I decided to trade my day of cubes and conference rooms in Hamilton for a New York adventure in an oversized glass building, rubbing shoulders with three thousand engineers. Am I crazy?

If you've never been to the Javits Center for Summit, imagine a three-story version of Jim Henson's Labyrinth with over 100 Amazon volunteers trying to tell you which way to go. Luckily the first stop is the keynote, so just follow the masses. This year Amazon's CTO took the floor wearing an Epic Games Fortnite t-shirt. Wait, was I at the right venue? I'm supposed to be hearing about new product announcements, not watching video games. Quick, take a few cell pictures and send them to my fiancée, to which she replies "IthoughtyouwentintoNYCforwork!".

Of course I did, but right now I was learning how Epic Games went all in with Amazon, using a multitude of services to help them scale through the overnight success of their hit game. Besides millions of users, Epic also had to process enormous amounts of data from the clients and servers running Fortnite. They accomplished this feat using AWS Kinesis to send data to S3 and DynamoDB to power an analytics solution. Bravo, Epic.
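The talk didn't show code, but the client-to-Kinesis leg of a pipeline like that might look something like the sketch below. The stream name, event shape, and choice of partition key are my own assumptions for illustration, not Epic's actual design:

```python
import json

# Hypothetical stream name -- an assumption, not from the talk.
STREAM_NAME = "game-telemetry"

def build_record(event: dict) -> dict:
    """Build a Kinesis PutRecord request: JSON payload, partitioned by player
    so each player's events stay ordered within a shard."""
    return {
        "StreamName": STREAM_NAME,
        "Data": json.dumps(event).encode("utf-8"),
        "PartitionKey": str(event["player_id"]),
    }

# With AWS credentials configured, the record would actually be sent with:
#   import boto3
#   boto3.client("kinesis").put_record(**build_record(event))
# Downstream, Kinesis Firehose or consumers would land the data in S3
# and DynamoDB for the analytics layer described above.
```

Partitioning by player ID is just one reasonable choice; any high-cardinality key that spreads load evenly across shards would work.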

After the keynote, it was off to lunch to plan the rest of the day. Prior to the Sparta Systems user conference this year, I had worked on a prototype using an Apache NLP project to find personal data. I knew Amazon had their own tools in the broadening ML space, so understanding those offerings was my first goal. Before this year, I knew very little about the AI/ML space, and even now I feel I'm just scratching the surface.


First stop, "Machine Learning with Amazon SageMaker". In the keynote, the AWS ML offerings were presented across three levels. At the top were the high-level services such as Amazon Comprehend, Lex, Polly, Rekognition, etc. Under that was SageMaker, marketed as an ML platform. Below the platform sit the core ML frameworks such as TensorFlow, MXNet, PyTorch, etc. This is where the nitty-gritty data science magic happens. The eternal optimist, I hoped I could just throw data at SageMaker, press a button, and presto... let the future be known! Not quite. There is a large cost to managing a typical ML infrastructure. You need servers for building, training, and deploying your models, and that's where the cloud comes in. In addition to the virtual computing advantages, SageMaker comes with some prebuilt models to use as a starting point and tweak for your use case. The tweaking, or "tuning" in ML parlance, is also a time-consuming task. For models already trained to 90% accuracy, Amazon recommends using SageMaker's hyperparameter tuning to get you to the next level.
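To make that concrete, here is a rough sketch of the tuning-job configuration you would hand to the SageMaker API's `create_hyper_parameter_tuning_job` call via boto3. The metric name, learning-rate range, and job limits are illustrative assumptions, not values from the session:

```python
def tuning_job_config(metric: str, max_jobs: int = 20) -> dict:
    """Build the HyperParameterTuningJobConfig fragment for SageMaker's
    create_hyper_parameter_tuning_job. Values here are illustrative."""
    return {
        # Bayesian search lets SageMaker pick the next trial based on
        # previous results rather than sweeping a grid blindly.
        "Strategy": "Bayesian",
        "HyperParameterTuningJobObjective": {
            "Type": "Maximize",
            "MetricName": metric,  # e.g. a validation accuracy metric
        },
        "ResourceLimits": {
            "MaxNumberOfTrainingJobs": max_jobs,
            "MaxParallelTrainingJobs": 2,
        },
        "ParameterRanges": {
            "ContinuousParameterRanges": [
                # Assumed hyperparameter; real names depend on the algorithm.
                {"Name": "learning_rate", "MinValue": "0.001", "MaxValue": "0.1"},
            ]
        },
    }

# With credentials configured, this fragment would be passed as:
#   import boto3
#   boto3.client("sagemaker").create_hyper_parameter_tuning_job(
#       HyperParameterTuningJobName="demo-tuning",
#       HyperParameterTuningJobConfig=tuning_job_config("validation:accuracy"),
#       TrainingJobDefinition={...},  # algorithm, data channels, instance type
#   )
```

Each trial is a full training job on its own instances, which is exactly the infrastructure cost the session was pointing at, and why capping `MaxNumberOfTrainingJobs` matters.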

Alright, time for a distraction from AI and to catch a topic more near and dear to my heart... Security. To my surprise, the session was scheduled in the same room as the keynote. Could this be a mistake? In my experience, security sessions were ALWAYS the least attended. But no, it was there... alongside six other talks! As I walked back into the large room, my eyes settled on a garden of glowing headphones worn by attendees throughout the crowd. Each colored headset was tuned to a different speaker. Great idea.


The focus of the session was on incident response and automating a runbook. For those not familiar with incident response, a runbook is your step-by-step guide to follow in the event of an incident. Having a runbook that is regularly rehearsed eliminates the "Oh no, what do we do now?" effect that is all too common after disaster strikes. The more your organization automates the runbook, the less downtime and the less room for human error when restoring to a secure state. Examples of automating IR with Amazon included using GuardDuty findings to trigger Lambda functions that perform actions such as isolating an instance or altering security policies.
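A minimal sketch of that GuardDuty-to-Lambda pattern might look like the handler below. The quarantine security group ID is an assumption (an isolation group with no useful rules), the finding shape is simplified, and the EC2 client is passed in so the logic can be exercised without AWS credentials; this is not code from the session:

```python
# Assumed isolation security group with no meaningful ingress/egress rules.
QUARANTINE_SG = "sg-quarantine"

def handler(event: dict, ec2) -> str:
    """Lambda entry point, wired to GuardDuty findings via a CloudWatch
    Events rule. Pulls the affected instance out of its security groups."""
    # GuardDuty EC2 findings carry the instance ID under detail.resource.
    instance_id = event["detail"]["resource"]["instanceDetails"]["instanceId"]

    # Replacing every attached security group with the quarantine group
    # cuts the instance off from the network while preserving it for
    # forensics -- one common "isolate an instance" runbook step.
    ec2.modify_instance_attribute(InstanceId=instance_id,
                                  Groups=[QUARANTINE_SG])
    return instance_id

# In a real deployment, ec2 would be boto3.client("ec2") created at
# module load, and the return value would feed a notification step.
```

Injecting the client is a small design choice that makes the runbook step unit-testable, which matters if the runbook is rehearsed regularly as the session recommended.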

With enough time for one more session, I headed to a presentation on Amazon Comprehend. Comprehend performs part-of-speech identification and sentiment analysis on text. Presented as "data science in a box", high-level services such as Comprehend appear to be ideal fits for well-known AI problems and for organizations that don't have a dedicated budget for AI. The presenters showed how Comprehend can be used to analyze Twitter and build a graph of brand relationships, with the graph database powered by Amazon Neptune.
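As a toy version of that idea, the helper below averages Comprehend's positive-sentiment score across tweets that mention a brand. The client is passed in (in practice it would be `boto3.client("comprehend")`), and the helper name and scoring scheme are my own illustration, not the presenters' pipeline:

```python
def brand_sentiment(client, tweets: list[str], brand: str):
    """Average Comprehend positive-sentiment score for tweets that
    mention the given brand; returns None if no tweet matches."""
    scores = []
    for text in tweets:
        if brand.lower() not in text.lower():
            continue
        # detect_sentiment returns an overall label plus per-class scores
        # under SentimentScore (Positive/Negative/Neutral/Mixed).
        resp = client.detect_sentiment(Text=text, LanguageCode="en")
        scores.append(resp["SentimentScore"]["Positive"])
    return sum(scores) / len(scores) if scores else None
```

From there, brand-to-brand co-mentions could become edges in a graph store such as Neptune, which is roughly the shape of the demo described above.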

It's now 6pm. Weary-eyed attendees flock back to the trade floor for free cocktails and a last chance to score some swag for office mates. My mind starts to wander to train schedules and the commute home. Once again, I'm glad I made the decision to escape the office for the day. The once-a-year opportunity allows us to connect with colleagues outside our usual sphere. There's a hidden energy among us all as we contemplate what we built yesterday and what we'll be building tomorrow. With the Amazon cloud at our feet, the limit is not the sky but our wallet.