The last few months have seen a blizzard of announcements from the big players as they jostle for position in the AI race: first at Google Cloud Next, and more recently, at Microsoft Ignite and OpenAI’s DevDay. Every one of these events has unveiled new and exciting tools that promise to transform how we (and our clients) work and deliver great experiences. It probably goes without saying that we’ve been eagerly awaiting the final event of this landmark year – AWS re:Invent. When it comes to AI, Amazon has been more understated than their competitors, so we wondered: is that about to change?

After following along with five days of announcements, we think: yes. In this post, we’re going to zoom in on some of the most interesting reveals, and why we think they have so much potential.

Amazon Bedrock: De-Risking Your LLM Choices

Emerging technologies are inherently volatile. Consider the headlines of the last few weeks alone: we’ve seen stunning leadership developments at OpenAI. Meta and IBM have formed a new “AI Alliance.” And at re:Invent, Amazon announced Claude 2.1 on Bedrock, which once again leapfrogs GPT-4 Turbo with a larger context window (200K tokens to GPT-4 Turbo’s 128K) at a lower price per token – a development that may cause us to reassess our post-DevDay recommendations.

Given this environment, it might seem risky to choose an LLM; the model that seems like the best fit today might not be what you need next year – or even next month. But it doesn’t have to be as risky as it seems. You can insulate your system from this volatility (and avoid dreaded vendor lock-in) by maintaining an abstraction layer that lets you swap one LLM for another with minimal code changes. Amazon clearly recognizes this, and has built Bedrock to make abstraction as simple as possible.
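To make this concrete, here’s a minimal sketch of what such an abstraction layer can look like, assuming the OpenAI Python SDK (v1) and boto3 with Bedrock model access enabled. The class names and default model IDs are our own illustrative choices, not a prescribed pattern:

```python
import json
from typing import Protocol

import boto3               # AWS SDK for Python (pip install boto3)
from openai import OpenAI  # OpenAI SDK v1 (pip install openai)


class TextModel(Protocol):
    """The abstraction layer: application code sees only this interface."""

    def complete(self, prompt: str) -> str: ...


class OpenAIText:
    """Adapter for OpenAI's chat completions API."""

    def __init__(self, model: str = "gpt-4-1106-preview"):
        self._client = OpenAI()  # reads OPENAI_API_KEY from the environment
        self._model = model

    def complete(self, prompt: str) -> str:
        resp = self._client.chat.completions.create(
            model=self._model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content


class BedrockClaudeText:
    """Adapter for Claude 2.1 served through Amazon Bedrock."""

    def __init__(self, model_id: str = "anthropic.claude-v2:1"):
        self._client = boto3.client("bedrock-runtime")
        self._model_id = model_id

    def complete(self, prompt: str) -> str:
        # Claude's native completion format expects this prompt framing.
        body = json.dumps({
            "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
            "max_tokens_to_sample": 512,
        })
        resp = self._client.invoke_model(modelId=self._model_id, body=body)
        return json.loads(resp["body"].read())["completion"]


def summarize(model: TextModel, text: str) -> str:
    """Application code depends on the interface, not on a vendor SDK."""
    return model.complete(f"Summarize the following text:\n\n{text}")
```

Because `summarize` depends only on the `TextModel` protocol, switching providers is a one-line change at the call site – `summarize(BedrockClaudeText(), text)` instead of `summarize(OpenAIText(), text)` – and the rest of the system never knows the difference.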

Bedrock Cuts Out the Middleware

When creating an abstraction layer in a platform, many engineers turn to tools like LangChain or FlowiseAI, which let you build an LLM workflow while keeping the destination of the final request flexible – allowing you to, for example, summarize a text and switch between OpenAI’s model and Anthropic’s with the click of a button. As with any middleware, there are pros and cons to this approach (check out our recent thoughts on LangChain).
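For illustration, here’s roughly what that middleware approach looks like with LangChain’s expression language, assuming current `langchain-openai` and `langchain-anthropic` packages (the `models` registry is our own sketch, not part of LangChain itself):

```python
# pip install langchain-core langchain-openai langchain-anthropic
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Summarize this text:\n\n{text}")

# The workflow is identical for every provider; only the model object changes.
models = {
    "openai": ChatOpenAI(model="gpt-4-1106-preview"),
    "anthropic": ChatAnthropic(model="claude-2.1"),
}


def summarize(text: str, provider: str = "openai") -> str:
    chain = prompt | models[provider] | StrOutputParser()
    return chain.invoke({"text": text})
```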

Amazon Bedrock, however, puts models from multiple providers behind a single managed API – cutting out the middleware and even eliminating the “hop” between cloud providers when you want to make a switch. There’s no need to go outside the network to another cloud provider. Further simplifying matters, Amazon now also offers a commoditized RAG architecture with Knowledge Bases for Amazon Bedrock, letting you create grounded, retrieval-augmented systems without building the underlying infrastructure – vector databases, ingestion pipelines, and glue code – yourself.
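Here’s a hedged sketch of how little code that leaves you with: a single `retrieve_and_generate` call pulls relevant chunks from the managed vector store and produces a grounded answer. The knowledge base ID, question, and region below are placeholders:

```python
import boto3

# Knowledge Bases for Amazon Bedrock: one managed call handles retrieval
# and generation -- no vector database or glue code of your own to run.
agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = agent_runtime.retrieve_and_generate(
    input={"text": "What does our SLA promise for API uptime?"},  # placeholder question
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "XXXXXXXXXX",  # placeholder: your knowledge base ID
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2:1",
        },
    },
)

print(response["output"]["text"])  # grounded answer; sources in response["citations"]
```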

The Case For (And Against) Abstraction

Like all design decisions, abstraction isn’t right for everyone. As Amazon CTO Werner Vogels pointed out in his re:Invent keynote (and highlighted in his new book, The Frugal Architect), architecting any system involves tradeoffs. When we counsel our clients on abstraction, we emphasize:

  • Up-front investment & tech debt. Even when using a simplified tool like Bedrock, there will be a larger upfront investment required to integrate the abstraction layer. Not only does this increase initial development time, but it also adds a layer of complexity to your code that will need to be maintained long-term.
  • Pricing considerations. As noted earlier in this post, pricing volatility is rampant in today’s AI landscape. Being able to hedge against these fluctuations by easily switching LLMs can be a huge advantage, especially if you’re using AI at scale. Imagine being able to change your LLM for one with more favorable pricing in real time, without having to re-architect your system. Bedrock makes that possible (see the sketch after this list).
  • Future-proofing. Any ambitious organization will regularly discover new use cases, and those use cases may demand a different LLM than the one you originally chose. The model you picked could also turn out to leak sensitive information when exploited – or be discontinued entirely. In any case, abstraction helps you future-proof your system by treating the arrival of new LLMs as an expected part of your architecture, rather than an exception.
  • New dependencies. If you choose to use a third-party library for abstraction, you’ll be dependent on its maintainers for security, compliance, new features, and support for new providers – which, in this fast-moving space, may not arrive fast enough for your organization. Of course, using Bedrock sidesteps this problem.
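To make the real-time pricing switch from the list above concrete, here’s a minimal sketch that reads the active model ID from an SSM parameter, so operations can repoint traffic without a deploy. The parameter name and model IDs are placeholders:

```python
import boto3

ssm = boto3.client("ssm")


def active_model_id() -> str:
    """One place to flip models when pricing changes -- no redeploy needed."""
    return ssm.get_parameter(Name="/myapp/llm/model-id")["Parameter"]["Value"]


# Combined with an adapter layer like the one sketched earlier, a pricing
# change becomes a parameter update rather than a re-architecture:
#
#   aws ssm put-parameter --name /myapp/llm/model-id \
#       --value amazon.titan-text-express-v1 --overwrite
```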

All things considered, if abstraction is a good fit for your business, but you’re on the fence about implementing it, the announcements Amazon made about Bedrock last week make a pretty persuasive case in favor.  

Amazon Q: A Game-Changer for Developers?

The announcements around Amazon Bedrock could be game-changers for our clients, but what about the developer community? Amazon’s making a major move there, too – with Q, their new productivity assistant. We’re thrilled to see Amazon finally jump into this arena with their answer to Microsoft’s Copilot and Google’s Duet AI, among others.

Q is a fully integrated chatbot designed to access data not only from a single platform, but across all connected systems. It can assist with generic questions related to Amazon (such as how to spin up an EC2 instance), but unlike its competitors, it can also provide specific insights about your code, data, and infrastructure. Think of it this way: systems like GitHub Copilot can answer questions about your code, but they wouldn’t, for example, be able to debug a system whose performance has degraded because of systemic issues in the overall architecture. Amazon promises that Q can.

That type of assistance has a lot of potential, but we also need our productivity assistants to boost our velocity in more concrete ways. One of the biggest benefits of AI for developers is the ability to automate and simplify the mundane, time-consuming tasks that can delay or derail more ambitious initiatives. Internally, we’ve been using LLMs to help us automate and accelerate cloud migrations, with a lot of success so far. Amazon has picked up on use cases like this with Q’s Code Transformation preview, which promises to simplify and accelerate application upgrades (such as moving Java and .NET applications to newer language versions). Reducing this work from days to minutes could make a huge difference in how we work.

Werner Vogels has called this a new paradigm, where these tools not only increase productivity by being incorporated into existing systems, but also help engineers collaborate better. In other words: Amazon posits that Q is a game-changer.

So, is it? Not yet. As with every product launch in this fast-moving space, quality is an immediate concern, and early previews have shown Q delivering sub-par and even outright wrong answers. Even once Amazon addresses these flaws and improves the chatbot, this is a good reminder that we need to approach any AI-generated content with a degree of caution – and, for critical workflows, keep a human in the loop (HITL).

Overall, though, we’re excited to see this kind of AI integrated further into our daily lives, enhancing – or even replacing – Amazon’s Alexa assistant. We’re sure Apple and Google will soon respond to Amazon Q with answers of their own in assistants like Siri and Google Assistant.

Did AWS re:Invent Change the AI Conversation?

Ultimately, we wouldn’t say that what Amazon shared last week is going to change the AI landscape as a whole, but it definitely cemented their place in the pack and pushed back on the perception that they’ve been lagging behind. While we didn’t see as many consumer-facing capabilities as their competitors have shown, Amazon made up for it with some exciting developer capabilities that can level up how we architect and deploy AWS services for our clients. Watch this space for deeper dives into Bedrock and Q as we explore both within our AI practice and put them to work for our clients.

GenAI can supercharge developer velocity. Our intensive 1-day workshop teaches you how.

LEARN MORE