Anirban Ghoshal
Senior Writer

Amazon Bedrock updated with contextual grounding, RAG connectors

news
Jul 10, 2024 · 5 mins
Amazon Web Services | Artificial Intelligence | Cloud Computing

In addition to adding new LLMs to its Bedrock generative AI service, AWS is releasing a new Guardrails API that is open for use across other platforms.

Credit: Jeff Holcombe / Shutterstock

Amazon Web Services (AWS) has updated its generative AI development service, Amazon Bedrock, with a new contextual grounding feature, connectors for retrieval augmented generation (RAG), and additional large language models (LLMs).

These updates, according to Amazon’s vice president of generative AI Vasi Philomin, are expected to ease the development of generative AI-based applications for enterprises.

Contextual grounding, an addition to Guardrails for Amazon Bedrock, aims to further reduce the chances of receiving unwarranted or hallucinatory responses from an application underpinned by an LLM, Philomin said.

Contextual grounding works in RAG and summarization applications to detect hallucinations in model responses, checking whether the LLM's response is grounded in the right enterprise data and aligned with the user's query or instruction, the vice president explained.

Other major cloud service providers, such as Google Cloud and Microsoft Azure, also have systems in place to evaluate the reliability of RAG applications, including the mapping of response generation metrics.

While Microsoft uses the Groundedness Detection API to check whether the text responses of LLMs are grounded in the source materials provided by users, Google recently updated its grounding options inside Vertex AI with features such as dynamic retrieval and high-fidelity mode.

AWS also offers RAG evaluation and observability features in Amazon Bedrock that use metrics such as faithfulness, answer relevance, and answer semantic similarity to benchmark a query response. AWS said that it will offer Guardrails for Amazon Bedrock as a separate API to be used with LLMs outside of Bedrock.
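AWS does not publish the implementation behind these metrics, but answer semantic similarity is typically computed as the cosine similarity between vector representations of a generated answer and a reference answer. A minimal sketch using toy bag-of-words vectors (the real service would use learned embeddings, not word counts):

```python
import math
from collections import Counter

def cosine_similarity(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def answer_semantic_similarity(answer, reference):
    # Toy stand-in for embedding-based similarity: compare
    # bag-of-words vectors of the generated and reference answers.
    return cosine_similarity(Counter(answer.lower().split()),
                             Counter(reference.lower().split()))

score = answer_semantic_similarity(
    "Bedrock adds contextual grounding checks",
    "Contextual grounding checks were added to Bedrock")
```

A score near 1.0 indicates the generated answer closely matches the reference; in practice, embeddings capture paraphrases that word overlap misses.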

“The standalone Guardrails API is being released after we received a lot of requests from enterprise customers who want to use it on models outside of Bedrock,” Philomin said.
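As a rough sketch of how such a standalone call might be structured with boto3: the guardrail ID, version, and content below are placeholders, and the exact payload shape should be verified against AWS's API documentation. The request is assembled as a plain dict so the network call itself (commented out) stays optional:

```python
def build_guardrail_request(guardrail_id, guardrail_version, model_output,
                            grounding_source=None, query=None):
    # Assemble a request payload for the standalone Guardrails API
    # (ApplyGuardrail in bedrock-runtime). Content tagged with the
    # "grounding_source" and "query" qualifiers feeds the contextual
    # grounding check described earlier in the article.
    content = []
    if grounding_source:
        content.append({"text": {"text": grounding_source,
                                 "qualifiers": ["grounding_source"]}})
    if query:
        content.append({"text": {"text": query, "qualifiers": ["query"]}})
    content.append({"text": {"text": model_output}})
    return {
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": guardrail_version,
        "source": "OUTPUT",   # evaluate a model response, not user input
        "content": content,
    }

request = build_guardrail_request(
    "gr-example", "1",
    model_output="Our refund window is 90 days.",
    grounding_source="Policy doc: refunds are accepted within 30 days.",
    query="What is the refund window?")

# With AWS credentials configured, the call would be roughly:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.apply_guardrail(**request)
# An "action" of "GUARDRAIL_INTERVENED" signals a blocked response.
```

Because the guardrail is applied to arbitrary text rather than to a Bedrock invocation, the same request shape can wrap output from models hosted anywhere, which is the use case enterprise customers were asking for.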

AWS’s move is reminiscent of IBM’s strategy to make its watsonx.governance product available via a toolkit this May. While the IBM toolkit is not exactly comparable to the Guardrails API, it can be used to create a repository for logging details throughout a model’s life cycle. Such information can be helpful in evaluating the rationale behind a certain model choice or determining which stakeholder had what involvement in the model’s life cycle.

New RAG connectors and large language models

As part of the updates to its generative AI service, AWS has added new RAG connectors to Amazon Bedrock to allow developers to ground models across a larger variety of data sources.

In addition to Amazon S3, developers can now use connectors for Salesforce, Confluence, and SharePoint for RAG, Philomin said. He said the SharePoint connector was currently in preview.

AWS also announced that it was adding the ability to fine-tune Anthropic’s Claude 3 Haiku inside Bedrock. Fine-tuning for Claude 3 Haiku, which is in preview, is not yet available from any other cloud service provider and is heavily sought by enterprise customers, according to Philomin.

Other updates include the general availability of vector search for Amazon MemoryDB and new capabilities for Agents for Amazon Bedrock.

“Now agents can retain memory across multiple interactions to remember where you last left off and provide better recommendations based on prior interactions,” Philomin said. He noted that Agents can now interpret code to tackle complex data-driven use cases, such as data analysis, data visualization, text processing, solving equations, and optimization problems.

AWS App Studio to help develop apps in natural language

AWS on Wednesday also showcased AWS App Studio, a generative AI-based managed service that allows any enterprise user to build and deploy applications using natural language prompts.

“With App Studio, a user simply describes the application they want, what they want it to do, and the data sources they want to integrate with, and App Studio builds in minutes an application that could have taken a professional developer days to build from scratch,” AWS said in a statement. App Studio’s generative AI assistant eliminates the need for learning any low-code tools, the company said.

AWS App Studio is likely to compete with rival offerings such as Google’s Vertex AI Studio and Microsoft’s Copilot Studio.

AWS also announced the general availability of Amazon Q Apps, a feature inside Amazon Q Business that was showcased at AWS re:Invent last year.

“With Amazon Q Apps, enterprise users can go from conversation to generative AI-powered app based on their company data in seconds. Users simply describe the application they want in a prompt and Amazon Q instantly generates it,” AWS said. The company added that Amazon Q allows users to generate an app from an existing conversation.
