Amazon was one of the tech giants that agreed to the White House’s voluntary commitments last year on the use of generative AI. Work on the safety and privacy considerations behind those commitments continues to evolve, with the latest announced at the AWS Summit in New York on July 10. In particular, the new contextual grounding check in Guardrails for Amazon Bedrock provides customizable content filters for organizations deploying their own generative AI.
AWS’s head of responsible AI, Diya Wynn, spoke to TechRepublic in a virtual pre-event briefing about the new announcements and how companies are balancing the expansive knowledge of generative AI with privacy and inclusion.
AWS NY Summit Announcement: Updates to Guardrails for Amazon Bedrock
Guardrails for Amazon Bedrock, the safety filter for generative AI applications hosted on AWS, has new enhancements:
- Users can now fine-tune Anthropic’s Claude 3 Haiku in Bedrock, available in preview starting July 10.
- Guardrails for Amazon Bedrock adds a contextual grounding check to detect hallucinations in model responses for retrieval-augmented generation (RAG) and summarization applications.
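To show how such a grounding check might be wired up, here is a minimal sketch using the AWS SDK for Python (boto3). The guardrail name and thresholds are illustrative assumptions, not values published by AWS, and the exact policy fields should be checked against the Bedrock documentation.

```python
# Minimal sketch (assumptions noted): create a guardrail that includes a
# contextual grounding check, which flags responses that are not grounded
# in the retrieved source documents or not relevant to the user's query.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_guardrail(
    name="rag-grounding-guardrail",  # hypothetical name
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that response.",
    contextualGroundingPolicyConfig={
        "filtersConfig": [
            {"type": "GROUNDING", "threshold": 0.75},  # assumed threshold
            {"type": "RELEVANCE", "threshold": 0.75},  # assumed threshold
        ]
    },
)
print(response["guardrailId"], response["version"])
```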
Guardrails is also being extended with a standalone ApplyGuardrail API, which lets AWS customers and enterprises apply safeguards to their generative AI applications even when the models are hosted outside of AWS infrastructure. That means app creators can apply toxicity and content filters and flag sensitive information they want excluded from their applications. Wynn said custom Guardrails can filter out up to 85% of harmful content.
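As a rough illustration of the standalone API, here is a minimal sketch of calling ApplyGuardrail on output from a model hosted outside AWS, again using boto3; the guardrail ID, version, and sample text are hypothetical.

```python
# Minimal sketch (hypothetical IDs and text): run an existing guardrail
# against output produced by a model hosted anywhere, via ApplyGuardrail.
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Output from a third-party or self-hosted model (illustrative).
model_output = "Sure, the customer's phone number is 555-0100."

result = runtime.apply_guardrail(
    guardrailIdentifier="abc123example",  # hypothetical guardrail ID
    guardrailVersion="1",
    source="OUTPUT",  # evaluate model output rather than user input
    content=[{"text": {"text": model_output}}],
)

if result["action"] == "GUARDRAIL_INTERVENED":
    # The guardrail blocked or masked content; use its sanitized text instead.
    print(result["outputs"][0]["text"])
else:
    print(model_output)
```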
The contextual grounding check and the ApplyGuardrail API are available in select AWS Regions starting July 10.
Contextual grounding in Guardrails for Amazon Bedrock is part of a broader AWS responsible AI strategy
Wynn said contextual grounding ties into AWS’s overall responsible AI strategy in terms of its ongoing “scientific advancements, continued innovation, and providing customers with services that they can leverage to develop and build AI products.”
“One of the things that clients often worry about or consider is hallucinations,” she said.
Contextual grounding and Guardrails in general can help mitigate that problem; Guardrails with contextual grounding can reduce hallucinations seen in generative AI by up to 75%, Wynn said.
As generative AI has become more popular over the past year, the way customers view it has changed.
“When we started doing customer-facing work, customers weren’t necessarily coming to us,” Wynn said. “We looked at specific use cases and helped support things like development, but the shift over the last year or so has ultimately been a greater awareness of [generative AI], and so companies are wanting and demanding a greater understanding of how we build and what we can do to make our systems secure.”
She said that means “tackling bias issues” as well as reducing security concerns and AI hallucinations.
Amazon Q Enterprise Assistant Additions and Other Announcements from AWS NY Summit
AWS announced a number of new features and adjustments to its products at the AWS NY Summit. Here are some highlights:
- A developer customization capability for the Amazon Q enterprise AI assistant, with secure access to your organization’s code base.
- Amazon Q added to SageMaker Studio.
- General availability of Amazon Q Apps, a tool for deploying generative AI-based apps built on your company’s data.
- Access Scale AI on Amazon Bedrock to customize, configure, and fine-tune your AI models.
- Vector search for Amazon MemoryDB, which speeds up vector searches in the AWS in-memory database.
Note: Amazon recently announced Graviton4-based cloud instances that can support AWS’s Trainium and Inferentia AI chips.
AWS Achieves Cloud Computing Education Goal Ahead of Schedule
At the NY Summit, AWS announced that it has surpassed its goal of training 29 million people worldwide in cloud computing skills by 2025: 31 million people across 200 countries and regions have taken cloud-related AWS training courses.
AI Training and Roles
AWS’s training offerings are too varied to list in full here, but free training on cloud computing has been available globally, both in person and online, including training on generative AI through the AI Ready initiative. Wynn highlighted two roles people can train for among the new jobs of the AI era: prompt engineers and AI engineers.
“You don’t necessarily have a data scientist involved,” Wynn said. “They don’t train the base models. You probably have someone like an AI engineer.” An AI engineer fine-tunes the base models and adds them to the application.
“I think the AI engineer role is one that’s seeing an increase in visibility or popularity,” Wynn said. “The other one is where I think there are people who are doing prompt engineering. That’s a new role or skill set that’s needed because it’s not as simple as people think. It’s about providing the right context and details to get the specific things you want out of a large language model.”
TechRepublic covered the AWS NY Summit remotely.