Today we are excited to announce that Mistral Small, a foundation model (FM) from Mistral AI, is now generally available on Amazon Bedrock. This is a quick follow-up to Mistral 7B and Mixtral 8x7B, announced in March, and Mistral Large, announced in April. You now have access to four high-performance Mistral AI models on Amazon Bedrock: Mistral Small, Mistral Large, Mistral 7B, and Mixtral 8x7B, further expanding your model selection.
Mistral Small, developed by Mistral AI, is a highly efficient large language model (LLM) optimized for high-volume, low-latency language tasks. It is well suited for simple tasks that can be performed in bulk, such as classification, customer support, or text generation, and delivers strong performance at a cost-effective price point.
Here are some key features of Mistral Small you should know about:
- Retrieval-Augmented Generation (RAG) specialization – Mistral Small retains important information even over long context windows, which can scale up to 32K tokens.
- Coding proficiency – Mistral Small excels at code generation, review, and commenting, and supports all major coding languages.
- Multilingual support – In addition to English, Mistral Small offers top-notch performance in French, German, Spanish, and Italian. It also supports dozens of other languages.
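As a sketch of how the long context window supports RAG, retrieved documents can simply be concatenated into the prompt ahead of the question. The helper below is a hypothetical illustration (the function name and prompt layout are my own, not part of any Mistral or Amazon Bedrock API):

```python
def build_rag_prompt(question, documents):
    """Concatenate retrieved documents into a single Mistral-style prompt.

    Hypothetical helper: the layout is illustrative only. With a 32K-token
    context window, many documents can be packed in before the question.
    """
    context = "\n\n".join(documents)
    return (
        "<s>[INST] Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question} [/INST]"
    )

prompt = build_rag_prompt(
    "What is the refund window?",
    [
        "Refunds are accepted within 30 days of purchase.",
        "Shipping takes 5-7 business days.",
    ],
)
print(prompt)
```

The resulting string can then be sent as the `prompt` field of an Amazon Bedrock request body, as shown later in this post.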
Getting started with Mistral Small
To get started with Mistral Small, you first need to request model access. Go to the Amazon Bedrock console, choose Model access, and then choose Manage model access. In the Mistral AI section, select Mistral Small and then choose Save changes.
You now have model access for Mistral Small and can use it in Amazon Bedrock. Refresh the Base models table to see its current access status.
Use the following prompt template to get optimal output from the model:
<s>[INST] Instruction [/INST]
Note that <s> is a special token for beginning of string (BOS), while [INST] and [/INST] are regular strings.
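The `<s>[INST] ... [/INST]` template is easy to mistype by hand, so it can help to apply it with a tiny helper function (a sketch; the function name is my own, not part of the Bedrock API):

```python
def format_instruction(prompt: str) -> str:
    """Wrap a raw prompt in the Mistral instruction template.

    <s> is the beginning-of-string (BOS) token; [INST] and [/INST]
    delimit the instruction. Helper name is illustrative only.
    """
    return f"<s>[INST] {prompt} [/INST]"

wrapped = format_instruction("Classify the following email.")
print(wrapped)
```

The wrapped string is what goes into the `prompt` field of the request body shown later in this post.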
Try the following prompt to see an example classification:
Prompt:
<s>[INST] Classify the following email to determine if it is spam or not. Only respond with the exact words "Spam" or "Not spam".
🎉 Limited time offer! Buy 2, Get 1 Free! Offer ends today! Don't miss out the CHANCE TO SAVE, please click on the link below: https://bit.ly/buy-2-get-1-free [/INST]
Mistral 7B, Mixtral 8x7B, and Mistral Large can all correctly classify this email as "Spam," and Mistral Small classifies it just as accurately as the larger models. I also tried a few similar tasks, such as generating a Bash script from a text prompt and creating a recipe for preparing yogurt, with good results. For tasks like these, Mistral Small is the most cost-effective and efficient option among the Mistral AI models on Amazon Bedrock.
Mistral Small is great for multilingual tasks. In addition to English, it offers excellent performance in French, German, Spanish, and Italian. I test the model's understanding of German by asking it to write two sentences about sustainability.
Prompt:
<s>[INST] Geben Sie mir zwei Sätze zum Thema Nachhaltigkeit. [/INST]
Output:
1. Nachhaltigkeit bedeutet, die Bedürfnisse der heutigen Generation zu befriedigen, ohne die Möglichkeiten künftiger Generationen zu gefährden.
2. Ein nachhaltiger Lebensstil umfasst den bewussten Umgang mit Ressourcen, wie Wasser, Energie und Rohstoffen, sowie den Schutz von Umwelt und Klima.
Diese Sätze sollten Ihnen einen guten Überblick über das Thema Nachhaltigkeit geben.
(Translation: 1. Sustainability means meeting the needs of today's generation without jeopardizing the opportunities of future generations. 2. A sustainable lifestyle includes the conscious use of resources, such as water, energy, and raw materials, as well as the protection of the environment and climate. These sentences should give you a good overview of the topic of sustainability.)
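Assuming the same request shape used in the Python example later in this post, the German prompt can be packaged into a JSON request body like this (a sketch; only the `prompt` and `max_tokens` fields are shown, and the `max_tokens` value is my own choice):

```python
import json

# Build the JSON request body for the German sustainability prompt,
# using the Mistral instruction template around the prompt text.
body = json.dumps({
    "prompt": "<s>[INST] Geben Sie mir zwei Sätze zum Thema Nachhaltigkeit. [/INST]",
    "max_tokens": 128,
})
print(body)
```

This string is what gets passed as the `body` parameter of the Amazon Bedrock Runtime `invoke_model` call.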
Interact programmatically with Mistral Small
The AWS Command Line Interface (AWS CLI) and AWS Software Development Kits (SDKs) allow you to interact with Mistral Small programmatically through the Amazon Bedrock API. I use the following Python code, built on the AWS SDK for Python (Boto3), to call the Amazon Bedrock Runtime API and ask, "What is the color of the sky?"
import argparse
import json

import boto3
from botocore.exceptions import ClientError

accept = "application/json"
content_type = "application/json"

def invoke_model(model_id, input_data, region, streaming):
    client = boto3.client("bedrock-runtime", region_name=region)
    try:
        if streaming:
            # Stream the response and print each chunk as it arrives
            response = client.invoke_model_with_response_stream(
                body=input_data, modelId=model_id, accept=accept, contentType=content_type
            )
            for event in response.get("body"):
                print(json.loads(event["chunk"]["bytes"]))
        else:
            # Wait for the complete response, then print it
            response = client.invoke_model(
                body=input_data, modelId=model_id, accept=accept, contentType=content_type
            )
            print(json.loads(response.get("body").read()))
    except ClientError as e:
        print(e)

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Bedrock Testing Tool")
    parser.add_argument("--prompt", type=str, help="prompt to use", default="Hello")
    parser.add_argument("--max-tokens", type=int, default=64)
    parser.add_argument("--streaming", choices=("true", "false"), help="whether to stream or not", default="false")
    args = parser.parse_args()
    streaming = args.streaming == "true"

    # Wrap the prompt in the Mistral instruction template
    input_data = json.dumps({
        "prompt": f"<s>[INST] {args.prompt} [/INST]",
        "max_tokens": args.max_tokens
    })
    invoke_model(model_id="mistral.mistral-small-2402-v1:0", input_data=input_data, region="us-east-1", streaming=streaming)
You will get the following result:
{'outputs': [{'text': ' The color of the sky can vary depending on the time of day, weather,', 'stop_reason': 'length'}]}
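The `outputs` field is a list, so the generated text can be pulled out with a small amount of dictionary handling. The sketch below uses the response shape shown above; the helper function name is my own:

```python
# Example response shape from the Mistral models on Amazon Bedrock, as shown above
result = {
    "outputs": [
        {
            "text": " The color of the sky can vary depending on the time of day, weather,",
            "stop_reason": "length",
        }
    ]
}

def extract_text(result: dict) -> str:
    """Return the generated text of the first output (helper name is illustrative)."""
    return result["outputs"][0]["text"].strip()

print(extract_text(result))
```

A `stop_reason` of 'length' indicates the completion was cut off at the `max_tokens` limit; raising the `--max-tokens` argument yields a complete answer.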
Available now
The Mistral Small model is now available on Amazon Bedrock in the US East (N. Virginia) AWS Region.
To learn more, visit Amazon Bedrock’s Mistral AI product page. For more information about pricing, please review the Amazon Bedrock pricing page.
To get started with Mistral Small on Amazon Bedrock, see the Amazon Bedrock console and the Amazon Bedrock User Guide.
– Ezra