
Amazon Nova
Categories: Large language model (LLM) software; Generative AI software
Pricing model: Pay-as-you-go
Company sizes served: Small, Medium, Large
Industries: Retail and wholesale; Energy and utilities; Transportation and logistics
What is Amazon Nova
Amazon Nova refers to Amazon’s family of foundation models offered for generative AI use cases, typically accessed through AWS services such as Amazon Bedrock and related tooling. It is used by developers and enterprises to build applications for text generation, summarization, question answering, and other LLM-driven workflows. The product is positioned for customers who want managed model access with AWS-native security, governance, and integration patterns.
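The Bedrock-mediated access described above can be sketched with boto3's Converse API. The model ID, region, and inference settings below are illustrative assumptions, not documented defaults; check the Bedrock console for the model IDs enabled in your account.

```python
# Minimal sketch of calling a Nova model through the Amazon Bedrock
# runtime Converse API. Requires AWS credentials with Bedrock access.

def build_messages(prompt: str) -> list:
    """Shape a single-turn user message in the Converse API format."""
    return [{"role": "user", "content": [{"text": prompt}]}]

def summarize(text: str, region: str = "us-east-1") -> str:
    """Send a summarization prompt to a Nova model (assumed model ID)."""
    import boto3  # imported here so build_messages works without boto3 installed

    client = boto3.client("bedrock-runtime", region_name=region)
    response = client.converse(
        modelId="amazon.nova-lite-v1:0",  # assumption: verify in your account
        messages=build_messages(f"Summarize in two sentences:\n{text}"),
        inferenceConfig={"maxTokens": 256, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]
```

Because the request and response shapes are shared across Bedrock models, the same call pattern can be pointed at a different Nova variant by changing only the model ID.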
AWS-native deployment options
Nova models are designed to be consumed within the AWS ecosystem, which can simplify procurement, identity and access management, and network controls for AWS-centric organizations. Customers can integrate model calls into existing AWS application architectures and monitoring practices. This reduces the need to stand up separate model-serving infrastructure for common LLM use cases.
Enterprise governance alignment
When used via AWS managed services, Nova can fit into established enterprise controls such as centralized IAM policies, logging, and region-based deployment choices. This is relevant for teams that need auditable access patterns and consistent operational controls. It can be easier to standardize LLM usage across multiple internal teams when it is delivered through a single cloud control plane.
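As a concrete illustration of the centralized-IAM pattern mentioned above, an identity policy can scope model invocation to specific Nova foundation-model ARNs. The policy below is a hypothetical sketch expressed as a Python dict; the actions, region, and ARNs are assumptions that should be verified against AWS IAM and Bedrock documentation before use.

```python
# Illustrative (not official) IAM policy restricting Bedrock inference
# to two Nova model ARNs in a single region.
import json

NOVA_INVOKE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowNovaInference",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": [
                # Assumed ARN format; foundation-model ARNs omit the account ID.
                "arn:aws:bedrock:us-east-1::foundation-model/amazon.nova-micro-v1:0",
                "arn:aws:bedrock:us-east-1::foundation-model/amazon.nova-lite-v1:0",
            ],
        }
    ],
}

print(json.dumps(NOVA_INVOKE_POLICY, indent=2))
```

Attaching a policy like this to a shared role is one way multiple internal teams can be held to the same model allow-list through a single cloud control plane.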
Broad generative AI coverage
As a foundation-model family, Nova supports a range of generative AI tasks rather than a single narrow workflow. This enables reuse of a common model layer across multiple applications (e.g., chat, content drafting, and knowledge assistance). It also supports iterative experimentation where teams compare model behavior across tasks without changing their overall AWS integration approach.
Ecosystem and platform lock-in
Nova is most straightforward to adopt when an organization already standardizes on AWS services and operational tooling. Teams running multi-cloud or on-prem-first strategies may face additional integration work or policy constraints. Switching costs can increase if applications become tightly coupled to AWS-specific model access patterns.
Model transparency varies
As with many commercial foundation models, details such as full training data provenance, fine-tuning methodology, and certain evaluation results may not be fully disclosed publicly. This can complicate internal risk reviews for regulated industries or for use cases requiring high explainability. Buyers may need to rely on vendor documentation and contractual assurances rather than independent reproducibility.
Cost and performance predictability
LLM usage is typically metered, and total cost depends on request volume, token usage, and latency requirements. For production workloads with spiky traffic or long-context prompts, forecasting spend and meeting response-time targets can require careful testing and guardrails. Organizations may need additional engineering to optimize prompts, caching, and routing to control cost and latency.
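One of the guardrails mentioned above, caching, can be sketched generically: if an identical prompt arrives twice, only the first call should reach (and be billed by) the model. This is an in-process sketch with `call_model` as a stand-in for a real inference call, not a Nova-specific API.

```python
# Simple in-process response cache keyed by a hash of the prompt.
# Identical prompts reuse the stored completion instead of paying
# for a second model call.
import hashlib

_cache = {}

def cached_generate(prompt: str, call_model) -> str:
    """Return a cached completion for a repeated prompt, else call the model."""
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(prompt)
    return _cache[key]
```

Production setups usually move this cache out of process (e.g., into a shared store) and add expiry, but the cost-control idea is the same: deduplicate before you meter.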
Plans & Pricing
Pricing model: Mixed: usage-based (token-based) for Nova models; per-agent-hour for Nova Act; annual subscription for Nova Forge (the subscription amount is not published on the public site).
Free tier/trial: Not clearly stated on the public Nova pages.
Example costs (official AWS pages & docs):
- Nova Act: $4.75 per agent hour (agent hours are the real-world elapsed time while an agent is working).
- Nova models (token-based, price per 1,000 tokens):
  - Amazon Nova Micro: $0.000035 per 1,000 input tokens; $0.00014 per 1,000 output tokens.
  - Amazon Nova Lite: $0.00006 per 1,000 input tokens; $0.00024 per 1,000 output tokens.
  - Amazon Nova Pro: $0.0008 per 1,000 input tokens; $0.0032 per 1,000 output tokens.
  (Official AWS docs state that on-demand inference is billed by input/output tokens; Bedrock supports Standard, Priority, and Flex tiers plus Batch discounts; see the notes below.)
- Nova Forge: access requires an annual subscription. AWS documentation instructs customers to subscribe via the SageMaker AI / Nova Forge console, where pricing details appear after access is requested; no subscription price is listed on the public documentation or pricing pages.
Discounts / tiers / notes (official):
- Bedrock service tiers affect pricing: Priority tier = ~75% premium to Standard; Flex tier = ~50% discount to Standard (per Bedrock pricing page).
- Batch inference can be priced at 50% lower than on-demand for supported models.
- On-demand inference for custom Nova models is priced the same as base Nova inference.
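Given the per-1,000-token rates and the approximate tier adjustments listed above, spend for a workload can be estimated with simple arithmetic. This sketch hard-codes those published figures as of this writing; re-verify them against the Bedrock pricing page before budgeting.

```python
# Rough cost model for Nova token pricing under Bedrock service tiers.
# All rates and multipliers are copied from the figures quoted above.

NOVA_RATES = {  # (input, output) USD per 1,000 tokens
    "nova-micro": (0.000035, 0.00014),
    "nova-lite":  (0.00006, 0.00024),
    "nova-pro":   (0.0008, 0.0032),
}

TIER_MULTIPLIER = {  # approximate adjustments relative to Standard
    "standard": 1.0,
    "priority": 1.75,  # ~75% premium
    "flex": 0.5,       # ~50% discount
    "batch": 0.5,      # batch inference discount
}

def estimate_cost(model, input_tokens, output_tokens, tier="standard"):
    """Estimate USD cost for one workload under a given service tier."""
    in_rate, out_rate = NOVA_RATES[model]
    base = (input_tokens / 1000) * in_rate + (output_tokens / 1000) * out_rate
    return base * TIER_MULTIPLIER[tier]

# Example: 1M input tokens and 200K output tokens on Nova Lite.
print(round(estimate_cost("nova-lite", 1_000_000, 200_000), 4))
print(round(estimate_cost("nova-lite", 1_000_000, 200_000, "flex"), 4))
```

For this example workload, Standard-tier spend works out to roughly $0.108, with the Flex tier halving that, which shows why tier choice matters mostly at high request volume.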
Where to find more/confirm:
- For full model-by-model regional pricing and service-tier adjustments, see the AWS Bedrock pricing page; the Nova product docs point users to the Bedrock pricing tables and to the Nova Forge console (some subscription pricing is only visible with console access).
Seller details
- Vendor: Amazon Web Services, Inc.
- Headquarters: Seattle, Washington, USA
- Founded: 2006
- Ownership: Subsidiary (of Amazon.com, Inc.)
- Website: https://aws.amazon.com/
- X (Twitter): https://x.com/awscloud
- LinkedIn: https://www.linkedin.com/company/amazon-web-services/