Jamba: the most powerful & efficient long context models

Open for builders, built for the enterprise.

Jamba 1.5 Mini & Large: A family of open models


Optimized for speed

Jamba’s latency outperforms all leading models of comparable size.

Longest context window

Jamba’s 256k context window is the longest openly available.

Novel hybrid architecture

Jamba’s Mamba-Transformer MoE architecture is designed for cost, performance, and efficiency gains.

Better for developers

Jamba offers key features out of the box, including function calling, JSON mode output, document objects, and citation mode.
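As a minimal sketch of how the JSON-mode feature above might be used, the snippet below assembles a chat request payload. The field names (`model`, `messages`, `response_format`) and the model identifier are assumptions modeled on common OpenAI-style chat APIs, not a verified AI21 specification:

```python
import json

def build_jamba_request(prompt: str, json_mode: bool = True) -> dict:
    """Assemble a chat request body for a Jamba 1.5 model.

    Field names are assumptions based on OpenAI-style chat APIs;
    consult the AI21 Studio documentation for the exact schema.
    """
    request = {
        "model": "jamba-1.5-mini",  # hypothetical model identifier
        "messages": [{"role": "user", "content": prompt}],
    }
    if json_mode:
        # JSON mode constrains the model to emit valid JSON only,
        # which simplifies downstream parsing.
        request["response_format"] = {"type": "json_object"}
    return request

payload = build_jamba_request("List three EU capitals as a JSON array.")
print(json.dumps(payload, indent=2))
```

The payload would then be sent to the serving endpoint with an authenticated HTTP client or the provider's SDK.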

2.5x faster than all leading competitors

Jamba 1.5 models are optimized for speed.

End-to-end latency graph

Long context that actually delivers

Jamba 1.5 models maintain high performance across the full length of their context window.

Claimed vs. Effective Context Window (RULER Benchmark)

Intelligent across the board

Jamba 1.5 models achieve top scores across common quality benchmarks.

Jamba 1.5 Mini
Jamba 1.5 Large
Jamba 1.5 Large graph
Jamba 1.5 Mini graph

Jamba 1.5 availability


Secure deployment that suits your enterprise

AI21 Studio

Seamlessly start using Jamba on our production-grade SaaS platform.

Partners

The Jamba model family is available for deployment across our strategic partners.

Private

We offer VPC & on-prem deployments for enterprises that require custom solutions.