
Jamba-1.5: Hybrid Transformer-Mamba Models at Scale

August 22, 2024

We present Jamba-1.5, new instruction-tuned large language models based on our Jamba architecture.


Jamba is a hybrid Transformer-Mamba mixture-of-experts architecture that provides high throughput and low memory usage across context lengths, while delivering quality on par with or better than Transformer models. We release two model sizes: Jamba-1.5-Large, with 94B active parameters, and Jamba-1.5-Mini, with 12B active parameters.
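
To give a feel for the hybrid layout, the sketch below builds the per-layer schedule of one Jamba block. The ratios used (one attention layer per eight layers, with a mixture-of-experts MLP replacing the dense MLP every second layer) follow the description in the original Jamba paper and are illustrative only; the `LayerSpec` container and `build_jamba_block` helper are our own names, not part of the released code.

```python
# Illustrative sketch of how a hybrid Jamba block interleaves layer types.
# Ratios below (1 attention per 8 layers, MoE every 2nd layer) are taken from
# the Jamba paper's architecture description and are shown only to convey the
# layout, not as the exact Jamba-1.5 configuration.

from dataclasses import dataclass


@dataclass
class LayerSpec:
    mixer: str  # "attention" or "mamba"
    mlp: str    # "dense" or "moe"


def build_jamba_block(layers_per_block: int = 8,
                      attn_every: int = 8,
                      moe_every: int = 2) -> list[LayerSpec]:
    """Return the per-layer schedule for one hybrid block."""
    block = []
    for i in range(layers_per_block):
        mixer = "attention" if (i % attn_every) == attn_every - 1 else "mamba"
        mlp = "moe" if (i % moe_every) == moe_every - 1 else "dense"
        block.append(LayerSpec(mixer, mlp))
    return block


if __name__ == "__main__":
    for i, spec in enumerate(build_jamba_block()):
        print(f"layer {i}: {spec.mixer:9s} + {spec.mlp} MLP")
```

Because most layers are Mamba layers, the key-value cache stays small even at very long context lengths, which is where the throughput and memory advantages come from.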

Both models are fine-tuned for a variety of conversational and instruction-following capabilities, and have an effective context length of 256K tokens, the largest among open-weight models. To support cost-effective inference, we introduce ExpertsInt8, a novel quantization technique that allows fitting Jamba-1.5-Large on a single machine with eight 80GB GPUs when processing 256K-token contexts, without loss of quality.
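
The core idea behind ExpertsInt8 can be conveyed with a small NumPy sketch: the expert weights, which account for most of the parameters, are stored in INT8 with scales and dequantized to higher precision just before the expert matmul. The specific granularity shown here (symmetric, per output channel) and the helper names are assumptions for illustration; the released technique performs the dequantization inside a fused MoE kernel rather than in plain NumPy.

```python
# Minimal sketch of the ExpertsInt8 idea: keep MoE expert weights in INT8 and
# dequantize right before the matmul. The per-output-channel symmetric scheme
# and function names are illustrative assumptions, not the released kernel.

import numpy as np


def quantize_expert(w: np.ndarray):
    """Symmetric per-output-channel INT8 quantization of one expert's weight."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0          # (out, 1)
    w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return w_int8, scale.astype(np.float32)


def expert_forward(x: np.ndarray, w_int8: np.ndarray, scale: np.ndarray):
    """Dequantize the stored INT8 weight, then run the expert matmul."""
    w = w_int8.astype(np.float32) * scale                          # stands in for BF16
    return x @ w.T


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.standard_normal((4096, 1024)).astype(np.float32)       # one expert's weight
    x = rng.standard_normal((2, 1024)).astype(np.float32)
    w_q, s = quantize_expert(w)
    err = np.abs(expert_forward(x, w_q, s) - x @ w.T).max()
    print(f"max abs error after INT8 round-trip: {err:.4f}")
```

Storing the experts in INT8 roughly halves their memory footprint relative to BF16 without any retraining, which is what makes serving the 256K-token context of Jamba-1.5-Large feasible on a single 8-GPU node.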

When evaluated on a battery of academic and chatbot benchmarks, Jamba-1.5 models achieve excellent results while providing high throughput, and they outperform other open-weight models on long-context benchmarks. The model weights for both sizes are publicly available under the Jamba Open Model License, and we release ExpertsInt8 as open source.


Models: https://huggingface.co/ai21labs
