
Jamba 1.5a: Enhancing AI Safety Through Post-Training Alignment

We introduce Jamba 1.5a, an aligned version of our open-source model, co-developed with Enkrypt AI using advanced post-training techniques.

How can enterprises ensure their AI behaves safely – while still delivering high performance? How can organizations influence model behavior without costly retraining?

Jamba 1.5a is our answer. Using techniques like direct preference optimization (DPO) and rejection sampling on synthetic data, we show that it’s possible to significantly improve a model’s helpfulness, harmlessness, and honesty without compromising accuracy or speed. This opens the door to customizable alignment pipelines—so enterprises can shape model behavior to better reflect their own policies and standards.
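
To make the two techniques concrete, here is a minimal sketch of the standard DPO objective (Rafailov et al., 2023) in PyTorch. This is a generic illustration under our own naming, not AI21's actual training code:

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct Preference Optimization loss.

    Each input is a tensor of summed per-token log-probabilities over a
    batch of (prompt, response) pairs; "chosen" responses were preferred
    over "rejected" ones in the preference data.
    """
    # How far the policy has moved from the frozen reference model,
    # per response.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # Widen the margin between chosen and rejected responses; beta
    # controls how much the policy may drift from the reference.
    margin = beta * (chosen_logratio - rejected_logratio)
    return -F.logsigmoid(margin).mean()
```

Rejection sampling is one way to produce the preference pairs DPO consumes: sample several candidate responses per prompt, score them with a reward or safety model, and keep the best and worst as synthetic "chosen"/"rejected" data. A hedged sketch, with `generate` and `score` as hypothetical callables standing in for whatever sampler and scorer a pipeline uses:

```python
def build_preference_pair(prompt, generate, score, n_samples=8):
    """Best-of-N rejection sampling over synthetic completions.

    Returns one (prompt, chosen, rejected) row of DPO training data.
    """
    candidates = [generate(prompt) for _ in range(n_samples)]
    # Rank candidates from worst to best by the scorer's judgment.
    ranked = sorted(candidates, key=lambda c: score(prompt, c))
    return {"prompt": prompt, "chosen": ranked[-1], "rejected": ranked[0]}
```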