AI21 Labs’ new AI model can handle more context than most


Increasingly, the AI industry is moving toward generative AI models with longer contexts. But models with large context windows tend to be compute-intensive. Or Dagan, product lead at AI startup AI21 Labs, asserts that this doesn’t have to be the case — and his company is releasing a generative model to prove it.

Contexts, or context windows, refer to input data (e.g. text) that a model considers before generating output (more text). Models with small context windows tend to forget the content of even very recent conversations, while models with larger contexts avoid this pitfall — and, as an added benefit, better grasp the flow of data they take in.
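The forgetting effect can be sketched with a toy sliding-window buffer (the word-level "tokens" and window sizes here are illustrative only, not how production models tokenize):

```python
# Toy illustration: a model only "sees" the last `window` tokens of a
# conversation, so anything older silently drops out of its context.
def visible_context(tokens, window):
    """Return the slice of the conversation a window-limited model can see."""
    return tokens[-window:]

conversation = ["my", "name", "is", "Ada", ".", "what", "is", "my", "name", "?"]

# A large window still contains the name when the question arrives.
print("Ada" in visible_context(conversation, window=100))

# A tiny window has already dropped the name by then.
print("Ada" in visible_context(conversation, window=5))
```

A real model with a small context window fails in exactly this way: the earlier turn is simply absent from its input.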

AI21 Labs’ Jamba, a new text-generating and -analyzing model, can perform many of the same tasks that models like OpenAI’s ChatGPT and Google’s Gemini can. Trained on a mix of public and proprietary data, Jamba can write text in English, French, Spanish and Portuguese.

Jamba can handle up to 140,000 tokens while running on a single GPU with at least 80GB of memory (like a high-end Nvidia A100). That translates to around 105,000 words, or 210 pages — a decent-sized novel.
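The back-of-the-envelope conversion works out as follows (the ~0.75 words-per-token ratio and 500 words per page are common rules of thumb for English text, not exact figures):

```python
# Rough conversion from tokens to words to pages.
tokens = 140_000
words_per_token = 0.75   # common rule of thumb for English text
words_per_page = 500     # typical dense manuscript page

words = int(tokens * words_per_token)   # about 105,000 words
pages = words // words_per_page         # about 210 pages

print(words, pages)
```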

Meta’s Llama 2, by comparison, has a 4,096-token context window — on the smaller side by today’s standards — but only requires a GPU with ~12GB of memory in order to run. (Context windows are typically measured in tokens, which are bits of raw text and other data.)

On its face, Jamba is unremarkable. Loads of freely available, downloadable generative AI models exist, from Databricks’ recently released DBRX to the aforementioned Llama 2.

But what makes Jamba unique is what’s under the hood. It uses a combination of two model architectures: transformers and state space models (SSMs).

Transformers are the architecture of choice for complex reasoning tasks, powering models like GPT-4 and Google’s Gemini, for example. They have several unique characteristics, but by far transformers’ defining feature is their “attention mechanism.” For every piece of input data (e.g. a sentence), transformers weigh the relevance of every other input (other sentences) and draw from them to generate the output (a new sentence).
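A minimal sketch of that attention mechanism in NumPy (a single scaled dot-product attention head over random toy vectors, not the multi-head machinery of a production model like GPT-4 or Gemini):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each query attends over every key."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # relevance of every input to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the inputs
    return weights @ V, weights      # output is a relevance-weighted blend

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))  # 4 tokens, 8-dimensional embeddings
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))

out, weights = attention(Q, K, V)
print(out.shape)  # one blended vector per input token
```

Note that `scores` is a 4×4 matrix: every token is compared against every other, which is why attention's compute and memory costs grow quadratically with context length — the root of the efficiency problem long-context models face.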

SSMs, on the other hand, combine several qualities of older types of AI models, such as recurrent neural networks and convolutional neural networks, to create a more computationally efficient architecture capable of handling long sequences of data.
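The core SSM idea can be sketched as a linear recurrence: the model carries a fixed-size hidden state forward one step at a time, so per-step cost stays constant no matter how long the sequence gets (the matrices below are random placeholders, not a trained Mamba):

```python
import numpy as np

def ssm_scan(A, B, C, xs):
    """Discrete linear state-space model: h_t = A @ h_{t-1} + B @ x_t, y_t = C @ h_t."""
    h = np.zeros(A.shape[0])
    ys = []
    for x in xs:             # fixed-size state per step, unlike attention's
        h = A @ h + B @ x    # quadratically growing all-pairs comparison
        ys.append(C @ h)
    return np.array(ys)

rng = np.random.default_rng(0)
state_dim, d_in, d_out, seq_len = 16, 4, 4, 1000

A = 0.9 * np.eye(state_dim)                       # stable, slowly decaying state
B = rng.standard_normal((state_dim, d_in)) * 0.1
C = rng.standard_normal((d_out, state_dim)) * 0.1
xs = rng.standard_normal((seq_len, d_in))

ys = ssm_scan(A, B, C, xs)
print(ys.shape)  # one output per input step, with only a 16-number state in memory
```

Mamba and its relatives make the state transition input-dependent rather than fixed like this toy version, but the key property is the same: long sequences cost linear, not quadratic, work.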

Now, SSMs have their limitations. But some of the early incarnations, including an open source model called Mamba from Princeton and Carnegie Mellon researchers, can handle larger inputs than their transformer-based equivalents while outperforming them on language generation tasks.

Jamba in fact uses Mamba as part of the core model — and Dagan claims it delivers three times the throughput on long contexts compared to transformer-based models of comparable sizes.

“While there are a few initial academic examples of SSM models, this is the first commercial-grade, production-scale model,” Dagan said in an interview with TechCrunch. “This architecture, in addition to being innovative and interesting for further research by the community, opens up great efficiency and throughput possibilities.”

Now, while Jamba has been released under the Apache 2.0 license, an open source license with relatively few usage restrictions, Dagan stresses that it’s a research release not intended to be used commercially. The model doesn’t have safeguards to prevent it from generating toxic text or mitigations to address potential bias; a fine-tuned, ostensibly “safer” version will be made available in the coming weeks.

But Dagan asserts that Jamba demonstrates the promise of the SSM architecture even at this early stage.

“The added value of this model, both because of its size and its innovative architecture, is that it can be easily fitted onto a single GPU,” he said. “We believe performance will further improve as Mamba gets additional tweaks.”
