Introducing Arcee-Nova

What a week here at Arcee AI. On the heels of yesterday's Arcee-Scribe release, today we bring you Arcee-Nova, our highest-performing open source model to date. Evaluated on the same stack as the OpenLLM Leaderboard 2.0, it is the top-performing open source model we have tested on that stack, and its performance approaches that of GPT-4 from May 2023, a significant milestone.
Nova is a merge of Qwen2-72B-Instruct with a custom model tuned on a generalist dataset mixture.
Performance
- Evaluated on the OpenLLM Leaderboard 2.0 stack (see the reproduction sketch after this list)
- Top-performing open source model on this stack
- Approaches GPT-4 (May 2023) performance levels
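For readers who want to reproduce the numbers, the benchmarks behind the OpenLLM Leaderboard 2.0 can be run with lm-evaluation-harness. The sketch below is illustrative rather than our exact harness invocation; it assumes a recent lm-evaluation-harness release that ships the "leaderboard" task group and that the weights are pulled from the arcee-ai/Arcee-Nova repository on Hugging Face.

```python
# Illustrative sketch: scoring the model on the Open LLM Leaderboard 2.0 task group
# (BBH, GPQA, IFEval, MATH level 5, MMLU-Pro, MuSR) with lm-evaluation-harness.
# Assumes the "leaderboard" group is available in your lm-eval version and that
# you have enough GPU memory for a 72B model in bfloat16.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=arcee-ai/Arcee-Nova,dtype=bfloat16",  # repo id assumed
    tasks=["leaderboard"],
    batch_size="auto",
)
print(results["results"])
```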
Technical Details
- Merge of Qwen2-72B-Instruct with a custom-tuned model, followed by RLHF
- Custom generalist dataset mixture used for tuning
- GGUF versions available on Hugging Face (see the loading sketch after this list)
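If you want to try the model locally, here is a minimal sketch of loading the merged checkpoint with Hugging Face transformers and chatting with it. It assumes the weights are published as arcee-ai/Arcee-Nova and that you have enough GPU memory for a 72B model in bfloat16; the GGUF files mentioned above can instead be run on smaller hardware with llama.cpp-compatible runtimes.

```python
# Minimal sketch: loading Arcee-Nova with transformers and generating a chat reply.
# Assumptions: weights published as "arcee-ai/Arcee-Nova"; multiple high-memory GPUs
# (or quantization) available for a 72B model in bfloat16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arcee-ai/Arcee-Nova"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # shard layers across available GPUs
)

messages = [{"role": "user", "content": "Summarize the key trade-offs of model merging."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```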
Key Capabilities
- Reasoning
- Creative Writing
- Coding
- General Language Understanding
Business Applications
- Customer Service: Advanced chatbots and virtual assistants
- Content Creation: High-quality marketing and documentation
- Software Development: Code generation and quality improvement
- Data Analysis: Enhanced interpretation and reporting
- Research and Development: Literature reviews and hypothesis generation
- Legal and Compliance: Contract analysis and regulatory checks
- Education and Training: Adaptive learning systems
Acknowledgments
We thank the open source AI community for their ongoing contributions and the Qwen team for their foundational work on Qwen2-72B.
Looking Ahead
We invite researchers, developers, and businesses to explore Arcee-Nova's capabilities. Our commitment to open source AI advancement continues, and we look forward to seeing how the community builds upon this technology.
This post was created with assistance from Arcee-Nova.