
@cf/meta/llama-3.2-3b-instruct vs @hf/thebloke/mistral-7b-instruct-v0.1-awq

@cf/meta/llama-3.2-3b-instruct offers a 128K-token context window, compared with 4K tokens for @hf/thebloke/mistral-7b-instruct-v0.1-awq. Compare the full specifications and pricing below to choose the model that best fits your use case.

Quick Overview

@cf/meta/llama-3.2-3b-instruct

Cloudflare Workers AI

128K-token context • $0.05 input / $0.34 output per 1M tokens


@hf/thebloke/mistral-7b-instruct-v0.1-awq

Cloudflare Workers AI

4K-token context • Free


Detailed Comparison

| Specification | @cf/meta/llama-3.2-3b-instruct | @hf/thebloke/mistral-7b-instruct-v0.1-awq |
|---|---|---|
| Provider | Cloudflare Workers AI | Cloudflare Workers AI |
| Context Window | 128K tokens | 4K tokens |
| Max Output Tokens | 128K tokens | 4K tokens |
| Input Pricing (per 1M tokens) | $0.05 | Free |
| Output Pricing (per 1M tokens) | $0.34 | Free |
| Release Date | Sep 2024 | Sep 2023 |
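The pricing rows above translate directly into a per-request cost estimate. Below is a minimal TypeScript sketch of that arithmetic; the 2,000-input / 500-output token counts are hypothetical example values, not figures from this page.

```typescript
// Per-1M-token pricing from the comparison table above (USD).
const PRICING = {
  "@cf/meta/llama-3.2-3b-instruct": { input: 0.05, output: 0.34 },
  "@hf/thebloke/mistral-7b-instruct-v0.1-awq": { input: 0, output: 0 },
} as const;

type ModelId = keyof typeof PRICING;

// Estimated dollar cost of a single request: tokens are billed
// pro rata against the per-1M-token rates.
function estimateCost(
  model: ModelId,
  inputTokens: number,
  outputTokens: number,
): number {
  const p = PRICING[model];
  return (
    (inputTokens / 1_000_000) * p.input +
    (outputTokens / 1_000_000) * p.output
  );
}

// Example: a 2,000-token prompt with a 500-token completion.
const cost = estimateCost("@cf/meta/llama-3.2-3b-instruct", 2000, 500);
console.log(cost.toFixed(6)); // ≈ 0.000270 USD
```

At these rates even a large-context request on the paid model costs a fraction of a cent, so the pricing gap mostly matters at high request volume.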

Capabilities

| Capability | @cf/meta/llama-3.2-3b-instruct | @hf/thebloke/mistral-7b-instruct-v0.1-awq |
|---|---|---|
| Text Generation | ✓ | ✓ |
| Function Calling | ✓ | ✗ |

Which Model Should You Choose?

Choose @cf/meta/llama-3.2-3b-instruct if:

  • You need a larger context window (128K vs 4K tokens)

Choose @hf/thebloke/mistral-7b-instruct-v0.1-awq if:

  • Cost efficiency is a priority (this model is free on Workers AI)
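Whichever model you choose, both are invoked the same way from a Worker through the Workers AI binding. The sketch below assumes an `AI` binding has been configured in `wrangler.toml`; the system prompt and the `q` query parameter are illustrative, not part of the Workers AI API.

```typescript
// Minimal Worker sketch: call the chosen model via the Workers AI binding.
interface AiBinding {
  run(model: string, input: unknown): Promise<unknown>;
}

interface Env {
  AI: AiBinding; // assumed binding name from wrangler.toml
}

// Build the chat-style input that both instruct models accept.
function buildChatInput(userPrompt: string) {
  return {
    messages: [
      { role: "system", content: "You are a concise assistant." },
      { role: "user", content: userPrompt },
    ],
  };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // Hypothetical route: pass the prompt as ?q=...
    const prompt = new URL(request.url).searchParams.get("q") ?? "Hello";
    const result = await env.AI.run(
      "@cf/meta/llama-3.2-3b-instruct",
      buildChatInput(prompt),
    );
    return Response.json(result);
  },
};
```

Swapping in `@hf/thebloke/mistral-7b-instruct-v0.1-awq` is a one-line change to the model ID, which makes it easy to A/B the two models against the same prompts.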