
@cf/meta/llama-3.1-8b-instruct-fast vs @cf/deepgram/aura-1

@cf/meta/llama-3.1-8b-instruct-fast offers a 128K-token context window, while @cf/deepgram/aura-1 is a text-to-speech model with no token-based context window. Compare full specs and pricing, and choose the best model for your use case.

Quick Overview

@cf/meta/llama-3.1-8b-instruct-fast

Cloudflare Workers AI

128K tokens context • Pricing not available


@cf/deepgram/aura-1

Cloudflare Workers AI

No token context (text-to-speech model) • $0.01 / $0.01 per 1M tokens (input / output)

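To ground the specs above, here is a minimal sketch of calling @cf/meta/llama-3.1-8b-instruct-fast from a Cloudflare Worker through the Workers AI binding (`env.AI.run`). The binding name `AI`, the `Env` interface, and the `{ response }` output shape are assumptions based on a standard Workers AI setup; verify them against your wrangler configuration and the model page.

```ts
// Minimal sketch: calling @cf/meta/llama-3.1-8b-instruct-fast from a Worker.
// Assumes an AI binding named "AI" is configured for this Worker.

interface Env {
  AI: { run(model: string, inputs: Record<string, unknown>): Promise<unknown> };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const result = await env.AI.run("@cf/meta/llama-3.1-8b-instruct-fast", {
      messages: [
        { role: "system", content: "You are a concise assistant." },
        { role: "user", content: "Explain the difference between an LLM and a TTS model." },
      ],
      // The 128K-token context window covers the combined prompt and output.
    });
    // Text-generation models on Workers AI typically return { response: string }.
    return Response.json(result);
  },
};
```

Deployed with `wrangler deploy`, hitting the Worker URL should return the model's JSON output, assuming the AI binding is configured.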

Detailed Comparison

| Specification | @cf/meta/llama-3.1-8b-instruct-fast | @cf/deepgram/aura-1 |
| --- | --- | --- |
| Provider | Cloudflare Workers AI | Cloudflare Workers AI |
| Context Window | 128K tokens | 0 tokens |
| Max Output Tokens | 128K tokens | 0 tokens |
| Input Pricing (per 1M tokens) | N/A | $0.01 |
| Output Pricing (per 1M tokens) | N/A | $0.01 |
| Release Date | Jul 2024 | Aug 2025 |
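To make the pricing rows concrete, here is a rough cost calculation at the listed @cf/deepgram/aura-1 rates ($0.01 per 1M input tokens and $0.01 per 1M output tokens); no rates are listed for the Llama model. The token counts below are illustrative values, not measurements.

```ts
// Rough cost estimate at the rates listed in the comparison table.
// Rates are USD per 1M tokens; the request sizes below are illustrative only.

const INPUT_RATE_PER_M = 0.01;  // @cf/deepgram/aura-1 input, per the table
const OUTPUT_RATE_PER_M = 0.01; // @cf/deepgram/aura-1 output, per the table

function estimateCost(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * INPUT_RATE_PER_M +
    (outputTokens / 1_000_000) * OUTPUT_RATE_PER_M
  );
}

// Example: 2,000 input tokens and 500 output tokens
// => (2000 / 1e6) * 0.01 + (500 / 1e6) * 0.01 = $0.000025
console.log(estimateCost(2_000, 500).toFixed(6));
```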

Capabilities

| Capability | @cf/meta/llama-3.1-8b-instruct-fast | @cf/deepgram/aura-1 |
| --- | --- | --- |
| Text Generation | Yes | No |
| Function Calling | Yes | No |
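The capability gap is clearest in code: @cf/deepgram/aura-1 is a text-to-speech model, so a call returns audio rather than generated text. The `{ text }` input field and the `audio/mpeg` output format in this sketch are assumptions about the model schema; check the Workers AI model page for the exact input and output shapes.

```ts
// Minimal sketch: synthesizing speech with @cf/deepgram/aura-1 from a Worker.
// The input field name ("text") and the returned audio format are assumptions.

interface Env {
  AI: { run(model: string, inputs: Record<string, unknown>): Promise<unknown> };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const speech = await env.AI.run("@cf/deepgram/aura-1", {
      text: "Hello from Workers AI.",
    });
    // Assumed: the model returns a binary audio stream (e.g. MP3).
    return new Response(speech as ReadableStream, {
      headers: { "Content-Type": "audio/mpeg" },
    });
  },
};
```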

Which Model Should You Choose?

Choose @cf/meta/llama-3.1-8b-instruct-fast if:

  • You need a larger context window
  • You need text generation or function calling rather than speech synthesis

Choose @cf/deepgram/aura-1 if:

  • You need speech output (it is a text-to-speech model, not a text generator)
  • You want the newer model (released Aug 2025) with published per-token pricing
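Because the two models do different jobs, the practical answer is often both rather than either: let the Llama model draft a reply and let Aura-1 voice it. A hypothetical glue function, reusing the assumed schemas from the sketches above:

```ts
// Hypothetical pipeline: generate text with the LLM, then voice it with Aura-1.
// Input/output shapes follow the earlier sketches and remain assumptions.

async function answerAloud(
  ai: { run(model: string, inputs: Record<string, unknown>): Promise<unknown> },
  question: string
): Promise<Response> {
  const llm = (await ai.run("@cf/meta/llama-3.1-8b-instruct-fast", {
    messages: [{ role: "user", content: question }],
  })) as { response: string };

  const speech = await ai.run("@cf/deepgram/aura-1", { text: llm.response });

  return new Response(speech as ReadableStream, {
    headers: { "Content-Type": "audio/mpeg" }, // assumed output format
  });
}
```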