@cf/meta/llama-2-7b-chat-fp16 vs @hf/thebloke/mistral-7b-instruct-v0.1-awq
@hf/thebloke/mistral-7b-instruct-v0.1-awq is the more cost-effective option. Compare full specs and pricing, and choose the best model for your use case.
Quick Overview
@cf/meta/llama-2-7b-chat-fp16
Cloudflare Workers AI
4K-token context • $0.56 input / $6.67 output per 1M tokens
@hf/thebloke/mistral-7b-instruct-v0.1-awq
Cloudflare Workers AI
4K-token context • Free
Detailed Comparison
| Specification | @cf/meta/llama-2-7b-chat-fp16 | @hf/thebloke/mistral-7b-instruct-v0.1-awq |
| --- | --- | --- |
| Provider | Cloudflare Workers AI | Cloudflare Workers AI |
| Context Window | 4K tokens | 4K tokens |
| Max Output Tokens | 4K tokens | 4K tokens |
| Input Pricing (per 1M tokens) | $0.56 | Free |
| Output Pricing (per 1M tokens) | $6.67 | Free |
| Release Date | Jul 2023 | Sep 2023 |
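The pricing rows above translate directly into per-request cost. A minimal TypeScript sketch (the `Pricing` interface and `estimateCostUSD` helper are illustrative; the numbers come from the table above):

```typescript
// Per-1M-token prices from the comparison table above.
interface Pricing { inputPerM: number; outputPerM: number }

const LLAMA2_FP16: Pricing = { inputPerM: 0.56, outputPerM: 6.67 }; // @cf/meta/llama-2-7b-chat-fp16
const MISTRAL_AWQ: Pricing = { inputPerM: 0, outputPerM: 0 };       // @hf/thebloke/mistral-7b-instruct-v0.1-awq (free)

// Estimated USD cost for a single request.
function estimateCostUSD(inputTokens: number, outputTokens: number, p: Pricing): number {
  return (inputTokens / 1_000_000) * p.inputPerM + (outputTokens / 1_000_000) * p.outputPerM;
}

// Example: a 2,000-token prompt with a 500-token completion.
const llamaCost = estimateCostUSD(2_000, 500, LLAMA2_FP16);   // ≈ $0.004455
const mistralCost = estimateCostUSD(2_000, 500, MISTRAL_AWQ); // $0
```

Note that output tokens dominate the fp16 model's cost (roughly 12× the input rate), so long completions widen the price gap.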
Capabilities
| Capability | @cf/meta/llama-2-7b-chat-fp16 | @hf/thebloke/mistral-7b-instruct-v0.1-awq |
| --- | --- | --- |
| Text Generation | ✓ | ✓ |
| Function Calling | ✗ | ✗ |
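Since both models are plain text-generation models on Workers AI, they are invoked the same way through the AI binding. A minimal TypeScript sketch (the `AIBinding` interface and `ask` helper are illustrative assumptions modeled on `env.AI.run` in a Worker):

```typescript
// Assumed minimal shape of the Workers AI binding (env.AI inside a Worker).
interface AIBinding {
  run(
    model: string,
    input: { messages: { role: "system" | "user" | "assistant"; content: string }[] }
  ): Promise<{ response: string }>;
}

// Hypothetical helper: send one user message to either model and return the reply.
async function ask(ai: AIBinding, model: string, question: string): Promise<string> {
  const result = await ai.run(model, {
    messages: [{ role: "user", content: question }],
  });
  return result.response;
}

// In a Worker handler:
// const reply = await ask(env.AI, "@hf/thebloke/mistral-7b-instruct-v0.1-awq", "Hello!");
```

Because the call shape is identical for both models, swapping between them is a one-string change, which makes A/B testing quality against cost straightforward.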
Which Model Should You Choose?
Choose @cf/meta/llama-2-7b-chat-fp16 if:
- You want unquantized fp16 weights rather than an AWQ-quantized variant

Choose @hf/thebloke/mistral-7b-instruct-v0.1-awq if:
- Cost efficiency is a priority (both input and output are currently free)
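The guidance above can be sketched as a small selection helper. This is a hypothetical illustration, not part of the comparison: the `Priority` type and the fallback to the fp16 model are assumptions based on the pricing and the models' quantization.

```typescript
// Hypothetical priority axis: minimize cost, or prefer unquantized weights.
type Priority = "cost" | "precision";

// Mirrors the guidance above: the free AWQ model when cost matters most,
// the unquantized fp16 Llama 2 chat model otherwise.
function chooseModel(priority: Priority): string {
  return priority === "cost"
    ? "@hf/thebloke/mistral-7b-instruct-v0.1-awq" // free input and output
    : "@cf/meta/llama-2-7b-chat-fp16";            // fp16 weights, paid
}
```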