Add support for --token-count-encoding to specify the tokenizer encoding used for counting (e.g. o200k_base for GPT-4o, cl100k_base for GPT-3.5/GPT-4). Different AI models use different tokenizers, so selecting the matching encoding matters for accurate token counts across use cases.
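
For illustration, here is a minimal sketch of how the option could map onto a token count, using the Python tiktoken package (which provides both encodings named above). The CLI wiring, the default value, and the `count_tokens` helper are assumptions for this sketch, not the tool's actual implementation:

```python
# Sketch: count tokens with a user-selected tiktoken encoding.
# The flag name is taken from this request; the default and argparse
# wiring are illustrative assumptions.
import argparse
import tiktoken

def count_tokens(text: str, encoding_name: str) -> int:
    # Look up the requested encoding (e.g. "o200k_base", "cl100k_base")
    # and return how many tokens the text encodes to under it.
    encoding = tiktoken.get_encoding(encoding_name)
    return len(encoding.encode(text))

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--token-count-encoding", default="o200k_base",
                        help="tiktoken encoding name used for token counting")
    args = parser.parse_args()

    sample = "The same text can encode to different token counts under different encodings."
    print(count_tokens(sample, args.token_count_encoding))
```

Running this with --token-count-encoding o200k_base versus cl100k_base can produce different counts for the same input, which is exactly why the encoding needs to be selectable rather than hard-coded.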