Commit 55310e4 (1 parent: cea878e)

Give up token counting for gemini models if they throw

File tree

1 file changed (+1, −1)

patchwork/common/client/llm/google_.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -229,7 +229,7 @@ def is_prompt_supported(
             raise
         except Exception as e:
             logger.debug(f"Error during token count at GoogleLlmClient: {e}")
-            return -1
+            return 1
         model_limit = self.__get_model_limits(model)
         return model_limit - token_count
```
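The effect of the one-character change: when the token-counting call throws, the method now returns a positive sentinel instead of a negative one, so callers that treat a positive result as "prompt fits" no longer reject the prompt outright. Below is a minimal, self-contained sketch of that pattern; `count_tokens`, `MODEL_LIMIT`, and the standalone function shape are hypothetical stand-ins for the real client internals, which are not shown in this diff.

```python
# Hypothetical sketch of the behavior this commit adopts: if token
# counting throws, give up and return a positive sentinel (1) so the
# prompt is still treated as supported.
import logging

logger = logging.getLogger("GoogleLlmClient")

MODEL_LIMIT = 1_000_000  # hypothetical context-window size


def count_tokens(prompt: str) -> int:
    """Stand-in for the real token-counting API call, which may raise."""
    raise RuntimeError("token counting unavailable for this model")


def is_prompt_supported(prompt: str) -> int:
    """Positive result => prompt fits; non-positive => prompt rejected."""
    try:
        token_count = count_tokens(prompt)
    except Exception as e:
        logger.debug(f"Error during token count at GoogleLlmClient: {e}")
        # Before this commit: return -1 (counting failure rejected the prompt).
        # After this commit: return 1 (give up counting, assume supported).
        return 1
    return MODEL_LIMIT - token_count


print(is_prompt_supported("hello"))  # prints 1: counting failed, prompt still allowed
```

The trade-off is optimistic: a prompt that genuinely exceeds the model's context window will now slip past this check when counting fails, and the error surfaces later at generation time instead.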
