Describe the bug
While following the official docs on using the Ollama module, I tried running Llama 3.1 with Open Interpreter. However, errors were produced during the preparation steps that run after the model setup. I would appreciate a detailed resolution or an explanation of what happened in my case. I hope a developer can reproduce this error and comment on it.
Reproduce
I ran these commands:
- ollama run llama3.1
- interpreter --model ollama/llama3.1
Open Interpreter then asked whether I wanted to create a new profile file; I answered n.
The following error was then produced:
[2024-07-30T03:56:01Z ERROR cached_path::cache] ETAG fetch for https://huggingface.co/llama3.1/resolve/main/tokenizer.json failed with fatal error
Traceback (most recent call last):
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)
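For context (this is a guess at the mechanism, not a confirmed diagnosis): the "Unterminated string starting at: line 1 column 2 (char 1)" message is what Python's json module raises when the payload it is asked to parse begins with an opening quote that is never closed, e.g. a truncated or non-JSON response body returned in place of the expected tokenizer.json. A minimal sketch reproducing the parser error itself, independent of Open Interpreter:

```python
import json

# A payload whose first string (the quote at char 1, after "{") is never
# closed -- the shape of a truncated or error-page response body.
truncated_payload = '{"key'

try:
    json.loads(truncated_payload)
except json.JSONDecodeError as e:
    # Matches the message in the log above:
    # Unterminated string starting at: line 1 column 2 (char 1)
    print(e)
```

This suggests the ETAG fetch failure left a non-JSON (or empty/partial) body where tokenizer.json was expected, which the downstream JSON parser then choked on. Note also that https://huggingface.co/llama3.1/resolve/main/tokenizer.json does not look like a valid Hugging Face repo path, which may be why the fetch failed in the first place.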
Expected behavior
Based on what I read in the official docs, I expected the preparation steps to complete successfully.
Screenshots
No response
Open Interpreter version
0.3.4
Python version
3.11.5
Operating System name and version
Windows 11
Additional context
No response