Issue description
When running llamaChatSession.prompt(), the option stopOnAbortSignal behaves oddly.
Expected Behavior
I can run the prompt() function on a LlamaChatSession, but when I add the option stopOnAbortSignal: true, the opposite of what the name suggests seems to happen. As if the option should have been named continueOnAbortSignal?
Because if I look at /src/evaluator/LlamaChatSession/LlamaChatSession.ts, I see the abort is handled as:

```typescript
abortController.signal.addEventListener("abort", () => {
    if (abortedOnFunctionCallError || !stopOnAbortSignal)
        reject(abortController.signal.reason);
});
```

Wouldn't that mean the promise is only rejected when stopOnAbortSignal is false?
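To make the condition easier to reason about, here is a minimal sketch (not the library's actual code, just the quoted condition extracted into a pure function) showing for which inputs the promise would be rejected:

```typescript
// Models the condition from the quoted abort listener:
// the promise is rejected iff a function-call error aborted the
// generation, OR stopOnAbortSignal is false.
function wouldReject(
    abortedOnFunctionCallError: boolean,
    stopOnAbortSignal: boolean
): boolean {
    return abortedOnFunctionCallError || !stopOnAbortSignal;
}

// stopOnAbortSignal: true, no function-call error -> no rejection;
// the abort stops generation without throwing.
console.log(wouldReject(false, true));

// stopOnAbortSignal: false -> the abort rejects the promise.
console.log(wouldReject(false, false));
```

So by this reading, stopOnAbortSignal: true suppresses the rejection on abort, and stopOnAbortSignal: false lets the abort reject (throw).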
Actual Behavior
When creating a LlamaChatSession, running prompt(), and then aborting, an error isn't always thrown.
Steps to reproduce
Create a LlamaChatSession and run the prompt() function with stopOnAbortSignal set to either false or true, abort the signal mid-generation, and compare the behavior.
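The comparison above can be simulated without loading a model. This is a self-contained sketch (it does not use node-llama-cpp; simulatedPrompt is a stand-in I wrote to mimic the abort listener's logic) contrasting the two settings:

```typescript
// Stand-in for prompt(): resolves with a partial response on abort
// when stopOnAbortSignal is true, rejects with the abort reason
// when it is false - mirroring the condition quoted from the issue.
function simulatedPrompt(
    signal: AbortSignal,
    stopOnAbortSignal: boolean
): Promise<string> {
    return new Promise((resolve, reject) => {
        signal.addEventListener("abort", () => {
            if (!stopOnAbortSignal)
                reject(signal.reason);
            else
                resolve("partial response");
        });
    });
}

const controllerA = new AbortController();
const stopped = simulatedPrompt(controllerA.signal, true);
controllerA.abort();
stopped.then((text) => console.log("stopOnAbortSignal: true ->", text));

const controllerB = new AbortController();
const thrown = simulatedPrompt(controllerB.signal, false);
controllerB.abort();
thrown.catch(() => console.log("stopOnAbortSignal: false -> rejected"));
```

If the real prompt() behaves the opposite way from this simulation, that mismatch is the bug being reported.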
My Environment
| Dependency | Version |
|---|---|
| Operating System | macOS 24.5.0 (arm64) |
| CPU | Apple M3 Max |
| Node.js version | v22.13.1 |
| TypeScript version | 5.9.2 |
| node-llama-cpp version | 3.14.2 |
Result of running `npx --yes node-llama-cpp inspect gpu`:

```
OS: macOS 24.5.0 (arm64)
Node: 22.13.1 (arm64)
TypeScript: 5.9.2
node-llama-cpp: 3.14.2
Prebuilt binaries: b6845

Metal: available
Metal device: Apple M3 Max
Metal used VRAM: 0% (464KB/96GB)
Metal free VRAM: 99.99% (96GB/96GB)
Metal unified memory: 96GB (100%)

CPU model: Apple M3 Max
Math cores: 12
Used RAM: 98.4% (125.95GB/128GB)
Free RAM: 1.59% (2.05GB/128GB)
Used swap: 93.05% (18.61GB/20GB)
Max swap size: dynamic
mmap: supported
```
Additional Context
No response
Relevant Features Used
- Metal support
- CUDA support
- Vulkan support
- Grammar
- Function calling
Are you willing to resolve this issue by submitting a Pull Request?
Yes