Failing to convert the new PHI-3 models. #8259

@0wwafa

Description

INFO:hf-to-gguf:Loading model: Phi-3-mini-128k-instruct
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Set model parameters
Traceback (most recent call last):
  File "/content/llama.cpp/convert-hf-to-gguf.py", line 3263, in <module>
    main()
  File "/content/llama.cpp/convert-hf-to-gguf.py", line 3244, in main
    model_instance.set_gguf_parameters()
  File "/content/llama.cpp/convert-hf-to-gguf.py", line 1950, in set_gguf_parameters
    raise NotImplementedError(f'The rope scaling type {rope_scaling_type} is not supported yet')
NotImplementedError: The rope scaling type longrope is not supported yet
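The failure happens while writing GGUF parameters: the converter reads the `rope_scaling` entry from the model's Hugging Face config and raises when it encounters a type it cannot map. As a hedged illustration (this is a simplified sketch, not the actual `convert-hf-to-gguf.py` code; the `SUPPORTED_ROPE_SCALING` set and `set_rope_scaling` helper are hypothetical), the dispatch that produces this traceback looks roughly like:

```python
# Hypothetical sketch of how a converter might validate the Hugging Face
# rope_scaling config entry, raising for types it cannot write to GGUF.
# This mirrors the traceback above; names here are illustrative only.

# Rope scaling types the sketch pretends to know how to convert.
SUPPORTED_ROPE_SCALING = {"linear", "yarn"}

def set_rope_scaling(hparams: dict) -> str:
    """Return the rope scaling type from hparams, or raise if unsupported."""
    rope_scaling = hparams.get("rope_scaling") or {}
    rope_scaling_type = rope_scaling.get("type", "none")
    if rope_scaling_type == "none":
        return "none"
    if rope_scaling_type not in SUPPORTED_ROPE_SCALING:
        raise NotImplementedError(
            f"The rope scaling type {rope_scaling_type} is not supported yet"
        )
    return rope_scaling_type

# Phi-3-mini-128k-instruct's config declares a "longrope" scaling type,
# which a converter without longrope support rejects exactly like this:
try:
    set_rope_scaling({"rope_scaling": {"type": "longrope"}})
except NotImplementedError as e:
    print(e)  # The rope scaling type longrope is not supported yet
```

In other words, the error is raised before any tensors are converted: support for the `longrope` scaling type has to be added to the converter (and the runtime) before these Phi-3 128k models can be converted.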

Metadata

Assignees

No one assigned

    Labels

    duplicate (This issue or pull request already exists)
