
granite 3.1 with llama.cpp: jinja error jinja2.exceptions.UndefinedError: 'strftime_now' is undefined #6675

Open
1 task done
thistleknot opened this issue Jan 18, 2025 · 2 comments
Labels
bug Something isn't working

Comments

@thistleknot

Describe the bug

09:58:11-110526 INFO INSTRUCTION TEMPLATE: "Custom (obtained from model metadata)"
Traceback (most recent call last):
File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/gradio/queueing.py", line 541, in process_events
response = await route_utils.call_process_api(
File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/gradio/route_utils.py", line 276, in call_process_api
output = await app.get_blocks().process_api(
File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/gradio/blocks.py", line 1928, in process_api
result = await self.call_function(
File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/gradio/blocks.py", line 1526, in call_function
prediction = await utils.async_iteration(iterator)
File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/gradio/utils.py", line 657, in async_iteration
return await iterator.__anext__()
File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/gradio/utils.py", line 650, in __anext__
return await anyio.to_thread.run_sync(
File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2134, in run_sync_in_worker_thread
return await future
File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 851, in run
result = context.run(func, *args)
File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/gradio/utils.py", line 633, in run_sync_iterator_async
return next(iterator)
File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/gradio/utils.py", line 816, in gen_wrapper
response = next(iterator)
File "/home/user/text-generation-webui/modules/chat.py", line 443, in generate_chat_reply_wrapper
for i, history in enumerate(generate_chat_reply(text, state, regenerate, _continue, loading_message=True, for_ui=True)):
File "/home/user/text-generation-webui/modules/chat.py", line 410, in generate_chat_reply
for history in chatbot_wrapper(text, state, regenerate=regenerate, _continue=_continue, loading_message=loading_message, for_ui=for_ui):
File "/home/user/text-generation-webui/modules/chat.py", line 305, in chatbot_wrapper
stopping_strings = get_stopping_strings(state)
File "/home/user/text-generation-webui/modules/chat.py", line 265, in get_stopping_strings
prefix_bot, suffix_bot = get_generation_prompt(renderer, impersonate=False)
File "/home/user/text-generation-webui/modules/chat.py", line 71, in get_generation_prompt
prompt = renderer(messages=messages)
File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/jinja2/environment.py", line 1295, in render
self.environment.handle_exception()
File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/jinja2/environment.py", line 942, in handle_exception
raise rewrite_traceback_stack(source=source)
File "<template>", line 5, in top-level template code
File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/jinja2/sandbox.py", line 399, in call
if not __self.is_safe_callable(__obj):
File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/jinja2/sandbox.py", line 265, in is_safe_callable
getattr(obj, "unsafe_callable", False) or getattr(obj, "alters_data", False)
jinja2.exceptions.UndefinedError: 'strftime_now' is undefined

(textgen) [root@pve-m7330 text-generation-webui]# pip list installed | grep transformers
ctransformers 0.2.27+cu121
curated-transformers 0.1.1
sentence-transformers 3.3.1
spacy-curated-transformers 0.2.2
spacy-transformers 1.3.5
transformers 4.48.0
transformers-stream-generator 0.0.5
(textgen) [root@pve-m7330 text-generation-webui]# pip list installed | grep llama.cpp
llama_cpp_python 0.3.6+cpuavx2
llama_cpp_python_cuda 0.3.6+cu121
llama_cpp_python_cuda_tensorcores 0.3.6+cu121
llama-cpp-scripts 0.0.0 /home/user/text-generation-webui/llama.cpp
(textgen) [root@pve-m7330 text-generation-webui]#
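The failure can be reproduced outside the webui: the Granite 3.1 chat template calls a `strftime_now` helper to insert today's date, and when the template is rendered in a plain Jinja2 sandboxed environment (as the `sandbox.py` frames in the traceback show) that name is never defined. A minimal sketch, assuming a one-line template modeled on the Granite date line (the real template is longer):

```python
from jinja2.exceptions import UndefinedError
from jinja2.sandbox import ImmutableSandboxedEnvironment

# Hypothetical snippet modeled on the date line in the Granite 3.1
# chat template; not the full template.
TEMPLATE = "Today's Date: {{ strftime_now('%B %d, %Y') }}."

env = ImmutableSandboxedEnvironment()

try:
    env.from_string(TEMPLATE).render(messages=[])
except UndefinedError as exc:
    # Calling a name the environment never defined fails at render time,
    # matching the reported error.
    print(exc)  # 'strftime_now' is undefined
```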

Is there an existing issue for this?

  • I have searched the existing issues

Reproduction

load granite 3.1 gguf from https://huggingface.co/QuantFactory/granite-3.1-8b-instruct-GGUF (I used q6)
latest main branch as of 2025-01-18 10 AM PST

Screenshot

(screenshot attached in the original issue; not preserved here)

Logs

Ignoring llama-cpp-python-cuda-tensorcores: markers 'platform_system == "Windows" and python_version == "3.11"' don't match your environment
Ignoring llama-cpp-python-cuda-tensorcores: markers 'platform_system == "Windows" and python_version == "3.10"' don't match your environment
Ignoring llama-cpp-python-cuda-tensorcores: markers 'platform_system == "Linux" and platform_machine == "x86_64" and python_version == "3.11"' don't match your environment
Collecting llama-cpp-python-cuda-tensorcores==0.3.6+cu121 (from -r requirements.txt (line 50))
  Downloading https://github.com/oobabooga/llama-cpp-python-cuBLAS-wheels/releases/download/textgen-webui/llama_cpp_python_cuda_tensorcores-0.3.6+cu121-cp310-cp310-linux_x86_64.whl (487.1 MB)
     ━━━━━━━━━━━━━━━━━━╺━━━━━━━━━━━━━━━━━━━━━ 220.2/487.1 MB 78.5 kB/s eta 0:56:42
ERROR: Wheel 'llama-cpp-python-cuda-tensorcores' located at /tmp/pip-unpack-flxd3lf4/llama_cpp_python_cuda_tensorcores-0.3.6+cu121-cp310-cp310-linux_x86_64.whl is invalid.
(textgen) [root@pve-m7330 text-generation-webui]# python server.py --api --listen --n-gpu-layers 32 --threads 8 --numa --tensorcores --trust-remote-code
    return next(iterator)
  File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/gradio/utils.py", line 816, in gen_wrapper
    response = next(iterator)
  File "/home/user/text-generation-webui/modules/chat.py", line 443, in generate_chat_reply_wrapper
    for i, history in enumerate(generate_chat_reply(text, state, regenerate, _continue, loading_message=True, for_ui=True)):
  File "/home/user/text-generation-webui/modules/chat.py", line 410, in generate_chat_reply
    for history in chatbot_wrapper(text, state, regenerate=regenerate, _continue=_continue, loading_message=loading_message, for_ui=for_ui):
  File "/home/user/text-generation-webui/modules/chat.py", line 305, in chatbot_wrapper
    stopping_strings = get_stopping_strings(state)
  File "/home/user/text-generation-webui/modules/chat.py", line 265, in get_stopping_strings
    prefix_bot, suffix_bot = get_generation_prompt(renderer, impersonate=False)
  File "/home/user/text-generation-webui/modules/chat.py", line 71, in get_generation_prompt
    prompt = renderer(messages=messages)
  File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/jinja2/environment.py", line 1295, in render
    self.environment.handle_exception()
  File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/jinja2/environment.py", line 942, in handle_exception
    raise rewrite_traceback_stack(source=source)
  File "<template>", line 5, in top-level template code
  File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/jinja2/sandbox.py", line 399, in call
    if not __self.is_safe_callable(__obj):
  File "/home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/jinja2/sandbox.py", line 265, in is_safe_callable
    getattr(obj, "unsafe_callable", False) or getattr(obj, "alters_data", False)
jinja2.exceptions.UndefinedError: 'strftime_now' is undefined

System Info

rocky linux 9
python 3.10

@thistleknot thistleknot added the bug Something isn't working label Jan 18, 2025
@oobabooga
Owner

This also happens with EXL2 and Transformers. I haven't been able to solve it; it's a jinja2 template parsing issue.
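One possible workaround is to register a `strftime_now` callable in the Jinja2 environment globals before rendering, mirroring the helper that recent transformers releases make available to chat templates. A minimal sketch (the environment construction and template line here are assumptions for illustration, not webui's actual code path):

```python
from datetime import datetime
from jinja2.sandbox import ImmutableSandboxedEnvironment


def strftime_now(fmt: str) -> str:
    # Same shape as the helper the template expects: format the
    # current local time with an strftime pattern.
    return datetime.now().strftime(fmt)


env = ImmutableSandboxedEnvironment()
env.globals["strftime_now"] = strftime_now  # make the name visible to templates

# Hypothetical line modeled on the Granite 3.1 date line.
print(env.from_string("Today's Date: {{ strftime_now('%B %d, %Y') }}.").render())
```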

@Jan-PieterInghels

Ran into this with Mistral-Small-24B (i1-Q4 version); huggingface link
