Cannot create new endpoints: WebserverFailed

Hi,

I am testing a few models and have been able to create two endpoints so far with no issues: TheBloke/WizardCoder-15B-1.0-GPTQ and TheBloke/CodeLlama-34B-Instruct-GPTQ, both on 1xA10. Not the quickest, but enough for my immediate purpose.

But it seems I cannot create endpoints with TheBloke/deepseek-coder-33B-instruct-GPTQ or WizardLM/WizardCoder-Python-34B-V1.0.

I have tried 4xT4, since a single A10 did not have enough memory, but even then initialization fails. The logs always point to the same errors and warnings (I am leaving the default values on, but decreasing --max-batch-prefill-tokens does not change anything):

Not enough memory to handle 2048 prefill tokens. You need to decrease --max-batch-prefill-tokens
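
For reference, this is the kind of override I have been trying (a minimal sketch; the values are illustrative, and I am assuming the container environment variables map 1:1 to the text-generation-launcher flags shown in the Args log line, which may not hold on Inference Endpoints):

# Hypothetical endpoint environment overrides (names assumed to mirror the
# launcher flags; values are illustrative only).
tgi_env_overrides = {
    "MAX_BATCH_PREFILL_TOKENS": "1024",  # error message mentions the 2048 default
    "MAX_INPUT_LENGTH": "512",           # default shown in the log is 1024
    "MAX_TOTAL_TOKENS": "1024",          # default shown in the log is 1512
}

# Basic consistency check: a single input must still fit in the prefill budget.
assert int(tgi_env_overrides["MAX_INPUT_LENGTH"]) <= int(
    tgi_env_overrides["MAX_BATCH_PREFILL_TOKENS"]
)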

It also warns about Flash Attention V2 and a failure to import the Mistral model, even though neither of the models tested is Mistral.

Unable to use Flash Attention V2: GPU with CUDA capability 7 5 is not supported for Flash Attention V2

Could not import Mistral model: Mistral model requires flash attn v2
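
For what it's worth, here is a quick way to check the GPU side of those warnings (a minimal sketch, assuming PyTorch is available in the container). The T4 is compute capability 7.5, and Flash Attention V2 requires 8.0 or newer, so both warnings seem to be about the hardware rather than about which model is loaded:

import torch

# Flash Attention V2 needs compute capability >= 8.0 (Ampere or newer);
# the T4 reports 7.5, which matches the "CUDA capability 7 5" warning.
# The Mistral import warning is a side effect of FA2 being unavailable,
# not an indication that a Mistral model is being served.
for i in range(torch.cuda.device_count()):
    major, minor = torch.cuda.get_device_capability(i)
    supported = "yes" if (major, minor) >= (8, 0) else "no"
    print(f"GPU {i}: {torch.cuda.get_device_name(i)} sm_{major}{minor} FA2: {supported}")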

Worth noting that I have also tried a non-quantized model with similar results: WizardLM/WizardCoder-Python-34B-V1.0.
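
(A back-of-the-envelope sketch, with rough numbers rather than measurements, of the weight memory involved in the non-quantized case:)

# Rough estimate: a 34B-parameter model in fp16 takes ~2 bytes per parameter
# for the weights alone, before KV cache and activations.
params_billion = 34
fp16_weights_gb = params_billion * 2       # ~68 GB of weights in fp16
gptq_4bit_weights_gb = params_billion / 2  # ~17 GB when 4-bit quantized
available_gb = 4 * 16                      # 4x T4 = 64 GB total VRAM
print(f"fp16 ~{fp16_weights_gb} GB, GPTQ ~{gptq_4bit_weights_gb} GB, "
      f"available {available_gb} GB")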

I can’t tell whether this is really an issue with my endpoint configuration or something server-side.

I cannot add the full log here; let me know if there is a preferred way to attach or link it.

Formatted traceback:
Method Warmup encountered an error.
Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/flash_causal_lm.py", line 672, in warmup
    _, batch = self.generate_token(batch)
  File "/opt/conda/lib/python3.9/contextlib.py", line 79, in inner
    return func(*args, **kwds)
  File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/flash_causal_lm.py", line 753, in generate_token
    raise e
  File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/flash_causal_lm.py", line 750, in generate_token
    out = self.forward(batch)
  File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/flash_causal_lm.py", line 717, in forward
    return self.model.forward(
  File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py", line 497, in forward
    hidden_states = self.model(
  File "/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py", line 456, in forward
    hidden_states, residual = layer(
  File "/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py", line 383, in forward
    attn_output = self.self_attn(
  File "/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py", line 282, in forward
    attention(
  File "/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/flash_attn.py", line 84, in attention
    raise NotImplementedError(
NotImplementedError: window_size_left is only available with flash attn v2

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/bin/text-generation-server", line 8, in <module>
    sys.exit(app())
  File "/opt/conda/lib/python3.9/site-packages/typer/main.py", line 311, in __call__
    return get_command(self)(*args, **kwargs)
  File "/opt/conda/lib/python3.9/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/opt/conda/lib/python3.9/site-packages/typer/core.py", line 778, in main
    return _main(
  File "/opt/conda/lib/python3.9/site-packages/typer/core.py", line 216, in _main
    rv = self.invoke(ctx)
  File "/opt/conda/lib/python3.9/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/opt/conda/lib/python3.9/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/conda/lib/python3.9/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/opt/conda/lib/python3.9/site-packages/typer/main.py", line 683, in wrapper
    return callback(**use_params)  # type: ignore
  File "/opt/conda/lib/python3.9/site-packages/text_generation_server/cli.py", line 83, in serve
    server.serve(
  File "/opt/conda/lib/python3.9/site-packages/text_generation_server/server.py", line 207, in serve
    asyncio.run(
  File "/opt/conda/lib/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/opt/conda/lib/python3.9/asyncio/base_events.py", line 634, in run_until_complete
    self.run_forever()
  File "/opt/conda/lib/python3.9/asyncio/base_events.py", line 601, in run_forever
    self._run_once()
  File "/opt/conda/lib/python3.9/asyncio/base_events.py", line 1905, in _run_once
    handle._run()
  File "/opt/conda/lib/python3.9/asyncio/events.py", line 80, in _run
    self._context.run(self._callback, *self._args)
  File "/opt/conda/lib/python3.9/site-packages/grpc_interceptor/server.py", line 159, in invoke_intercept_method
    return await self.intercept(
> File "/opt/conda/lib/python3.9/site-packages/text_generation_server/interceptor.py", line 21, in intercept
    return await response
  File "/opt/conda/lib/python3.9/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 82, in _unary_interceptor
    raise error
  File "/opt/conda/lib/python3.9/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py", line 73, in _unary_interceptor
    return await behavior(request_or_iterator, context)
  File "/opt/conda/lib/python3.9/site-packages/text_generation_server/server.py", line 72, in Warmup
    max_supported_total_tokens = self.model.warmup(batch)
  File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/flash_causal_lm.py", line 674, in warmup
    raise RuntimeError(
RuntimeError: Not enough memory to handle 2048 prefill tokens. You need to decrease `--max-batch-prefill-tokens`
Partial log:
mts4d 2023-11-30T10:56:05.966Z  INFO | Used configuration:
mts4d 2023-11-30T10:56:05.966Z  INFO | Start loading image artifacts from huggingface.co
mts4d 2023-11-30T10:56:05.966Z  INFO | Repository ID: TheBloke/deepseek-coder-33B-instruct-GPTQ
mts4d 2023-11-30T10:56:05.966Z  INFO | Repository Revision: 08372729d98dfc248f9531a412fe69e14e607027
mts4d 2023-11-30T10:56:06.018Z  INFO | Ignore regex pattern for files, which are not downloaded: *onnx*, pytorch*, *ckpt, flax*, *mlmodel, tf*, *tflite, rust*, *openvino*, *tar.gz
mts4d 2023-11-30T10:56:21.204Z Login successful
mts4d 2023-11-30T10:56:21.204Z Token will not been saved to git credential helper. Pass `add_to_git_credential=True` if you want to set the git credential as well.
mts4d 2023-11-30T10:56:21.204Z Token is valid.
mts4d 2023-11-30T10:56:21.204Z Your token has been saved to /root/.cache/huggingface/token
mts4d 2023-11-30T10:56:34.194Z {"timestamp":"2023-11-30T10:56:34.194011Z","level":"INFO","fields":{"message":"Starting download process."},"target":"text_generation_launcher","span":{"name":"download"},"spans":[{"name":"download"}]}
mts4d 2023-11-30T10:56:34.194Z {"timestamp":"2023-11-30T10:56:34.193910Z","level":"INFO","fields":{"message":"Sharding model on 4 processes"},"target":"text_generation_launcher"}
mts4d 2023-11-30T10:56:34.194Z {"timestamp":"2023-11-30T10:56:34.193872Z","level":"INFO","fields":{"message":"Args { model_id: \"/repository\", revision: None, validation_workers: 2, sharded: None, num_shard: None, quantize: Some(Gptq), dtype: None, trust_remote_code: false, max_concurrent_requests: 128, max_best_of: 2, max_stop_sequences: 4, max_top_n_tokens: 5, max_input_length: 1024, max_total_tokens: 1512, waiting_served_ratio: 1.2, max_batch_prefill_tokens: 2048, max_batch_total_tokens: None, max_waiting_tokens: 20, hostname: \"<USER_ID>-aws-deepseek-coder-33b-instruct-6c88b4ccd5-mts4d\", port: 80, shard_uds_path: \"/tmp/text-generation-server\", master_addr: \"localhost\", master_port: 29500, huggingface_hub_cache: Some(\"/data\"), weights_cache_override: None, disable_custom_kernels: false, cuda_memory_fraction: 1.0, rope_scaling: None, rope_factor: None, json_output: true, otlp_endpoint: None, cors_allow_origin: [], watermark_gamma: None, watermark_delta: None, ngrok: false, ngrok_authtoken: None, ngrok_edge: None, env: false }"},"target":"text_generation_launcher"}
mts4d 2023-11-30T10:56:37.048Z {"timestamp":"2023-11-30T10:56:37.048059Z","level":"INFO","fields":{"message":"Files are already present on the host. Skipping download.\n"},"target":"text_generation_launcher"}
mts4d 2023-11-30T10:56:37.497Z {"timestamp":"2023-11-30T10:56:37.497583Z","level":"INFO","fields":{"message":"Successfully downloaded weights."},"target":"text_generation_launcher","span":{"name":"download"},"spans":[{"name":"download"}]}
mts4d 2023-11-30T10:56:37.498Z {"timestamp":"2023-11-30T10:56:37.498351Z","level":"INFO","fields":{"message":"Starting shard"},"target":"text_generation_launcher","span":{"rank":3,"name":"shard-manager"},"spans":[{"rank":3,"name":"shard-manager"}]}
mts4d 2023-11-30T10:56:37.498Z {"timestamp":"2023-11-30T10:56:37.498014Z","level":"INFO","fields":{"message":"Starting shard"},"target":"text_generation_launcher","span":{"rank":1,"name":"shard-manager"},"spans":[{"rank":1,"name":"shard-manager"}]}
mts4d 2023-11-30T10:56:37.498Z {"timestamp":"2023-11-30T10:56:37.498036Z","level":"INFO","fields":{"message":"Starting shard"},"target":"text_generation_launcher","span":{"rank":2,"name":"shard-manager"},"spans":[{"rank":2,"name":"shard-manager"}]}
mts4d 2023-11-30T10:56:37.498Z {"timestamp":"2023-11-30T10:56:37.497936Z","level":"INFO","fields":{"message":"Starting shard"},"target":"text_generation_launcher","span":{"rank":0,"name":"shard-manager"},"spans":[{"rank":0,"name":"shard-manager"}]}
mts4d 2023-11-30T10:56:40.583Z {"timestamp":"2023-11-30T10:56:40.583494Z","level":"WARN","fields":{"message":"Unable to use Flash Attention V2: GPU with CUDA capability 7 5 is not supported for Flash Attention V2\n"},"target":"text_generation_launcher"}
mts4d 2023-11-30T10:56:40.583Z {"timestamp":"2023-11-30T10:56:40.583068Z","level":"WARN","fields":{"message":"Unable to use Flash Attention V2: GPU with CUDA capability 7 5 is not supported for Flash Attention V2\n"},"target":"text_generation_launcher"}
mts4d 2023-11-30T10:56:40.583Z {"timestamp":"2023-11-30T10:56:40.583069Z","level":"WARN","fields":{"message":"Unable to use Flash Attention V2: GPU with CUDA capability 7 5 is not supported for Flash Attention V2\n"},"target":"text_generation_launcher"}
mts4d 2023-11-30T10:56:40.583Z {"timestamp":"2023-11-30T10:56:40.583057Z","level":"WARN","fields":{"message":"Unable to use Flash Attention V2: GPU with CUDA capability 7 5 is not supported for Flash Attention V2\n"},"target":"text_generation_launcher"}
mts4d 2023-11-30T10:56:40.602Z {"timestamp":"2023-11-30T10:56:40.602725Z","level":"WARN","fields":{"message":"Could not import Mistral model: Mistral model requires flash attn v2\n"},"target":"text_generation_launcher"}
mts4d 2023-11-30T10:56:40.602Z {"timestamp":"2023-11-30T10:56:40.602940Z","level":"WARN","fields":{"message":"Could not import Mistral model: Mistral model requires flash attn v2\n"},"target":"text_generation_launcher"}
mts4d 2023-11-30T10:56:40.602Z {"timestamp":"2023-11-30T10:56:40.602724Z","level":"WARN","fields":{"message":"Could not import Mistral model: Mistral model requires flash attn v2\n"},"target":"text_generation_launcher"}
mts4d 2023-11-30T10:56:40.602Z {"timestamp":"2023-11-30T10:56:40.602665Z","level":"WARN","fields":{"message":"Could not import Mistral model: Mistral model requires flash attn v2\n"},"target":"text_generation_launcher"}
mts4d 2023-11-30T10:56:47.520Z {"timestamp":"2023-11-30T10:56:47.519829Z","level":"INFO","fields":{"message":"Waiting for shard to be ready..."},"target":"text_generation_launcher","span":{"rank":1,"name":"shard-manager"},"spans":[{"rank":1,"name":"shard-manager"}]}
mts4d 2023-11-30T10:56:47.520Z {"timestamp":"2023-11-30T10:56:47.519829Z","level":"INFO","fields":{"message":"Waiting for shard to be ready..."},"target":"text_generation_launcher","span":{"rank":2,"name":"shard-manager"},"spans":[{"rank":2,"name":"shard-manager"}]}
mts4d 2023-11-30T10:56:47.520Z {"timestamp":"2023-11-30T10:56:47.519829Z","level":"INFO","fields":{"message":"Waiting for shard to be ready..."},"target":"text_generation_launcher","span":{"rank":0,"name":"shard-manager"},"spans":[{"rank":0,"name":"shard-manager"}]}
mts4d 2023-11-30T10:56:47.530Z {"timestamp":"2023-11-30T10:56:47.530249Z","level":"INFO","fields":{"message":"Waiting for shard to be ready..."},"target":"text_generation_launcher","span":{"rank":3,"name":"shard-manager"},"spans":[{"rank":3,"name":"shard-manager"}]}
mts4d 2023-11-30T10:56:53.878Z {"timestamp":"2023-11-30T10:56:53.878588Z","level":"INFO","fields":{"message":"Server started at unix:///tmp/text-generation-server-3\n"},"target":"text_generation_launcher"}
mts4d 2023-11-30T10:56:53.935Z {"timestamp":"2023-11-30T10:56:53.935180Z","level":"INFO","fields":{"message":"Shard ready in 16.435646684s"},"target":"text_generation_launcher","span":{"rank":3,"name":"shard-manager"},"spans":[{"rank":3,"name":"shard-manager"}]}
mts4d 2023-11-30T10:56:54.439Z {"timestamp":"2023-11-30T10:56:54.439266Z","level":"INFO","fields":{"message":"Server started at unix:///tmp/text-generation-server-0\n"},"target":"text_generation_launcher"}
mts4d 2023-11-30T10:56:54.439Z {"timestamp":"2023-11-30T10:56:54.439380Z","level":"INFO","fields":{"message":"Server started at unix:///tmp/text-generation-server-1\n"},"target":"text_generation_launcher"}
mts4d 2023-11-30T10:56:54.439Z {"timestamp":"2023-11-30T10:56:54.439333Z","level":"INFO","fields":{"message":"Server started at unix:///tmp/text-generation-server-2\n"},"target":"text_generation_launcher"}
mts4d 2023-11-30T10:56:54.526Z {"timestamp":"2023-11-30T10:56:54.525837Z","level":"INFO","fields":{"message":"Shard ready in 17.026512174s"},"target":"text_generation_launcher","span":{"rank":2,"name":"shard-manager"},"spans":[{"rank":2,"name":"shard-manager"}]}
mts4d 2023-11-30T10:56:54.526Z {"timestamp":"2023-11-30T10:56:54.525837Z","level":"INFO","fields":{"message":"Shard ready in 17.026490237s"},"target":"text_generation_launcher","span":{"rank":1,"name":"shard-manager"},"spans":[{"rank":1,"name":"shard-manager"}]}
mts4d 2023-11-30T10:56:54.526Z {"timestamp":"2023-11-30T10:56:54.525831Z","level":"INFO","fields":{"message":"Shard ready in 17.026603116s"},"target":"text_generation_launcher","span":{"rank":0,"name":"shard-manager"},"spans":[{"rank":0,"name":"shard-manager"}]}
mts4d 2023-11-30T10:56:54.618Z {"timestamp":"2023-11-30T10:56:54.618596Z","level":"INFO","fields":{"message":"Starting Webserver"},"target":"text_generation_launcher"}
mts4d 2023-11-30T10:56:54.663Z {"timestamp":"2023-11-30T10:56:54.663432Z","level":"WARN","message":"no pipeline tag found for model /repository","target":"text_generation_router","filename":"router/src/main.rs","line_number":194}
mts4d 2023-11-30T10:56:54.697Z {"timestamp":"2023-11-30T10:56:54.697410Z","level":"INFO","message":"Warming up model","target":"text_generation_router","filename":"router/src/main.rs","line_number":213}
mts4d 2023-11-30T10:57:02.995Z {"timestamp":"2023-11-30T10:57:02.995657Z","level":"ERROR","fields":{"message":"Method Warmup encountered an error.\nTraceback (most recent call last):\n  File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/models/flash_causal_lm.py\", line 672, in warmup\n    _, batch = self.generate_token(batch)\n  File \"/opt/conda/lib/python3.9/contextlib.py\", line 79, in inner\n    return func(*args, **kwds)\n  File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/models/flash_causal_lm.py\", line 753, in generate_token\n    raise e\n  File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/models/flash_causal_lm.py\", line 750, in generate_token\n    out = self.forward(batch)\n  File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/models/flash_causal_lm.py\", line 717, in forward\n    return self.model.forward(\n  File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py\", line 497, in forward\n    hidden_states = self.model(\n  File \"/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\n    return forward_call(*args, **kwargs)\n  File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py\", line 456, in forward\n    hidden_states, residual = layer(\n  File \"/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\n    return forward_call(*args, **kwargs)\n  File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py\", line 383, in forward\n    attn_output = self.self_attn(\n  File \"/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\n    return forward_call(*args, **kwargs)\n  File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/models/custom_modeling/flash_llama_modeling.py\", line 282, in forward\n    attention(\n  File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/utils/flash_attn.py\", line 84, in attention\n    raise NotImplementedError(\nNotImplementedError: window_size_left is only available with flash attn v2\n\nThe above exception was the direct cause of the following exception:\n\nTraceback (most recent call last):\n  File \"/opt/conda/bin/text-generation-server\", line 8, in <module>\n    sys.exit(app())\n  File \"/opt/conda/lib/python3.9/site-packages/typer/main.py\", line 311, in __call__\n    return get_command(self)(*args, **kwargs)\n  File \"/opt/conda/lib/python3.9/site-packages/click/core.py\", line 1157, in __call__\n    return self.main(*args, **kwargs)\n  File \"/opt/conda/lib/python3.9/site-packages/typer/core.py\", line 778, in main\n    return _main(\n  File \"/opt/conda/lib/python3.9/site-packages/typer/core.py\", line 216, in _main\n    rv = self.invoke(ctx)\n  File \"/opt/conda/lib/python3.9/site-packages/click/core.py\", line 1688, in invoke\n    return _process_result(sub_ctx.command.invoke(sub_ctx))\n  File \"/opt/conda/lib/python3.9/site-packages/click/core.py\", line 1434, in invoke\n    return ctx.invoke(self.callback, **ctx.params)\n  File \"/opt/conda/lib/python3.9/site-packages/click/core.py\", line 783, in invoke\n    return __callback(*args, **kwargs)\n  File \"/opt/conda/lib/python3.9/site-packages/typer/main.py\", line 683, in wrapper\n    return callback(**use_params)  # type: ignore\n  File 
\"/opt/conda/lib/python3.9/site-packages/text_generation_server/cli.py\", line 83, in serve\n    server.serve(\n  File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/server.py\", line 207, in serve\n    asyncio.run(\n  File \"/opt/conda/lib/python3.9/asyncio/runners.py\", line 44, in run\n    return loop.run_until_complete(main)\n  File \"/opt/conda/lib/python3.9/asyncio/base_events.py\", line 634, in run_until_complete\n    self.run_forever()\n  File \"/opt/conda/lib/python3.9/asyncio/base_events.py\", line 601, in run_forever\n    self._run_once()\n  File \"/opt/conda/lib/python3.9/asyncio/base_events.py\", line 1905, in _run_once\n    handle._run()\n  File \"/opt/conda/lib/python3.9/asyncio/events.py\", line 80, in _run\n    self._context.run(self._callback, *self._args)\n  File \"/opt/conda/lib/python3.9/site-packages/grpc_interceptor/server.py\", line 159, in invoke_intercept_method\n    return await self.intercept(\n> File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/interceptor.py\", line 21, in intercept\n    return await response\n  File \"/opt/conda/lib/python3.9/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py\", line 82, in _unary_interceptor\n    raise error\n  File \"/opt/conda/lib/python3.9/site-packages/opentelemetry/instrumentation/grpc/_aio_server.py\", line 73, in _unary_interceptor\n    return await behavior(request_or_iterator, context)\n  File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/server.py\", line 72, in Warmup\n    max_supported_total_tokens = self.model.warmup(batch)\n  File \"/opt/conda/lib/python3.9/site-packages/text_generation_server/models/flash_causal_lm.py\", line 674, in warmup\n    raise RuntimeError(\nRuntimeError: Not enough memory to handle 2048 prefill tokens. You need to decrease `--max-batch-prefill-tokens`\n"},"target":"text_generation_launcher"}
mts4d 2023-11-30T10:57:02.999Z {"timestamp":"2023-11-30T10:57:02.998891Z","level":"ERROR","message":"Server error: Not enough memory to handle 2048 prefill tokens. You need to decrease `--max-batch-prefill-tokens`","target":"text_generation_client","filename":"router/client/src/lib.rs","line_number":33,"span":{"name":"warmup"},"spans":[{"max_input_length":1024,"max_prefill_tokens":2048,"name":"warmup"},{"name":"warmup"}]}
mts4d 2023-11-30T10:57:03.049Z ... [DUPLICATED TRACEBACK] ...
... [DUPLICATED TRACEBACK] ...
mts4d 2023-11-30T10:57:03.052Z {"timestamp":"2023-11-30T10:57:03.052531Z","level":"ERROR","message":"Server error: Not enough memory to handle 2048 prefill tokens. You need to decrease `--max-batch-prefill-tokens`","target":"text_generation_client","filename":"router/client/src/lib.rs","line_number":33,"span":{"name":"warmup"},"spans":[{"max_input_length":1024,"max_prefill_tokens":2048,"name":"warmup"},{"name":"warmup"}]}
mts4d 2023-11-30T10:57:03.054Z {"timestamp":"2023-11-30T10:57:03.054445Z","level":"ERROR","message":"Server error: Not enough memory to handle 2048 prefill tokens. You need to decrease `--max-batch-prefill-tokens`","target":"text_generation_client","filename":"router/client/src/lib.rs","line_number":33,"span":{"name":"warmup"},"spans":[{"max_input_length":1024,"max_prefill_tokens":2048,"name":"warmup"},{"name":"warmup"}]}
mts4d 2023-11-30T10:57:03.056Z ... [DUPLICATED TRACEBACK] ...
mts4d 2023-11-30T10:57:03.060Z {"timestamp":"2023-11-30T10:57:03.059689Z","level":"ERROR","message":"Server error: Not enough memory to handle 2048 prefill tokens. You need to decrease `--max-batch-prefill-tokens`","target":"text_generation_client","filename":"router/client/src/lib.rs","line_number":33,"span":{"name":"warmup"},"spans":[{"max_input_length":1024,"max_prefill_tokens":2048,"name":"warmup"},{"name":"warmup"}]}
mts4d 2023-11-30T10:57:03.066Z Error: Warmup(Generation("Not enough memory to handle 2048 prefill tokens. You need to decrease `--max-batch-prefill-tokens`"))
mts4d 2023-11-30T10:57:03.127Z {"timestamp":"2023-11-30T10:57:03.127227Z","level":"INFO","fields":{"message":"Shutting down shards"},"target":"text_generation_launcher"}
mts4d 2023-11-30T10:57:03.127Z {"timestamp":"2023-11-30T10:57:03.127186Z","level":"ERROR","fields":{"message":"Webserver Crashed"},"target":"text_generation_launcher"}
mts4d 2023-11-30T10:57:03.531Z {"timestamp":"2023-11-30T10:57:03.530979Z","level":"INFO","fields":{"message":"Shard terminated"},"target":"text_generation_launcher","span":{"rank":1,"name":"shard-manager"},"spans":[{"rank":1,"name":"shard-manager"}]}
mts4d 2023-11-30T10:57:03.576Z {"timestamp":"2023-11-30T10:57:03.576669Z","level":"INFO","fields":{"message":"Shard terminated"},"target":"text_generation_launcher","span":{"rank":3,"name":"shard-manager"},"spans":[{"rank":3,"name":"shard-manager"}]}
mts4d 2023-11-30T10:57:03.623Z {"timestamp":"2023-11-30T10:57:03.623014Z","level":"INFO","fields":{"message":"Shard terminated"},"target":"text_generation_launcher","span":{"rank":0,"name":"shard-manager"},"spans":[{"rank":0,"name":"shard-manager"}]}
mts4d 2023-11-30T10:57:03.741Z Error: WebserverFailed
mts4d 2023-11-30T10:57:03.741Z {"timestamp":"2023-11-30T10:57:03.741399Z","level":"INFO","fields":{"message":"Shard terminated"},"target":"text_generation_launcher","span":{"rank":2,"name":"shard-manager"},"spans":[{"rank":2,"name":"shard-manager"}]}