
Conversation

@changminbark
Contributor

PR Template

What type of PR is this?

Uncomment only one /kind <> line, hit enter to put that in a new line, and remove leading whitespaces from that line:

/kind bug

What this PR does / why we need it:
This PR fixes a bug where the custom tokenizer was not truncating inputs to the model's maximum input length, so over-long prompts caused indexing errors in the model.
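
For reference, a minimal sketch of the behavior the fix enforces (the exact class and call site inside inference_perf's custom tokenizer may differ; only the standard Hugging Face truncation/max_length tokenizer arguments are assumed):

from transformers import AutoTokenizer

def encode_truncated(tokenizer, text):
    # Cap the encoded prompt at the model's maximum input length so a request
    # can never overflow the model's position indices.
    return tokenizer(
        text,
        truncation=True,                        # drop tokens beyond max_length
        max_length=tokenizer.model_max_length,  # 8192 for SmolLM2-135M-Instruct
    )["input_ids"]

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")
ids = encode_truncated(tokenizer, "a very long ShareGPT prompt " * 10000)
assert len(ids) <= tokenizer.model_max_length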

Which issue(s) this PR fixes:

Fixes #265

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

NONE

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


Testing

Testing was done using the config.yml shown below, with the necessary services running (vLLM serving HuggingFaceTB/SmolLM2-135M-Instruct and a local Prometheus instance).
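
As a quick sanity check independent of the benchmark run, a snippet like the one below can count how many dataset prompts exceed the 8192-token limit; these are the prompts that trigger the warnings in the BEFORE output. The dataset filename and the conversations/value field names are assumptions based on the usual ShareGPT JSON layout, not inference_perf internals:

import json
from transformers import AutoTokenizer

# Illustrative check only; the file name and field names are assumptions.
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")

with open("ShareGPT_V3_unfiltered_cleaned_split.json") as f:
    data = json.load(f)

too_long = 0
for entry in data[:1000]:  # sample a subset for speed
    text = " ".join(turn["value"] for turn in entry.get("conversations", []))
    if len(tokenizer(text)["input_ids"]) > tokenizer.model_max_length:  # 8192
        too_long += 1

print(f"{too_long} of 1000 sampled conversations exceed the model's max input length")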

Functional test output

BEFORE CHANGE

$ python3 inference_perf/main.py -c config.yml
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
2025-10-30 17:12:16,334 - inference_perf.config - INFO - Using configuration from: config.yml
2025-10-30 17:12:16,338 - inference_perf.config - INFO - Benchmarking with the following config:

api:
  type: completion
  streaming: true
  headers: null
data:
  type: shareGPT
  path: null
  input_distribution: null
  output_distribution: null
  shared_prefix: null
load:
  type: constant
  interval: 1.0
  stages:
  - rate: 1000
    duration: 30
  sweep: null
  num_workers: 16
  worker_max_concurrency: 100
  worker_max_tcp_connections: 2500
  circuit_breakers: []
  request_timeout: null
metrics:
  type: prometheus
  prometheus:
    url: http://localhost:9090
    scrape_interval: 15
report:
  request_lifecycle:
    summary: true
    per_stage: true
    per_request: false
  prometheus:
    summary: true
    per_stage: false
storage:
  local_storage:
    path: reports-20251030-171214
    report_file_prefix: null
  google_cloud_storage: null
  simple_storage_service: null
server:
  type: vllm
  model_name: HuggingFaceTB/SmolLM2-135M-Instruct
  base_url: http://0.0.0.0:8000
  ignore_eos: true
tokenizer:
  pretrained_model_name_or_path: HuggingFaceTB/SmolLM2-135M-Instruct
circuit_breakers: null


2025-10-30 17:12:16,338 - inference_perf.client.filestorage.local - INFO - Report files will be stored at: reports-20251030-171214
2025-10-30 17:13:03,471 - inference_perf.loadgen.load_generator - INFO - Stage 0 - run started
Token indices sequence length is longer than the specified maximum sequence length for this model (20877 > 8192). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (20877 > 8192). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (10974 > 8192). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (9494 > 8192). Running this sequence through the model will result in indexing errors
Stage 0 progress:   0%|▏         | 0.0022333333333333333/1.0 [00:01<07:27, 448.46s/it]
Stage 0 progress:   0%|▎         | 0.0025/1.0 [00:07<46:36, 2803.94s/it]
2025-10-30 17:13:54,374 - inference_perf.loadgen.load_generator - INFO - Loadgen encountered SIGINT
2025-10-30 17:14:11,257 - inference_perf.loadgen.load_generator - INFO - Stage 0 - run failed
2025-10-30 17:14:12,259 - inference_perf.reportgen.base - INFO - Generating Reports...
2025-10-30 17:14:29,331 - inference_perf.client.metricsclient.prometheus_client.base - WARNING - Metric metadata is not present for metric: avg_inter_token_latency. Skipping this metric.
2025-10-30 17:14:29,331 - inference_perf.client.metricsclient.prometheus_client.base - WARNING - Metric metadata is not present for metric: median_inter_token_latency. Skipping this metric.
2025-10-30 17:14:29,331 - inference_perf.client.metricsclient.prometheus_client.base - WARNING - Metric metadata is not present for metric: p90_inter_token_latency. Skipping this metric.
2025-10-30 17:14:29,331 - inference_perf.client.metricsclient.prometheus_client.base - WARNING - Metric metadata is not present for metric: p99_inter_token_latency. Skipping this metric.
2025-10-30 17:14:29,334 - inference_perf.client.filestorage.local - INFO - Report saved to: reports-20251030-171214/summary_lifecycle_metrics.json
2025-10-30 17:14:29,335 - inference_perf.client.filestorage.local - INFO - Report saved to: reports-20251030-171214/stage_0_lifecycle_metrics.json
2025-10-30 17:14:29,335 - inference_perf.client.filestorage.local - INFO - Report saved to: reports-20251030-171214/summary_prometheus_metrics.json
2025-10-30 17:14:29,337 - inference_perf.client.filestorage.local - INFO - Report saved to: reports-20251030-171214/config.yaml

config.yaml

stage_0_lifecycle_metrics.json
summary_lifecycle_metrics.json
summary_prometheus_metrics.json

AFTER CHANGE

$ python3 inference_perf/main.py -c config.yml
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
2025-10-30 17:15:54,860 - inference_perf.config - INFO - Using configuration from: config.yml
2025-10-30 17:15:54,864 - inference_perf.config - INFO - Benchmarking with the following config:

api:
  type: completion
  streaming: true
  headers: null
data:
  type: shareGPT
  path: null
  input_distribution: null
  output_distribution: null
  shared_prefix: null
load:
  type: constant
  interval: 1.0
  stages:
  - rate: 1000
    duration: 30
  sweep: null
  num_workers: 16
  worker_max_concurrency: 100
  worker_max_tcp_connections: 2500
  circuit_breakers: []
  request_timeout: null
metrics:
  type: prometheus
  prometheus:
    url: http://localhost:9090
    scrape_interval: 15
report:
  request_lifecycle:
    summary: true
    per_stage: true
    per_request: false
  prometheus:
    summary: true
    per_stage: false
storage:
  local_storage:
    path: reports-20251030-171553
    report_file_prefix: null
  google_cloud_storage: null
  simple_storage_service: null
server:
  type: vllm
  model_name: HuggingFaceTB/SmolLM2-135M-Instruct
  base_url: http://0.0.0.0:8000
  ignore_eos: true
tokenizer:
  pretrained_model_name_or_path: HuggingFaceTB/SmolLM2-135M-Instruct
circuit_breakers: null


2025-10-30 17:15:54,864 - inference_perf.client.filestorage.local - INFO - Report files will be stored at: reports-20251030-171553
2025-10-30 17:16:47,418 - inference_perf.loadgen.load_generator - INFO - Stage 0 - run started
Stage 0 progress:   0%|▎         | 0.0036/1.0 [00:26<3:25:05, 12350.16s/it]
Stage 0 progress:   0%|▎         | 0.0036/1.0 [00:29<2:13:54, 8063.94s/it]
2025-10-30 17:17:58,760 - inference_perf.loadgen.load_generator - INFO - Loadgen encountered SIGINT
2025-10-30 17:18:26,409 - inference_perf.loadgen.load_generator - INFO - Stage 0 - run failed
2025-10-30 17:18:27,412 - inference_perf.reportgen.base - INFO - Generating Reports...
2025-10-30 17:18:44,475 - inference_perf.client.metricsclient.prometheus_client.base - WARNING - Metric metadata is not present for metric: avg_inter_token_latency. Skipping this metric.
2025-10-30 17:18:44,475 - inference_perf.client.metricsclient.prometheus_client.base - WARNING - Metric metadata is not present for metric: median_inter_token_latency. Skipping this metric.
2025-10-30 17:18:44,475 - inference_perf.client.metricsclient.prometheus_client.base - WARNING - Metric metadata is not present for metric: p90_inter_token_latency. Skipping this metric.
2025-10-30 17:18:44,475 - inference_perf.client.metricsclient.prometheus_client.base - WARNING - Metric metadata is not present for metric: p99_inter_token_latency. Skipping this metric.
2025-10-30 17:18:44,478 - inference_perf.client.filestorage.local - INFO - Report saved to: reports-20251030-171553/summary_lifecycle_metrics.json
2025-10-30 17:18:44,479 - inference_perf.client.filestorage.local - INFO - Report saved to: reports-20251030-171553/stage_0_lifecycle_metrics.json
2025-10-30 17:18:44,479 - inference_perf.client.filestorage.local - INFO - Report saved to: reports-20251030-171553/summary_prometheus_metrics.json
2025-10-30 17:18:44,481 - inference_perf.client.filestorage.local - INFO - Report saved to: reports-20251030-171553/config.yaml

config.yaml

stage_0_lifecycle_metrics.json
summary_lifecycle_metrics.json
summary_prometheus_metrics.json

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Oct 30, 2025
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: changminbark
Once this PR has been reviewed and has the lgtm label, please assign arangogutierrez for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels Oct 30, 2025


Development

Successfully merging this pull request may close these issues.

Custom Tokenizer does not truncate input tokens according to model input limit
