fix: custom tokenizer truncates inputs to model max input length #266
PR Template
What type of PR is this?
/kind bug
What this PR does / why we need it:
This PR fixes the custom tokenizer, which was not truncating inputs to the model's maximum input length, leading to indexing errors in the model.
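A minimal sketch of the idea, assuming a Hugging Face `AutoTokenizer` wrapper (the class and method names here are illustrative, not the repository's actual code):

```python
from transformers import AutoTokenizer


class CustomTokenizer:
    """Hypothetical wrapper used to illustrate the fix."""

    def __init__(self, model_name: str):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)

    def encode(self, text: str) -> list[int]:
        # The fix: pass truncation=True with the model's max length so the
        # resulting token ids can never index past the model's context window.
        # Before the fix, truncation was omitted and over-long prompts caused
        # indexing errors downstream.
        return self.tokenizer(
            text,
            truncation=True,
            max_length=self.tokenizer.model_max_length,
        )["input_ids"]
```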
Which issue(s) this PR fixes:
Fixes #265
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:
Testing
Testing was done using the config.yaml file shown below, together with the necessary services (vLLM serving HuggingFaceTB/SmolLM2-135M-Instruct and a local Prometheus instance).
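The truncation behavior can also be spot-checked against the same model's tokenizer outside the harness (a hedged example; the repeated prompt is arbitrary):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-135M-Instruct")
ids = tok("hello " * 10_000, truncation=True, max_length=tok.model_max_length)["input_ids"]
assert len(ids) <= tok.model_max_length  # input clamped to the model's max length
```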
Functional test output:
BEFORE CHANGE
config.yaml
stage_0_lifecycle_metrics.json
summary_lifecycle_metrics.json
summary_prometheus_metrics.json
AFTER CHANGE
config.yaml
stage_0_lifecycle_metrics.json
summary_lifecycle_metrics.json
summary_prometheus_metrics.json