Support python 3.13 #95
base: main
Conversation
Walkthrough
Updated Python targets across CI, Binder, and docs; bumped the project version and widened the Python requirement; regenerated dependencies; converted many test snapshots to binary-encoded payloads.
Changes
Sequence Diagram(s)
sequenceDiagram
participant T as Test code
participant S as Serializer (Plot/Bokeh)
participant Snap as Snapshot (JSON/HTML)
participant C as Comparator (Matplotlib/Bokeh/Plotly)
Note over T,S: test creates figure
T->>S: build & serialize figure
alt Plotly binary encoding
S->>Snap: emit fields as { "dtype", "bdata" } for x/y/color/customdata
else Bokeh HTML
S->>Snap: emit embedded docs_json, root IDs, and Bokeh version
end
Note over C,Snap: assertion phase
C->>Snap: load snapshot
alt Plotly comparator
C->>C: decode bdata (base64 ± zlib) → ndarray
C->>C: align/sort tuples (y ↔ customdata), compare arrays with tolerance
else Bokeh comparator
C->>C: recurse dict/list, ignore keys (`id`,`root_ids`), compare ndarrays/text
else Matplotlib comparator
C->>C: image diff, allow ≤1% differing pixels
end
C-->>T: test pass/fail
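The Plotly comparator's decode step in the diagram (base64, optionally zlib, into an ndarray) can be sketched as below. This is a minimal illustration assuming NumPy; the function name and the compressed/uncompressed fallback order are illustrative, not the project's actual comparator code:

```python
import base64
import zlib

import numpy as np


def decode_bdata(field: dict) -> np.ndarray:
    """Decode a Plotly binary payload {"dtype": ..., "bdata": ...} into an ndarray.

    The payload is base64 text, optionally zlib-compressed; try to inflate it
    and fall back to the raw bytes if it was stored uncompressed.
    """
    raw = base64.b64decode(field["bdata"])
    try:
        raw = zlib.decompress(raw)
    except zlib.error:
        pass  # payload was not compressed
    return np.frombuffer(raw, dtype=np.dtype(field["dtype"]))
```

The decoded arrays can then be compared with a tolerance (e.g. np.allclose), matching the diagram's "compare arrays with tolerance" step.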
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches
✅ Passed checks (2 passed)
📜 Recent review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
🔇 Additional comments (3)
📖 Documentation Preview
The documentation for this PR has been built and is available at:
This preview will be updated automatically when you push new commits to this PR.
Preview built from commit:
Actionable comments posted: 2
🔭 Outside diff range comments (1)
.github/workflows/execute_notebooks.yml (1)
49-85: Missing “verify-changed-files” and no add/commit step; push will never happen
You reference steps.verify-changed-files.outputs.changed but never set it, and there’s no git add/commit before push. As written, this never pushes executed notebooks.
```diff
       - name: Configure git
         run: |
           git config --local user.email "action@github.com"
           git config --local user.name "GitHub Action"
+      - name: Detect changes
+        id: verify-changed-files
+        run: |
+          if [ -n "$(git status --porcelain)" ]; then
+            echo "changed=true" >> "$GITHUB_OUTPUT"
+          else
+            echo "changed=false" >> "$GITHUB_OUTPUT"
+          fi
+
+      - name: Commit executed notebooks
+        if: steps.verify-changed-files.outputs.changed == 'true' && github.event_name != 'pull_request'
+        run: |
+          git add -A
+          git commit -m "Execute notebooks [skip ci]" || echo "No changes to commit"
+
       - name: Push changes
         if: steps.verify-changed-files.outputs.changed == 'true' && github.event_name != 'pull_request'
         run: |
           git push origin ${{ github.ref_name }}
```
🧹 Nitpick comments (4)
.github/workflows/execute_notebooks.yml (1)
30-35: Use actions/setup-python v5 for Python 3.13 and future-proofing
setup-python@v5 is the current major and provides updated toolcache handling and Node20 runtime. Recommend bumping.
```diff
-      - name: Set up Python
-        uses: actions/setup-python@v4
+      - name: Set up Python
+        uses: actions/setup-python@v5
         with:
           python-version: '3.13'
           cache: 'pip'
```
.github/workflows/static.yml (1)
38-43: Upgrade to setup-python v5 and enable pip cache
Align with other workflows and speed builds.
```diff
-      - name: Set up python
-        uses: actions/setup-python@v4
+      - name: Set up python
+        uses: actions/setup-python@v5
         with:
           python-version: '3.13'
+          cache: 'pip'
```
.readthedocs.yaml (1)
4-7: RTD Py3.13 on Ubuntu 24.04 is fine; check heavy deps compatibility
With Python 3.13 on RTD, some heavy packages (e.g., rdkit) may lack wheels and fail the build.
Mitigations:
- Use rdkit-pypi as a first attempt in docs/requirements.txt.
- Mock optional imports in conf.py to avoid installing heavy libs.
- Gate notebooks that depend on RDKit or pre-execute and include outputs.
requirements.txt (1)
29-33: CairoSVG stack may require system libraries on RTD/CI.
cairocffi/cairosvg/tinycss2 often need system packages (cairo, pango, libffi, freetype). With RTD OS set to ubuntu-24.04, confirm build images have these preinstalled or add apt packages in RTD config.
I can propose an RTD v2 config snippet with apt_packages if builds fail.
Also applies to: 267-272, 299-303
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- .binder/runtime.txt (1 hunks)
- .github/workflows/ci.yml (1 hunks)
- .github/workflows/execute_notebooks.yml (1 hunks)
- .github/workflows/static.yml (1 hunks)
- .readthedocs.yaml (1 hunks)
- README.md (1 hunks)
- docs/Installation.rst (1 hunks)
- pyproject.toml (2 hunks)
- requirements.txt (6 hunks)
🧰 Additional context used
🪛 LanguageTool
.binder/runtime.txt
[grammar] ~1-~1: Adjust the symbol
Context: python-3.13
(QB_NEW_DE_OTHER_ERROR_IDS_REPLACEMENT_OTHER)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: execute-notebooks
🔇 Additional comments (14)
.github/workflows/execute_notebooks.yml (1)
46-47: RDKit via pip on Python 3.13 is brittle; add a resilient install or alternative source
Manylinux wheels for rdkit on Py3.13 lag. Builds often fail in CI. Add a fallback to rdkit-pypi, or consider excluding it from this workflow if not strictly required.
```diff
-          pip install ipykernel rdkit cairosvg pymzml
+          # RDKit can be problematic on Py3.13; try prebuilt wheels first, fallback, or skip if optional
+          pip install ipykernel cairosvg pymzml
+          pip install rdkit-pypi || pip install rdkit || echo "RDKit unavailable on Py3.13; skipping"
```
Please confirm notebooks do not hard-require RDKit execution for this workflow; if they do, consider pinning a known-good version or switching to conda for this job.
.github/workflows/static.yml (1)
44-49: Add apt-get update and harden RDKit install for Py3.13
Ensure apt indices are fresh, and make RDKit install resilient.
```diff
       - name: Install dependencies
         run: |
-          sudo apt install pandoc
-          pip install -r requirements.txt
-          pip install ipykernel rdkit cairosvg pymzml
+          sudo apt-get update
+          sudo apt-get install -y pandoc
+          pip install -r requirements.txt
+          pip install ipykernel cairosvg pymzml
+          # RDKit wheels for 3.13 may be missing; try alternatives or skip if docs don't require it
+          pip install rdkit-pypi || pip install rdkit || echo "RDKit unavailable on Py3.13; continuing without it"
           pip install .
```
If documentation sections require RDKit (e.g., nbsphinx runs), consider gating those pages or mocking imports to avoid build failures on RTD/GHA.
docs/Installation.rst (1)
13-14: LGTM: Update to Python 3.13
The installation snippet now matches the repo-wide Python 3.13 target.
README.md (1)
41-44: LGTM: Conda environment updated to Python 3.13
Matches other tooling updates in this PR.
.binder/runtime.txt (1)
1-1: Verify Binder supports python-3.13 for this repo
Binder base images can lag new Python versions. If 3.13 isn’t available, launches will fail.
Please try launching the Binder link and, if it fails, temporarily revert to python-3.12 or supply a full environment.yml to guarantee solver availability.
.github/workflows/ci.yml (1)
10-12: Matrix addition looks good; align with requires-python upper bound
You advertise requires-python <=3.14 in pyproject, but CI doesn’t test 3.14. Either add 3.14 (or 3.14-dev) to the matrix or change requires-python to "<3.14" until supported.
```diff
     matrix:
       os: [ubuntu-latest, windows-latest] # note mac-latest tests fail due to slight differences in images
-      python-version: ["3.12", "3.13"]
+      python-version: ["3.12", "3.13"] # add "3.14-dev" when wheels available, or tighten requires-python to "<3.14"
```
pyproject.toml (2)
10-10: Version bump to 1.0.1 — OK
No issues spotted with the metadata increment.
22-24: Classifiers updated — OK
Adding 3.12 and 3.13 classifiers matches the rest of the PR.
requirements.txt (6)
2-2: Header correctly targets Python 3.13.
Autogenerated note aligns with the PR’s goal. No action needed.
25-26: bleach[css] extras and css/tinycss2 pins look consistent.
Using bleach[css] and pinning cssselect2/tinycss2/webencodings is expected with pip-compile. No issues spotted.
Also applies to: 267-272, 299-303
72-83: Jupyter/IPython/Plotly stack versions look coherent for Python 3.13.
ipykernel/ipython/prompt-toolkit pins align; pydata-sphinx-theme and pygments also consistent. Proceed.
Also applies to: 174-176, 188-199
113-116: Manual Verification Required: Confirm cp313 Wheels for Compiled Dependencies
I attempted to run the automated wheel-availability check, but the output was incomplete and inconclusive. Please manually verify on PyPI that the exact pinned versions in requirements.txt include Python 3.13 (cp313) wheels (or abi3 wheels) for all key compiled dependencies. Without these, installs will fall back to source builds and are likely to fail.
Packages to check (with their pinned versions from requirements.txt):
- numpy == 2.1.2
- matplotlib == 3.10.5
- pillow
- psutil
- pyzmq
- cffi
- regex
- rdkit
This also applies to the same groups of lines elsewhere in requirements.txt (lines 138–145, 165–171, 180–181, 216–220, 226–227, 272–276, 35–36). Ensure each one ships a cp313 wheel (or compatible abi3 wheel) for all supported platforms.
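One way to sanity-check the list above is to pull each pin's file list from the PyPI JSON API (https://pypi.org/pypi/{name}/{version}/json) and scan the wheel filenames for a usable tag. The helper below is a sketch that only inspects filenames; the sample names in the usage are illustrative, not verified PyPI contents:

```python
def has_cp313_wheel(filenames):
    """Return True if any wheel filename suggests installability on CPython 3.13:
    an explicit cp313 tag, a stable-ABI (abi3) build, or a pure-Python py3 wheel."""
    for name in filenames:
        if not name.endswith(".whl"):
            continue  # sdists don't count; they force a source build
        # Wheel filename: {dist}-{version}(-{build})?-{python}-{abi}-{platform}.whl
        tags = name[:-4].split("-")
        python_tag, abi_tag = tags[-3], tags[-2]
        if "cp313" in python_tag or abi_tag == "abi3" or python_tag.startswith("py3"):
            return True
    return False
```

For example, a hypothetical file list like ["numpy-2.1.2-cp313-cp313-manylinux_2_17_x86_64.whl"] would pass, while a list containing only cp312 wheels and an sdist would not.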
174-176: Plotly v6 Upgrade: Sanity-check all downstream usages
Plotly 6.x introduces breaking changes vs 5.x. Your codebase includes many Plotly references; please review and update as needed:
• In pyopenms_viz/_plotly/core.py:
- plotly.graph_objects as go
- plotly.graph_objs.Figure
- plotly.subplots.make_subplots
• In pyopenms_viz/testing/PlotlySnapshotExtension.py:
- plotly.io.to_json
• In docs/conf.py and gallery scripts:
- import plotly.io as pio
- references to plotly.js v2.x in notebooks under nbs/
• In notebooks (*.ipynb) and Sphinx galleries:
- ensure Figure.show(), default templates, and I/O methods still work
Check for renamed functions, changed defaults, removed APIs (e.g., template behavior, write_html options), and adjust imports or call signatures accordingly.
228-229: No security advisories affecting pinned versions
Both pinned versions already include the relevant CVE fixes; no further action needed:
- requests 2.32.4 contains the fix for CVE-2024-47081 and is not vulnerable.
- urllib3 2.5.0 ships the patches for CVE-2025-50181 and CVE-2025-50182 and is not vulnerable.
```diff
     "Topic :: Scientific/Engineering :: Chemistry",
 ]
-requires-python = ">=3.10, <=3.13"
+requires-python = ">=3.10, <=3.14"
```
💡 Verification agent
🧩 Analysis chain
Re-evaluate requires-python upper bound vs CI coverage
You allow installation on 3.14 but do not test it in CI. Either:
- Add 3.14 (or 3.14-dev) to CI matrices, or
- Change to "<3.14" until you officially support it.
```diff
-requires-python = ">=3.10, <=3.14"
+# Option A: tighten until CI covers 3.14
+requires-python = ">=3.10, <3.14"
```
Alternatively, add Python 3.14 to classifiers and CI once available.
🏁 Script executed:
```shell
#!/bin/bash
# Search for python-version entries in GitHub Actions workflows to verify CI coverage
grep -R "python-version" -n .github/workflows
```
Length of output: 554
Re-evaluate requires-python upper bound vs CI coverage
Our GitHub Actions only run on Python 3.12 and 3.13, but pyproject.toml currently allows installation on 3.14. Please choose one of the following:
• Tighten the upper bound until we add CI for 3.14:
```diff
-requires-python = ">=3.10, <=3.14"
+requires-python = ">=3.10, <3.14"
```
• Or add Python 3.14 (or 3.14-dev) to your CI matrices in:
– .github/workflows/ci.yml (matrix.python-version)
– .github/workflows/static.yml
– .github/workflows/execute_notebooks.yml
– (and any other workflow using python-version)
Once CI passes on 3.14, don’t forget to update your PyPI classifiers to include it.
```
rdkit==2025.3.5
    # via pyopenms_viz (pyproject.toml)
referencing==0.36.2
```
High risk: rdkit pin may not have Python 3.13 wheels on PyPI.
RDKit wheels on PyPI have historically been limited/inconsistent across OS/architectures. If cp313 wheels aren’t published for 2025.3.5, installs will fail (CI/RTD/users).
Actions:
- Verify the exact version exists on PyPI and ships cp313 wheels for Linux/macOS/Windows.
- If not, consider: using conda for rdkit, switching to rdkit-pypi (community wheels), or gating rdkit behind an extra/optional dependency for pip-only installs.
I can help draft extras/platform markers or a docs note with conda-first guidance if needed.
🤖 Prompt for AI Agents
In requirements.txt around lines 220 to 222, the pinned rdkit==2025.3.5 may not
publish cp313 wheels on PyPI causing installs to fail for Python 3.13; verify on
PyPI that rdkit 2025.3.5 ships cp313 wheels for Linux/macOS/Windows and if it
does not, remove or replace the direct pip pin by one of these fixes: (1) remove
rdkit from the main requirements and document a conda-first installation (or add
a conda environment file) so users install RDKit via conda, (2) replace with
rdkit-pypi (community wheels) and test cp313 availability, or (3) make rdkit an
optional extra or use platform/marker-specific requirement lines so pip-only
installs don’t fail; update README/installation docs to reflect the chosen
approach and add CI matrix checks for Python 3.13 to ensure installs succeed.
In theory this should work; however, the CI tests will likely have to be updated because they are all visual tests and thus very specific to the versions they were created with. I am OK if you want to update 3.13 to be supported, but we should keep the testing at 3.12 for now.
jcharkow
left a comment
See note above. I am not sure if I am ready to bump everything to 3.13 yet since all the tests are specific to 3.12. Since these are visual snapshots, they are very sensitive to version changes.
In theory 3.13 should work though so if we want to just bump that up on PyPI I'm fine with that.
I would keep this as is for testing purposes.
```diff
 os: ubuntu-24.04
 tools:
-  python: "3.12"
+  python: "3.13"
```
Would keep testing at 3.12 for now, see above
.github/workflows/ci.yml
Outdated
```diff
 matrix:
   os: [ubuntu-latest, windows-latest] # note mac-latest tests fail due to slight differences in images
-  python-version: ["3.12"]
+  python-version: ["3.12", "3.13"]
```
Would keep testing at 3.12 for now, see above.
Any update on this? I would like to see Python 3.13 supported as well.
Actionable comments posted: 0
🧹 Nitpick comments (16)
test/__snapshots__/test_peakmap3d/test_peakmap_plot[ms_plotly-kwargs1].json (1)
2022-2033: "scattermap" is only recognized by recent Plotly releases; remove it to avoid template bloat.
Older Plotly versions support scattermapbox/scattergeo but not "scattermap" (a newer MapLibre-based trace). Keeping a trace type the project does not use in the template is useless and may confuse future readers/tools. Recommend deleting this block across snapshots.
Apply:
```diff
-        "scattermap": [
-          {
-            "marker": {
-              "colorbar": {
-                "outlinewidth": 1,
-                "tickcolor": "rgb(36,36,36)",
-                "ticks": "outside"
-              }
-            },
-            "type": "scattermap"
-          }
-        ],
```
Run the script in the first comment below to ensure no runtime code attempts to use "scattermap".
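As a sketch, that check could be as simple as grepping the package sources for the trace type (the pyopenms_viz/ path is assumed; adjust to the actual repo layout):

```shell
# Look for runtime (non-snapshot) references to the "scattermap" trace type
# in the package sources. No matches means it is safe to drop from templates.
if grep -rn "scattermap" --include="*.py" pyopenms_viz/ 2>/dev/null; then
  echo "scattermap referenced in runtime code"
else
  echo "no runtime references found"
fi
```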
test/__snapshots__/test_spectrum/test_spectrum_plot[ms_plotly-kwargs1].json (3)
9-16: Binary-encoded coordinates: lock endianness and schema.
Good for size/perf, but ensure the reader fixes endianness (little-endian) and documents a schema version so snapshots are stable across Python 3.13/OS/arch.
Add metadata alongside dtype/bdata, e.g. "endian":"le","version":"1", and enforce LE when serializing.
Confirm the deserializer exists and enforces endianness for these fields.
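A sketch of what the serializer side could look like, assuming NumPy; the "endian" and "version" fields are the suggested additions above, not an existing snapshot format:

```python
import base64

import numpy as np


def encode_bdata(values, dtype: str = "f8") -> dict:
    """Serialize a numeric sequence as a snapshot field with explicit
    little-endian byte order plus the suggested schema metadata."""
    # Force little-endian regardless of the host's native byte order.
    arr = np.asarray(values).astype(np.dtype(dtype).newbyteorder("<"))
    return {
        "dtype": dtype,
        "bdata": base64.b64encode(arr.tobytes()).decode("ascii"),
        "endian": "le",   # enforced above via newbyteorder("<")
        "version": "1",
    }
```

A matching reader would then decode with np.frombuffer(..., dtype="<" + field["dtype"]) so both sides agree on byte order.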
62-65: y uses i1 (8-bit signed); risk of overflow/clipping for intensities.
Mass-spec intensities often exceed 127. Consider u2/u4 or f4 to be safe.
Switch to "dtype":"u2" or "f4" for y across snapshots and the serializer.
590-601: Remove nonstandard "scattermap" from template.
Same concern as other files; delete this block.
test/__snapshots__/test_spectrum/test_spectrum_with_annotations[ms_plotly-kwargs2].json (2)
9-16: Binary x/y blocks: ensure stable decoding across 3.13 and platforms.
Document and enforce endianness; include a small schema/version to guard future changes. Check that y’s numeric range won’t overflow chosen dtype.
Also applies to: 46-53, 84-91, 157-163, 194-200, 231-237
674-685: Drop "scattermap" template entry.
Use scattermapbox/scattergeo as needed; otherwise remove.
test/__snapshots__/test_peakmap3d/test_peakmap_plot[ms_plotly-kwargs2].json (1)
2475-2486: Unknown Plotly type "scattermap" — remove.
Delete this block to keep templates valid and lean.
test/__snapshots__/test_peakmap3d/test_peakmap_plot[ms_plotly-kwargs0].json (1)
2475-2486: Remove "scattermap" from template.
Same rationale as other files.
test/__snapshots__/test_spectrum/test_spectrum_with_annotations[ms_plotly-kwargs1].json (2)
8-15: Binary x/y: add endian/schema and review y dtype.
Mirror prior comment: enforce LE endianness; add schema version; ensure y dtype won’t clip future intensities.
461-472: Delete nonstandard "scattermap" block.
Consistent cleanup across snapshots.
test/__snapshots__/test_spectrum/test_spectrum_plot[ms_plotly-kwargs0].json (2)
8-15: Binary-encoded data: ensure deterministic, portable decoding.
Same as earlier: specify endianness and schema; reconsider "i1" for y.
461-472: Remove "scattermap" template.
Keep only supported trace templates.
test/__snapshots__/test_spectrum/test_spectrum_plot[ms_plotly-kwargs2].json (1)
9-16: Stabilize binary snapshot format (endianness/decoder).
Ensure blobs are always little-endian and decoded before compare (or compare numeric arrays) to avoid Py 3.13/arch flakiness. Consider a snapshot normalizer to (de)serialize deterministically.
Also applies to: 13-16
test/__snapshots__/test_peakmap/test_peakmap_plot[ms_bokeh-kwargs0].html (1)
21-24: Deflake Bokeh snapshots by scrubbing volatile IDs.
Normalize/remove GUIDs (data-root-id, docid/root_ids) during snapshot generation to avoid churn across runs/versions.
Also applies to: 31-34
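One possible normalizer, run on the generated HTML before it is written to the snapshot; this is a sketch, and the placeholder string is arbitrary:

```python
import re

# Matches hex GUIDs of the form Bokeh embeds use,
# e.g. "daf74f36-0900-47e8-b537-ce0c971be270".
_GUID_RE = re.compile(
    r"\b[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\b"
)


def scrub_volatile_ids(html: str) -> str:
    """Replace run-specific GUIDs with a stable placeholder so Bokeh HTML
    snapshots compare equal across regenerations."""
    return _GUID_RE.sub("SCRUBBED-ID", html)
```

Applying the same scrub to both the stored snapshot and the freshly rendered output keeps the comparison focused on real structural differences.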
test/__snapshots__/test_peakmap/test_peakmap_plot[ms_plotly-kwargs1].json (1)
7-10: Add a decode/shape validator for blobs.
In tests, decode bdata and assert lengths match x/y/customdata per trace to catch partial/ordering issues early.
Also applies to: 60-67, 221-224, 275-281
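A minimal shape check of this kind could look as follows; the decoder is passed in, and the field names follow the Plotly trace schema:

```python
def trace_lengths_consistent(trace: dict, decode) -> bool:
    """Check that x, y, and customdata in a Plotly trace dict all have the
    same length. `decode` turns a {"dtype", "bdata"} blob into a sequence;
    plain lists are measured directly; absent fields are skipped."""
    lengths = set()
    for key in ("x", "y", "customdata"):
        field = trace.get(key)
        if isinstance(field, dict) and "bdata" in field:
            lengths.add(len(decode(field)))
        elif field is not None:
            lengths.add(len(field))
    return len(lengths) <= 1
```

A test would assert trace_lengths_consistent(trace, decode) for every trace in the snapshot before comparing values.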
test/__snapshots__/test_peakmap/test_peakmap_plot[ms_plotly-kwargs2].json (1)
1-200: Binary payloads: ensure deterministic serialization.
Guarantee fixed endianness and base64 formatting; add a normalizer to compare numeric arrays, not raw blobs.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (5)
- test/__snapshots__/test_chromatogram/test_chromatogram_plot[ms_matplotlib-kwargs0].png is excluded by !**/*.png
- test/__snapshots__/test_chromatogram/test_chromatogram_plot[ms_matplotlib-kwargs3].png is excluded by !**/*.png
- test/__snapshots__/test_chromatogram/test_chromatogram_plot[ms_matplotlib-kwargs4].png is excluded by !**/*.png
- test/__snapshots__/test_mobilogram/test_mobilogram_plot[ms_matplotlib-kwargs0].png is excluded by !**/*.png
- test/__snapshots__/test_mobilogram/test_mobilogram_plot[ms_matplotlib-kwargs3].png is excluded by !**/*.png
📒 Files selected for processing (44)
- .github/workflows/ci.yml (1 hunks)
- test/__snapshots__/test_chromatogram/test_chromatogram_plot[ms_plotly-kwargs1].json (8 hunks)
- test/__snapshots__/test_chromatogram/test_chromatogram_plot[ms_plotly-kwargs2].json (8 hunks)
- test/__snapshots__/test_chromatogram/test_chromatogram_with_annotation[ms_plotly-kwargs0].json (8 hunks)
- test/__snapshots__/test_peakmap/test_peakmap_mz_im[ms_plotly].json (3 hunks)
- test/__snapshots__/test_peakmap/test_peakmap_plot[ms_bokeh-kwargs0].html (1 hunks)
- test/__snapshots__/test_peakmap/test_peakmap_plot[ms_plotly-kwargs0].json (3 hunks)
- test/__snapshots__/test_peakmap/test_peakmap_plot[ms_plotly-kwargs1].json (15 hunks)
- test/__snapshots__/test_peakmap/test_peakmap_plot[ms_plotly-kwargs2].json (15 hunks)
- test/__snapshots__/test_peakmap/test_peakmap_plot[ms_plotly-kwargs3].json (15 hunks)
- test/__snapshots__/test_peakmap3d/test_peakmap_plot[ms_plotly-kwargs0].json (1 hunks)
- test/__snapshots__/test_peakmap3d/test_peakmap_plot[ms_plotly-kwargs1].json (1 hunks)
- test/__snapshots__/test_peakmap3d/test_peakmap_plot[ms_plotly-kwargs2].json (1 hunks)
- test/__snapshots__/test_peakmap_marginals/test_peakmap_marginals[ms_plotly].json (4 hunks)
- test/__snapshots__/test_spectrum/test_mirror_spectrum[ms_bokeh].html (1 hunks)
- test/__snapshots__/test_spectrum/test_mirror_spectrum[ms_plotly].json (3 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_binning[ms_bokeh-kwargs0].html (1 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_binning[ms_bokeh-kwargs1].html (1 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_binning[ms_bokeh-kwargs2].html (1 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_binning[ms_bokeh-kwargs3].html (1 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_binning[ms_bokeh-kwargs4].html (1 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_binning[ms_plotly-kwargs0].json (2 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_binning[ms_plotly-kwargs1].json (2 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_binning[ms_plotly-kwargs2].json (2 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_binning[ms_plotly-kwargs3].json (2 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_binning[ms_plotly-kwargs4].json (2 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_plot[ms_bokeh-kwargs0].html (1 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_plot[ms_bokeh-kwargs1].html (1 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_plot[ms_bokeh-kwargs2].html (1 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_plot[ms_bokeh-kwargs3].html (1 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_plot[ms_plotly-kwargs0].json (2 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_plot[ms_plotly-kwargs1].json (4 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_plot[ms_plotly-kwargs2].json (5 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_plot[ms_plotly-kwargs3].json (2 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_plot_with_peak_color[ms_bokeh-kwargs0].html (1 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_plot_with_peak_color[ms_bokeh-kwargs1].html (1 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_plot_with_peak_color[ms_plotly-kwargs0].json (3 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_plot_with_peak_color[ms_plotly-kwargs1].json (3 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_with_annotations[ms_bokeh-kwargs0].html (1 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_with_annotations[ms_bokeh-kwargs1].html (1 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_with_annotations[ms_bokeh-kwargs2].html (1 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_with_annotations[ms_plotly-kwargs0].json (7 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_with_annotations[ms_plotly-kwargs1].json (2 hunks)
- test/__snapshots__/test_spectrum/test_spectrum_with_annotations[ms_plotly-kwargs2].json (7 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- .github/workflows/ci.yml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (14)
- GitHub Check: build
- GitHub Check: test (ubuntu-latest, 3.13)
- GitHub Check: test (windows-latest, 3.12)
- GitHub Check: test (ubuntu-latest, 3.14)
- GitHub Check: test (ubuntu-latest, 3.12)
- GitHub Check: test (windows-latest, 3.13)
- GitHub Check: test (windows-latest, 3.14)
- GitHub Check: execute-notebooks
- GitHub Check: test (windows-latest, 3.14)
- GitHub Check: test (ubuntu-latest, 3.13)
- GitHub Check: test (windows-latest, 3.13)
- GitHub Check: test (ubuntu-latest, 3.12)
- GitHub Check: test (ubuntu-latest, 3.14)
- GitHub Check: test (windows-latest, 3.12)
🔇 Additional comments (41)
test/__snapshots__/test_spectrum/test_spectrum_plot[ms_bokeh-kwargs3].html (1)
15-15: Verify snapshot update is intentional due to Bokeh version upgrade.
This snapshot file reflects a Bokeh version bump from 3.6.0 to 3.7.3. The changes are consistent: the CDN script reference (line 15), div id and data-root-id (line 21), docs_json script element id (line 23), and render_items mapping (line 33) have all been updated accordingly.
Since test snapshots are auto-generated and regenerate when dependencies change, this is expected behavior. However, confirm that:
- The Bokeh 3.7.3 upgrade is intentional as part of Python 3.13 support
- All visual tests have been re-validated after the dependency update
The PR comments indicate visual tests are version-specific—ensure this snapshot accurately represents the expected rendering output in Bokeh 3.7.3.
Also applies to: 21-21, 23-23, 33-33
test/__snapshots__/test_spectrum/test_spectrum_plot[ms_bokeh-kwargs2].html (1)
15-15: Snapshot update consistent with Bokeh 3.7.3 dependency bump.
The snapshot has been regenerated following the Bokeh upgrade from 3.6.0 to 3.7.3, with expected changes:
- CDN script updated to Bokeh 3.7.3
- HTML root IDs and JSON embedding references regenerated consistently
- JSON payload structure remains valid with updated version metadata
Also applies to: 21-24, 32-33
test/__snapshots__/test_spectrum/test_mirror_spectrum[ms_bokeh].html (1)
15-15: Snapshot update consistent with Bokeh 3.7.3 dependency bump.
The snapshot has been regenerated following the Bokeh upgrade from 3.6.0 to 3.7.3, with expected changes:
- CDN script updated to Bokeh 3.7.3
- HTML root IDs and JSON embedding references regenerated consistently
- JSON payload structure remains valid with updated version metadata
- Mirror spectrum visualization data structure preserved
Also applies to: 21-24, 32-33
test/__snapshots__/test_peakmap/test_peakmap_mz_im[ms_plotly].json (2)
7-10: Verify snapshot regeneration aligns with intended library upgrades.
This snapshot has undergone significant structural changes: x/y coordinate arrays and marker colors are now binary-encoded objects with dtype and bdata fields, and customdata follows the same pattern. These changes suggest a visualization library version bump (likely Plotly).
Since Python 3.13 support often requires dependency updates, please confirm:
- Was this snapshot intentionally regenerated as part of the library upgrade for Python 3.13?
- Do the CI/test pipelines properly regenerate snapshots on dependency version changes?
- Have the visual tests been validated to produce correct output with the new binary encoding?
Also applies to: 60-67, 69-73
471-482: Verify new scattermap trace type is intentional.
A new scattermap trace configuration has been added to the template. Confirm this aligns with the visualization library version bump and that the corresponding trace type is used correctly in the plotting code.
test/__snapshots__/test_spectrum/test_spectrum_plot_with_peak_color[ms_plotly-kwargs1].json (1)
9-16: Verify snapshots were regenerated by test suite, not manually edited.
This test snapshot file contains updated Plotly JSON with binary-encoded trace data (bdata with dtype) and a new scattermap trace type, consistent with a Bokeh/Plotly version bump. These changes are expected side effects of dependency regeneration during Python 3.13 support work.
However, please confirm that: (1) these snapshots were regenerated by running the test suite, not hand-edited; (2) all visualization tests pass with the updated snapshots; and (3) the Bokeh/Plotly version bump is intentional and compatible with Python 3.13.
Also applies to: 196-203, 616-627
test/__snapshots__/test_spectrum/test_spectrum_plot_with_peak_color[ms_plotly-kwargs0].json (2)
9-16: Verify intentionality of binary-encoded trace data.
Trace coordinates (x and y) are now encoded as binary blocks with dtype and bdata fields instead of plain numeric arrays. This represents a serialization format change, likely from a Plotly version bump.
Please confirm:
- Was Plotly updated specifically to support Python 3.13, or as a separate dependency upgrade?
- Are the binary encodings semantically equivalent to the previous arrays (i.e., no data loss or corruption)?
- Were these snapshots regenerated automatically by the test suite, or were changes manually curated?
You can verify the Plotly version and changelog by searching the repository for version pins or recent lock file updates.
Also applies to: 196-203
616-627: Confirm scattermap trace type compatibility.
A new scattermap trace type has been added to the layout template. This is a newer Plotly trace type and indicates a Plotly version bump. Ensure this doesn't introduce rendering regressions or break backward compatibility with earlier Plotly versions if those are still supported.
test/__snapshots__/test_spectrum/test_mirror_spectrum[ms_plotly].json (2)
8-15: Verify snapshot reflects correct data serialization under Python 3.13 and updated Plotly.
The numeric arrays for trace x/y coordinates have been converted from explicit arrays to binary-encoded objects (dtype: "f8", base64 bdata). This is a valid Plotly optimization, but the snapshot changes must be confirmed to represent actual expected output, not serialization artifacts.
- All visual tests pass with these updated snapshots.
- The rendered visualizations are identical to the previous version (visual regression detected).
- No data corruption occurred during the binary encoding step.
If you have access to the test execution logs, verify that snapshot matching succeeded for all test variants.
Also applies to: 71-78
476-487: Verify scattermap layout addition is compatible with rendering. A new `scattermap` entry has been added to the Plotly template layout. This trace type was introduced in newer Plotly versions and is consistent with the dependency updates. Ensure that the visualization code correctly handles this new layout definition and that no rendering issues occur.

test/__snapshots__/test_spectrum/test_spectrum_binning[ms_bokeh-kwargs2].html (1)
15-15: IDs and cross-references are consistent; snapshot appears properly regenerated. The Bokeh version has been uniformly updated to 3.7.3 across the script source (line 15) and document metadata (line 24). All ID references are correctly aligned:
- Root div id `daf74f36-0900-47e8-b537-ce0c971be270` matches the root mapping in render_items (line 33)
- Script element id `c4e1c790-c6bc-42e9-863a-134e180df6a0` is correctly retrieved on line 32
- docs_json docid `c56d40d2-c6ff-41f3-817e-0c43873f53cd` matches the root reference

However, snapshot files should be carefully reviewed to ensure the visual output is correct. Per the AI-generated summary, similar Bokeh version bumps and DOM migrations are reflected across multiple test snapshot files in this PR.
Please confirm that this snapshot file (and related snapshots with identical patterns) was intentionally regenerated as part of the Bokeh dependency upgrade and that the visual output has been validated.
Also applies to: 21-21, 23-24, 32-33
test/__snapshots__/test_spectrum/test_spectrum_plot_with_peak_color[ms_bokeh-kwargs1].html (1)
15-15: Verify snapshot was properly regenerated with Bokeh 3.7.3. This snapshot file reflects a Bokeh library version bump from 3.6.0 to 3.7.3, with corresponding updates to the embedded document structure, root IDs, and UUIDs. The HTML structure is valid and IDs are internally consistent.
However, snapshot files are auto-generated test fixtures, and I cannot verify the visual correctness or data integrity without running the tests in your environment. Per the PR feedback, visual regression tests are version-specific and sensitive to dependency changes.
Please confirm:
- This snapshot was regenerated by running the test suite with Bokeh 3.7.3 installed.
- All visual regression tests pass with the updated snapshots.
- Any other snapshot files updated in this PR follow the same Bokeh version pattern.
If these snapshots were regenerated by your test infrastructure, this comment can be dismissed.
Also applies to: 21-21, 23-24, 32-33
test/__snapshots__/test_spectrum/test_spectrum_with_annotations[ms_plotly-kwargs0].json (1)
1-1252: Verify the snapshot regeneration context and dependency version changes. This test snapshot has been regenerated with significant formatting changes:
- Data encoding format converted from inline numeric arrays to binary-encoded objects (dtype/bdata format)
- New "scattermap" trace type added to the layout template (lines 674-685)
- These changes indicate a Plotly or Bokeh version dependency bump, not a Python 3.13 compatibility fix alone
Snapshot files should be auto-generated by test runners after code or dependency changes. Since the PR objective is "Support python 3.13," please clarify:
- Was a Plotly/Bokeh version updated to support Python 3.13?
- Are these snapshot changes intentional and tied to a specific dependency version update?
- Have you verified that the regenerated snapshots are valid (test assertions pass)?
Consider including a summary of dependency version updates in the PR description if not already present.
test/__snapshots__/test_spectrum/test_spectrum_binning[ms_bokeh-kwargs0].html (1)
15-15: Snapshot regenerated for Bokeh 3.7.3 upgrade—verify visual output unchanged. The snapshot reflects the expected changes from upgrading Bokeh from 3.6.0 to 3.7.3: CDN script URL, root div ID, script element ID, and embedding references have all been regenerated. Data integrity (mz/intensity arrays) is preserved.
Ensure:
- Bokeh 3.7.3 is pinned in the project's dependency configuration (check `pyproject.toml`, `setup.py`, or `requirements*.txt`).
- The visual rendering of the spectrum plot is correct (snapshot structure alone doesn't verify visual correctness).
Run the visualization tests locally to confirm output is visually equivalent to the previous version.
Also applies to: 21-21, 23-23, 32-33
test/__snapshots__/test_spectrum/test_spectrum_binning[ms_bokeh-kwargs1].html (1)
15-15: Snapshot regenerated consistently for Bokeh 3.7.3—confirm visual rendering. This snapshot reflects the same Bokeh upgrade pattern as the first file (3.6.0 → 3.7.3). IDs and embedded JSON are regenerated correctly, with data integrity preserved (18 data points for this test variant). Structure and embedding logic are unchanged.
Verify that:
- Bokeh 3.7.3 dependency is correctly pinned across the project.
- Visual rendering output for this test variant matches expectations when tests run locally.
Also applies to: 21-21, 23-23, 32-33
test/__snapshots__/test_spectrum/test_spectrum_with_annotations[ms_bokeh-kwargs0].html (1)
15-15: Bokeh version upgrade snapshot regenerated correctly. The snapshot file reflects a successful upgrade from Bokeh 3.6.0 to 3.7.3: script source, embedding container IDs, embedded document references (docs_json, docid, root mappings, and render_items), and the embedding flow are all consistent with the new Bokeh version.
Also applies to: 21-21, 23-23, 24-24, 32-33
test/__snapshots__/test_spectrum/test_spectrum_with_annotations[ms_bokeh-kwargs1].html (1)
15-15: Bokeh version upgrade snapshot regenerated correctly. The snapshot file reflects a successful upgrade from Bokeh 3.6.0 to 3.7.3, with all embedding IDs, script sources, and render_items correctly updated. Test data variant (consolidated annotations) is properly reflected in the regenerated snapshot.
Also applies to: 21-21, 23-24, 32-33
test/__snapshots__/test_spectrum/test_spectrum_with_annotations[ms_bokeh-kwargs2].html (1)
15-15: Bokeh version upgrade snapshot regenerated correctly. The snapshot file reflects a successful upgrade from Bokeh 3.6.0 to 3.7.3, with all embedding IDs and script sources correctly updated. Test data variant (dual ion and custom annotations across 6 renderers) is properly represented in the regenerated snapshot.
Also applies to: 21-21, 23-24, 32-33
test/__snapshots__/test_spectrum/test_spectrum_binning[ms_bokeh-kwargs3].html (1)
15-15: Approved: test snapshots automatically regenerated for Bokeh 3.7.3.
Bokeh version 3.7.3 (May 2025) is a patch release, and the snapshot changes are systematic and consistent with auto-regeneration across the test suite. All modified identifiers (CDN references and embedded document IDs) reflect the expected output from an updated Bokeh library. No manual edits detected. The changes are appropriate for the Python 3.13 support PR.
test/__snapshots__/test_spectrum/test_spectrum_binning[ms_bokeh-kwargs4].html (3)
1-61: File structure and HTML validity look good. The snapshot HTML is well-formed with valid structure, complete Bokeh embed logic, and intact spectrum data arrays. No truncation, corruption, or syntax errors detected.
To fully validate this snapshot update as part of the Python 3.13 support effort, please confirm:
- Were all test snapshots regenerated via automated test runs (not manually edited)?
- Does the test suite pass with Python 3.13 + Bokeh 3.7.3?
If snapshot regeneration was automated, you may want to document the snapshot regeneration step in the PR description or CI/CD workflow for future maintainers.
21-21: Based on my verification: all Bokeh snapshots were consistently regenerated with version 3.7.3. The verification confirms that all 34 Bokeh test snapshots reference `bokeh-3.7.3.min.js` with no mixed versions, confirming systematic regeneration (not manual edits). Bokeh 3.7.3 (May 2025) is a patch release that postdates Python 3.13 (released October 7, 2024), establishing compatibility. The uniform changes to IDs and references across all snapshots indicate the regeneration was part of an automated upgrade process for the dependency bump, which is expected behavior when updating to a newer Bokeh version. The snapshot changes are valid and intentional—no manual editing is evident.
15-15: Bokeh 3.7.3 compatibility with Python 3.13 verified. Bokeh 3.7.3 supports CPython 3.10 and later, confirming Python 3.13 compatibility. The version is explicitly pinned in `requirements.txt`, and the snapshot regeneration indicates this was tested. No issues found.

test/__snapshots__/test_spectrum/test_spectrum_plot[ms_bokeh-kwargs0].html (1)
15-15: Snapshot update is correct and properly regenerated for Bokeh 3.7.3 dependency upgrade. Verification confirms:
- Bokeh 3.7.3 is a patch (bugfix) release with no breaking changes
- requirements.txt pins bokeh==3.7.3 (matching snapshot content)
- Test framework uses pytest + syrupy for automatic snapshot regeneration
- Snapshot correctly reflects version 3.7.3 with properly regenerated IDs and preserved visualization data (m/z, intensity arrays, labels)
- All structural changes (CDN script reference, docs_json version field, root IDs) are expected from the version upgrade
test/__snapshots__/test_spectrum/test_spectrum_plot_with_peak_color[ms_bokeh-kwargs0].html (1)
15-15: Snapshot regeneration is properly managed and consistent across all test files. The verification confirms:
- All 33 Bokeh snapshot files exclusively reference bokeh-3.7.3, confirming comprehensive regeneration with this version.
- The requirements.txt file pins bokeh==3.7.3, ensuring deterministic test snapshots for Python 3.13 testing.
- Bokeh 3.7.3 officially supports CPython 3.10 and later, which covers the project's supported range (Python 3.10–3.14).
- Version pinning via pip-compile (requirements.txt) ensures snapshot reproducibility, while pyproject.toml's loose constraint (bokeh≥3.4.1) allows flexibility for end users.
The snapshot changes are correct and ready.
test/__snapshots__/test_spectrum/test_spectrum_plot[ms_bokeh-kwargs1].html (1)
15-15: Snapshot correctly updated for Bokeh 3.7.3 with no issues. Bokeh 3.7.3 (released May 2025) is a patch release compatible with Python 3.13, with no breaking changes. The snapshot changes—CDN version update and regenerated document IDs—are expected outcomes from the version upgrade and confirm the auto-regeneration process is working correctly.
The custom `BokehSnapshotExtension` and syrupy plugin handle snapshot regeneration automatically; no manual edits were needed. The changes are legitimate and ready to merge.

test/__snapshots__/test_chromatogram/test_chromatogram_with_annotation[ms_plotly-kwargs0].json (2)
3878-3889: Scattermap addition is expected with Plotly 6.2.0. The codebase uses Plotly 6.2.0 (per requirements.txt), and scattermap is a standard trace type introduced in Plotly 5.24.0 (Aug 29, 2024). Since Plotly 6.2.0 is significantly newer than 5.24.0, the scattermap appearance in the snapshot reflects the current dependency version and is not indicative of experimental or unexpected changes. The snapshot update is a normal consequence of using a newer Plotly version.
9-16: Snapshot changes are expected from Plotly upgrade and do not require action. The snapshot regeneration reflects an intentional update to Plotly 6.2.0, which uses base64 encoding of typed arrays in plotly JSON to preserve precision and improve performance. The requirements.txt file was auto-generated with Python 3.13 support, triggering the snapshot regeneration. The binary-encoded trace data (dtype "f8" with base64 "bdata") is the expected output format. CI tests run with `--snapshot-warn-unused` and are configured to pass with these changes.

test/__snapshots__/test_peakmap/test_peakmap_plot[ms_bokeh-kwargs0].html (1)
23-24: Verify tooltip field mapping (mz vs rt). In docs_json, HoverTool shows "Retention Time: @mz" and "mass-to-charge: @rt". Confirm this mapping is intentional.
test/__snapshots__/test_peakmap/test_peakmap_plot[ms_plotly-kwargs0].json (2)
69-73: Validate customdata blob ordering/shape. Confirm row-major order and that decoded shape (168×4) aligns with hovertemplate indexing across platforms/Py 3.13.
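The row-major assumption can be sanity-checked with a stand-in buffer of the same 168×4 size (the values below are synthetic, not the snapshot's actual customdata):

```python
import numpy as np

flat = np.arange(168 * 4, dtype="f8")  # stand-in for the decoded bdata buffer
cd = flat.reshape(168, 4)              # numpy reshape defaults to row-major (C order)

# hovertemplate indices %{customdata[0]}..%{customdata[3]} address columns of one row:
assert cd[0].tolist() == [0.0, 1.0, 2.0, 3.0]
assert cd[1].tolist() == [4.0, 5.0, 6.0, 7.0]
```

If Plotly's decoder used column-major order instead, row 0 would read `[0, 168, 336, 504]`, which would scramble every hover label; the check above makes that failure mode explicit.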
471-482: I'll verify whether "scattermap" is a supported Plotly trace type and confirm the correct alternatives. The review comment is incorrect.
"scattermap" is a valid and officially supported Plotly trace type. The confusion appears to stem from "scattermapbox," which is deprecated in favor of the "scattermap" trace type. The snapshot entry is correct and aligns with current Plotly best practices.
Likely an incorrect or invalid review comment.
test/__snapshots__/test_spectrum/test_spectrum_plot[ms_plotly-kwargs3].json (2)
461-472: No action needed. "scattermap" is a valid and supported Plotly trace type. Plotly supports "scattermap" as a valid trace type (MapLibre-based). The snapshot file is correct as written; no replacement or removal is necessary.
Likely an incorrect or invalid review comment.
12-15: Int8 encoding is appropriate for this test data—no truncation risk detected. Verification of the decoded int8 values from the snapshot shows a maximum value of 25, well within the int8 range [-128, 127]. The original test data confirms all intensity values are small non-negative integers (6–25), and the test uses `relative_intensity=True`, which normalizes the data. No overflow or underflow occurs, and int8 is a suitable and efficient choice for storing these values.

Likely an incorrect or invalid review comment.
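The truncation check described above can be reproduced mechanically. A small sketch using a few representative intensities (illustrative values, not the actual snapshot data):

```python
import numpy as np

values = np.array([6, 12, 25], dtype=np.int64)  # representative small intensities
info = np.iinfo(np.int8)

# All values fit within the int8 range [-128, 127]
assert info.min <= values.min() and values.max() <= info.max

# Casting to int8 and back is therefore lossless for this data
encoded = values.astype(np.int8)
assert np.array_equal(encoded.astype(np.int64), values)
```

Running the same two assertions over the fully decoded snapshot buffer would confirm no value was silently wrapped during the int8 encoding.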
test/__snapshots__/test_peakmap/test_peakmap_plot[ms_plotly-kwargs1].json (1)
1896-1907: Review comment is incorrect — "scattermap" is a supported Plotly trace type. The snapshot contains a valid Plotly trace. Web search confirms "scattermap" is officially documented in the Plotly Python Figure Reference as `plotly.graph_objects.Scattermap` for tile-map (MapLibre-based) scatter plots. No changes needed.

Likely an incorrect or invalid review comment.
test/__snapshots__/test_peakmap/test_peakmap_plot[ms_plotly-kwargs2].json (1)
1728-1739: This review comment is incorrect. "scattermap" is a valid Plotly trace type for tile-map (MapLibre) traces. The review incorrectly identifies it as unsupported. "scattermapbox" is the deprecated legacy Mapbox-based trace, not scattermap. Recommending to standardize on scattermapbox contradicts best practices, as that type is the one being phased out. The presence of both trace types in the snapshot files is consistent with a template structure supporting multiple visualization variants.
Likely an incorrect or invalid review comment.
test/__snapshots__/test_spectrum/test_spectrum_plot[ms_plotly-kwargs2].json (1)
609-620: The review comment is incorrect and should be dismissed. "scattermap" is a valid Plotly trace type. The snapshot shows layout default configurations for all Plotly trace types, including both the newer `scattermap` (MapLibre-based) and the older `scattermapbox` (Mapbox-based). The "scattermapbox" trace is deprecated—Plotly recommends switching to the "scattermap" trace type. The presence of both in the auto-generated snapshot is standard behavior; Plotly includes defaults for all supported trace types regardless of deprecation status.

Likely an incorrect or invalid review comment.
test/__snapshots__/test_chromatogram/test_chromatogram_plot[ms_plotly-kwargs2].json (2)
9-16: Binary-encoded data format in snapshot files—verify this is an expected library update. The x, y, and color arrays across multiple traces have been converted from inline numeric arrays to binary-encoded objects with `"dtype": "f8"` and `"bdata"` fields. This is a valid Plotly serialization format, but the scope of changes across all snapshot files suggests an automatic regeneration triggered by a dependency update (likely a Plotly version bump).

Please verify:
- Was Plotly (or another visualization dependency) upgraded as part of Python 3.13 support?
- Were these snapshots regenerated automatically by the test suite, or were they manually edited?
- Do the visual outputs remain identical and correct?
Also applies to: 424-431, 797-804, 1218-1225, 2336-2343, 2865-2872
3855-3866: New `scattermap` trace type added to layout template. A new `scattermap` entry has been added to `layout.template.data` (lines 3855–3866), with marker and colorbar configuration. This aligns with recent Plotly feature additions and is a safe template-level change that does not affect the actual trace types used in this plot (which remain scatter/scattergl).

test/__snapshots__/test_peakmap_marginals/test_peakmap_marginals[ms_plotly].json (1)
4-8: Binary-encoded customdata with shape descriptor—verify deserialization. The `customdata` field has been converted to a binary object with a `"shape": "168, 4"` descriptor, indicating 168 data points with 4 attributes each. This format is more efficient than the nested array structure, but any code consuming `customdata` must properly deserialize the binary payload and respect the shape descriptor. Please verify that all code consuming snapshot data correctly deserializes and reshapes binary-encoded `customdata`. Check test assertion and hover-template generation logic to ensure they handle the new format.

test/__snapshots__/test_peakmap/test_peakmap_plot[ms_plotly-kwargs3].json (2)
7-10: Systematic binary encoding across all marker colors and trace coordinates. All six traces in this snapshot have been updated to use binary-encoded `x`, `y`, and `color` fields. This widespread, consistent transformation across all three snapshot files strongly suggests an automatic regeneration by the test suite in response to a dependency update (rather than manual editing).

Confirm that:
- These snapshots were regenerated by the test suite (not manually edited).
- Tests verify that the visual output is byte-for-byte identical or semantically equivalent before and after this change.
Also applies to: 60-67, 221-224, 274-281, 435-438, 488-495, 649-652, 702-709, 863-866, 916-923, 1077-1080, 1130-1137, 1291-1294, 1344-1351
1896-1907: Consistent `scattermap` template addition across all files. The `scattermap` trace type has been added to `layout.template.data` in this file as well, mirroring the changes in the other two snapshots. This is a safe, idempotent template-level change.

test/__snapshots__/test_chromatogram/test_chromatogram_plot[ms_plotly-kwargs1].json (1)
9-16: Test snapshot regenerated due to Plotly update; verify regeneration was automated. These snapshot files reflect systematic data-serialization changes consistent with a Plotly library version update:
- Lines 9–16 (x, y fields): Trace coordinate data now uses binary-encoded format (`{"dtype": "f8", "bdata": "..."}`) instead of explicit numeric arrays. This is a Plotly optimization for JSON payload compression.
- Lines 461–472 (scattermap layout): New `"scattermap"` trace type added to the Plotly layout template, indicating support for a new visualization feature.

These changes are uniform across all six snapshot files in this PR, suggesting they were regenerated by the test suite after a Plotly dependency update (likely related to Python 3.13 environment compatibility).
Recommendation: Verify that these snapshots were regenerated automatically by running the test suite (e.g., `pytest --snapshot-update`) rather than being manually edited. If regenerated via automated test tooling, this is expected and safe.
Also applies to: 461-472
|
@jcharkow I think the testing issues are due to the updated bokeh, rather than the python version. I'm working on adding some code to the testing script to try to keep those consistent. |
|
📖 Documentation Preview The documentation for this PR has been built and is available at: This preview will be updated automatically when you push new commits to this PR. Preview built from commit: |
Actionable comments posted: 2
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- pyopenms_viz/testing/BokehSnapshotExtension.py (2 hunks)
- pyopenms_viz/testing/PlotlySnapshotExtension.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
pyopenms_viz/testing/BokehSnapshotExtension.py (1)
pyopenms_viz/testing/PlotlySnapshotExtension.py (1)
compare_json(24-97)
pyopenms_viz/testing/PlotlySnapshotExtension.py (1)
pyopenms_viz/testing/BokehSnapshotExtension.py (1)
compare_json(85-189)
🪛 Ruff (0.14.2)
pyopenms_viz/testing/BokehSnapshotExtension.py
152-152: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
159-159: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
pyopenms_viz/testing/PlotlySnapshotExtension.py
73-73: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
76-76: Consider moving this statement to an else block
(TRY300)
81-81: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
104-104: Local variable e is assigned to but never used
Remove assignment to unused variable e
(F841)
115-115: Consider moving this statement to an else block
(TRY300)
116-116: Local variable e is assigned to but never used
Remove assignment to unused variable e
(F841)
149-149: Consider moving this statement to an else block
(TRY300)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build
- GitHub Check: execute-notebooks
```python
if isinstance(json1, str) and isinstance(json2, str):
    # Check if these look like base64 (all printable ASCII, ends with = potentially)
    if len(json1) > 50 and len(json2) > 50 and all(c in 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=' for c in json1[:100]):
        # Try to decode as numpy arrays and compare
        try:
            import base64
            import numpy as np
            arr1 = np.frombuffer(base64.b64decode(json1), dtype=np.int32)
            arr2 = np.frombuffer(base64.b64decode(json2), dtype=np.int32)
            # For index arrays, order may not matter - compare sorted
            if len(arr1) == len(arr2) and np.array_equal(np.sort(arr1), np.sort(arr2)):
                return True
            # Also try exact equality
            if np.array_equal(arr1, arr2):
                return True
        except (ValueError, TypeError, base64.binascii.Error):
            pass  # Not base64 or not decodable as int32, fall through to string comparison
```
Do not treat Bokeh __ndarray__ payloads as order-insensitive
Decoding every base64 payload with dtype=np.int32 and then comparing np.sort(arr1) to np.sort(arr2) means we now accept reordered numeric arrays as “equal”. For example, a ColumnDataSource storing x-coordinates [1.0, 2.0] vs [2.0, 1.0] will decode into the same multiset of 32-bit chunks, so the sorted comparison returns True and the snapshot test stops catching the regression. Because this branch runs for any large base64 string (not just known index sets), it can hide real data-order bugs in plots. Please either restrict the order-insensitive path to contexts where you can prove the array truly represents an unordered index set (e.g., when the surrounding dict key is selected.indices and the declared dtype is integral) or drop the sorted fallback so that order differences keep failing tests.
- arr1 = np.frombuffer(base64.b64decode(json1), dtype=np.int32)
- arr2 = np.frombuffer(base64.b64decode(json2), dtype=np.int32)
- # For index arrays, order may not matter - compare sorted
- if len(arr1) == len(arr2) and np.array_equal(np.sort(arr1), np.sort(arr2)):
- return True
- # Also try exact equality
- if np.array_equal(arr1, arr2):
- return True
+ arr1 = np.frombuffer(base64.b64decode(json1), dtype=np.int32)
+ arr2 = np.frombuffer(base64.b64decode(json2), dtype=np.int32)
+ if np.array_equal(arr1, arr2):
+ return True

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Suggested change:

```python
if isinstance(json1, str) and isinstance(json2, str):
    # Check if these look like base64 (all printable ASCII, ends with = potentially)
    if len(json1) > 50 and len(json2) > 50 and all(c in 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=' for c in json1[:100]):
        # Try to decode as numpy arrays and compare
        try:
            import base64
            import numpy as np
            arr1 = np.frombuffer(base64.b64decode(json1), dtype=np.int32)
            arr2 = np.frombuffer(base64.b64decode(json2), dtype=np.int32)
            if np.array_equal(arr1, arr2):
                return True
        except (ValueError, TypeError, base64.binascii.Error):
            pass  # Not base64 or not decodable as int32, fall through to string comparison
```
🤖 Prompt for AI Agents
In pyopenms_viz/testing/BokehSnapshotExtension.py around lines 168-184, the code
decodes any large base64 payload as dtype=np.int32 and treats arrays as equal if
their sorted values match, which incorrectly treats ordered numeric arrays as
order-insensitive; change this by removing the unconditional sorted comparison
and only allow an order-insensitive comparison when you can prove the payload is
an index set (e.g., the surrounding key is "selected.indices" and the declared
dtype is an integer type), otherwise compare arrays for exact equality after
decoding; implement the guard so that base64 decoding still happens but the
np.sort-based path is executed only when the context/key and dtype indicate an
unordered index set, otherwise fall back to exact np.array_equal to preserve
order sensitivity.
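A sketch of the guarded comparison the prompt describes: order-sensitive by default, with the sorted path reserved for declared integer index sets. The `parent_key` convention and the `"indices"` key below are illustrative assumptions, not the extension's actual API:

```python
import numpy as np

def arrays_equal(arr1, arr2, parent_key=None):
    """Order-sensitive by default; sorted comparison only for integer index sets."""
    if arr1.shape != arr2.shape:
        return False
    # Only an integer array stored under an index-set key may be compared as a multiset
    if parent_key == "indices" and np.issubdtype(arr1.dtype, np.integer):
        return np.array_equal(np.sort(arr1), np.sort(arr2))
    return np.array_equal(arr1, arr2)

# Coordinate order matters again, so reordered data fails:
assert not arrays_equal(np.array([1.0, 2.0]), np.array([2.0, 1.0]), "x")
# ...while selection indices may still be compared as an unordered set:
assert arrays_equal(np.array([3, 1, 2]), np.array([1, 2, 3]), "indices")
```

This keeps the snapshot suite strict about data order while still tolerating the one case where order genuinely carries no meaning.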
|
I'm out of time to fight with this. Here are the problems:
|
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 1
♻️ Duplicate comments (1)
pyopenms_viz/testing/BokehSnapshotExtension.py (1)
120-124: Remove sorted fallback for serialized integer arrays

Sorting every integer `__ndarray__` payload still makes order-sensitive buffers (coordinates, RGBA bytes, topology) appear equal after reordering, so the snapshot suite won't catch real regressions. Drop the unconditional `np.sort` fallback and keep the equality check order-sensitive. (docs.bokeh.org)

Apply this diff to restore deterministic comparisons:

```diff
-            # For integer arrays, use sorted comparison
-            if np.issubdtype(arr1.dtype, np.integer):
-                if not (np.array_equal(arr1, arr2) or np.array_equal(np.sort(arr1), np.sort(arr2))):
-                    print(f"Integer __ndarray__ arrays differ")
-                    return False
+            # For integer arrays, require exact order-sensitive equality
+            if np.issubdtype(arr1.dtype, np.integer):
+                if not np.array_equal(arr1, arr2):
+                    print(f"Integer __ndarray__ arrays differ")
+                    return False
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- pyopenms_viz/testing/BokehSnapshotExtension.py (2 hunks)
- pyopenms_viz/testing/PlotlySnapshotExtension.py (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
pyopenms_viz/testing/PlotlySnapshotExtension.py (1)
pyopenms_viz/testing/BokehSnapshotExtension.py (1)
compare_json(85-256)
pyopenms_viz/testing/BokehSnapshotExtension.py (1)
pyopenms_viz/testing/PlotlySnapshotExtension.py (1)
compare_json(24-178)
🪛 Ruff (0.14.2)
pyopenms_viz/testing/PlotlySnapshotExtension.py
71-71: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
72-72: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
100-100: Consider moving this statement to an else block
(TRY300)
154-154: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
157-157: Consider moving this statement to an else block
(TRY300)
158-158: Local variable e is assigned to but never used
Remove assignment to unused variable e
(F841)
162-162: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
185-185: Local variable e is assigned to but never used
Remove assignment to unused variable e
(F841)
196-196: Consider moving this statement to an else block
(TRY300)
197-197: Local variable e is assigned to but never used
Remove assignment to unused variable e
(F841)
248-248: Consider moving this statement to an else block
(TRY300)
pyopenms_viz/testing/BokehSnapshotExtension.py
123-123: f-string without any placeholders
Remove extraneous f prefix
(F541)
128-128: f-string without any placeholders
Remove extraneous f prefix
(F541)
132-132: Consider moving this statement to an else block
(TRY300)
196-196: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
204-204: zip() without an explicit strict= parameter
Add explicit value for parameter strict=
(B905)
232-233: try-except-pass detected, consider logging the exception
(S110)
232-232: Do not catch blind exception: Exception
(BLE001)
241-242: try-except-pass detected, consider logging the exception
(S110)
241-241: Do not catch blind exception: Exception
(BLE001)
```python
    remaining_keys = keys1 - {'y', 'customdata'}
    for key in remaining_keys:
        if not PlotlySnapshotExtension.compare_json(json1[key], json2[key], key):
            print(f'Values for key {key} not equal')
            return False

    return True
except (TypeError, ValueError) as e:
```
Ensure y metadata stays validated
The early return after sorting y/customdata skips any comparison of the remaining fields under y (e.g., dtype, shape, auxiliary flags). A trace can silently change its binary payload metadata, reshaping how Plotly interprets the buffer, yet this branch will still pass because only the decoded values were checked. Please compare the non-bdata entries before returning so structural regressions are still caught. (plotly.com)
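For context, Plotly's binary encoding stores an array as a dict like `{"dtype": "f8", "bdata": "<base64>"}`. Below is a minimal stdlib-only decode sketch; it assumes little-endian data and maps only a few dtype codes, whereas the actual comparator uses `numpy.frombuffer` and also handles zlib-compressed payloads:

```python
import base64
import struct

# Assumed dtype-code mapping (subset, for illustration only).
_STRUCT_CODES = {"f8": "d", "f4": "f", "i4": "i", "i1": "b"}

def decode_bdata(payload: dict) -> list:
    """Decode a Plotly-style {'dtype': ..., 'bdata': ...} payload."""
    code = _STRUCT_CODES[payload["dtype"]]
    raw = base64.b64decode(payload["bdata"])
    count = len(raw) // struct.calcsize(code)
    return list(struct.unpack(f"<{count}{code}", raw))

# Round-trip check: encode [1.0, 2.0, 3.0] as float64, then decode it back.
encoded = {
    "dtype": "f8",
    "bdata": base64.b64encode(struct.pack("<3d", 1.0, 2.0, 3.0)).decode(),
}
print(decode_bdata(encoded))  # [1.0, 2.0, 3.0]
```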
Apply this diff to preserve the specialized sorting while still validating the y metadata:

```diff
-        # Compare all other keys except y and customdata
-        remaining_keys = keys1 - {'y', 'customdata'}
+        # Double-check y metadata (dtype, shape, etc.) with bdata stripped out
+        y1_meta = {k: v for k, v in json1['y'].items() if k != 'bdata'}
+        y2_meta = {k: v for k, v in json2['y'].items() if k != 'bdata'}
+        if not PlotlySnapshotExtension.compare_json(y1_meta, y2_meta, 'y'):
+            print(f"'y' metadata differs at {_parent_key}")
+            return False
+
+        # Compare all other keys except y and customdata
+        remaining_keys = keys1 - {'y', 'customdata'}
```

Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 Ruff (0.14.2)
100-100: Consider moving this statement to an else block
(TRY300)
🤖 Prompt for AI Agents
In pyopenms_viz/testing/PlotlySnapshotExtension.py around lines 94 to 101, the
logic returns early after sorting and comparing the decoded y/customdata values
which skips validating y's metadata; modify the flow so that after you sort and
compare the decoded array values you also iterate the keys of json1['y'] (or the
intersection of json1['y'] and json2['y']) excluding the binary payload key
(e.g., 'bdata' or the key used for raw bytes) and call
PlotlySnapshotExtension.compare_json on each of those metadata fields (dtype,
shape, flags, etc.), then continue to compare the remaining top-level keys as
before and only return True after all of these metadata and remaining-field
comparisons succeed.
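A hedged sketch of the core of the requested fix — strip the raw payload key and compare the remaining `y` metadata, so a dtype or shape change is caught even when the decoded values happen to match (field names here mirror Plotly's encoding; the surrounding comparator logic is omitted):

```python
def strip_bdata(meta: dict) -> dict:
    """Return the metadata fields of a binary-encoded array, minus the payload."""
    return {k: v for k, v in meta.items() if k != "bdata"}

y1 = {"dtype": "f8", "shape": "3", "bdata": "AAAA..."}
y2 = {"dtype": "f4", "shape": "3", "bdata": "BBBB..."}

# dtype changed: metadata comparison now flags it.
print(strip_bdata(y1) == strip_bdata(y2))  # False
```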
|
📖 Documentation Preview

The documentation for this PR has been built and is available at:

This preview will be updated automatically when you push new commits to this PR.

Preview built from commit:
|
Note to self: things to look into
|
|
Testing for visualizations is definitely annoying and tricky... I feel like we run into issues with bokeh updates; we had to switch from v2 to v3, and we seem to be running into more issues. If keeping up to date with bokeh is a big problem, I am wondering if we should deprecate support for bokeh, since plotly already covers the same and more (and we don't get issues with running plotly in Jupyter notebooks). It would also make maintaining easier, with one less plotting library to worry about.

For testing, I'm not sure if pytest-snapshot is also contributing to the complexity/failing of the tests with what @poshul mentioned above. Maybe we should try a different approach, which I think @jcharkow is thinking about. Not sure if it would be easier, but maybe we could just convert the images to numpy arrays and use pytest-regtest to capture and compare against. I think that has been more stable and reliable. Example for matplotlib

It might be more tricky to do with plotly, which would need additional testing dependencies (kaleido and Chrome) for converting plotly images to numpy arrays. Or we can just extract the data lists from the plotly figure object, sort them, and compare. 🤷♂️ |
Summary by CodeRabbit
- Chores
- Documentation
- Visuals / Bug Fixes
- Tests