1 change: 1 addition & 0 deletions doc/changelog.d/1635.added.md
@@ -0,0 +1 @@
Add CLI for batch submission
1 change: 1 addition & 0 deletions doc/changelog.d/1640.added.md
@@ -0,0 +1 @@
Fix concurrent job bug in the job manager
6 changes: 3 additions & 3 deletions doc/source/workflows/drc/drc.rst
@@ -1,7 +1,7 @@
.. _ref_drc:

==================================================================
Design-rule checking (DRC)—self-contained, multi-threaded engine
==================================================================

.. currentmodule:: pyedb.workflows.drc.drc
@@ -85,7 +85,7 @@
   BackDrillStubLength
   CopperBalance

DRC engine
DRC Engine
~~~~~~~~~~

.. autosummary::
@@ -128,8 +128,8 @@
   with open("my_rules.json") as f:
       rules = Rules.from_dict(json.load(f))

Export violations to CSV
~~~~~~~~~~~~~~~~~~~~~~~~
~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: python

126 changes: 125 additions & 1 deletion doc/source/workflows/job_manager/submit_job.rst
@@ -44,7 +44,7 @@ It exposes:
* REST & Web-Socket endpoints (``http://localhost:8080`` by default)
* Thread-safe synchronous façade for scripts / Jupyter
* Native async API for advanced integrations
* CLI utilities ``submit_local_job`` and ``submit_job_on_scheduler`` for shell / CI pipelines
* CLI utilities ``submit_local_job``, ``submit_batch_jobs``, and ``submit_job_on_scheduler`` for shell / CI pipelines

The **same backend code path** is used regardless of front-end style; the difference is
**who owns the event loop** and **how control is returned to the caller**.
@@ -176,6 +176,130 @@ Example—CLI (cluster)
The command returns immediately after the job is **queued**; use the printed ID
with ``wait_until_done`` or monitor via the web UI.
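
A sketch of that follow-up step, assuming the synchronous façade described
earlier; the ``JobManagerHandler`` name and the ``wait_until_done`` signature
are assumptions, so check the API reference before copying this:

.. code-block:: python

   # Hypothetical: block a script until the queued job finishes.
   # Class name and arguments are assumptions, not the confirmed API.
   handler = JobManagerHandler(host="localhost", port=8080)
   handler.wait_until_done("JOB_20251107_103015_1a2b3c4d")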

CLI—``submit_batch_jobs``
^^^^^^^^^^^^^^^^^^^^^^^^^^
For bulk submissions, use ``submit_batch_jobs`` to automatically discover and submit
multiple projects from a directory tree.

Synopsis
""""""""
.. code-block:: bash

   $ python submit_batch_jobs.py --root-dir <DIRECTORY> [options]

Key features
""""""""""""
* **Automatic discovery**: Scans for all ``.aedb`` folders and ``.aedt`` files
* **Smart pairing**: When both ``.aedb`` and ``.aedt`` exist, uses the ``.aedt`` file
* **Asynchronous submission**: Submits jobs concurrently for faster processing
* **Recursive scanning**: Optional recursive directory traversal

Options
"""""""
.. list-table::
   :widths: 30 15 55
   :header-rows: 1

   * - Argument
     - Default
     - Description
   * - ``--root-dir``
     - *(required)*
     - Root directory to scan for projects
   * - ``--host``
     - ``localhost``
     - Job manager host address
   * - ``--port``
     - ``8080``
     - Job manager port
   * - ``--num-cores``
     - ``8``
     - Number of cores to allocate per job
   * - ``--max-concurrent``
     - ``5``
     - Maximum concurrent job submissions
   * - ``--delay-ms``
     - ``100``
     - Delay in milliseconds between job submissions
   * - ``--recursive``
     - ``False``
     - Scan subdirectories recursively
   * - ``--verbose``
     - ``False``
     - Enable debug logging

Example—batch submission (local)
"""""""""""""""""""""""""""""""""
.. code-block:: bash

   # Submit all projects in a directory
   $ python submit_batch_jobs.py --root-dir "D:\Temp\test_jobs"

   # Recursive scan with custom core count
   $ python submit_batch_jobs.py \
       --root-dir "D:\Projects\simulations" \
       --num-cores 16 \
       --recursive \
       --verbose

Example output
""""""""""""""
.. code-block:: text

   2025-11-07 10:30:15 - __main__ - INFO - Scanning D:\Temp\test_jobs for projects (recursive=False)
   2025-11-07 10:30:15 - __main__ - INFO - Found AEDB folder: D:\Temp\test_jobs\project1.aedb
   2025-11-07 10:30:15 - __main__ - INFO - Found AEDT file: D:\Temp\test_jobs\project2.aedt
   2025-11-07 10:30:15 - __main__ - INFO - Using AEDB folder for project: D:\Temp\test_jobs\project1.aedb
   2025-11-07 10:30:15 - __main__ - INFO - Using standalone AEDT file: D:\Temp\test_jobs\project2.aedt
   2025-11-07 10:30:15 - __main__ - INFO - Found 2 project(s) to submit
   2025-11-07 10:30:15 - __main__ - INFO - Starting batch submission of 2 project(s) to http://localhost:8080
   2025-11-07 10:30:16 - __main__ - INFO - ✓ Successfully submitted: project1.aedb (status=200)
   2025-11-07 10:30:16 - __main__ - INFO - ✓ Successfully submitted: project2.aedt (status=200)
   2025-11-07 10:30:16 - __main__ - INFO - ============================================================
   2025-11-07 10:30:16 - __main__ - INFO - Batch submission complete:
   2025-11-07 10:30:16 - __main__ - INFO -   Total projects: 2
   2025-11-07 10:30:16 - __main__ - INFO -   ✓ Successful: 2
   2025-11-07 10:30:16 - __main__ - INFO -   ✗ Failed: 0
   2025-11-07 10:30:16 - __main__ - INFO - ============================================================

How it works
""""""""""""
1. **Scanning phase**:

   * Searches for all ``.aedb`` folders in the root directory
   * Searches for all ``.aedt`` files in the root directory
   * For each ``.aedb`` folder, checks whether a corresponding ``.aedt`` file
     exists (see the sketch after this list):

     - If yes: uses the ``.aedt`` file
     - If no: uses the ``.aedb`` folder

   * Standalone ``.aedt`` files (without a corresponding ``.aedb``) are also included

2. **Submission phase**:

   * Creates job configurations for each project
   * Submits jobs asynchronously to the job manager REST API
   * Limits concurrent submissions using a semaphore (default: 5)
   * Reports success/failure for each submission

3. **Results**:

   * Displays a summary with total, successful, and failed submissions
   * Logs detailed information about each submission
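
The pairing rules above condense to a few lines of Python. This is an
illustrative sketch, not the shipped ``submit_batch_jobs.py``;
``discover_projects`` is a hypothetical helper name:

.. code-block:: python

   from pathlib import Path


   def discover_projects(root_dir: str, recursive: bool = False) -> list[Path]:
       """Collect projects, preferring the .aedt file of an .aedb/.aedt pair."""
       root = Path(root_dir)
       glob = root.rglob if recursive else root.glob
       aedb_dirs = [p for p in glob("*.aedb") if p.is_dir()]
       aedt_files = [p for p in glob("*.aedt") if p.is_file()]

       projects = []
       for aedb in aedb_dirs:
           sibling = aedb.with_suffix(".aedt")
           # When both representations exist, the .aedt file wins.
           projects.append(sibling if sibling in aedt_files else aedb)
       # Standalone .aedt files (no matching .aedb folder) are also included.
       used = set(projects)
       projects += [f for f in aedt_files if f not in used]
       return projects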

.. note::
   The script does **not** wait for jobs to complete; it waits only for submission
   confirmation. Job execution happens asynchronously in the job manager service.

.. tip::
   * Use ``--max-concurrent`` to limit load on the job manager service when
     submitting large batches (see the sketch below).
   * Use ``--delay-ms`` to control the pause between submissions (default: 100 ms).
     The delay gives each HTTP request time to be fully sent before the next
     submission starts.
   * Set ``--delay-ms 0`` to disable the delay if your network is fast and reliable.
   * For very large batches, consider increasing the timeout in the code if
     network latency is high.
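
The two tip knobs interact as sketched below: a semaphore caps in-flight
submissions while a short sleep spaces out consecutive requests. This is an
illustrative sketch only; the ``/jobs`` route and the payload shape are
assumptions, not the confirmed REST API:

.. code-block:: python

   import asyncio

   import aiohttp


   async def submit_all(projects, host="localhost", port=8080,
                        max_concurrent=5, delay_ms=100):
       """Submit every project, at most ``max_concurrent`` at a time."""
       sem = asyncio.Semaphore(max_concurrent)
       url = f"http://{host}:{port}/jobs"  # assumed route; check the service

       async def submit(session, project):
           async with sem:  # held while a submission is in flight
               async with session.post(url, json={"project": str(project)}) as resp:
                   ok = resp.status == 200
               # Pause so the request is fully sent before the next one starts.
               await asyncio.sleep(delay_ms / 1000)
               return project, ok

       async with aiohttp.ClientSession() as session:
           return await asyncio.gather(*(submit(session, p) for p in projects))

Driving it with ``asyncio.run(submit_all(...))`` on the output of the
hypothetical ``discover_projects`` sketch above returns ``(project, ok)``
pairs that can be inspected for failures.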

Programmatic—native asyncio
"""""""""""""""""""""""""""""
.. code-block:: python
4 changes: 4 additions & 0 deletions ignore_words.txt
@@ -32,3 +32,7 @@ aline
COM
gRPC
Toolkits
Cohn
Pydantic
pydantic
Drc
@@ -231,8 +231,9 @@ def __init__(self, edb=None, version=None, host="localhost", port=8080):
        else:
            self.ansys_path = os.path.join(installed_versions[version], "ansysedt.exe")
        self.scheduler_type = self._detect_scheduler()
        self.manager = JobManager(scheduler_type=self.scheduler_type)
        self.manager.resource_limits = ResourceLimits(max_concurrent_jobs=1)
        # Create resource limits with default values
        resource_limits = ResourceLimits(max_concurrent_jobs=1)
        self.manager = JobManager(resource_limits=resource_limits, scheduler_type=self.scheduler_type)
        self.manager.jobs = {}  # In-memory job store - TODO: add persistence database
        # Pass the detected ANSYS path to the manager
        self.manager.ansys_path = self.ansys_path
9 changes: 8 additions & 1 deletion src/pyedb/workflows/job_manager/backend/job_submission.py
@@ -68,6 +68,7 @@
from datetime import datetime
import enum
import getpass
import hashlib
import logging
import os
import platform
@@ -77,6 +78,7 @@
import subprocess # nosec B404
import tempfile
from typing import Any, Dict, List, Optional, Union
import uuid

from pydantic import BaseModel, Field

@@ -468,7 +470,12 @@ def __init__(self, **data):
        else:
            self.ansys_edt_path = os.path.join(list(installed_versions.values())[-1], "ansysedt.exe")  # latest
        if not self.jobid:
            self.jobid = f"JOB_ID_{datetime.now().strftime('%Y%m%d_%H%M%S')}"
            # Generate unique job ID using timestamp and UUID to avoid collisions
            # when submitting multiple jobs rapidly (batch submissions)
            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
            # Use a short UUID (first 8 chars) for readability while ensuring uniqueness
            unique_id = str(uuid.uuid4())[:8]
            self.jobid = f"JOB_{timestamp}_{unique_id}"
        if "auto" not in data:  # user did not touch it
            data["auto"] = self.scheduler_type != SchedulerType.NONE
        self.validate_fields()