.. _install-artemis:

ARTEMIS (Adaptive mesh Refinement Time-domain ElectrodynaMIcs Solver)
=====================================================================

ARTEMIS is a high-performance coupled electrodynamics–micromagnetics solver for fully physical modeling of signals in microelectronic circuitry. Its primary features include:

- **Finite-Difference Time-Domain (FDTD)** approach for Maxwell's equations.
- **Landau–Lifshitz–Gilbert (LLG)** equation modeling for micromagnetics.
- **Adaptive mesh refinement** implemented via the `AMReX <https://github.com/AMReX-Codes/amrex>`_ framework.
- **GPU acceleration** and **scalable parallel performance** on modern manycore architectures.

The code couples magnetization physics with electromagnetic fields to second-order accuracy in time, using a trapezoidal scheme for the LLG equation and standard explicit FDTD updates for the electromagnetic fields. ARTEMIS has demonstrated excellent scaling on NERSC multicore and GPU systems, delivering up to a 59× speedup on GPU relative to a single CPU node.
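For context, the LLG equation that the trapezoidal scheme advances can be written in its standard textbook form (the notation below uses the usual micromagnetics symbols and is not taken from the ARTEMIS source):

.. math::

   \frac{\partial \mathbf{M}}{\partial t}
   = -\mu_0 |\gamma| \, \mathbf{M} \times \mathbf{H}_{\mathrm{eff}}
   + \frac{\alpha}{M_s} \, \mathbf{M} \times \frac{\partial \mathbf{M}}{\partial t},

where :math:`\mathbf{M}` is the magnetization, :math:`\mathbf{H}_{\mathrm{eff}}` the effective magnetic field, :math:`\gamma` the gyromagnetic ratio, :math:`\alpha` the dimensionless Gilbert damping parameter, and :math:`M_s` the saturation magnetization. The implicit appearance of :math:`\partial \mathbf{M} / \partial t` on the right-hand side is what motivates an iterative trapezoidal update rather than a purely explicit one.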

Installation
------------

1. **Clone AMReX** (dependency):

   .. code-block:: bash

      git clone git@github.com:AMReX-Codes/amrex.git

2. **Clone ARTEMIS** in the same directory as AMReX:

   .. code-block:: bash

      git clone git@github.com:AMReX-Microelectronics/artemis.git

   Make sure ``amrex/`` and ``artemis/`` sit alongside each other in your filesystem.

3. **Build ARTEMIS**:

   1. Navigate to the ``Exec/`` folder inside ``artemis/``.
   2. Build with ``make -j 4``, for example:

      .. code-block:: bash

         cd artemis/Exec/
         make -j 4

   By default, LLG is enabled (``USE_LLG=TRUE``). You can switch it on or off explicitly:

   - **Without LLG**:

     .. code-block:: bash

        make -j 4 USE_LLG=FALSE

   - **With LLG**:

     .. code-block:: bash

        make -j 4 USE_LLG=TRUE

   To enable GPU acceleration (CUDA/HIP, etc.), set ``USE_GPU=TRUE`` in the make command. Check the ``GNUmakefile`` or other build scripts for additional optional flags such as MPI and OpenMP.
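The build-time switches used above can also be set persistently in the ``GNUmakefile``. The fragment below is illustrative only: apart from ``USE_LLG`` and ``USE_GPU``, which appear in the build commands above, the variable names follow common AMReX GNUMake conventions and should be checked against the ``GNUmakefile`` actually shipped with ARTEMIS:

.. code-block:: make

   # Illustrative GNUmakefile settings (verify against the shipped GNUmakefile)
   COMP    = gnu     # compiler suite
   USE_MPI = TRUE    # distributed-memory parallelism
   USE_OMP = TRUE    # OpenMP threading
   USE_GPU = FALSE   # set TRUE for GPU (e.g. CUDA/HIP) builds
   USE_LLG = TRUE    # enable the LLG micromagnetics module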

Running ARTEMIS
---------------

Example input scripts reside in the ``Examples/`` directory. Below are a couple of quick-start demonstrations.

.. _artemis-no-llg:

1. **Simple test case without LLG**

   For an air-filled X-band rectangular waveguide simulation:

   - *MPI + OpenMP build*:

     .. code-block:: bash

        make -j 4 USE_LLG=FALSE
        mpirun -n 4 ./main3d.gnu.TPROF.MTMPI.OMP.GPUCLOCK.ex Examples/Waveguide/inputs_3d_empty_X_band

   - *MPI + CUDA build*:

     .. code-block:: bash

        make -j 4 USE_LLG=FALSE USE_GPU=TRUE
        mpirun -n 4 ./main3d.gnu.TPROF.MTMPI.CUDA.GPUCLOCK.ex Examples/Waveguide/inputs_3d_empty_X_band

2. **Simple test case with LLG**

   For an X-band magnetically tunable filter simulation:

   - *MPI + OpenMP build*:

     .. code-block:: bash

        make -j 4 USE_LLG=TRUE
        mpirun -n 8 ./main3d.gnu.TPROF.MTMPI.OMP.GPUCLOCK.ex Examples/Waveguide/inputs_3d_LLG_filter

   - *MPI + CUDA build*:

     .. code-block:: bash

        make -j 4 USE_LLG=TRUE USE_GPU=TRUE
        mpirun -n 8 ./main3d.gnu.TPROF.MTMPI.CUDA.GPUCLOCK.ex Examples/Waveguide/inputs_3d_LLG_filter

Visualization and Data Analysis
-------------------------------

ARTEMIS uses the AMReX I/O format for storing simulation results. You can use tools such as VisIt, ParaView, or other readers compatible with AMReX plotfiles.

Additionally, `yt <https://yt-project.org/>`_ can be used in Python to load the data for advanced post-processing:

.. code-block:: python

   import yt

   ds = yt.load('./plt00001000/')  # load the plotfile at time step 1000
   ad0 = ds.covering_grid(level=0, left_edge=ds.domain_left_edge, dims=ds.domain_dimensions)
   E_array = ad0['Ex'].to_ndarray()  # retrieve Ex (x-component of the E-field)
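Once the field data is a plain NumPy array, standard numerical tooling applies. The snippet below is a minimal, illustrative sketch: the ``peak_field`` helper is hypothetical (not part of ARTEMIS or yt), and a small synthetic array stands in for the ``E_array`` loaded above:

.. code-block:: python

   import numpy as np

   def peak_field(E):
       """Return the peak |E| magnitude and the grid index where it occurs."""
       idx = np.unravel_index(np.argmax(np.abs(E)), E.shape)
       return float(np.abs(E[idx])), idx

   # Synthetic stand-in for an Ex array from a plotfile (16^3 grid)
   E_array = np.zeros((16, 16, 16))
   E_array[4, 5, 6] = -3.5  # single dominant field value

   amplitude, location = peak_field(E_array)

The same pattern extends to any reduction over the loaded fields (norms, spectra, line-outs), since ``to_ndarray()`` hands you an ordinary NumPy array.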

Publications
------------

- **Z. Yao, R. Jambunathan, Y. Zeng, and A. Nonaka**,
  A massively parallel time-domain coupled electrodynamics–micromagnetics solver,
  *The International Journal of High Performance Computing Applications*, vol. 36, no. 2, pp. 167-181, 2022.
  `doi:10.1177/10943420211057906 <https://doi.org/10.1177/10943420211057906>`_

- **S. S. Sawant, Z. Yao, R. Jambunathan, and A. Nonaka**,
  Characterization of transmission lines in microelectronic circuits using the ARTEMIS solver,
  *IEEE Journal on Multiscale and Multiphysics Computational Techniques*, vol. 8, pp. 31-39, 2023.
  `doi:10.1109/JMMCT.2022.3228281 <https://doi.org/10.1109/JMMCT.2022.3228281>`_

- **R. Jambunathan, Z. Yao, R. Lombardini, A. Rodriguez, and A. Nonaka**,
  Two-fluid physical modeling of superconducting resonators in the ARTEMIS framework,
  *Computer Physics Communications*, vol. 291, p. 108836, 2023.
  `doi:10.1016/j.cpc.2023.108836 <https://doi.org/10.1016/j.cpc.2023.108836>`_