A generalized Python implementation of the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) algorithm for multi-criteria decision making.
This project was created and inspired by my article written in French, "La méthode TOPSIS expliquée pas à pas" (The TOPSIS method explained step by step), and by the Excel example Topsis-v1.0.xlsx.
Author: Abdel YEZZA, Ph.D, October 2025
TOPSIS is a multi-criteria decision analysis method developed by Hwang and Yoon in 1981. This implementation is based on the algorithm described in topsis_algorithm.pdf and provides a flexible, JSON-based configuration system.
- JSON-based configuration: All input data (alternatives, criteria, weights, decision matrix) can be defined in JSON files
- Flexible criterion types: Supports both beneficial (maximize) and non-beneficial (minimize) criteria
- Two proximity formulas: Standard TOPSIS formula and variant formula from PDF article for better discrimination
- Automatic weight normalization: Weights are automatically normalized to sum to 1
- Clean formatted output: Results displayed with rankings and percentages
- Command-line interface: Easy to use with different configuration files
- Verbose mode: Optional detailed output showing all algorithm steps
- Visualizations: Generate professional charts similar to Excel (Euclidean distances, proximity coefficients, distribution)
The TOPSIS algorithm follows these steps:

1. Normalize the decision matrix using the Euclidean (vector) norm of each column
2. Apply the weights to the normalized matrix
3. Determine the ideal solutions (A+ best and A- worst)
4. Calculate the Euclidean distances from each alternative to both ideal solutions
5. Calculate the proximity coefficients:
   - Standard formula (default): S* = E- / (E+ + E-)
   - Variant formula with normalization: S* = E- / E+ if E+ is not 0, otherwise S* = E- / MAX(E+)

The alternative with the highest proximity coefficient is the best choice.
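As a rough illustration of the steps above, here is a sketch in NumPy using the standard proximity formula. The function name `topsis_sketch` is made up for this example and is not the API exposed by `topsis.py`:

```python
import numpy as np

def topsis_sketch(matrix, weights, beneficial):
    """Sketch of the TOPSIS steps. `beneficial` is a per-criterion boolean list."""
    X = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalize weights to sum to 1

    R = X / np.sqrt((X ** 2).sum(axis=0))             # step 1: Euclidean column normalization
    V = R * w                                         # step 2: apply weights

    best = np.where(beneficial, V.max(axis=0), V.min(axis=0))   # step 3: A+ per criterion
    worst = np.where(beneficial, V.min(axis=0), V.max(axis=0))  #         A- per criterion

    e_plus = np.sqrt(((V - best) ** 2).sum(axis=1))   # step 4: distance to A+
    e_minus = np.sqrt(((V - worst) ** 2).sum(axis=1)) #         distance to A-

    return e_minus / (e_plus + e_minus)               # step 5: standard S*
```

With the car-selection data from the example below, this sketch reproduces the ranking reported in the Results section.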
The implementation includes automatic validation to ensure data integrity:
Weights do not need to sum to 1 in your JSON configuration. The algorithm automatically normalizes weights to ensure they sum to 1.
Example: If you provide weights [0.3, 0.4, 0.2, 0.1] (sum = 1.0) or [3, 4, 2, 1] (sum = 10), both will work correctly. The algorithm will normalize them to sum to 1.
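The normalization itself is a one-liner; this small helper (a sketch, not the project's actual code) maps both weight lists from the example to the same result:

```python
import numpy as np

def normalize_weights(weights):
    """Scale a weight vector so its components sum to 1."""
    w = np.asarray(weights, dtype=float)
    return w / w.sum()
```

Both `normalize_weights([0.3, 0.4, 0.2, 0.1])` and `normalize_weights([3, 4, 2, 1])` return `[0.3, 0.4, 0.2, 0.1]`.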
The implementation validates that the decision matrix dimensions match the number of alternatives and criteria. Otherwise, an error is raised.
Example error messages:
- "Decision matrix has 3 rows, but 5 alternatives": you defined 5 alternatives but only provided 3 rows
- "Row 2 has 3 values, but 4 criteria": row 2 has 3 values but you defined 4 criteria
This ensures your JSON configuration is consistent before running the TOPSIS algorithm.
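A validation routine along these lines would produce the error messages shown above (a sketch with hypothetical names, not necessarily the code in `main.py`):

```python
def validate_matrix(matrix, alternatives, criteria):
    """Check that the decision matrix matches the declared dimensions."""
    if len(matrix) != len(alternatives):
        raise ValueError(
            f"Decision matrix has {len(matrix)} rows, "
            f"but {len(alternatives)} alternatives"
        )
    for i, row in enumerate(matrix):
        if len(row) != len(criteria):
            raise ValueError(
                f"Row {i} has {len(row)} values, but {len(criteria)} criteria"
            )
```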
- Python 3.7+
- NumPy
- Matplotlib (optional, for visualizations)
pip install numpy matplotlib

Run with the default configuration file (topsis_config.json):

python main.py

Run with a custom configuration file:

python main.py -c your_config.json

Enable verbose mode:

python main.py -v
python main.py -c laptop_selection.json -v

Generate charts:
# Generate and display charts
python main.py --visualize
# Generate charts without displaying (save only)
python main.py --visualize --no-show
# Custom output directory
python main.py --viz -o my_charts
# Full analysis with visualizations
python main.py -c laptop_selection.json -v --viz

The visualization module creates 4 types of charts:
- Euclidean Distances Chart: Line chart showing E+ and E- for each alternative
- Proximity Coefficients Bar Chart: Ranked bar chart with color gradient
- Distribution Pie Chart: Percentage distribution of coefficients
- Comparison Chart: Combined view with multiple visualizations
- -c, --config: Path to JSON configuration file (default: topsis_config.json)
- -v, --verbose: Show detailed algorithm steps
- --visualize, --viz: Generate visualization charts (requires matplotlib)
- --no-show: Save charts without displaying them (only with --visualize)
- -o, --output-dir: Output directory for charts (default: charts)
- -h, --help: Show help message
Create a JSON file with the following structure:
{
"name": "Your Decision Problem Name",
"description": "Brief description of the decision problem",
"proximity_formula": "standard",
"alternatives": [
"Alternative 1",
"Alternative 2",
"Alternative 3"
],
"criteria": [
{
"name": "Criterion 1",
"weight": 0.3,
"type": "beneficial",
"description": "Description of criterion 1"
},
{
"name": "Criterion 2",
"weight": 0.4,
"type": "non-beneficial",
"description": "Description of criterion 2"
}
],
"decision_matrix": [
[value_11, value_12, ...],
[value_21, value_22, ...],
[value_31, value_32, ...]
]
}

- name (string): Name of the decision problem
- description (string, optional): Description of the problem
- proximity_formula (string, optional): Proximity calculation formula
  - "standard" (default): traditional formula S* = E- / (E+ + E-)
  - "variant": alternative formula S* = E- / E+ (normalized) for better discrimination
- alternatives (array): List of alternative names
- criteria (array): List of criterion objects with:
- name (string): Criterion name
- weight (number): Weight/importance (will be normalized to sum to 1)
- type (string): Either "beneficial" (maximize) or "non-beneficial" (minimize)
- description (string, optional): Description of the criterion
- decision_matrix (2D array): Matrix of values where rows represent alternatives and columns represent criteria
The TOPSIS implementation supports two proximity calculation formulas:
1. Standard Formula (default): S* = E- / (E+ + E-)
   - Traditional TOPSIS formula
   - Produces values between 0 and 1
   - Smaller differences between proximity coefficients
   - Easier to interpret as percentage-like values
2. Variant Formula: S* = E- / E+ (normalized)
   - Alternative formula from the PDF article (section "Une variante pour calculer le Facteur de Proximité (FP)")
   - Produces larger differences between alternatives
   - Better discrimination between alternatives
   - Normalized to the [0, 1] range (best alternative = 1.0)
   - More sensitive to differences in E+ and E-
When to use which:
- Use Standard for traditional TOPSIS analysis
- Use Variant when you need better separation between alternatives
- Use Variant when the decision requires clearer differentiation
Both formulas maintain the same ranking order in most cases, but the variant formula provides greater contrast in the proximity coefficients.
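The two formulas can be compared side by side with a small sketch (assuming E+ and E- have already been computed; the zero-E+ handling follows the MAX(E+) rule described in the algorithm steps, and the function name is illustrative):

```python
import numpy as np

def proximity(e_plus, e_minus, formula="standard"):
    """Proximity coefficients from the distances to the ideal solutions."""
    e_plus = np.asarray(e_plus, dtype=float)
    e_minus = np.asarray(e_minus, dtype=float)
    if formula == "standard":
        return e_minus / (e_plus + e_minus)
    # Variant: S* = E- / E+ where E+ > 0, else E- / MAX(E+),
    # then normalized so the best alternative scores 1.0.
    # (Degenerate case where every E+ is 0 is not handled in this sketch.)
    safe = np.where(e_plus > 0, e_plus, 1.0)          # avoid division by zero
    ratios = np.where(e_plus > 0, e_minus / safe, e_minus / e_plus.max())
    return ratios / ratios.max()
```

With `e_plus = [1, 2]` and `e_minus = [2, 2]`, the standard formula gives roughly `[0.67, 0.50]` while the variant gives `[1.0, 0.5]`, illustrating the wider spread.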
- Beneficial (maximize): Higher values are better
  - Aliases: "beneficial", "benefit", "max", "maximize", "positive"
  - Examples: Quality, Performance, Reliability, Customer Satisfaction
- Non-beneficial (minimize): Lower values are better
  - Aliases: "non-beneficial", "cost", "min", "minimize", "negative"
  - Examples: Cost, Time, Risk, Energy Consumption, Weight
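Internally, alias resolution can be as simple as a pair of lookup sets (a sketch; the actual names in `topsis.py` may differ):

```python
BENEFICIAL_ALIASES = {"beneficial", "benefit", "max", "maximize", "positive"}
NON_BENEFICIAL_ALIASES = {"non-beneficial", "cost", "min", "minimize", "negative"}

def is_beneficial(criterion_type):
    """Map a criterion type string (any supported alias) to True/False."""
    t = criterion_type.strip().lower()
    if t in BENEFICIAL_ALIASES:
        return True
    if t in NON_BENEFICIAL_ALIASES:
        return False
    raise ValueError(f"Unknown criterion type: {criterion_type}")
```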
File: topsis_config.json
{
"name": "Car Selection using TOPSIS",
"description": "Choosing the best car model based on multiple criteria",
"alternatives": [
"RENAULT SCENIC",
"VOLKSWAGEN GOLF",
"FORD FOCUS",
"PEUGEOT 407",
"CITROEN C3 PICASSO"
],
"criteria": [
{
"name": "Style",
"weight": 0.1,
"type": "beneficial",
"description": "Higher is better"
},
{
"name": "Fiabilité",
"weight": 0.4,
"type": "beneficial",
"description": "Reliability - Higher is better"
},
{
"name": "Consommation",
"weight": 0.2,
"type": "non-beneficial",
"description": "Fuel consumption - Lower is better"
},
{
"name": "Coût",
"weight": 0.3,
"type": "non-beneficial",
"description": "Cost - Lower is better"
}
],
"decision_matrix": [
[6, 5, 5, 5],
[6, 7, 6, 6],
[7, 7, 5, 6],
[7, 7, 5, 7],
[5, 5, 4, 4]
]
}

Choosing between 5 car models based on:
- Style (beneficial, weight: 0.1)
- Reliability (beneficial, weight: 0.4)
- Fuel Consumption (non-beneficial, weight: 0.2)
- Cost (non-beneficial, weight: 0.3)
Result: CITROEN C3 PICASSO (57.38%)
python main.py

File: car_selection_variant.json
Same car selection problem as Example 1, but using the variant proximity formula (S* = E- / E+) for better discrimination between alternatives.
{
"name": "Car Selection using TOPSIS - Variant Formula",
"description": "Choosing the best car model using variant proximity formula (S* = E- / E+)",
"proximity_formula": "variant",
...
}

Result: CITROEN C3 PICASSO (100.00%)

python main.py -c car_selection_variant.json --visualize

Key Differences from Standard Formula:
| Metric | Standard Formula | Variant Formula | Improvement |
|---|---|---|---|
| Best Alternative | CITROEN C3 PICASSO | CITROEN C3 PICASSO | ✓ Same |
| Best Score | 57.38% | 100.00% | Normalized |
| Coefficient Range | 0.133 | 0.415 | +211.6% |
| Discrimination | Good | Excellent | Better separation |
The variant formula widens the coefficient range by 211.6% while maintaining the same ranking order, making it easier to distinguish between options that are close in quality.
Comparison Visualization:
To generate a side-by-side comparison of both formulas:
python compare_formulas_visualized.py

This creates a comprehensive comparison showing:
- Proximity coefficients for both formulas
- Euclidean distances (E+ and E-)
- Distribution pie charts
- Numerical comparison table
File: project_selection.json
Selecting a project based on:
- Cost (non-beneficial, weight: 0.3)
- Time (non-beneficial, weight: 0.2)
- Quality (beneficial, weight: 0.4)
- Risk (non-beneficial, weight: 0.1)
Result: Project A (64.93%)
python main.py -c project_selection.json

File: laptop_selection.json
Choosing a laptop based on:
- Performance (beneficial, weight: 0.35)
- Price (non-beneficial, weight: 0.25)
- Battery Life (beneficial, weight: 0.25)
- Weight (non-beneficial, weight: 0.15)
Result: MacBook Air M2 (56.83%)
python main.py -c laptop_selection.json

- Copy one of the example JSON files
- Modify the alternatives, criteria, and decision matrix
- Set appropriate weights and criterion types
- Run the algorithm with your configuration
Example template:
{
"name": "My Decision Problem",
"description": "Description of what I'm trying to decide",
"alternatives": ["Option A", "Option B", "Option C"],
"criteria": [
{
"name": "Cost",
"weight": 0.3,
"type": "non-beneficial",
"description": "Lower cost is better"
},
{
"name": "Quality",
"weight": 0.7,
"type": "beneficial",
"description": "Higher quality is better"
}
],
"decision_matrix": [
[100, 8],
[150, 9],
[120, 7]
]
}

The program displays:
- Configuration Summary: Shows alternatives, criteria with weights and types
- Decision Matrix: Tabular view of all input values
- Results Table: Ranked alternatives with proximity coefficients and percentages
- Best Choice: Highlights the top-ranked alternative
Example output:
================================================================================
RESULTS
================================================================================
Proximity Coefficients (S*):
Rank Alternative Coefficient Percentage
--------------------------------------------------------------------------------
1 CITROEN C3 PICASSO 0.573790 57.38 %
2 FORD FOCUS 0.566260 56.63 %
3 VOLKSWAGEN GOLF 0.510907 51.09 %
================================================================================
BEST CHOICE: CITROEN C3 PICASSO
Proximity Coefficient: 0.573790 (57.38%)
================================================================================
You can also use TOPSIS directly in your Python code:
from topsis import topsis
# Define your data
decision_matrix = [
[6, 5, 5, 5],
[6, 7, 6, 6],
[7, 7, 5, 6]
]
weights = [0.1, 0.4, 0.2, 0.3]
criteria_types = [1, 1, 0, 0] # 1 = beneficial, 0 = non-beneficial
# Run TOPSIS with standard formula (default)
model = topsis(decision_matrix, weights, criteria_types)
results = model.calc(verbose=True)
# Get ranking
ranking = model.get_ranking()
print(f"Best alternative index: {ranking[0]}")
# Run TOPSIS with variant proximity formula
model_variant = topsis(decision_matrix, weights, criteria_types,
proximity_formula="variant")
results_variant = model_variant.calc(verbose=True)

To see a detailed comparison between standard and variant formulas, run:

python example_variant_formula.py

or

python .\main.py -c .\car_selection_variant.json -o .\charts_variant --viz

which generates the same ranking as the standard case, but with more readable percentages:
Proximity Coefficients (S*):
Rank Alternative Coefficient Percentage
--------------------------------------------------------------------------------
1 CITROEN C3 PICASSO 1.000000 100.00 %
2 FORD FOCUS 0.969742 96.97 %
3 VOLKSWAGEN GOLF 0.775928 77.59 %
4 PEUGEOT 407 0.625494 62.55 %
5 RENAULT SCENIC 0.585189 58.52 %
================================================================================
BEST CHOICE: CITROEN C3 PICASSO
Proximity Coefficient: 1.000000 (100.00%)
================================================================================

This example demonstrates:
- Original car selection example from the PDF (Cas 1)
- Case with an ideal alternative (Cas 2)
- Case with a worst alternative (Cas 3)
- Case with equal weights (Cas 4)
- Side-by-side comparison of both formulas
- main.py: Main program with JSON configuration support and CLI
- topsis.py: Core TOPSIS algorithm implementation with both standard and variant proximity formulas
- visualize.py: Visualization module for generating charts
- example_variant_formula.py: Detailed comparison of standard vs variant formulas (4 cases from PDF)
- compare_formulas_visualized.py: Visual comparison script with charts
- topsis_config.json: Car selection example with standard formula
- car_selection_variant.json: Car selection example with variant formula
- project_selection.json: Project selection example
- laptop_selection.json: Laptop selection example
- topsis_algorithm.pdf: Original algorithm documentation (by Abdel YEZZA, Ph.D)
- README.md: This file
- Hwang, C.L.; Yoon, K. (1981). Multiple Attribute Decision Making: Methods and Applications. New York: Springer-Verlag.
- TOPSIS algorithm explanation: topsis_algorithm.pdf by Abdel YEZZA, Ph.D
This implementation is provided as-is for educational and research purposes.
Feel free to modify and extend this implementation for your specific needs. Suggestions for improvements:
- ✅ Add support for fuzzy TOPSIS
- ✅ Implement the variant proximity formula from the PDF
- ✅ Support for CSV input files
- ✅ Database integration
- ✅ Web interface
- ✅ Interactive dashboard