Skip to content

Fix JSON encoding, Comparison coords, and docs warnings#599

Merged
FBumann merged 10 commits intomainfrom
feature/coords-merge-comparison
Feb 5, 2026
Merged

Fix JSON encoding, Comparison coords, and docs warnings#599
FBumann merged 10 commits intomainfrom
feature/coords-merge-comparison

Conversation

@FBumann
Copy link
Member

@FBumann FBumann commented Feb 4, 2026

Summary

Patch release v6.0.1 with bug fixes:

  • JSON Encoding: Fixed special characters (Ä, ö, etc.) being escaped in saved JSON/NetCDF files by adding ensure_ascii=False to json.dumps() calls in io.py
  • Comparison Coordinates: Fixed component coordinate becoming (case, contributor) shaped after concatenation in Comparison class. Non-index coordinates are now properly merged before concat
  • Clustering Notebooks: Added explicit preserve_n_clusters=True to all ExtremeConfig calls to fix FutureWarning from tsam v3.1
  • Docs Workflow: Added workflow_dispatch inputs for manual docs deployment with version selection

Test plan

  • Verify JSON files preserve Unicode characters when saved/loaded
  • Verify Comparison.flow_hours has correct component coordinate shape
  • Verify docs build without FutureWarning from tsam
  • Verify docs workflow can be triggered manually with deploy inputs

🤖 Generated with Claude Code

Summary by CodeRabbit

  • Bug Fixes

    • Merge non-index coordinates across inputs before concatenation to prevent duplicated/misaligned coords, fix post-concatenation shapes, and warn on mapping conflicts.
    • Standardized concatenation behavior across comparison, solution, inputs, and general data paths.
  • Documentation

    • Added manual docs deployment workflow with version selection and updated changelog/release formatting.

@FBumann FBumann marked this pull request as ready for review February 4, 2026 11:30
@coderabbitai
Copy link
Contributor

coderabbitai bot commented Feb 4, 2026

Caution

Review failed

The pull request is closed.

📝 Walkthrough

Walkthrough

Adds two module-private helpers to extract, merge, and re-apply non-index coordinates across xarray Datasets; refactors Comparison concatenation flow to extract non-index coords, concat with coords='minimal', then re-apply merged coords for solutions, inputs, and statistics (ensures consistent coordinate mappings).

Changes

Cohort / File(s) Summary
Comparison coordinate handling
flixopt/comparison.py
Added _extract_nonindex_coords() and _apply_merged_coords(); drop non-index coords from per-case datasets, merge coordinate value mappings (with conflict warnings), perform xr.concat(..., dim='case', coords='minimal', join='outer', fill_value=nan), then re-apply merged coords. Updated solution, inputs, _concat_property, and related combine/plot concat paths to use this flow.
Changelog
CHANGELOG.md
Added patch entry (6.0.2) documenting the comparison coordinate handling fix, docs workflow notes, and development dependency updates.

Sequence Diagram(s)

(omitted)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Poem

🐰 I hopped through scattered coords and thread,
Pulled mappings from nests where values fled,
I stitched them together, concat in a line,
Re-applied the maps — tidy, aligned, fine,
A nibble of code, now the datasets twine.

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Title check ✅ Passed The title mentions three distinct areas (JSON encoding, Comparison coords, docs warnings) but the raw_summary shows the primary changes are in Comparison coordinate handling and CHANGELOG updates. While the title is related to the changeset, it doesn't clearly highlight the main focus.
Description check ✅ Passed The PR description covers all key changes (JSON encoding, Comparison coordinates, clustering notebooks, docs workflow) with clear explanations, aligns well with the template structure by implicitly addressing type of change and testing, and provides sufficient detail for reviewers.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feature/coords-merge-comparison

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@FBumann FBumann changed the title Feature/coords merge comparison Fix JSON encoding, Comparison coords, and docs warnings Feb 4, 2026
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
flixopt/comparison.py (1)

309-315: ⚠️ Potential issue | 🟠 Major

Apply coord preservation pattern to solution and inputs properties.

Both solution (line 309) and inputs (line 378) use xr.concat() with coords='minimal', which drops non-index coords (e.g., component on contributor dim). This causes loss of label coordinates that are needed in the combined result.

The helper functions _extract_nonindex_coords() and _apply_merged_coords() already exist and are used successfully in _concat_property() for stats concatenation. Apply the same pattern here:

Suggested pattern
-            self._solution = xr.concat(
-                [ds.expand_dims(case=[name]) for ds, name in zip(datasets, self._names, strict=True)],
-                dim='case',
-                join='outer',
-                coords='minimal',
-                fill_value=float('nan'),
-            )
+            expanded = [ds.expand_dims(case=[name]) for ds, name in zip(datasets, self._names, strict=True)]
+            expanded, merged_coords = _extract_nonindex_coords(expanded)
+            result = xr.concat(expanded, dim='case', join='outer', coords='minimal', fill_value=float('nan'))
+            self._solution = _apply_merged_coords(result, merged_coords)
-            self._inputs = xr.concat(
-                [ds.expand_dims(case=[name]) for ds, name in zip(datasets, self._names, strict=True)],
-                dim='case',
-                join='outer',
-                coords='minimal',
-                fill_value=float('nan'),
-            )
+            expanded = [ds.expand_dims(case=[name]) for ds, name in zip(datasets, self._names, strict=True)]
+            expanded, merged_coords = _extract_nonindex_coords(expanded)
+            result = xr.concat(expanded, dim='case', join='outer', coords='minimal', fill_value=float('nan'))
+            self._inputs = _apply_merged_coords(result, merged_coords)
🤖 Fix all issues with AI agents
In `@flixopt/comparison.py`:
- Around line 58-59: When merging coord mappings in comparison.py, detect
conflicts where the same dim value (dv) already exists in merged[name][1] but
maps to a different coord value (cv); currently the code silently keeps the
first value (merged[name][1][dv] = cv). Modify the logic around the
merged[name][1] assignment to check if dv in merged[name][1] and
merged[name][1][dv] != cv, and then either raise a ValueError (including
identifiers like name, dv, existing value and new cv) or emit a clear warning
(using the project logger) so callers are alerted to inconsistent inputs instead
of silently keeping the first mapping.

FBumann and others added 3 commits February 4, 2026 12:50
- Apply _extract_nonindex_coords pattern to solution and inputs properties
- Add warning when coordinate mappings conflict during merge

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@flixopt/comparison.py`:
- Around line 74-85: The _apply_merged_coords function can produce None entries
because mapping.get(dv) returns None for values added by outer joins; change the
logic that builds new_coords for each (name, (dim, mapping)) so unmapped
dimension values keep a sensible fallback (e.g., use the original dv or a
sentinel) instead of None — in practice replace mapping.get(dv) with
mapping.get(dv, dv) (or an explicit sentinel) when creating the list for
new_coords[name], ensuring ds.coords[dim].values are mapped safely before
calling ds.assign_coords(new_coords).
🧹 Nitpick comments (1)
flixopt/comparison.py (1)

32-71: Implementation looks solid with conflict detection in place.

The logic for identifying non-index coordinates (single-dim coords where dim ≠ name) and merging their mappings is correct. The conflict warning at lines 60-65 properly addresses the previous review feedback.

One minor consideration: stacklevel=4 assumes a specific call depth. Since this helper is called from multiple sites (solution, inputs, _concat_property, _combine_data), the stack depth varies. The warning will point to different locations depending on the caller, which may confuse users. Consider using stacklevel=2 to point to the helper itself, or documenting which call site triggered the warning in the message.

💡 Optional: Include caller context in warning message
                 elif merged[name][1][dv] != cv:
                     warnings.warn(
                         f"Coordinate '{name}' has conflicting values for dim value '{dv}': "
-                        f"'{merged[name][1][dv]}' vs '{cv}'. Keeping first value.",
-                        stacklevel=4,
+                        f"'{merged[name][1][dv]}' vs '{cv}'. Keeping first value.",
+                        stacklevel=2,
                     )

…ne for unmapped values. Change it to mapping.get(dv, dv) so unmapped dimension values fall back to

  themselves.

⏺ Update(flixopt/comparison.py)
  ⎿  Added 1 line, removed 1 line
      80      for name, (dim, mapping) in merged.items():
      81          if dim not in ds.dims:
      82              continue
      83 -        new_coords[name] = (dim, [mapping.get(dv) for dv in ds.coords[dim].values])
      83 +        new_coords[name] = (dim, [mapping.get(dv, dv) for dv in ds.coords[dim].values])
      84
      85      return ds.assign_coords(new_coords)
      86

⏺ Done. The change on line 83 ensures that when mapping doesn't contain a key for a dimension value (which happens with outer-join additions), the original value dv is
  preserved instead of inserting None.
Copy link
Contributor

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@flixopt/comparison.py`:
- Around line 32-71: The function _extract_nonindex_coords currently builds a
global coords_to_drop set which can remove a coord that is an index in some
datasets; change it to track non-index coords per dataset (e.g., build a
list/sequence of per-dataset sets or a dict keyed by dataset index) while still
accumulating merged mappings only for coords that are non-index in that
particular dataset; then when dropping, iterate datasets and call
ds.drop_vars(per_dataset_coords_to_drop[i], errors='ignore') for each dataset so
indexed coords in other datasets are preserved; keep using merged (dict[str,
tuple[str, dict]]) and the same logic for filling merged[name][1] and warnings.

Comment on lines +32 to +71
def _extract_nonindex_coords(datasets: list[xr.Dataset]) -> tuple[list[xr.Dataset], dict[str, tuple[str, dict]]]:
"""Extract and merge non-index coords, returning cleaned datasets and merged mappings.

Non-index coords (like `component` on `contributor` dim) cause concat conflicts.
This extracts them, merges the mappings, and returns datasets without them.
"""
if not datasets:
return datasets, {}

# Find non-index coords and collect mappings
merged: dict[str, tuple[str, dict]] = {}
coords_to_drop: set[str] = set()

for ds in datasets:
for name, coord in ds.coords.items():
if len(coord.dims) != 1:
continue
dim = coord.dims[0]
if dim == name or dim not in ds.coords:
continue

coords_to_drop.add(name)
if name not in merged:
merged[name] = (dim, {})

for dv, cv in zip(ds.coords[dim].values, coord.values, strict=False):
if dv not in merged[name][1]:
merged[name][1][dv] = cv
elif merged[name][1][dv] != cv:
warnings.warn(
f"Coordinate '{name}' has conflicting values for dim value '{dv}': "
f"'{merged[name][1][dv]}' vs '{cv}'. Keeping first value.",
stacklevel=4,
)

# Drop these coords from datasets
if coords_to_drop:
datasets = [ds.drop_vars(coords_to_drop, errors='ignore') for ds in datasets]

return datasets, merged
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, verify the file content and examine the function
cd /
git ls-files | grep -E "flixopt/comparison\.py"

Repository: flixOpt/flixopt

Length of output: 127


🏁 Script executed:

# Read the actual code in the file
cat -n flixopt/comparison.py | sed -n '32,71p'

Repository: flixOpt/flixopt

Length of output: 1925


🏁 Script executed:

# Look for tests or usage examples of this function
fd -e py | xargs rg "_extract_nonindex_coords" -B 2 -A 5

Repository: flixOpt/flixopt

Length of output: 3092


🏁 Script executed:

# Search for test files related to comparison or this function
fd -e py test | xargs rg "extract_nonindex_coords|_extract_nonindex_coords" 2>/dev/null || echo "No direct test found"

Repository: flixOpt/flixopt

Length of output: 79


🏁 Script executed:

# Search for any tests in the repo that might test this scenario
fd -e py -path "*/test*" | head -20

Repository: flixOpt/flixopt

Length of output: 227


🏁 Script executed:

# Look for any comments or TODOs about index coords
rg "index.*coord|coord.*index" flixopt/comparison.py -i -B 2 -A 2

Repository: flixOpt/flixopt

Length of output: 2025


🏁 Script executed:

# Check if there are any GitHub issues or comments mentioning this
rg "coords_to_drop" --include="*.py" -B 3 -A 3

Repository: flixOpt/flixopt

Length of output: 500


🏁 Script executed:

# Search for test files
find . -name "*test*.py" -type f 2>/dev/null | head -20

Repository: flixOpt/flixopt

Length of output: 800


🏁 Script executed:

# Search for xarray usage and index behavior
rg "drop_vars|\.index" flixopt/comparison.py -B 1 -A 1

Repository: flixOpt/flixopt

Length of output: 446


🏁 Script executed:

# Check xarray documentation or behavior - look for how index coords work in xarray
rg "is_index|IndexVariable" . -l 2>/dev/null | head -5

Repository: flixOpt/flixopt

Length of output: 41


🏁 Script executed:

# Look for any coords that are explicitly indexed
rg "\.set_index|\.reset_index" . -B 2 -A 2 2>/dev/null | head -20

Repository: flixOpt/flixopt

Length of output: 1242


🏁 Script executed:

# Check if there are any tests specifically for comparison or concat operations
find tests -name "*.py" -type f -exec grep -l "comparison\|_extract_nonindex_coords\|_concat" {} \;

Repository: flixOpt/flixopt

Length of output: 303


🏁 Script executed:

# Look at how datasets are constructed and used in the function calls
rg "expand_dims.*case" flixopt/comparison.py -B 5 -A 5

Repository: flixOpt/flixopt

Length of output: 2331


🏁 Script executed:

# Check xarray's behavior - look for any documentation or comments about index coords
rg "dim == name" flixopt/ -B 3 -A 3

Repository: flixOpt/flixopt

Length of output: 421


🏁 Script executed:

# Examine the test_comparison.py file to understand test coverage
head -100 tests/test_comparison.py

Repository: flixOpt/flixopt

Length of output: 2966


🏁 Script executed:

# Check for any tests that use _extract_nonindex_coords or test the indexing behavior
rg "_extract_nonindex_coords\|index.*coord" tests/test_comparison.py -B 3 -A 3

Repository: flixOpt/flixopt

Length of output: 41


🏁 Script executed:

# Look at the full test_comparison.py file to understand what scenarios are tested
wc -l tests/test_comparison.py

Repository: flixOpt/flixopt

Length of output: 87


🏁 Script executed:

# Search for tests that might exercise the index coord scenario
rg "coords|index|concat" tests/test_comparison.py -i -B 2 -A 2 | head -50

Repository: flixOpt/flixopt

Length of output: 1542


🏁 Script executed:

# Let me look for the actual test functions to see if any test concat or coords
rg "^def test_" tests/test_comparison.py

Repository: flixOpt/flixopt

Length of output: 41


🏁 Script executed:

# Check if xarray's drop_vars handles index coords specially
python3 << 'EOF'
import sys
try:
    import xarray as xr
    import numpy as np
    
    # Create a test scenario: dataset with an index coordinate
    ds1 = xr.Dataset({
        'var': (['x', 'y'], np.random.rand(3, 4))
    }, coords={
        'x': [0, 1, 2],
        'y': [10, 20, 30, 40],
        'component': (['y'], ['a', 'b', 'c', 'd'])  # non-index coord on y
    })
    
    # Set y as an index
    ds1 = ds1.set_index(y='y')
    print("Dataset 1:")
    print(ds1)
    print("\nCoords in ds1:")
    for name, coord in ds1.coords.items():
        print(f"  {name}: dims={coord.dims}, is_index={name in ds1.indexes}")
    
    # Now create ds2 where 'component' is an index coord on a different dim
    ds2 = xr.Dataset({
        'var': (['component', 'z'], np.random.rand(4, 5))
    }, coords={
        'component': ['a', 'b', 'c', 'd'],
        'z': [100, 200, 300, 400, 500]
    })
    ds2 = ds2.set_index(component='component')
    print("\n\nDataset 2:")
    print(ds2)
    print("\nCoords in ds2:")
    for name, coord in ds2.coords.items():
        print(f"  {name}: dims={coord.dims}, is_index={name in ds2.indexes}")
    
    # Test: Try to drop 'component' from ds2 (which is indexed)
    print("\n\nAttempting to drop 'component' from ds2 with errors='ignore'...")
    ds2_dropped = ds2.drop_vars('component', errors='ignore')
    print("Result:")
    print(ds2_dropped)
    print("\nCoords after drop:")
    for name, coord in ds2_dropped.coords.items():
        print(f"  {name}: dims={coord.dims}, is_index={name in ds2_dropped.indexes}")
        
except Exception as e:
    print(f"Error: {e}", file=sys.stderr)
    import traceback
    traceback.print_exc()
EOF

Repository: flixOpt/flixopt

Length of output: 209


🏁 Script executed:

# Look at how the datasets are being used after _extract_nonindex_coords
rg "_apply_merged_coords" flixopt/comparison.py -B 2 -A 5

Repository: flixOpt/flixopt

Length of output: 1945


🏁 Script executed:

# Search for the _apply_merged_coords function to understand what it does
rg "def _apply_merged_coords" flixopt/comparison.py -A 15

Repository: flixOpt/flixopt

Length of output: 647


🏁 Script executed:

# Check if there are any comments or documentation about why the condition is `dim == name`
rg "dim == name|index.*coord" flixopt/ -B 5 -A 5

Repository: flixOpt/flixopt

Length of output: 10071


🏁 Script executed:

# Search online documentation or look for any issues/PRs related to this
rg "index.*coord.*drop|drop.*index" . -i --max-count 10

Repository: flixOpt/flixopt

Length of output: 178


🏁 Script executed:

# Search for any xarray-related issues or discussions in comments
rg "xarray|drop_vars|index" flixopt/comparison.py -B 1 -A 1 | head -30

Repository: flixOpt/flixopt

Length of output: 1182


🏁 Script executed:

# Check if there's any handling or workaround for index coords elsewhere in the codebase
rg "drop_vars" flixopt/ -B 2 -A 2

Repository: flixOpt/flixopt

Length of output: 5735


Track non-index coords per-dataset instead of globally to preserve index coordinates.

coords_to_drop accumulates globally across datasets, but a coord name can be indexed in one dataset and non-indexed in another. This causes indexed coords to be dropped unintentionally. For example, if "x" is a non-index coord on dim "y" in dataset A and an index coord on dim "x" in dataset B, it gets added to the global set from A, then dropped from all datasets including B where it's actually indexed.

Drop only those coords that are non-index in each dataset:

🔧 Proposed fix
     # Find non-index coords and collect mappings
     merged: dict[str, tuple[str, dict]] = {}
     coords_to_drop: set[str] = set()
 
     for ds in datasets:
+        ds_drop: set[str] = set()
         for name, coord in ds.coords.items():
             if len(coord.dims) != 1:
                 continue
             dim = coord.dims[0]
             if dim == name or dim not in ds.coords:
                 continue
 
             coords_to_drop.add(name)
+            ds_drop.add(name)
             if name not in merged:
                 merged[name] = (dim, {})
 
             for dv, cv in zip(ds.coords[dim].values, coord.values, strict=False):
                 if dv not in merged[name][1]:
                     merged[name][1][dv] = cv
                 elif merged[name][1][dv] != cv:
                     warnings.warn(
                         f"Coordinate '{name}' has conflicting values for dim value '{dv}': "
                         f"'{merged[name][1][dv]}' vs '{cv}'. Keeping first value.",
                         stacklevel=4,
                     )
+        coords_to_drop_list.append(ds_drop) if 'coords_to_drop_list' in dir() else None
+        if 'coords_to_drop_list' not in dir():
+            coords_to_drop_list = [ds_drop]
 
     # Drop these coords from datasets
     if coords_to_drop:
-        datasets = [ds.drop_vars(coords_to_drop, errors='ignore') for ds in datasets]
+        datasets = [
+            ds.drop_vars(drop, errors='ignore')
+            for ds, drop in zip(datasets, coords_to_drop_list, strict=True)
+        ]
🤖 Prompt for AI Agents
In `@flixopt/comparison.py` around lines 32 - 71, The function
_extract_nonindex_coords currently builds a global coords_to_drop set which can
remove a coord that is an index in some datasets; change it to track non-index
coords per dataset (e.g., build a list/sequence of per-dataset sets or a dict
keyed by dataset index) while still accumulating merged mappings only for coords
that are non-index in that particular dataset; then when dropping, iterate
datasets and call ds.drop_vars(per_dataset_coords_to_drop[i], errors='ignore')
for each dataset so indexed coords in other datasets are preserved; keep using
merged (dict[str, tuple[str, dict]]) and the same logic for filling
merged[name][1] and warnings.

@FBumann FBumann merged commit 16eae3d into main Feb 5, 2026
1 of 2 checks passed
FBumann added a commit that referenced this pull request Feb 5, 2026
* Update the CHANGELOG.md

* Update to tsam v3.1.0 and add warnings for preserve_n_clusters=False

* [ci] prepare release v6.0.0

* fix typo in deps

* fix typo in README.md

* Revert citation temporarily

* [ci] prepare release v6.0.0

* Improve json io

* fix: Notebooks using tsam

* Allow manual docs dispatch

* Created: tests/test_clustering/test_multiperiod_extremes.py

  Test Coverage (56 tests):

  Multi-Period with Different Time Series

  - TestMultiPeriodDifferentTimeSeries - Tests for systems where each period has distinct demand profiles:
    - Different cluster assignments per period
    - Optimization with period-specific profiles
    - Correct expansion mapping per period
    - Statistics correctness per period

  Extreme Cluster Configurations

  - TestExtremeConfigNewCluster - Tests method='new_cluster':
    - Captures peak demand days
    - Can increase cluster count
    - Works with min_value parameter
  - TestExtremeConfigReplace - Tests method='replace':
    - Maintains requested cluster count
    - Works with multi-period systems
  - TestExtremeConfigAppend - Tests method='append':
    - Combined with segmentation
    - Objective preserved after expansion

  Combined Multi-Period and Extremes

  - TestExtremeConfigMultiPeriod - Extremes with multi-period/scenario:
    - Requires preserve_n_clusters=True for multi-period
    - Works with periods and scenarios together
  - TestMultiPeriodWithExtremes - Combined scenarios:
    - Different profiles with extreme capture
    - Extremes combined with segmentation
    - Independent cluster assignments per period

  Multi-Scenario Clustering

  - TestMultiScenarioWithClustering - Scenarios with clustering
  - TestFullDimensionalClustering - Full (periods + scenarios) combinations

  IO Round-Trip

  - TestMultiPeriodClusteringIO - Save/load preservation tests

  Edge Cases

  - TestEdgeCases - Single cluster, many clusters, occurrence sums, mapping validation

* fix: clustering and tsam 3.1.0 issue

* [ci] prepare release v6.0.1

* fix: clustering and tsam 3.1.0 issue

* [ci] prepare release v6.0.1

* ci: remove test

* [ci] prepare release v6.0.1

* chore(deps): update dependency werkzeug to v3.1.5 (#564)

* chore(deps): update dependency ruff to v0.14.14 (#563)

* chore(deps): update dependency netcdf4 to >=1.6.1, <1.7.5 (#583)

* chore(deps): update dependency pre-commit to v4.5.1 (#532)

* fix: Comparison coords (#599)

* Fix coords concat in comparison.py

* Fix coords concat in comparison.py

* Fix coords concat in comparison.py

* Add 6.0.1 changelog entry

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Fix coord preservation in Comparison.solution and .inputs

- Apply _extract_nonindex_coords pattern to solution and inputs properties
- Add warning when coordinate mappings conflict during merge

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Update CHANGELOG.md

* Update CHANGELOG.md

* ⏺ The fix is straightforward — on line 83, mapping.get(dv) returns None for unmapped values. Change it to mapping.get(dv, dv) so unmapped dimension values fall back to
  themselves.

⏺ Update(flixopt/comparison.py)
  ⎿  Added 1 line, removed 1 line
      80      for name, (dim, mapping) in merged.items():
      81          if dim not in ds.dims:
      82              continue
      83 -        new_coords[name] = (dim, [mapping.get(dv) for dv in ds.coords[dim].values])
      83 +        new_coords[name] = (dim, [mapping.get(dv, dv) for dv in ds.coords[dim].values])
      84
      85      return ds.assign_coords(new_coords)
      86

⏺ Done. The change on line 83 ensures that when mapping doesn't contain a key for a dimension value (which happens with outer-join additions), the original value dv is
  preserved instead of inserting None.

* Update Changelog

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>

* [ci] prepare release v6.0.2

* typo

* Revert "typo"

This reverts commit 4a57282.

* Add plan file

* Add comprehensive test_math coverage for multi-period, scenarios, clustering, and validation

  - Add 26 new tests across 8 files (×3 optimize modes = ~75 test runs)
  - Multi-period: period weights, flow_hours limits, effect limits, linked invest, custom period weights
  - Scenarios: scenario weights, independent sizes, independent flow rates
  - Clustering: basic objective, storage cyclic/intercluster modes, status cyclic mode
  - Storage: relative min/max charge state, relative min/max final charge state, balanced invest
  - Components: transmission startup cost, Power2Heat, HeatPumpWithSource, SourceAndSink
  - Flow status: max_uptime standalone test
  - Validation: SourceAndSink requires size with prevent_simultaneous

* ⏺ Done. Here's a summary of what was changed:

  Fix (flixopt/components.py:1146-1169): In _relative_charge_state_bounds, the scalar else branches now expand the base parameter to regular timesteps only
  (timesteps_extra[:-1]), then concat with the final-timestep DataArray containing the correct override value. Previously they just broadcast the scalar across all timesteps,
  silently ignoring relative_minimum_final_charge_state / relative_maximum_final_charge_state.

  Tests (tests/test_math/test_storage.py): Added two new tests — test_storage_relative_minimum_final_charge_state_scalar and
  test_storage_relative_maximum_final_charge_state_scalar — identical scenarios to the existing array-based tests but using scalar defaults (the previously buggy path).

* Added TestClusteringExact class with 3 tests asserting exact per-timestep values in clustered systems:

  1. test_flow_rates_match_demand_per_cluster — Verifies Grid flow_rate matches demand [10,20,30,40] identically in each cluster, objective = 200.
  2. test_per_timestep_effects_with_varying_price — Verifies per-timestep costs [10,20,30,40] reflect price×flow with varying prices [1,2,3,4] and constant demand=10, objective
   = 200.
  3. test_storage_cyclic_charge_discharge_pattern — Verifies storage with cyclic clustering: charges at cheap timesteps (price=1), discharges at expensive ones (price=100),
  with exact charge_state trajectory across both clusters, objective = 100.

  Deviation from plan: Used equal cluster weights [1.0, 1.0] instead of [1.0, 2.0]/[1.0, 3.0] for tests 1 and 2. This was necessary because cluster_weight is not preserved
  during NetCDF roundtrip (pre-existing IO bug), which would cause the save->reload->solve mode to fail. Equal weights produce correct results in all 3 IO modes while still
  testing the essential per-timestep value correctness.

* More storage tests

* Add multi-period tests

* Add clustering tests and fix issues with user set cluster weights

* Update CHANGELOG.md

* Mark old tests as stale

* Update CHANGELOG.md

* Mark tests as stale and move to new dir

* Move more tests to stale

* Change fixtures to speed up tests

* Moved files into stale

* Renamed folder

* Reorganize test dir

* Reorganize test dir

* Rename marker

* 2. 08d-clustering-multiperiod.ipynb (cell 29): Removed stray <cell_type>markdown</cell_type> from Summary cell
  3. 08f-clustering-segmentation.ipynb (cell 33): Removed stray <cell_type>markdown</cell_type> from API Reference cell
  4. flixopt/comparison.py: _extract_nonindex_coords now detects when the same coord name appears on different dims — warns and skips instead of silently overwriting
  5. test_multiperiod_extremes.py: Added .item() to mapping.min()/.max() and period_mapping.min()/.max() to extract scalars before comparison
  6. test_flow_status.py: Tightened test_max_uptime_standalone assertion from > 50.0 to assert_allclose(..., 60.0, rtol=1e-5) matching the documented arithmetic

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant