Fix JSON encoding, Comparison coords, and docs warnings #599
Conversation
Caution: Review failed. The pull request is closed.

📝 Walkthrough

Adds two module-private helpers to extract, merge, and re-apply non-index coordinates across xarray Datasets; refactors the Comparison concatenation flow to extract non-index coords, concat with coords='minimal', and re-apply the merged coordinates afterwards.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Actionable comments posted: 1
Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
flixopt/comparison.py (1)
309-315: ⚠️ Potential issue | 🟠 Major

Apply coord preservation pattern to `solution` and `inputs` properties.

Both `solution` (line 309) and `inputs` (line 378) use `xr.concat()` with `coords='minimal'`, which drops non-index coords (e.g., `component` on `contributor` dim). This causes loss of label coordinates that are needed in the combined result.

The helper functions `_extract_nonindex_coords()` and `_apply_merged_coords()` already exist and are used successfully in `_concat_property()` for stats concatenation. Apply the same pattern here:

Suggested pattern

```diff
-        self._solution = xr.concat(
-            [ds.expand_dims(case=[name]) for ds, name in zip(datasets, self._names, strict=True)],
-            dim='case',
-            join='outer',
-            coords='minimal',
-            fill_value=float('nan'),
-        )
+        expanded = [ds.expand_dims(case=[name]) for ds, name in zip(datasets, self._names, strict=True)]
+        expanded, merged_coords = _extract_nonindex_coords(expanded)
+        result = xr.concat(expanded, dim='case', join='outer', coords='minimal', fill_value=float('nan'))
+        self._solution = _apply_merged_coords(result, merged_coords)

-        self._inputs = xr.concat(
-            [ds.expand_dims(case=[name]) for ds, name in zip(datasets, self._names, strict=True)],
-            dim='case',
-            join='outer',
-            coords='minimal',
-            fill_value=float('nan'),
-        )
+        expanded = [ds.expand_dims(case=[name]) for ds, name in zip(datasets, self._names, strict=True)]
+        expanded, merged_coords = _extract_nonindex_coords(expanded)
+        result = xr.concat(expanded, dim='case', join='outer', coords='minimal', fill_value=float('nan'))
+        self._inputs = _apply_merged_coords(result, merged_coords)
```
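For context, here is a self-contained sketch of that extract/concat/re-apply flow. It is illustrative only: `make_case`, `case_names`, and the inline `mapping` dict are made-up stand-ins for the PR's `_extract_nonindex_coords()` / `_apply_merged_coords()` helpers, not the actual `Comparison` code.

```python
import numpy as np
import xarray as xr

def make_case(component_label: str) -> xr.Dataset:
    # Toy stand-in for one optimization result: a non-index `component`
    # coord labels the `contributor` dimension.
    return xr.Dataset(
        {'flow': ('contributor', np.arange(3.0))},
        coords={
            'contributor': ['a', 'b', 'c'],
            'component': ('contributor', [component_label] * 3),
        },
    )

datasets = [make_case('Boiler'), make_case('Boiler')]
case_names = ['base', 'variant']

expanded = [ds.expand_dims(case=[name]) for ds, name in zip(datasets, case_names, strict=True)]

# Stand-in for _extract_nonindex_coords(): remember the mapping, drop the coord.
mapping = dict(zip(datasets[0].coords['contributor'].values, datasets[0].coords['component'].values))
stripped = [ds.drop_vars('component') for ds in expanded]

result = xr.concat(stripped, dim='case', join='outer', coords='minimal', fill_value=float('nan'))

# Stand-in for _apply_merged_coords(): re-attach with a fallback to the dim value.
result = result.assign_coords(
    component=('contributor', [mapping.get(v, v) for v in result.coords['contributor'].values])
)
print(result.coords['component'])  # 1-D on `contributor`, not (case, contributor)
```

The round-trip keeps `component` as a plain label coordinate on `contributor` instead of letting the concat broadcast or drop it.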
🤖 Fix all issues with AI agents
In `@flixopt/comparison.py`:
- Around line 58-59: When merging coord mappings in comparison.py, detect
conflicts where the same dim value (dv) already exists in merged[name][1] but
maps to a different coord value (cv); currently the code silently keeps the
first value (merged[name][1][dv] = cv). Modify the logic around the
merged[name][1] assignment to check if dv in merged[name][1] and
merged[name][1][dv] != cv, and then either raise a ValueError (including
identifiers like name, dv, existing value and new cv) or emit a clear warning
(using the project logger) so callers are alerted to inconsistent inputs instead
of silently keeping the first mapping.
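A minimal sketch of the requested conflict check as a standalone helper (hypothetical function name; the PR implements the equivalent logic inline, with `merged` shaped as `{coord_name: (dim_name, {dim_value: coord_value})}`):

```python
import warnings

def record_mapping(merged: dict, name: str, dim: str, dv, cv) -> None:
    """Record a dim-value -> coord-value pair, warning on conflicting re-assignments."""
    _dim, mapping = merged.setdefault(name, (dim, {}))
    if dv not in mapping:
        mapping[dv] = cv
    elif mapping[dv] != cv:
        warnings.warn(
            f"Coordinate '{name}' has conflicting values for dim value '{dv}': "
            f"'{mapping[dv]}' vs '{cv}'. Keeping first value.",
            stacklevel=2,
        )

merged: dict = {}
record_mapping(merged, 'component', 'contributor', 'a', 'Boiler')
record_mapping(merged, 'component', 'contributor', 'a', 'CHP')  # emits the warning
```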
- Apply _extract_nonindex_coords pattern to solution and inputs properties
- Add warning when coordinate mappings conflict during merge

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…comparison

# Conflicts:
#	CHANGELOG.md
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@flixopt/comparison.py`:
- Around line 74-85: The _apply_merged_coords function can produce None entries
because mapping.get(dv) returns None for values added by outer joins; change the
logic that builds new_coords for each (name, (dim, mapping)) so unmapped
dimension values keep a sensible fallback (e.g., use the original dv or a
sentinel) instead of None — in practice replace mapping.get(dv) with
mapping.get(dv, dv) (or an explicit sentinel) when creating the list for
new_coords[name], ensuring ds.coords[dim].values are mapped safely before
calling ds.assign_coords(new_coords).
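A toy illustration of the fallback being requested (values invented for the example):

```python
# After an outer join, a dim value contributed by another dataset has no entry
# in `mapping`, so mapping.get(dv) yields None; mapping.get(dv, dv) keeps dv.
mapping = {'a': 'comp_1', 'b': 'comp_1'}
dim_values = ['a', 'b', 'c']  # 'c' appeared only via the outer join

without_fallback = [mapping.get(dv) for dv in dim_values]   # ['comp_1', 'comp_1', None]
with_fallback = [mapping.get(dv, dv) for dv in dim_values]  # ['comp_1', 'comp_1', 'c']
assert with_fallback == ['comp_1', 'comp_1', 'c']
```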
🧹 Nitpick comments (1)
flixopt/comparison.py (1)
32-71: Implementation looks solid with conflict detection in place.

The logic for identifying non-index coordinates (single-dim coords where dim ≠ name) and merging their mappings is correct. The conflict warning at lines 60-65 properly addresses the previous review feedback.
One minor consideration: `stacklevel=4` assumes a specific call depth. Since this helper is called from multiple sites (`solution`, `inputs`, `_concat_property`, `_combine_data`), the stack depth varies. The warning will point to different locations depending on the caller, which may confuse users. Consider using `stacklevel=2` to point to the helper itself, or documenting which call site triggered the warning in the message.

💡 Optional: Include caller context in warning message

```diff
             elif merged[name][1][dv] != cv:
                 warnings.warn(
                     f"Coordinate '{name}' has conflicting values for dim value '{dv}': "
                     f"'{merged[name][1][dv]}' vs '{cv}'. Keeping first value.",
-                    stacklevel=4,
+                    stacklevel=2,
                 )
```
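For reference, a small self-contained demonstration of how `stacklevel` shifts the reported warning location (generic module layout, not the flixopt call sites):

```python
import warnings

def helper():
    # stacklevel=1 would attribute the warning to this line inside helper();
    # stacklevel=2 attributes it to helper()'s caller instead.
    warnings.warn('conflict detected', stacklevel=2)

def caller():
    helper()  # with stacklevel=2 the warning points at this line

caller()
```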
…ne for unmapped values. Change it to mapping.get(dv, dv) so unmapped dimension values fall back to themselves.
⏺ Update(flixopt/comparison.py)
⎿ Added 1 line, removed 1 line
80 for name, (dim, mapping) in merged.items():
81 if dim not in ds.dims:
82 continue
83 - new_coords[name] = (dim, [mapping.get(dv) for dv in ds.coords[dim].values])
83 + new_coords[name] = (dim, [mapping.get(dv, dv) for dv in ds.coords[dim].values])
84
85 return ds.assign_coords(new_coords)
86
⏺ Done. The change on line 83 ensures that when mapping doesn't contain a key for a dimension value (which happens with outer-join additions), the original value dv is preserved instead of inserting None.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@flixopt/comparison.py`:
- Around line 32-71: The function _extract_nonindex_coords currently builds a
global coords_to_drop set which can remove a coord that is an index in some
datasets; change it to track non-index coords per dataset (e.g., build a
list/sequence of per-dataset sets or a dict keyed by dataset index) while still
accumulating merged mappings only for coords that are non-index in that
particular dataset; then when dropping, iterate datasets and call
ds.drop_vars(per_dataset_coords_to_drop[i], errors='ignore') for each dataset so
indexed coords in other datasets are preserved; keep using merged (dict[str,
tuple[str, dict]]) and the same logic for filling merged[name][1] and warnings.
```python
def _extract_nonindex_coords(datasets: list[xr.Dataset]) -> tuple[list[xr.Dataset], dict[str, tuple[str, dict]]]:
    """Extract and merge non-index coords, returning cleaned datasets and merged mappings.

    Non-index coords (like `component` on `contributor` dim) cause concat conflicts.
    This extracts them, merges the mappings, and returns datasets without them.
    """
    if not datasets:
        return datasets, {}

    # Find non-index coords and collect mappings
    merged: dict[str, tuple[str, dict]] = {}
    coords_to_drop: set[str] = set()

    for ds in datasets:
        for name, coord in ds.coords.items():
            if len(coord.dims) != 1:
                continue
            dim = coord.dims[0]
            if dim == name or dim not in ds.coords:
                continue

            coords_to_drop.add(name)
            if name not in merged:
                merged[name] = (dim, {})

            for dv, cv in zip(ds.coords[dim].values, coord.values, strict=False):
                if dv not in merged[name][1]:
                    merged[name][1][dv] = cv
                elif merged[name][1][dv] != cv:
                    warnings.warn(
                        f"Coordinate '{name}' has conflicting values for dim value '{dv}': "
                        f"'{merged[name][1][dv]}' vs '{cv}'. Keeping first value.",
                        stacklevel=4,
                    )

    # Drop these coords from datasets
    if coords_to_drop:
        datasets = [ds.drop_vars(coords_to_drop, errors='ignore') for ds in datasets]

    return datasets, merged
```
🧩 Analysis chain
🏁 Script executed:
# First, verify the file content and examine the function
cd /
git ls-files | grep -E "flixopt/comparison\.py"

Repository: flixOpt/flixopt
Length of output: 127
🏁 Script executed:
# Read the actual code in the file
cat -n flixopt/comparison.py | sed -n '32,71p'

Repository: flixOpt/flixopt
Length of output: 1925
🏁 Script executed:
# Look for tests or usage examples of this function
fd -e py | xargs rg "_extract_nonindex_coords" -B 2 -A 5

Repository: flixOpt/flixopt
Length of output: 3092
🏁 Script executed:
# Search for test files related to comparison or this function
fd -e py test | xargs rg "extract_nonindex_coords|_extract_nonindex_coords" 2>/dev/null || echo "No direct test found"

Repository: flixOpt/flixopt
Length of output: 79
🏁 Script executed:
# Search for any tests in the repo that might test this scenario
fd -e py -path "*/test*" | head -20

Repository: flixOpt/flixopt
Length of output: 227
🏁 Script executed:
# Look for any comments or TODOs about index coords
rg "index.*coord|coord.*index" flixopt/comparison.py -i -B 2 -A 2Repository: flixOpt/flixopt
Length of output: 2025
🏁 Script executed:
# Check if there are any GitHub issues or comments mentioning this
rg "coords_to_drop" --include="*.py" -B 3 -A 3Repository: flixOpt/flixopt
Length of output: 500
🏁 Script executed:
# Search for test files
find . -name "*test*.py" -type f 2>/dev/null | head -20

Repository: flixOpt/flixopt
Length of output: 800
🏁 Script executed:
# Search for xarray usage and index behavior
rg "drop_vars|\.index" flixopt/comparison.py -B 1 -A 1Repository: flixOpt/flixopt
Length of output: 446
🏁 Script executed:
# Check xarray documentation or behavior - look for how index coords work in xarray
rg "is_index|IndexVariable" . -l 2>/dev/null | head -5Repository: flixOpt/flixopt
Length of output: 41
🏁 Script executed:
# Look for any coords that are explicitly indexed
rg "\.set_index|\.reset_index" . -B 2 -A 2 2>/dev/null | head -20Repository: flixOpt/flixopt
Length of output: 1242
🏁 Script executed:
# Check if there are any tests specifically for comparison or concat operations
find tests -name "*.py" -type f -exec grep -l "comparison\|_extract_nonindex_coords\|_concat" {} \;

Repository: flixOpt/flixopt
Length of output: 303
🏁 Script executed:
# Look at how datasets are constructed and used in the function calls
rg "expand_dims.*case" flixopt/comparison.py -B 5 -A 5Repository: flixOpt/flixopt
Length of output: 2331
🏁 Script executed:
# Check xarray's behavior - look for any documentation or comments about index coords
rg "dim == name" flixopt/ -B 3 -A 3Repository: flixOpt/flixopt
Length of output: 421
🏁 Script executed:
# Examine the test_comparison.py file to understand test coverage
head -100 tests/test_comparison.py

Repository: flixOpt/flixopt
Length of output: 2966
🏁 Script executed:
# Check for any tests that use _extract_nonindex_coords or test the indexing behavior
rg "_extract_nonindex_coords\|index.*coord" tests/test_comparison.py -B 3 -A 3Repository: flixOpt/flixopt
Length of output: 41
🏁 Script executed:
# Look at the full test_comparison.py file to understand what scenarios are tested
wc -l tests/test_comparison.py

Repository: flixOpt/flixopt
Length of output: 87
🏁 Script executed:
# Search for tests that might exercise the index coord scenario
rg "coords|index|concat" tests/test_comparison.py -i -B 2 -A 2 | head -50Repository: flixOpt/flixopt
Length of output: 1542
🏁 Script executed:
# Let me look for the actual test functions to see if any test concat or coords
rg "^def test_" tests/test_comparison.pyRepository: flixOpt/flixopt
Length of output: 41
🏁 Script executed:
# Check if xarray's drop_vars handles index coords specially
python3 << 'EOF'
import sys
try:
import xarray as xr
import numpy as np
# Create a test scenario: dataset with an index coordinate
ds1 = xr.Dataset({
'var': (['x', 'y'], np.random.rand(3, 4))
}, coords={
'x': [0, 1, 2],
'y': [10, 20, 30, 40],
'component': (['y'], ['a', 'b', 'c', 'd']) # non-index coord on y
})
# Set y as an index
ds1 = ds1.set_index(y='y')
print("Dataset 1:")
print(ds1)
print("\nCoords in ds1:")
for name, coord in ds1.coords.items():
print(f" {name}: dims={coord.dims}, is_index={name in ds1.indexes}")
# Now create ds2 where 'component' is an index coord on a different dim
ds2 = xr.Dataset({
'var': (['component', 'z'], np.random.rand(4, 5))
}, coords={
'component': ['a', 'b', 'c', 'd'],
'z': [100, 200, 300, 400, 500]
})
ds2 = ds2.set_index(component='component')
print("\n\nDataset 2:")
print(ds2)
print("\nCoords in ds2:")
for name, coord in ds2.coords.items():
print(f" {name}: dims={coord.dims}, is_index={name in ds2.indexes}")
# Test: Try to drop 'component' from ds2 (which is indexed)
print("\n\nAttempting to drop 'component' from ds2 with errors='ignore'...")
ds2_dropped = ds2.drop_vars('component', errors='ignore')
print("Result:")
print(ds2_dropped)
print("\nCoords after drop:")
for name, coord in ds2_dropped.coords.items():
print(f" {name}: dims={coord.dims}, is_index={name in ds2_dropped.indexes}")
except Exception as e:
print(f"Error: {e}", file=sys.stderr)
import traceback
traceback.print_exc()
EOFRepository: flixOpt/flixopt
Length of output: 209
🏁 Script executed:
# Look at how the datasets are being used after _extract_nonindex_coords
rg "_apply_merged_coords" flixopt/comparison.py -B 2 -A 5Repository: flixOpt/flixopt
Length of output: 1945
🏁 Script executed:
# Search for the _apply_merged_coords function to understand what it does
rg "def _apply_merged_coords" flixopt/comparison.py -A 15Repository: flixOpt/flixopt
Length of output: 647
🏁 Script executed:
# Check if there are any comments or documentation about why the condition is `dim == name`
rg "dim == name|index.*coord" flixopt/ -B 5 -A 5Repository: flixOpt/flixopt
Length of output: 10071
🏁 Script executed:
# Search online documentation or look for any issues/PRs related to this
rg "index.*coord.*drop|drop.*index" . -i --max-count 10Repository: flixOpt/flixopt
Length of output: 178
🏁 Script executed:
# Search for any xarray-related issues or discussions in comments
rg "xarray|drop_vars|index" flixopt/comparison.py -B 1 -A 1 | head -30Repository: flixOpt/flixopt
Length of output: 1182
🏁 Script executed:
# Check if there's any handling or workaround for index coords elsewhere in the codebase
rg "drop_vars" flixopt/ -B 2 -A 2Repository: flixOpt/flixopt
Length of output: 5735
Track non-index coords per-dataset instead of globally to preserve index coordinates.
coords_to_drop accumulates globally across datasets, but a coord name can be indexed in one dataset and non-indexed in another. This causes indexed coords to be dropped unintentionally. For example, if "x" is a non-index coord on dim "y" in dataset A and an index coord on dim "x" in dataset B, it gets added to the global set from A, then dropped from all datasets including B where it's actually indexed.
Drop only those coords that are non-index in each dataset:
🔧 Proposed fix

```diff
     # Find non-index coords and collect mappings
     merged: dict[str, tuple[str, dict]] = {}
     coords_to_drop: set[str] = set()
+    coords_to_drop_list: list[set[str]] = []

     for ds in datasets:
+        ds_drop: set[str] = set()
         for name, coord in ds.coords.items():
             if len(coord.dims) != 1:
                 continue
             dim = coord.dims[0]
             if dim == name or dim not in ds.coords:
                 continue

             coords_to_drop.add(name)
+            ds_drop.add(name)
             if name not in merged:
                 merged[name] = (dim, {})

             for dv, cv in zip(ds.coords[dim].values, coord.values, strict=False):
                 if dv not in merged[name][1]:
                     merged[name][1][dv] = cv
                 elif merged[name][1][dv] != cv:
                     warnings.warn(
                         f"Coordinate '{name}' has conflicting values for dim value '{dv}': "
                         f"'{merged[name][1][dv]}' vs '{cv}'. Keeping first value.",
                         stacklevel=4,
                     )
+        coords_to_drop_list.append(ds_drop)

     # Drop these coords from datasets
     if coords_to_drop:
-        datasets = [ds.drop_vars(coords_to_drop, errors='ignore') for ds in datasets]
+        datasets = [
+            ds.drop_vars(drop, errors='ignore')
+            for ds, drop in zip(datasets, coords_to_drop_list, strict=True)
+        ]
```

🤖 Prompt for AI Agents
In `@flixopt/comparison.py` around lines 32-71: the function
_extract_nonindex_coords currently builds a global coords_to_drop set which can
remove a coord that is an index in some datasets; change it to track non-index
coords per dataset (e.g., build a list/sequence of per-dataset sets or a dict
keyed by dataset index) while still accumulating merged mappings only for coords
that are non-index in that particular dataset; then when dropping, iterate
datasets and call ds.drop_vars(per_dataset_coords_to_drop[i], errors='ignore')
for each dataset so indexed coords in other datasets are preserved; keep using
merged (dict[str, tuple[str, dict]]) and the same logic for filling
merged[name][1] and warnings.
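A toy illustration of the index/non-index distinction this fix hinges on (made-up datasets): a coordinate whose name matches its dimension is an index coordinate, so dropping it by name in a dataset where it is indexed would lose real index data.

```python
import numpy as np
import xarray as xr

ds_a = xr.Dataset(
    {'v': ('y', np.zeros(2))},
    coords={'y': [0, 1], 'x': ('y', ['p', 'q'])},  # 'x' is a non-index coord here
)
ds_b = xr.Dataset(
    {'w': ('x', np.zeros(2))},
    coords={'x': ['p', 'q']},  # 'x' is an index coord here
)
assert 'x' not in ds_a.indexes
assert 'x' in ds_b.indexes  # a global drop set built from ds_a would strip this
```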
* Update the CHANGELOG.md
* Update to tsam v3.1.0 and add warnings for preserve_n_clusters=False
* [ci] prepare release v6.0.0
* fix typo in deps
* fix typo in README.md
* Revert citation temporarily
* [ci] prepare release v6.0.0
* Improve json io
* fix: Notebooks using tsam
* Allow manual docs dispatch
* Created: tests/test_clustering/test_multiperiod_extremes.py
Test Coverage (56 tests):
Multi-Period with Different Time Series
- TestMultiPeriodDifferentTimeSeries - Tests for systems where each period has distinct demand profiles:
- Different cluster assignments per period
- Optimization with period-specific profiles
- Correct expansion mapping per period
- Statistics correctness per period
Extreme Cluster Configurations
- TestExtremeConfigNewCluster - Tests method='new_cluster':
- Captures peak demand days
- Can increase cluster count
- Works with min_value parameter
- TestExtremeConfigReplace - Tests method='replace':
- Maintains requested cluster count
- Works with multi-period systems
- TestExtremeConfigAppend - Tests method='append':
- Combined with segmentation
- Objective preserved after expansion
Combined Multi-Period and Extremes
- TestExtremeConfigMultiPeriod - Extremes with multi-period/scenario:
- Requires preserve_n_clusters=True for multi-period
- Works with periods and scenarios together
- TestMultiPeriodWithExtremes - Combined scenarios:
- Different profiles with extreme capture
- Extremes combined with segmentation
- Independent cluster assignments per period
Multi-Scenario Clustering
- TestMultiScenarioWithClustering - Scenarios with clustering
- TestFullDimensionalClustering - Full (periods + scenarios) combinations
IO Round-Trip
- TestMultiPeriodClusteringIO - Save/load preservation tests
Edge Cases
- TestEdgeCases - Single cluster, many clusters, occurrence sums, mapping validation
* fix: clustering and tsam 3.1.0 issue
* [ci] prepare release v6.0.1
* fix: clustering and tsam 3.1.0 issue
* [ci] prepare release v6.0.1
* ci: remove test
* [ci] prepare release v6.0.1
* chore(deps): update dependency werkzeug to v3.1.5 (#564)
* chore(deps): update dependency ruff to v0.14.14 (#563)
* chore(deps): update dependency netcdf4 to >=1.6.1, <1.7.5 (#583)
* chore(deps): update dependency pre-commit to v4.5.1 (#532)
* fix: Comparison coords (#599)
* Fix coords concat in comparison.py
* Fix coords concat in comparison.py
* Fix coords concat in comparison.py
* Add 6.0.1 changelog entry
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* Fix coord preservation in Comparison.solution and .inputs
- Apply _extract_nonindex_coords pattern to solution and inputs properties
- Add warning when coordinate mappings conflict during merge
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* Update CHANGELOG.md
* Update CHANGELOG.md
* ⏺ The fix is straightforward: on line 83, mapping.get(dv) returns None for unmapped values. Change it to mapping.get(dv, dv) so unmapped dimension values fall back to themselves.
⏺ Update(flixopt/comparison.py)
⎿ Added 1 line, removed 1 line
80 for name, (dim, mapping) in merged.items():
81 if dim not in ds.dims:
82 continue
83 - new_coords[name] = (dim, [mapping.get(dv) for dv in ds.coords[dim].values])
83 + new_coords[name] = (dim, [mapping.get(dv, dv) for dv in ds.coords[dim].values])
84
85 return ds.assign_coords(new_coords)
86
⏺ Done. The change on line 83 ensures that when mapping doesn't contain a key for a dimension value (which happens with outer-join additions), the original value dv is preserved instead of inserting None.
* Update Changelog
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
* [ci] prepare release v6.0.2
* typo
* Revert "typo"
This reverts commit 4a57282.
* Add plan file
* Add comprehensive test_math coverage for multi-period, scenarios, clustering, and validation
- Add 26 new tests across 8 files (×3 optimize modes = ~75 test runs)
- Multi-period: period weights, flow_hours limits, effect limits, linked invest, custom period weights
- Scenarios: scenario weights, independent sizes, independent flow rates
- Clustering: basic objective, storage cyclic/intercluster modes, status cyclic mode
- Storage: relative min/max charge state, relative min/max final charge state, balanced invest
- Components: transmission startup cost, Power2Heat, HeatPumpWithSource, SourceAndSink
- Flow status: max_uptime standalone test
- Validation: SourceAndSink requires size with prevent_simultaneous
* ⏺ Done. Here's a summary of what was changed:
Fix (flixopt/components.py:1146-1169): In _relative_charge_state_bounds, the scalar else branches now expand the base parameter to regular timesteps only (timesteps_extra[:-1]), then concat with the final-timestep DataArray containing the correct override value. Previously they just broadcast the scalar across all timesteps, silently ignoring relative_minimum_final_charge_state / relative_maximum_final_charge_state.
Tests (tests/test_math/test_storage.py): Added two new tests, test_storage_relative_minimum_final_charge_state_scalar and test_storage_relative_maximum_final_charge_state_scalar, with identical scenarios to the existing array-based tests but using scalar defaults (the previously buggy path).
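A hedged sketch of the described shape handling (hypothetical names and values; the real logic lives in `_relative_charge_state_bounds`): broadcast the scalar over the regular timesteps only, then concat the final-timestep override.

```python
import numpy as np
import pandas as pd
import xarray as xr

timesteps_extra = pd.date_range('2024-01-01', periods=5, freq='h')
base_scalar = 0.2      # e.g. a scalar relative_minimum_charge_state
final_override = 0.5   # e.g. relative_minimum_final_charge_state

# Expand the scalar over the regular timesteps (all but the last)...
regular = xr.DataArray(
    np.full(len(timesteps_extra) - 1, base_scalar),
    coords={'time': timesteps_extra[:-1]},
    dims='time',
)
# ...then concat the final-timestep DataArray carrying the override.
final = xr.DataArray([final_override], coords={'time': timesteps_extra[-1:]}, dims='time')
bounds = xr.concat([regular, final], dim='time')
assert bounds.values[-1] == final_override  # no longer silently ignored
```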
* Added TestClusteringExact class with 3 tests asserting exact per-timestep values in clustered systems:
1. test_flow_rates_match_demand_per_cluster — Verifies Grid flow_rate matches demand [10,20,30,40] identically in each cluster, objective = 200.
2. test_per_timestep_effects_with_varying_price — Verifies per-timestep costs [10,20,30,40] reflect price×flow with varying prices [1,2,3,4] and constant demand=10, objective = 200.
3. test_storage_cyclic_charge_discharge_pattern — Verifies storage with cyclic clustering: charges at cheap timesteps (price=1), discharges at expensive ones (price=100), with exact charge_state trajectory across both clusters, objective = 100.
Deviation from plan: Used equal cluster weights [1.0, 1.0] instead of [1.0, 2.0]/[1.0, 3.0] for tests 1 and 2. This was necessary because cluster_weight is not preserved during NetCDF roundtrip (pre-existing IO bug), which would cause the save->reload->solve mode to fail. Equal weights produce correct results in all 3 IO modes while still testing the essential per-timestep value correctness.
* More storage tests
* Add multi-period tests
* Add clustering tests and fix issues with user set cluster weights
* Update CHANGELOG.md
* Mark old tests as stale
* Update CHANGELOG.md
* Mark tests as stale and move to new dir
* Move more tests to stale
* Change fixtures to speed up tests
* Moved files into stale
* Renamed folder
* Reorganize test dir
* Reorganize test dir
* Rename marker
* 2. 08d-clustering-multiperiod.ipynb (cell 29): Removed stray <cell_type>markdown</cell_type> from Summary cell
3. 08f-clustering-segmentation.ipynb (cell 33): Removed stray <cell_type>markdown</cell_type> from API Reference cell
4. flixopt/comparison.py: _extract_nonindex_coords now detects when the same coord name appears on different dims — warns and skips instead of silently overwriting
5. test_multiperiod_extremes.py: Added .item() to mapping.min()/.max() and period_mapping.min()/.max() to extract scalars before comparison
6. test_flow_status.py: Tightened test_max_uptime_standalone assertion from > 50.0 to assert_allclose(..., 60.0, rtol=1e-5) matching the documented arithmetic
---------
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Summary
Patch release v6.0.1 with bug fixes:
- Add `ensure_ascii=False` to `json.dumps()` calls in `io.py`
- Fix `component` coordinate becoming `(case, contributor)`-shaped after concatenation in the `Comparison` class. Non-index coordinates are now properly merged before concat
- Add `preserve_n_clusters=True` to all `ExtremeConfig` calls to fix FutureWarning from tsam v3.1
- Add `workflow_dispatch` inputs for manual docs deployment with version selection

Test plan
- `Comparison.flow_hours` has correct `component` coordinate shape

🤖 Generated with Claude Code
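For context on the `ensure_ascii=False` item above, a generic example (not the actual `io.py` code) of the default escaping behavior it avoids:

```python
import json

label = 'Wärmepumpe'
print(json.dumps(label))                      # "W\u00e4rmepumpe" (escaped)
print(json.dumps(label, ensure_ascii=False))  # "Wärmepumpe" (readable UTF-8)
```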
Summary by CodeRabbit
Bug Fixes
Documentation